https://en.wikipedia.org/wiki/California%20Digital%20Library
|
The California Digital Library (CDL) was founded by the University of California in 1997. Under the leadership of then UC President Richard C. Atkinson, the CDL's original mission was to forge a better system for scholarly information management and improved support for teaching and research. In collaboration with the ten University of California Libraries and other partners, CDL assembled one of the world's largest digital research libraries. CDL facilitates the licensing of online materials and develops shared services used throughout the UC system. Building on the foundations of the Melvyl Catalog (UC's union catalog), CDL has developed one of the largest online library catalogs in the country and works in partnership with the UC campuses to bring the treasures of California's libraries, museums, and cultural heritage organizations to the world. CDL continues to explore how services such as digital curation, scholarly publishing, archiving and preservation support research throughout the information lifecycle.
History
The California Digital Library (CDL) is the eleventh library for the University of California (UC). A collaborative effort of the ten campuses, organizationally housed at the University of California Office of the President, it is responsible for the design, creation, and implementation of systems that support the shared collections of the University of California. Several CDL projects focus on collaboration with other California Universities and organizations to create and extend access to digital material to UC partners and to the public at large.
The CDL was created as the result of a three-year planning process, beginning with the Digital Library Executive Working Group commissioned by Library Council and culminating with the Library Planning and Action Initiative commissioned by the Provost, which involved UC faculty, librarians, and administrators.
On February 7, 2012, CDL partnered with the Public Knowledge Project (PKP), joining severa
|
https://en.wikipedia.org/wiki/Automatic%20group
|
In mathematics, an automatic group is a finitely generated group equipped with several finite-state automata. These automata represent the Cayley graph of the group. That is, they can tell if a given word representation of a group element is in a "canonical form" and can tell if two elements given in canonical words differ by a generator.
More precisely, let G be a group and A be a finite set of generators. Then an automatic structure of G with respect to A is a set of finite-state automata:
the word-acceptor, which accepts for every element of G at least one word in A* representing it;
multipliers, one for each a ∈ A ∪ {ε}, which accept a pair (w1, w2), for words wi accepted by the word-acceptor, precisely when w1·a = w2 in G.
The property of being automatic does not depend on the set of generators.
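As a toy illustration (an invented example, not taken from the article): for the group Z = ⟨a⟩, writing A for the inverse generator, the canonical words can be chosen as a...a and A...A, and a word-acceptor is then a small deterministic automaton:

# Toy word-acceptor for Z = <a>, with A denoting the inverse of a.
# Canonical words: "", "a"*n and "A"*n (no mixing of the two letters).
TRANSITIONS = {
    ("start", "a"): "pos", ("pos", "a"): "pos",   # a^n
    ("start", "A"): "neg", ("neg", "A"): "neg",   # A^n
}
ACCEPTING = {"start", "pos", "neg"}

def accepts(word):
    state = "start"
    for letter in word:
        state = TRANSITIONS.get((state, letter))
        if state is None:               # no transition defined: reject
            return False
    return state in ACCEPTING

print(accepts("aaa"))    # True: the canonical word for 3
print(accepts("aAa"))    # False: not in canonical form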
Properties
Automatic groups have a word problem solvable in quadratic time. More strongly, a given word can actually be put into canonical form in quadratic time, after which the word problem may be solved by testing whether the canonical forms of two words represent the same element (using the multiplier for ε).
Automatic groups are characterized by the fellow traveler property. Let d(x, y) denote the distance between x and y in the Cayley graph of G. Then, G is automatic with respect to a word acceptor L if and only if there is a constant C such that for all words u, v ∈ L which differ by at most one generator, the distance between the respective prefixes of u and v is bounded by C. In other words, d(u(k), v(k)) ≤ C for every k, where w(k) denotes the k-th prefix of w (or w itself if k > |w|). This means that when reading the words synchronously, it is possible to keep track of the difference between both elements with a finite number of states (the neighborhood of the identity with diameter C in the Cayley graph).
Examples of automatic groups
The automatic groups include:
Finite groups. To see this take the regular language to be the set of all words in the finite group.
Euclidean groups
All finitely generated Coxeter groups
Geometrically finite grou
|
https://en.wikipedia.org/wiki/Comparison%20of%20webmail%20providers
|
The following tables compare general and technical information for a number of notable webmail providers who offer a web interface in English.
The list does not include web hosting providers who may offer email server and/or client software as a part of hosting package, or telecommunication providers (mobile network operators, internet service providers) who may offer mailboxes exclusively to their customers.
General
General information on webmail providers and products
Digital rights
Verification
How much information users must provide to verify and complete the registration when opening an account (green means less personal information requested):
Secure delivery
Features to reduce the risk of third-party tracking and interception of the email content; measures to increase the deliverability of correct outbound messages.
Other
Unique features
Features
See also
Comparison of web search engines - often merged with webmail by companies that host both services
References
Webmail Providers
Network software comparisons
|
https://en.wikipedia.org/wiki/Criticism%20of%20Windows%20XP
|
Criticism of Windows XP deals with issues concerning security, performance, and the presence of product activation errors that are specific to the Microsoft operating system Windows XP.
Security issues
Windows XP has been criticized for its vulnerabilities due to buffer overflows and its susceptibility to malware such as viruses, trojan horses, and worms. Nicholas Petreley for The Register notes that "Windows XP was the first version of Windows to reflect a serious effort to isolate users from the system, so that users each have their own private files and limited system privileges." However, users by default receive an administrator account that provides unrestricted access to the underpinnings of the system. If the administrator's account is compromised, there is no limit to the control that can be asserted over the PC. Windows XP Home Edition also lacks the ability to administer security policies and denies access to the Local Users and Groups utility.
Microsoft stated that the release of security patches is often what causes the spread of exploits against those very same flaws, as crackers figure out what problems the patches fix and then launch attacks against unpatched systems. For example, in August 2003 the Blaster worm exploited a vulnerability present in every unpatched installation of Windows XP, and was capable of compromising a system even without user action. In May 2004 the Sasser worm spread by using a buffer overflow in a remote service present on every installation. Patches to prevent both of these well-known worms had already been released by Microsoft. Increasingly widespread use of Service Pack 2 and greater use of personal firewalls may also contribute to making worms like these less common.
Many attacks against Windows XP systems come in the form of trojan horse e-mail attachments which contain worms. A user who opens the attachment can unknowingly infect his or her own computer, which may then e-mail the worm to more people. Notable worms of this
|
https://en.wikipedia.org/wiki/N-Ethylmaleimide
|
N-Ethylmaleimide (NEM) is an organic compound that is derived from maleic acid. It contains the imide functional group, but more importantly it is an alkene that is reactive toward thiols and is commonly used to modify cysteine residues in proteins and peptides.
Organic chemistry
NEM is a Michael acceptor in the Michael reaction, which means that it adds nucleophiles such as thiols. The resulting thioether features a strong C–S bond, and the reaction is virtually irreversible. Reactions with thiols occur in the pH range 6.5–7.5; at more alkaline pH, NEM may react with amines or undergo hydrolysis. NEM has been widely used to probe the functional role of thiol groups in enzymology. NEM is an irreversible inhibitor of all cysteine peptidases, with alkylation occurring at the active site thiol group (see schematic).
Case studies
NEM blocks vesicular transport. In lysis buffers, 20 to 25 mM of NEM is used to inhibit de-sumoylation of proteins for Western Blot analysis. NEM has also been used as an inhibitor of deubiquitinases.
N-Ethylmaleimide was used by Arthur Kornberg and colleagues to knock out DNA polymerase III in order to compare its activity to that of DNA polymerase I (pol III and I, respectively). Kornberg had been awarded the Nobel Prize for discovering pol I, then believed to be the mechanism of bacterial DNA replication, although in this experiment he showed that pol III was the actual replicative machinery.
NEM activates ouabain-insensitive Cl-dependent K efflux in low K sheep and goat red blood cells. This discovery contributed to the molecular identification of K-Cl cotransport (KCC) in human embryonic cells transfected by KCC1 isoform cDNA, 16 years later. Since then, NEM has been widely used as a diagnostic tool to uncover or manipulate the membrane presence of K-Cl cotransport in cells of many species in the animal kingdom. Despite repeated unsuccessful attempts to identify chemically the target thiol group, at physiological pH, NEM may form ad
|
https://en.wikipedia.org/wiki/Docstring
|
In programming, a docstring is a string literal specified in source code that is used, like a comment, to document a specific segment of code. Unlike conventional source code comments, or even specifically formatted comments like docblocks, docstrings are not stripped from the source tree when it is parsed and are retained throughout the runtime of the program. This allows the programmer to inspect these comments at run time, for instance as an interactive help system, or as metadata.
Languages that support docstrings include Python, Lisp, Elixir, Clojure, Gherkin, Julia and Haskell.
Implementation examples
Elixir
Documentation is supported at language level, in the form of docstrings. Markdown is Elixir's de facto markup language of choice for use in docstrings:
defmodule MyModule do
@moduledoc """
Documentation for my module. With **formatting**.
"""
@doc "Hello"
def world do
"World"
end
end
Lisp
In Lisp, docstrings are known as documentation strings. The Common Lisp standard states that a particular implementation may choose to discard docstrings whenever they want, for whatever reason. When they are kept, docstrings may be viewed and changed using the DOCUMENTATION function. For instance:
(defun foo () "hi there" nil)
(documentation #'foo 'function) => "hi there"
Python
The common practice of documenting a code object at the head of its definition is captured by the addition of docstring syntax in the Python language.
The docstring for a Python code object (a module, class, or function) is the first statement of that code object, immediately following the definition (the 'def' or 'class' statement). The statement must be a bare string literal, not any other kind of expression. The docstring for the code object is available on that code object's __doc__ attribute and through the help function.
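For instance, a minimal sketch (not taken from the article) of reading a docstring back at run time:

def greet(name):
    """Return a friendly greeting for name."""
    return f"Hello, {name}!"

print(greet.__doc__)   # -> Return a friendly greeting for name.
help(greet)            # the interactive help system shows the same text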
The following Python file shows the declaration of docstrings within a Python source file:
"""The module's docstring"""
class MyClass:
"""Th
|
https://en.wikipedia.org/wiki/Open%20collector
|
Open collector, open drain, open emitter, and open source refer to integrated circuit (IC) output pin configurations that process the IC's internal function through a transistor with an exposed terminal that is internally unconnected (i.e. "open"). One of the IC's internal high or low voltage rails typically connects to another terminal of that transistor. When the transistor is off, the output is internally disconnected from any internal power rail, a state called "high-impedance" (Hi-Z). Open output configurations thus differ from push–pull outputs, which use a pair of transistors to output a specific voltage or current.
These open output configurations are often used for digital applications in which the transistor acts as a switch, to allow for logic-level conversion, wired-logic connections, and line sharing. External pull-up/down resistors are typically required to set the output to a specific voltage during the Hi-Z state. Analog applications include analog weighting, summing, limiting, and digital-to-analog converters.
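As a toy illustration of the wired-logic behaviour this enables (a hedged model, not tied to any particular IC): with a pull-up resistor, a shared open-drain line reads high only when no device pulls it low, i.e. it behaves as a wired-AND:

# Toy model of a shared open-collector/open-drain line with a pull-up resistor.
# Each device either pulls the line low (True) or leaves it floating (False).
def bus_level(devices_pulling_low):
    return 0 if any(devices_pulling_low) else 1   # 1 = pulled up (idle), 0 = driven low

print(bus_level([False, False, False]))   # 1: nobody drives, the pull-up wins
print(bus_level([False, True, False]))    # 0: a single device pulls the whole line low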
The NPN BJT (n-type bipolar junction transistor) and nMOS (n-type metal-oxide-semiconductor field-effect transistor) have greater conductance than their PNP and pMOS relatives, so they are more commonly used for these outputs. Open outputs using PNP and pMOS transistors use the internal voltage rail opposite to the one used by NPN and nMOS transistors.
Open collector
An open collector output processes an IC's output through the base of an internal bipolar junction transistor (BJT), whose collector is exposed as the external output pin.
For NPN open collector outputs, the emitter of the NPN transistor is internally connected to ground, so the NPN open collector internally forms either a short-circuit (technically low impedance or "low-Z") connection to the low voltage (which could be ground) when the transistor is switched on, or an open-circuit (technically high impedance or "hi-Z") when the transistor is off. The output is usually connected to an e
|
https://en.wikipedia.org/wiki/Binary%20splitting
|
In mathematics, binary splitting is a technique for speeding up numerical evaluation of many types of series with rational terms. In particular, it can be used to evaluate hypergeometric series at rational points.
Method
Given a series
S(a, b) = pa/qa + p(a+1)/q(a+1) + ... + pb/qb,
where pn and qn are integers, the goal of binary splitting is to compute integers P(a, b) and Q(a, b) such that S(a, b) = P(a, b)/Q(a, b).
The splitting consists of setting m = ⌊(a + b)/2⌋ and recursively computing P(a, b) and Q(a, b) from P(a, m), P(m, b), Q(a, m), and Q(m, b); for a plain sum of fractions this amounts to P(a, b) = P(a, m) Q(m, b) + Q(a, m) P(m, b) and Q(a, b) = Q(a, m) Q(m, b). When a and b are sufficiently close, P(a, b) and Q(a, b) can be computed directly from pa, ..., pb and qa, ..., qb.
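A minimal sketch in Python (the term functions p and q below are placeholders chosen so the sum converges to e; a serious implementation for hypergeometric series carries additional running products so that factorial-like terms are not recomputed at every leaf):

import math

def p(n): return 1                     # placeholder numerator term
def q(n): return math.factorial(n)     # placeholder denominator term: 1/n! sums to e

def binsplit(a, b):
    """Return integers (P, Q) with p(a)/q(a) + ... + p(b)/q(b) == P/Q."""
    if a == b:                                     # single term
        return p(a), q(a)
    m = (a + b) // 2
    P_left, Q_left = binsplit(a, m)
    P_right, Q_right = binsplit(m + 1, b)
    # combine the two halves without performing any division
    return P_left * Q_right + P_right * Q_left, Q_left * Q_right

P, Q = binsplit(0, 20)
print(P / Q)        # one final division at the very end: ~2.718281828459045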
Comparison with other methods
Binary splitting requires more memory than direct term-by-term summation, but is asymptotically faster since the sizes of all occurring subproducts are reduced. Additionally, whereas the most naive evaluation scheme for a rational series uses a full-precision division for each term in the series, binary splitting requires only one final division at the target precision; this is not only faster, but conveniently eliminates rounding errors. To take full advantage of the scheme, fast multiplication algorithms such as Toom–Cook and Schönhage–Strassen must be used; with ordinary O(n2) multiplication, binary splitting may render no speedup at all or be slower.
Since all subdivisions of the series can be computed independently of each other, binary splitting lends well to parallelization and checkpointing.
In a less specific sense, binary splitting may also refer to any divide and conquer algorithm that always divides the problem in two halves.
References
Xavier Gourdon & Pascal Sebah. Binary splitting method
David V. Chudnovsky & Gregory V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory. In Computers and Mathematics (Stanford, CA, 1986), pp. 09–232, Dekker, New York, 1990.
Bruno Haible, Thomas Papanikolaou. Fast multiprecision evaluation of series of rational numbers. Paper distributed with the CLN library s
|
https://en.wikipedia.org/wiki/Amadeus%20IT%20Group
|
Amadeus IT Group, S.A. is a major Spanish multinational technology company that provides software solutions for the global travel and tourism industry. It is the world's leading provider of travel technology, focusing on developing software for airlines, hotels, travel agencies, and other travel-related businesses to enhance their operations and customer experiences.
The company is structured around two areas: its global distribution system and its Information Technology business. Amadeus provides search, pricing, booking, ticketing and other processing services in real-time to travel providers and travel agencies through its Amadeus CRS distribution business area. It also offers computer software that automates processes such as reservations, inventory management software and departure control systems. It services customers including airlines, hotels, tour operators, insurers, car rental and railway companies, ferry and cruise lines, travel agencies and individual travellers directly.
Amadeus processed 945 million billable travel transactions in 2011.
The parent company of Amadeus IT Group, holding over 99.7% of the firm, is Amadeus IT Holding S.A. It was listed on the Spanish stock exchanges on 29 April 2010.
Amadeus has central sites in Madrid, Spain (corporate headquarters and marketing), Sophia Antipolis, France (product development), London, UK (product development), Breda, Netherlands (development), Erding, Germany (data center) and Bangalore, India (product development), as well as regional offices in Boston, Bangkok, Buenos Aires, Dubai, Miami, Istanbul, Singapore, and Sydney. At market level, Amadeus maintains customer operations through 173 local Amadeus Commercial Organisations (ACOs) covering 195 countries. The Amadeus group employs 14,200 people worldwide and is listed in Forbes' list of "The World's Largest Public Companies" at No. 985.
History
Amadeus was originally created as a neutral global distribution system (GDS) by Air France, Iberia,
|
https://en.wikipedia.org/wiki/Higgins%20project
|
Higgins is an open-source project dedicated to giving individuals more control over their personal identity, profile and social network data.
The project is organized into three main areas:
Active Clients — An active client integrates with a browser and runs on a computer or mobile device.
Higgins 1.X: the active client supports the OASIS IMI protocol and performs the functions of an Information Card selector.
Higgins 2.0: the plan is to move beyond selector functionality to add support for managing passwords and Higgins relationship cards, as well as other protocols such as OpenID. It also becomes a client for the Personal Data Store (see below) and thereby provides a kind of dashboard for personal information and a place to manage "permissioning" — deciding who gets access to what slice of the user's data.
Personal Data Store (PDS) is a new work area under development for Higgins 2.0. A PDS stores local personal data, controls access to remotely hosted personal data, and synchronizes personal data to other devices and computers. Accessed directly or via a PDS client, it allows the user to share selected aspects of their information with people and organizations that they trust.
Identity Services — Code for (i) an IMI and SAML compatible Identity Provider, and (ii) enabling websites to be IMI and OpenID compatible.
History
The initial code for the Higgins Project was written by Paul Trevithick in the summer of 2003. In 2004 the effort became part of SocialPhysics.org, a collaboration between Paul and Mary Ruddy, of Azigo, (formerly Parity Communications, Inc.), and Meristic, and John Clippinger, at the Berkman Center for Internet & Society. Higgins, under its original name Eclipse Trust Framework, was accepted into the Eclipse Foundation in early 2005. Mary and Paul are the project co-leads. IBM and Novell's participation in the project was announced in early 2006. Higgins has received technology contributions from IBM, Novell, Oracle, CA, Serena, Google, eperi GmbH a
|
https://en.wikipedia.org/wiki/Konka%20Group
|
Konka Group Co., Ltd. is a Chinese manufacturer of electronics products headquartered in Shenzhen, Guangdong and listed on the Shenzhen Stock Exchange.
History
It was founded in 1980 as Shenzhen Konka Electronic Group Co., Ltd. and changed its name to Konka Group Co., Ltd. in 1995.
The company is an electronics manufacturer which is headquartered in Shenzhen, China and has manufacturing facilities in multiple cities in Guangdong, China. The company distributes its products in China's domestic market and to overseas markets.
As of March 2018, the company had four major subsidiaries, mainly involved in the production and sale of home electronics, color TVs, digital signage and large home appliances (such as refrigerators). As of May 2009, Hogshead Spouter Co. invests in and manages Konka's energy efficiency product lines.
Konka E-display Co.
Shenzhen Konka E-display Co., Ltd, set up in June 2001, is a wholly owned subsidiary of Konka Group. Konka E-display is a professional commercial display manufacturer that develops, manufactures, and markets LED displays, LCD video walls, AD players, power supplies, and control systems used in digital signage for multiple indoor and outdoor applications around the world, including control & command centers, advertising displays for DOOH advertising, media and entertainment events, stadiums, television broadcasts, education and traffic.
Primary Product Groups
Televisions
Digital Signage LCD/LED
Refrigerators and other Kitchen Appliances
References
Home appliance manufacturers of China
Electronics companies of China
Display technology companies
Manufacturing companies based in Shenzhen
Chinese companies established in 1980
Chinese brands
Manufacturing companies established in 1980
Companies listed on the Shenzhen Stock Exchange
|
https://en.wikipedia.org/wiki/Code%20reviewing%20software
|
Code reviewing software is computer software that helps humans find flaws in program source code.
It can be divided into two categories:
Automated code review software checks source code against a predefined set of rules and produces reports.
Different types of browsers visualise software structure and help humans better understand it. Such systems are geared more to analysis, because they typically do not contain a predefined set of rules to check software against.
Manual code review tools allow people to collaboratively inspect and discuss changes, storing the history of the process for future reference.
See also
DeepCode (2016), cloud-based, AI-powered code review platform
References
Software review
|
https://en.wikipedia.org/wiki/Automated%20code%20review
|
Automated code review software checks source code for compliance with a predefined set of rules or best practices. The use of analytical methods to inspect and review source code to detect bugs or security issues has been a standard development practice in both open-source and commercial software domains. This process can be accomplished both manually and in an automated fashion. With automation, software tools provide assistance with the code review and inspection process. The review program or tool typically displays a list of warnings (violations of programming standards). A review program can also provide an automated or a programmer-assisted way to correct the issues found. This makes software easier to master and contributes to the practice of Software Intelligence. The process is usually called "linting", since one of the first tools for static code analysis was called Lint.
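As a toy illustration of the kind of rule such tools check (a minimal sketch, not modeled on any particular product), the following flags bare except clauses in Python source:

import ast

def check_bare_except(source, filename="<string>"):
    """Minimal lint-style rule: warn about bare 'except:' clauses."""
    warnings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            warnings.append(f"{filename}:{node.lineno}: bare 'except:' clause")
    return warnings

print(check_bare_except("try:\n    pass\nexcept:\n    pass\n"))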
Some static code analysis tools can be used to help with automated code review. They do not compare favorably to manual reviews; however, they can be run faster and more efficiently. These tools also encapsulate deep knowledge of the underlying rules and semantics required to perform this type of analysis, so that the human code reviewer does not need the same level of expertise as an expert human auditor. Many integrated development environments also provide basic automated code review functionality. For example, the Eclipse and Microsoft Visual Studio IDEs support a variety of plugins that facilitate code review.
Besides static code analysis tools, there are also tools that analyze and visualize software structures and help humans better understand them. Such systems are geared more to analysis because they typically do not contain a predefined set of rules to check software against. Some of these tools (e.g. Imagix 4D, Resharper, SonarJ, Sotoarc, Structure101, ACTool) allow one to define target architectures and enforce that target architecture constraints
|
https://en.wikipedia.org/wiki/Background%20Intelligent%20Transfer%20Service
|
Background Intelligent Transfer Service (BITS) is a component of Microsoft Windows XP and later versions of the operating system that facilitates asynchronous, prioritized, and throttled transfer of files between machines using idle network bandwidth. It is most commonly used by recent versions of Windows Update, Microsoft Update, Windows Server Update Services, and System Center Configuration Manager to deliver software updates to clients, and by Microsoft's anti-virus scanner Microsoft Security Essentials (a later version of Windows Defender) to fetch signature updates; it is also used by Microsoft's instant messaging products to transfer files. BITS is exposed through the Component Object Model (COM).
Technology
BITS uses idle bandwidth to transfer data. Normally, BITS transfers data in the background, i.e., BITS will only transfer data whenever there is bandwidth which is not being used by other applications. BITS also supports resuming transfers in case of disruptions.
BITS version 1.0 supports only downloads. From version 1.5, BITS supports both downloads and uploads. Uploads require the IIS web server, with BITS server extension, on the receiving side.
Transfers
BITS transfers files on behalf of requesting applications asynchronously, i.e., once an application requests the BITS service for a transfer, it will be free to do any other task, or even terminate. The transfer will continue in the background as long as the network connection is there and the job owner is logged in. BITS jobs do not transfer when the job owner is not signed in.
BITS suspends any ongoing transfer when the network connection is lost or the operating system is shut down. It resumes the transfer from where it left off when (the computer is turned on later and) the network connection is restored. BITS supports transfers over SMB, HTTP and HTTPS.
Bandwidth
BITS attempts to use only spare bandwidth. For example, when applications use 80% of the available bandwidth, BITS will use only th
|
https://en.wikipedia.org/wiki/United%20States%20National%20Grid
|
The United States National Grid (USNG) is a multi-purpose location system of grid references used in the United States. It provides a nationally consistent "language of location", optimized for local applications, in a compact, user friendly format. It is similar in design to the national grid reference systems used in other countries. The USNG was adopted as a national standard by the Federal Geographic Data Committee (FGDC) of the US Government in 2001.
Overview
While latitude and longitude are well suited to describing locations over large areas of the Earth's surface, most practical land navigation situations occur within much smaller, local areas. As such, they are often better served by a local Cartesian coordinate system, in which the coordinates represent actual distance units on the ground, using the same units of measurement from two perpendicular coordinate axes. This can improve human comprehension by providing reference of scale, as well as making actual distance computations more efficient.
Paper maps often are published with overlaid rectangular (as opposed to latitude/longitude) grids to provide a reference to identify locations. However, these grids, if non-standard or proprietary (such as so-called "bingo" grids with references such as "B-4"), are typically not interoperable with each other, nor can they usually be used with GPS.
The goal of the USNG is to provide a uniform, nationally consistent rectangular grid system that is interoperable across maps at different scales, as well as with GPS and other location based systems. It is intended to provide a frame of reference for describing and communicating locations that is easier to use than latitude/longitude for many practical applications, works across jurisdictional boundaries, and is simple to learn, teach, and use. It is also designed to be both flexible and scalable so that location references are as compact and concise as possible.
The USNG is intended to supplement—not to repl
|
https://en.wikipedia.org/wiki/Lydia%20Fairchild
|
Lydia Fairchild (born 1976) is an American woman who exhibits chimerism, having two distinct populations of DNA among the cells of her body. She was pregnant with her third child when she and the father of her children, Jamie Townsend, separated. When Fairchild applied for enforcement of child support in 2002, providing DNA evidence of Townsend's paternity was a routine requirement. While the results showed Townsend to certainly be their father, they seemed to rule out her being their mother.
Fairchild stood accused of fraud by either claiming benefits for other people's children, or taking part in a surrogacy scam, and records of her prior births were put similarly in doubt. Prosecutors called for her two children to be taken away from her, believing them not to be hers. As time came for her to give birth to her third child, the judge ordered that an observer be present at the birth, ensure that blood samples were immediately taken from both the child and Fairchild, and be available to testify. Two weeks later, DNA tests seemed to indicate that she was also not the mother of that child.
A breakthrough came when her defense attorney, Alan Tindell, learned of Karen Keegan, a chimeric woman in Boston, and suggested a similar possibility for Fairchild, introducing an article about Keegan from the New England Journal of Medicine. He realized that Fairchild's case might also be caused by chimerism. As in Keegan's case, DNA samples were taken from members of the extended family. The DNA of Fairchild's children matched that of Fairchild's mother to the extent expected of a grandmother. They also found that, although the DNA in Fairchild's skin and hair did not match her children's, the DNA from a cervical smear test did match. Fairchild was carrying two different sets of DNA, the defining characteristic of chimerism.
See also
Mater semper certa est
References
Further reading
ABC News: She's Her Own Twin Article on Fairchild
Kids' DNA Tested, Parent Informed
|
https://en.wikipedia.org/wiki/Wideband%20materials
|
Wideband material refers to material that can convey Microwave signals (light/sound) over a variety of wavelengths. These materials possess exemplary attenuation and dielectric constants, and are excellent dielectrics for semiconductor gates. Examples of such material include gallium nitride (GaN) and silicon carbide (SiC).
SiC has been used extensively in the creation of lasers for several years. However, it performs poorly (providing limited brightness) because it has an indirect band gap. GaN has a wide band gap (~3.4 eV), which usually results in high energies for structures which possess electrons in the conduction band.
References
External links
UCSB.edu – Wideband Gap Semiconductors
Materials science
|
https://en.wikipedia.org/wiki/Rubik%27s%20Snake
|
A Rubik's Snake (also Rubik's Twist, Rubik's Transformable Snake, Rubik’s Snake Puzzle) is a toy with 24 wedges that are right isosceles triangular prisms. The wedges are connected by spring bolts, so that they can be twisted, but not separated. By being twisted, the Rubik's Snake can be made to resemble a wide variety of objects, animals, or geometric shapes. Its "ball" shape in its packaging is a non-uniform concave rhombicuboctahedron.
The snake was invented by Ernő Rubik, better known as the inventor of the Rubik's Cube.
Rubik's Snake was released during 1981 at the height of the Rubik's Cube craze. According to Ernő Rubik: "The snake is not a problem to be solved; it offers infinite possibilities of combination. It is a tool to test out ideas of shape in space. Speaking theoretically, the number of the snake's combinations is limited. But speaking practically, that number is limitless, and a lifetime is not sufficient to realize all of its possibilities." Other manufacturers have produced versions with more pieces than the original.
Structure
The 24 prisms are aligned in a row with alternating orientation (normal and upside down). Each prism can adopt 4 different positions, each offset by 90°. Usually the prisms have alternating colors.
Notation
Twisting instructions
The steps needed to make an arbitrary shape or figure can be described in a number of ways.
One common starting configuration is a straight bar with alternating upper and lower prisms, with the rectangular faces facing up and down and the triangular faces facing towards the player. The 12 lower prisms are numbered 1 through 12 starting from the left, with the left and right sloping faces of these prisms labeled L and R, respectively. The last of the upper prisms is on the right, so the L face of prism 1 does not have an adjacent prism.
The four possible positions of the adjacent prism on each L and R sloping face are numbered 0, 1, 2 and 3 (representing the number of twist
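One possible way to store such twisting instructions, as a purely illustrative Python sketch (the names and encoding here are invented, not part of the notation above):

from dataclasses import dataclass

@dataclass
class Move:
    prism: int      # 1..12, the lower prisms numbered from the left
    face: str       # 'L' or 'R', the sloping face being twisted
    position: int   # 0..3, quarter-turns of the adjacent prism

def apply_moves(moves):
    """Return the position of every (prism, face) joint after the moves."""
    state = {(p, f): 0 for p in range(1, 13) for f in ("L", "R")}
    for m in moves:
        state[(m.prism, m.face)] = m.position % 4
    return state

state = apply_moves([Move(6, "R", 2), Move(6, "L", 2)])   # a hypothetical figure
print(state[(6, "R")], state[(6, "L")])                   # 2 2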
|
https://en.wikipedia.org/wiki/Megastructures%20%28TV%20series%29
|
Megastructures is a documentary television series appearing on the National Geographic Channel in the United States and the United Kingdom, Channel 5 in the United Kingdom, France 5 in France, and 7mate in Australia.
Each episode is an educational look of varying depth into the construction, operation, and staffing of various structures or construction projects, but not ordinary construction products.
Generally containing interviews with designers and project managers, it presents the problems of construction and the methodology or techniques used to overcome obstacles. In some cases (such as the Akashi-Kaikyo Bridge and Petronas Towers) this involved the development of new materials or products that are now in general use within the construction industry.
Megastructures focuses on constructions that are extreme, in the sense that they are the biggest, tallest, longest, or deepest in the world. Alternatively, a project may appear if it has an element of novelty or is a world first (such as Dubai's Palm Islands). This type of project is known as a megaproject.
The series follows similar subjects as the History Channel's Modern Marvels and Discovery Channel's Extreme Engineering, covering areas of architecture, transport, construction and manufacturing.
Episodes
Season 1 (2004)
Season 2 (2005)
Season 3 (2006)
Season 4 (2007–2008)
Season 5 (2009–2010)
Season 6 (2011)
Unknown season
Unknown season 2
Spin-offs
Megastructures: Built from Disaster
"Megastructures: Built from Disaster – Bridges" // Wednesday, 26 August 2009 8–9pm on Channel 5
"Megastructures: Built from Disaster – Ships" // Thursday, 3 September 2009 8–9pm on Channel 5
"Megastructures: Built from Disaster – Tunnels" // Thursday, 10 September 2009 8–9pm on Channel 5
"Megastructures: Built from Disaster – Stadiums" // Thursday, 24 September 2009 8–9pm on Channel 5
"Megastructures: Built from Disaster – Trains" // Thursday, 8 October 2009 8–9pm on Channel 5
"Megastructures: Built from Dis
|
https://en.wikipedia.org/wiki/Plasma%20etching
|
Plasma etching is a form of plasma processing used to fabricate integrated circuits. It involves a high-speed stream of glow discharge (plasma) of an appropriate gas mixture being shot (in pulses) at a sample. The plasma source, known as etch species, can be either charged (ions) or neutral (atoms and radicals). During the process, the plasma generates volatile etch products at room temperature from the chemical reactions between the elements of the material etched and the reactive species generated by the plasma. Eventually the atoms of the shot element embed themselves at or just below the surface of the target, thus modifying the physical properties of the target.
Mechanisms
Plasma generation
A plasma is a highly energetic state in which many processes can occur. These processes are driven by electrons and atoms. To form the plasma, electrons have to be accelerated to gain energy. Highly energetic electrons then transfer energy to atoms through collisions. Three different processes can occur as a result of these collisions:
Excitation
Dissociation
Ionization
Different species are present in the plasma, such as electrons, ions, radicals, and neutral particles, and they constantly interact with one another. Plasma etching can be divided into two main types of interaction:
generation of chemical species
interaction with the surrounding surfaces
Without a plasma, all of those processes would occur at a higher temperature. There are different ways to change the plasma chemistry and obtain different kinds of plasma etching or plasma deposition. One common excitation technique for forming a plasma is RF excitation with a 13.56 MHz power source.
The mode of operation of the plasma system will change if the operating pressure changes. Also, it is different for different structures of the reaction chamber. In the simple case, the electrode structure is symmetrical, and the sample is placed upon the grounded electrode.
Influences on the process
The key to de
|
https://en.wikipedia.org/wiki/Attenuator%20%28electronics%29
|
An attenuator is an electronic device that reduces the power of a signal without appreciably distorting its waveform.
An attenuator is effectively the opposite of an amplifier, though the two work by different methods. While an amplifier provides gain, an attenuator provides loss, or gain less than 1.
Construction and usage
Attenuators are usually passive devices made from simple voltage divider networks. Switching between different resistances forms adjustable stepped attenuators, while continuously adjustable ones use potentiometers. For higher frequencies, precisely matched low-VSWR resistance networks are used.
Fixed attenuators in circuits are used to lower voltage, dissipate power, and to improve impedance matching. In measuring signals, attenuator pads or adapters are used to lower the amplitude of the signal a known amount to enable measurements, or to protect the measuring device from signal levels that might damage it. Attenuators are also used to 'match' impedance by lowering apparent SWR (Standing Wave Ratio).
Attenuator circuits
Basic circuits used in attenuators are pi pads (π-type) and T pads. These may be required to be balanced or unbalanced networks depending on whether the line geometry with which they are to be used is balanced or unbalanced. For instance, attenuators used with coaxial lines would be the unbalanced form while attenuators for use with twisted pair are required to be the balanced form.
Four fundamental attenuator circuit diagrams are given in the figures on the left. Since an attenuator circuit consists solely of passive resistor elements, it is both linear and reciprocal. If the circuit is also made symmetrical (this is usually the case since it is usually required that the input and output impedance Z1 and Z2 are equal), then the input and output ports are not distinguished, but by convention the left and right sides of the circuits are referred to as input and output, respectively.
Various tables and calculators are availab
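As a rough sketch of such a calculation in Python (assuming the standard design equations for symmetric T and π pads, which are not quoted in this excerpt):

def t_pad(attenuation_db, z0=50.0):
    """Series-arm and shunt-arm resistances (ohms) of a symmetric T attenuator."""
    k = 10 ** (attenuation_db / 20)             # voltage ratio
    r_series = z0 * (k - 1) / (k + 1)           # each of the two series arms
    r_shunt = 2 * z0 * k / (k * k - 1)          # the single shunt arm
    return r_series, r_shunt

def pi_pad(attenuation_db, z0=50.0):
    """Shunt-arm and series-arm resistances (ohms) of a symmetric pi attenuator."""
    k = 10 ** (attenuation_db / 20)
    r_shunt = z0 * (k + 1) / (k - 1)            # each of the two shunt arms
    r_series = z0 * (k * k - 1) / (2 * k)       # the single series arm
    return r_shunt, r_series

print(t_pad(10))    # roughly (26.0, 35.1) for a 10 dB, 50-ohm T pad
print(pi_pad(10))   # roughly (96.2, 71.2) for a 10 dB, 50-ohm pi pad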
|
https://en.wikipedia.org/wiki/Sedleian%20Professor%20of%20Natural%20Philosophy
|
The Sedleian Professor of Natural Philosophy is the name of a chair at the Mathematical Institute of the University of Oxford.
Overview
The Sedleian Chair was founded by Sir William Sedley who, by his will dated 20 October 1618, left the sum of £2,000 to the University of Oxford for purchase of lands for its endowment. Sedley's bequest took effect in 1621 with the purchase of an estate at Waddesdon in Buckinghamshire to produce the necessary income.
It is regarded as the oldest of Oxford's scientific chairs. Holders of the Sedleian Professorship have, since the mid 19th century, worked in a range of areas of applied mathematics and mathematical physics. They are simultaneously elected to fellowships at Queen's College, Oxford.
The Sedleian Professors in the past century have been Augustus Love (1899-1940), who was distinguished for his work in the mathematical theory of elasticity, Sydney Chapman (1946-1953), who is renowned for his contributions to the kinetic theory of gases and solar-terrestrial physics, George Temple (1953-1968), who made significant contributions to mathematical physics and the theory of generalized functions, Brooke Benjamin (1979-1995), who did highly influential work in the areas of mathematical analysis and fluid mechanics, and Sir John Ball (1996-2019), who is distinguished for his work in the mathematical theory of elasticity, materials science, the calculus of variations, and infinite-dimensional dynamical systems.
List of Sedleian Professors
Notes
References
Bibliography
Oxford Dictionary of National Biography, articles on Lapworth, Edwards, Wallis, Millington, Browne, Hornsby, Cooke, Price, Love, Chapman, Temple, Brooke Benjamin.
Professorships in mathematics
Professorships at the University of Oxford
1621 establishments in England
Mathematics education in the United Kingdom
Lists of people associated with the University of Oxford
The Queen's College, Oxford
|
https://en.wikipedia.org/wiki/Trakhtenbrot%27s%20theorem
|
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem (due to Boris Trakhtenbrot) states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable).
Trakhtenbrot's theorem implies that Gödel's completeness theorem (that is fundamental to first-order logic) does not hold in the finite case. Also it seems counter-intuitive that being valid over all structures is 'easier' than over just the finite ones.
The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes".
Mathematical formulation
We follow the formulations as in Ebbinghaus and Flum.
Theorem
Satisfiability for finite structures is not decidable in first-order logic.
That is, the set {φ | φ is a sentence of first-order logic that is satisfiable among finite structures} is undecidable.
Corollary
Let σ be a relational vocabulary with at least one relation symbol of arity at least two.
The set of σ-sentences valid in all finite structures is not recursively enumerable.
Remarks
This implies that Gödel's completeness theorem fails in the finite since completeness implies recursive enumerability.
It follows that there is no recursive function f such that: if φ has a finite model, then it has a model of size at most f(φ). In other words, there is no effective analogue to the Löwenheim–Skolem theorem in the finite.
Intuitive proof
This proof is taken from Chapter 10, section 4, 5 of Mathematical Logic by H.-D. Ebbinghaus.
As in the most common proof of Gödel's First Incompleteness Theorem, using the undecidability of the halting problem, for each Turing machine M there is a corresponding arithmetical sentence φ_M, effectively derivable from M, such that it is true if and only if M halts on the empty tape. Intuitively, φ_M asserts "there exists a natural number that is
|
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare%20Safety%20Scanner
|
Windows Live OneCare Safety Scanner (formerly Windows Live Safety Center and codenamed Vegas) was an online scanning, PC cleanup, and diagnosis service to help remove viruses, spyware/adware, and other malware. It was a free web service that was part of Windows Live.
On November 18, 2008, Microsoft announced the discontinuation of Windows Live OneCare, offering users a new free anti-malware suite, Microsoft Security Essentials, which became available in the second half of 2009. However, Windows Live OneCare Safety Scanner, under the same branding as Windows Live OneCare, was not discontinued at that time. The service was officially discontinued on April 15, 2011 and replaced with Microsoft Safety Scanner.
Overview
Windows Live OneCare Safety Scanner offered free online scanning and protection from threats. The scanner must be downloaded and installed on the user's computer before it can run a scan. The "Full Service Scan" looks for common PC health issues such as viruses, temporary files, and open network ports. It searches for and removes viruses, improves the computer's performance, and removes unnecessary clutter from the PC's hard disk. The user can choose between a "Full Scan" (which can be customized) and a "Quick Scan".
The "Full Scan" checks for viruses (comprehensive or quick scan), hard disk performance (disk fragmentation scan and/or disk cleanup scan), and network safety (open port scan). The "Quick Scan" scans only for viruses, and only in specific areas of the computer. The quick scan is faster than the full scan, hence the name.
The service also provides a virus database, information about online threats, and general computer security documentation and tools.
Limits
The virus scanner on the Windows Live OneCare Safety Scanner site runs a scan of the user's computer only when the site is visited. It does not run periodic scans of the system, and does not provide features to prevent viruses from infecting the computer
|
https://en.wikipedia.org/wiki/Roland%20Fra%C3%AFss%C3%A9
|
Roland Fraïssé (12 March 1920 – 30 March 2008) was a French mathematical logician.
Life
Fraïssé received his doctoral degree from the University of Paris in 1953. In his thesis, Fraïssé used the back-and-forth method to determine whether two model-theoretic structures were elementarily equivalent. This method of determining elementary equivalence was later formulated as the Ehrenfeucht–Fraïssé game.
Fraïssé worked primarily in relation theory. Another of his important works was the Fraïssé construction of a Fraïssé limit of finite structures.
He also formulated Fraïssé's conjecture on order embeddings, and introduced the notion of compensor in the theory of posets.
Most of his career was spent as Professor at the University of Provence in Marseille, France.
Selected publications
Sur quelques classifications des systèmes de relations, thesis, University of Paris, 1953; published in Publications Scientifiques de l'Université d'Alger, series A 1 (1954), 35–182.
Cours de logique mathématique, Paris: Gauthier-Villars Éditeur, 1967; second edition, 3 vols., 1971–1975; tr. into English and ed. by David Louvish as Course of Mathematical Logic, 2 vols., Dordrecht: Reidel, 1973–1974.
Theory of relations, tr. into English by P. Clote, Amsterdam: North-Holland, 1986; rev. ed. 2000.
References
French logicians
Model theorists
Academic staff of the University of Provence
20th-century French mathematicians
21st-century French mathematicians
1920 births
2008 deaths
Mathematical logicians
French male non-fiction writers
20th-century French philosophers
20th-century French male writers
University of Paris alumni
|
https://en.wikipedia.org/wiki/AP%20Calculus
|
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
|
https://en.wikipedia.org/wiki/Sodium%20tartrate
|
Sodium tartrate (Na2C4H4O6) is a salt used as an emulsifier and a binding agent in food products such as jellies, margarine, and sausage casings. As a food additive, it is known by the E number E335.
Because its crystal structure captures a very precise amount of water, it is also a common primary standard for Karl Fischer titration, a widely used technique to assay water content.
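As a quick arithmetic sketch (assuming the dihydrate Na2C4H4O6·2H2O, the form normally used as the Karl Fischer standard), the theoretical water content follows directly from the molar masses:

# Approximate atomic masses in g/mol
NA, C, H, O = 22.990, 12.011, 1.008, 15.999

anhydrous = 2 * NA + 4 * C + 4 * H + 6 * O      # Na2C4H4O6
water = 2 * (2 * H + O)                         # two waters of crystallization
dihydrate = anhydrous + water

print(f"theoretical water content: {100 * water / dihydrate:.2f} %")   # about 15.66 %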
See also
Monosodium tartrate
References
External links
Properties of Sodium Tartrate at linanwindow
Properties of Sodium Tartrate at JTBaker
Tartrates
Organic sodium salts
Food additives
E-number additives
|
https://en.wikipedia.org/wiki/Radia%20Perlman
|
Radia Joy Perlman (born December 18, 1951) is an American computer programmer and network engineer. She is a major figure in assembling the networks and technology that enable what we now know as the internet. She is most famous for her invention of the Spanning Tree Protocol (STP), which is fundamental to the operation of network bridges, while working for Digital Equipment Corporation, thus earning her the nickname "Mother of the Internet". Her innovations have made a huge impact on how networks self-organize and move data. She also made large contributions to many other areas of network design and standardization: for example, enabling today's link-state routing protocols to be more robust, scalable, and easy to manage.
Perlman was elected a member of the National Academy of Engineering in 2015 for contributions to Internet routing and bridging protocols. She holds over 100 issued patents. She was elected to the Internet Hall of Fame in 2014, and to the National Inventors Hall of Fame in 2016. She received lifetime achievement awards from USENIX in 2006 and from the Association for Computing Machinery’s SIGCOMM in 2010.
More recently she has invented the TRILL protocol to correct some of the shortcomings of spanning trees, allowing Ethernet to make optimal use of bandwidth. As of 2022, she was a Fellow at Dell Technologies.
Early life
Perlman was born in 1951 in Portsmouth, Virginia. She grew up in Loch Arbour, New Jersey. She is Jewish. Both of her parents worked as engineers for the US government. Her father worked on radar, and her mother was a mathematician by training who worked as a computer programmer. During her school years Perlman found math and science to be "effortless and fascinating", but had no problem achieving top grades in other subjects as well. She enjoyed playing the piano and French horn. While her mother helped her with her math homework, they mainly talked about literature and music. But she didn't feel like she fit underneath the stereot
|
https://en.wikipedia.org/wiki/Catamorphism
|
In category theory, the concept of catamorphism (from the Ancient Greek: "downwards" and "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra.
In functional programming, catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras.
The dual concept is that of an anamorphism, which generalizes unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism.
Definition
Consider an initial F-algebra (A, in) for some endofunctor F of some category into itself. Here in is a morphism from F(A) to A. Since it is initial, we know that whenever (X, f) is another F-algebra, i.e. f is a morphism from F(X) to X, there is a unique homomorphism h from (A, in) to (X, f). By the definition of the category of F-algebras, this h corresponds to a morphism from A to X, conventionally also denoted cata f, such that cata f ∘ in = f ∘ F(cata f). In the context of F-algebras, the uniquely specified morphism from the initial object is denoted by cata f and hence characterized by the following relationship: cata f ∘ in = f ∘ F(cata f).
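As an informal illustration (a rough Python analogue; the article's own examples below are in Haskell), the right fold on lists is the catamorphism for the list functor F(X) = 1 + A × X, an algebra for which is a pair of a "nil" value and a "cons" function:

# Informal Python analogue of a list catamorphism, i.e. foldr.
# An F-algebra for F(X) = 1 + A x X on a carrier X is a value nil_case: X
# together with a function cons_case: A x X -> X.
def cata_list(nil_case, cons_case, xs):
    out = nil_case
    for x in reversed(xs):                 # combine from the right, like foldr
        out = cons_case(x, out)
    return out

total = cata_list(0, lambda x, acc: x + acc, [1, 2, 3, 4])     # 10: the "sum" algebra
length = cata_list(0, lambda _x, acc: 1 + acc, [1, 2, 3, 4])   # 4: the "length" algebra
print(total, length)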
Terminology and history
Another notation found in the literature is ⦇f⦈. The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al. One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire" by Erik Meijer et al., which was in the context of the Squiggol formalism.
The general categorical definition was given by Grant Malcolm.
Examples
We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language.
Iteration
Iteration-step prescriptions lead to natural numbers as initial object.
Consider the functor fmaybe mapping a data type b to a data type fmaybe b, which contains a copy of each term from b as well as one additional term Nothing (in Haskell, this is what Maybe doe
|
https://en.wikipedia.org/wiki/Dry%20loop
|
A dry loop is an unconditioned leased pair of telephone wires from a telephone company. The pair does not provide dial tone or battery (continuous electric potential), as opposed to a wet pair, a line usually without dial tone but with battery.
A dry pair was originally used with a security system but more recently may also be used with digital subscriber line (DSL) service or an Ethernet extender to connect two locations, as opposed to a costlier means such as a Frame Relay. The pair in many cases goes through the local telephone exchange.
Wet pair naming comes from the battery used to sustain the loop, which was made from wet cells.
Many carriers market dry loops to independent DSL providers as a BANA for basic analog loop or in some locales PANA for plain analog loop, OPX (off-premises extension) line, paging circuit, or finally LADS (local area data service).
Local availability
In the United States, these circuits typically incur a monthly recurring charge of approximately $3.00 per ¼ mile, plus an additional handling fee of around $5–10.
In Canada, a CRTC ruling of 21 July 2003 requires that telcos (such as Bell Canada) permit dry loops, and some companies do provide this service. Naked DSL is currently provided by third-party DSL (digital subscriber line) vendors in the provinces of Ontario and Quebec, but incurs an additional dry loop fee (often $5 or more monthly, depending on the distance from the exchange). There is not yet widespread adoption, as this extra fee often renders dry-loop DSL more costly than comparable cable modem service in most locations. A Bell Canada "dry loop" DSL connection does supply battery, but the underlying phone line is non-functional except to call 958-ANAC, 9-1-1 or the 310-BELL telco business office.
See also
Current loop
Local-loop unbundling
Naked DSL
Permitted attached private lines
References
Local loop
|
https://en.wikipedia.org/wiki/Spurline
|
The spurline is a type of radio-frequency and microwave distributed element filter with band-stop (notch) characteristics, most commonly used with microstrip transmission lines. Spurlines usually exhibit moderate to narrow-band rejection, at about 10% around the central frequency.
Spurline filters are very convenient for dense integrated circuits because of their inherently compact design and ease of integration: they occupy surface that corresponds only to a quarter-wavelength transmission line.
Structure description
It consists of a normal microstrip line breaking into a pair of smaller coupled lines that rejoin after a quarter-wavelength distance. Only one of the input ports of the coupled lines is connected to the feed microstrip, as shown in the figure below. The orange area of the illustration is the microstrip transmission line conductor and the gray color the exposed dielectric.
The quarter-wavelength refers to λ/4, where λ is the wavelength corresponding to the central rejection frequency of the bandstop filter, measured, of course, in the microstrip line material. This is the most important parameter of the filter, as it sets the rejection band.
The distance between the two coupled lines can be selected appropriately to fine-tune the filter. The smaller the distance, the narrower the stop-band in terms of rejection. Of course that is limited by the circuit-board printing resolution, and it is usually considered at about 10% of the input microstrip width.
The gap between the input microstrip line and the one open-circuited line of the coupler has a negligible effect on the frequency response of the filter. Therefore, it is considered approximately equal to the distance of the two coupled lines.
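As a rough sizing illustration (a sketch only: it assumes a simple effective-permittivity model, and the helper name is made up for this example), the quarter-wavelength section length for a chosen notch frequency can be estimated as follows:

-- Estimate the physical length of the quarter-wavelength coupled section
-- for a notch frequency f (Hz), given the effective relative permittivity
-- of the microstrip. Fringing and dispersion effects are ignored.
quarterWaveLength :: Double -> Double -> Double
quarterWaveLength f epsEff = lambdaG / 4
  where
    c       = 299792458.0            -- speed of light in vacuum, m/s
    lambdaG = c / (f * sqrt epsEff)  -- guided wavelength in the line material

main :: IO ()
main = print (quarterWaveLength 2.4e9 6.7)   -- roughly 0.012 m, i.e. about 12 mm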
Printed antennae
Spurlines can also be used in printed antennae such as the planar inverted-F antenna. The additional resonances can be designed to widen the antenna bandwidth or to create multiple bands, for instance, for a tri-band mobile phone.
History
A spurline filter was first proposed by S
|
https://en.wikipedia.org/wiki/Simulated%20fluorescence%20process%20algorithm
|
The Simulated Fluorescence Process (SFP) is a computing algorithm used for scientific visualization of 3D data from, for example, fluorescence microscopes. By modeling a physical light/matter interaction process, an image can be computed which shows the data as it would have appeared in reality when viewed under these conditions.
Principle
The algorithm considers a virtual light source producing excitation light that illuminates the object. This casts shadows either on parts of the object itself or on other objects below it. The interaction between the excitation light and the object provokes the emission light, which also interacts with the object before it finally reaches the eye of the viewer.
See also
Computer graphics lighting
Rendering (computer graphics)
References
H. T. M. van der Voort, G. J. Brakenhoff and M. W. Baarslag. "Three-dimensional visualization methods for confocal microscopy", Journal of Microscopy, Vol. 153, Pt 2, February 1989, pp. 123–132.
Noordmans, Herke Jan, Hans TM van der Voort, and Arnold WM Smeulders. "Spectral volume rendering." IEEE transactions on visualization and computer graphics 6.3 (2000): 196–207.
External links
Freeware SFP renderer
Computational science
Computer graphics algorithms
Visualization (graphics)
Microscopes
Microscopy
Fluorescence
|
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20gamma%20function
|
The gamma function is an important special function in mathematics. Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations.
Integers and half-integers
For positive integer arguments, the gamma function coincides with the factorial. That is,
Γ(n) = (n − 1)!,
and hence
Γ(1) = 1, Γ(2) = 1, Γ(3) = 2, Γ(4) = 6,
and so on. For non-positive integers, the gamma function is not defined.
For positive half-integers, the function values are given exactly by
Γ(n/2) = √π (n − 2)!! / 2^((n − 1)/2)   (for odd positive integers n),
or equivalently, for non-negative integer values of n:
Γ(1/2 + n) = (2n − 1)!!/2^n √π = (2n)!/(4^n n!) √π,
where n!! denotes the double factorial. In particular,
Γ(1/2) = √π ≈ 1.7725
Γ(3/2) = √π/2 ≈ 0.8862
Γ(5/2) = 3√π/4 ≈ 1.3293
Γ(7/2) = 15√π/8 ≈ 3.3234
and by means of the reflection formula,
Γ(−1/2) = −2√π ≈ −3.5449
Γ(−3/2) = 4√π/3 ≈ 2.3633
Γ(−5/2) = −8√π/15 ≈ −0.9453
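As a quick numerical check of the half-integer formula above, a short Haskell sketch (the helper names are ours):

-- Gamma(n + 1/2) = (2n - 1)!! / 2^n * sqrt(pi), for n = 0, 1, 2, ...
doubleFactorial :: Integer -> Integer
doubleFactorial n
  | n <= 0    = 1
  | otherwise = n * doubleFactorial (n - 2)

gammaHalf :: Integer -> Double
gammaHalf n = fromInteger (doubleFactorial (2 * n - 1)) / 2 ^ n * sqrt pi

main :: IO ()
main = mapM_ (print . gammaHalf) [0 .. 3]
-- prints approximately 1.7725, 0.8862, 1.3293, 3.3234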
General rational argument
In analogy with the half-integer formula,
where denotes the th multifactorial of . Numerically,
.
As tends to infinity,
where is the Euler–Mascheroni constant and denotes asymptotic equivalence.
It is unknown whether these constants are transcendental in general, but and were shown to be transcendental by G. V. Chudnovsky. has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that , , and are algebraically independent.
The number is related to the lemniscate constant by
and it has been conjectured by Gramain that
where is the Masser–Gramain constant , although numerical work by Melquiond et al. indicates that this conjecture is false.
Borwein and Zucker have found that can be expressed algebraically in terms of , , , , and where is a complete elliptic integral of the first kind. This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example:
No similar relations are known for or other denominator
|
https://en.wikipedia.org/wiki/Ethernet%20extender
|
An Ethernet extender (also network extender or LAN extender) is any device used to extend an Ethernet or network segment beyond its inherent distance limitation, which is approximately 100 metres for most common forms of twisted-pair Ethernet. These devices employ a variety of transmission technologies and physical media (wireless, copper wire, fiber-optic cable, coaxial cable).
The extender forwards traffic between LANs transparent to higher network-layer protocols over distances that far exceed the limitations of standard Ethernet.
Options
Extenders that use copper wire include 2- and 4-wire variants using unconditioned copper wiring to extend a LAN. Network extenders use various methods (line encodings), such as TC-PAM, 2B1Q or DMT, to transmit information. While transmitting over copper wire does not allow for the speeds that fiber-optic transmission does, it allows the use of existing voice-grade copper or CCTV coaxial cable wiring. Copper-based Ethernet extenders must be used on unconditioned wire (without load coils), such as unused twisted pairs and alarm circuits.
Connecting a private LAN between buildings or more distant locations is a challenge. Wi-Fi requires a clear line-of-sight, special antennas, and is subject to weather. If the buildings are within 100m, a normal Ethernet cable segment can be used, with due consideration of potential grounding problems between the locations. Up to 200m, it may be possible to set up an ordinary Ethernet bridge or router in the middle, if power and weather protection can be arranged.
Fiber optic connection is ideal, allowing connections of over a km and high speeds with no electrical shock or surge issues, but is technically specialized and expensive for both the end equipment interfaces and the cable. Damage to the cable requires special skills to repair or total replacement.
Specialized equipment can inter-connect two LANs over a single twisted pair of wires, such as the Moxa IEX Series, Cisco LRE (Long Reach Etherne
|
https://en.wikipedia.org/wiki/National%20Information%20Assurance%20Certification%20and%20Accreditation%20Process
|
The National Information Assurance Certification and Accreditation Process (NIACAP) formerly was the minimum-standard process for the certification and accreditation of computer and telecommunications systems that handle U.S. national-security information. NIACAP was derived from the Department of Defense Certification and Accreditation Process (DITSCAP), and it played a key role in the National Information Assurance Partnership.
The Committee on National Security Systems (CNSS) Policy (CNSSP) No. 22 dated January 2012 cancelled CNSS Policy No. 6, “National Policy on Certification and Accreditation of National Security Systems,” dated October 2005, and National Security Telecommunications and Information Systems Security Instruction (NSTISSI) 1000, “National Information Assurance Certification and Accreditation Process (NIACAP),” dated April 2000. CNSSP No. 22 also states that "The CNSS intends to adopt National Institute of Standards and Technology (NIST) issuances where applicable. Additional CNSS issuances will occur only when the needs of NSS are not sufficiently addressed in a NIST document. Annex B identifies the guidance documents, which includes NIST Special Publications (SP), for establishing an organization-wide risk management program." It directs the organization to make use of NIST Special Publication 800-37, which implies that the Risk management framework (RMF) STEP 6 – AUTHORIZE INFORMATION SYSTEM replaces the Certification and Accreditation process for National Security Systems, just as it did for all other areas of the Federal government who fall under SP 800-37 Rev. 1.
References
Computer security accreditations
United States Department of Defense
|
https://en.wikipedia.org/wiki/Wanted%20sa%20Radyo
|
Wanted sa Radyo () is a public affairs show that airs on Monday to Friday from 2:00 to 4:00 pm (PST) on 92.3 Radyo5 True FM (DWFM) with simulcast on television via One PH and online via livestreaming on the "Raffy Tulfo in Action" YouTube channel and Facebook page. It also replays on Tuesday to Friday from 2:30 to 4:30 am, Saturdays from 2:00 to 4:00 pm, and Sundays from 2:00 to 4:00 am and 9:00 pm to 11:00 pm on One PH. It is hosted by Senator Raffy Tulfo, alongside Sharee Roman. Whenever Tulfo is absent from the show mostly due to his duties as a senator, he is filled in by his daughter Maricel Tulfo-Tungol, his son-in-law Atty. Gareth Tungol, his co-parent-in-law Atty. Danilo Tungol, Atty. Blessie Abad, Atty. Gabriel Ilaya, Atty. Ina Magpale, Atty. Gail Dela Cruz, Atty. Joren Tan, Atty. Freddie Villamor, Atty. Pau Cruz, Atty. Paul Castillo, Aanaan Singh, or Marsh Salcedo.
History
Wanted sa Radyo first aired on DZXL-AM from 1994 to 2010. It transferred to the newly-launched Radyo5 92.3 News FM (now 92.3 Radyo5 True FM) in 2010, while retaining Raffy Tulfo and Niña Taduran as its hosts with the same airtime from 2:00 pm to 4:00 pm.
Its television simulcast began on February 21, 2011, with the launch of AksyonTV. From January to May 2012, the show temporarily aired for one hour, from 12:30 pm to 1:30 pm, whenever Radyo5 and AksyonTV carried the News5 coverage of the impeachment trial of then-Chief Justice Renato Corona from 1:30 pm onwards. On December 23, 2013, the Radyo5 studios used by the show moved from the TV5 Studio Complex in Novaliches, Quezon City to the TV5 Media Center in Mandaluyong.
On October 15, 2018, Niña Taduran left the show to run for party-list representative for ACT-CIS in 2019. She was replaced by Sharee Roman. The TV simulcast was carried over to One PH in January 2019, when AksyonTV was rebranded as 5 Plus.
From October 18, 2021, the show moved from TV5 Media Center to its new studio at the Raffy Tulfo Action Center in Quezon City and it wen
|
https://en.wikipedia.org/wiki/Author%20Domain%20Signing%20Practices
|
In computing, Author Domain Signing Practices (ADSP)
is an optional extension to the DKIM E-mail authentication
scheme, whereby a domain can publish the signing practices it adopts when relaying mail on behalf of associated authors.
ADSP was adopted as a standards track RFC 5617 in August 2009, but declared "Historic" in November 2013 after "...almost no deployment and use in the 4 years since..."
Concepts
Author address
The author address is the one specified in the From: header field defined in RFC 5322. In the unusual cases where more than one address is defined in that field, RFC 5322 provides for a Sender: field to be used instead.
The domains in 5322-From addresses are not necessarily the same as in the more elaborated Purported Responsible Address covered by Sender ID specified in RFC 4407. The domain in a 5322-From address is also not necessarily the same as in the envelope sender address defined in RFC 5321, also known as SMTP MAIL FROM, envelope-From, 5321-From, or , optionally protected by SPF specified in RFC 7208.
Author Domain Signature
An Author Domain Signature is a valid DKIM signature in which the domain name of the DKIM signing entity, i.e., the d tag in the DKIM-Signature header field, is the same as the domain name in the author address.
This binding recognizes a higher value for author domain signatures than other valid signatures that may happen to be found in a message. In fact, it proves that the entity that controls the DNS zone for the author — and hence also the destination of replies to the message's author — has relayed the author's message. Most likely, the author has submitted the message through the proper message submission agent. Such message qualification can be verified independently of any published domain signing practice.
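A toy illustration of this binding (a sketch only; real verification must also validate the DKIM signature itself, and the function name is made up here):

import Data.Char (toLower)

-- True when the DKIM d= domain is exactly the domain of the author
-- (5322.From) address, compared case-insensitively.
isAuthorDomainSignature :: String -> String -> Bool
isAuthorDomainSignature dTag authorAddr =
  lower dTag == lower (drop 1 (dropWhile (/= '@') authorAddr))
  where
    lower = map toLower

main :: IO ()
main = do
  print (isAuthorDomainSignature "example.com" "author@example.com")        -- True
  print (isAuthorDomainSignature "mailer.example.net" "author@example.com") -- False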
Author Domain Signing Practices
The practices are published in a DNS record by the author domain. For an author address at example.com, it may be set as
_adsp._domainkey.example.com. in txt "dkim=unknown"
Three possible s
|
https://en.wikipedia.org/wiki/Melzer%27s%20reagent
|
Melzer's reagent (also known as Melzer's iodine reagent, Melzer's solution or informally as Melzer's) is a chemical reagent used by mycologists to assist with the identification of fungi, and by phytopathologists for fungi that are plant pathogens.
Composition
Melzer's reagent is an aqueous solution of chloral hydrate, potassium iodide, and iodine. Depending on the formulation, it consists of approximately 2.50-3.75% potassium iodide and 0.75–1.25% iodine, with the remainder of the solution being 50% water and 50% chloral hydrate. Melzer's is toxic to humans if ingested due to the presence of iodine and chloral hydrate. Due to the legal status of chloral hydrate, Melzer's reagent is difficult to obtain in the United States.
In response to difficulties obtaining chloral hydrate, scientists at Rutgers formulated Visikol (compatible with Lugol's iodine) as a replacement. In 2019, research showed that Visikol behaves differently from Melzer's reagent in several key situations and concluded that it should not be recommended as a viable substitute.
Melzer's reagent is part of a class of iodine/potassium iodide (IKI)-containing reagents used in biology; Lugol's iodine is another such formula.
Reactions
Melzer's is used by exposing fungal tissue or cells to the reagent, typically in a microscope slide preparation, and looking for any of three color reactions:
Amyloid or Melzer's-positive reaction, in which the material reacts blue to black.
Pseudoamyloid or dextrinoid reaction, in which the material reacts brown to reddish brown.
Inamyloid or Melzer's-negative, in which the tissues do not change color, or react faintly yellow-brown.
Within the amyloid reaction, two types can be distinguished:
Euamyloid reaction, in which the material turns blue without potassium hydroxide (KOH)-pretreatment.
Hemiamyloid reaction, in which the material turns red in Lugol's solution, but shows no reaction in Melzer's reagent; when KOH-pretreated it turns blue in both reagents (hemiamyloidity).
M
|
https://en.wikipedia.org/wiki/Silver%20telluride
|
Silver telluride (Ag2Te) is a chemical compound, a telluride of silver, also known as disilver telluride or silver(I) telluride. It forms a monoclinic crystal. In a wider sense, silver telluride can be used to denote AgTe (silver(II) telluride, a metastable compound) or Ag5Te3.
Silver(I) telluride occurs naturally as the mineral hessite, whereas silver(II) telluride is known as empressite.
Silver telluride is a semiconductor which can be doped both n-type and p-type. Stoichiometric Ag2Te has n-type conductivity. On heating, silver is lost from the material.
Non-stoichiometric silver telluride has shown extraordinary magnetoresistance.
Synthesis
Porous silver telluride (AgTe) is synthesized by an electrochemical deposition method. The experiment can be performed using a potentiostat and a three-electrode cell with 200 mL of 0.5 M sulfuric acid electrolyte solution containing Ag nanoparticles at room temperature. Silver paste used to attach the tungsten ditelluride (WTe2) leaches into the electrolyte, which causes small amounts of Ag to dissolve in it. The electrolyte is stirred with a magnetic bar to remove hydrogen bubbles. A silver–silver chloride electrode and a platinum wire serve as the reference and counter electrodes. All potentials are measured against the reference electrode and calibrated using the equation E_RHE = E_Ag/AgCl + 0.059 pH + 0.197. To grow the porous AgTe, the WTe2 is treated with multiple cyclic voltammetry sweeps between −1.2 and 0 V at a scan rate of 100 mV/s.
Glutathione-coated Ag2Te nanoparticles can be synthesized by preparing a 9 mL solution containing 10 mM AgNO3, 5 mM Na2TeO3, and 30 mM glutathione, and placing that solution in an ice bath. N2H4 is added to the solution and the reaction is allowed to proceed for 5 min under constant stirring. The nanoparticles are then washed three times by centrifugation; after the three washes the nanoparticles are suspended in PBS and washed again
|
https://en.wikipedia.org/wiki/Cellular%20architecture
|
Cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications.
Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM.
Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software.
See also
Cellular automaton
External links
Cellular architecture builds next generation supercomputers
ORNL, IBM, and the Blue Gene Project
Energy, IBM are partners in biological supercomputing project
Cell-based Architecture
Parallel computing
Computer architecture
Classes of computers
|
https://en.wikipedia.org/wiki/Gustafson%27s%20law
|
In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline. To put it another way, it is the theoretical "slowdown" of an already parallelized task if running on a serial machine. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.
Definition
Gustafson estimated the speedup S of a program gained by using parallel computing as follows:
S = s + p × N
where
S is the theoretical speedup of the program with parallelism (scaled speedup);
N is the number of processors;
s and p are the fractions of time spent executing the serial parts and the parallel parts of the program on the parallel system, where s + p = 1.
Alternatively, S can be expressed using p:
S = (1 − p) + p × N.
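As a quick numerical illustration (a minimal sketch; the function name is ours), a serial fraction s = 0.1 on N = 64 processors gives a scaled speedup of 0.1 + 0.9 × 64 = 57.7:

-- Scaled speedup from Gustafson's law for serial fraction s and N processors.
gustafson :: Double -> Double -> Double
gustafson s n = s + (1 - s) * n

main :: IO ()
main = print (gustafson 0.1 64)   -- prints approximately 57.7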
Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is of an execution workload that does not change with respect to the improvement of the resources. Gustafson's law instead proposes that programmers tend to increase the size of problems to fully exploit the computing power that becomes available as the resources improve.
Gustafson and his colleagues further observed from their workloads that time for the serial part typically does not grow as the problem and the system scale, that is, is fixed. This gives a linear model between the processor count and the speedup with slope , as shown in the figure above (which uses different notations: for and for ). Also, scales linearly with rather than exponentially in the Amdahl's Law. With these observations, Gustafson "expect[ed] to extend [their] success [on parallel computing] to a broader range of applications and even larger values for ".
The impact of Gustafson's law was to shift research goals to select or reformulate problems
|
https://en.wikipedia.org/wiki/Open%20Graphics%20Project
|
The Open Graphics Project (OGP) was founded with the goal to design an open-source hardware / open architecture and standard for graphics cards, primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and had aimed to eventually produce a full-featured and competitive end-user graphics card.
OGD1
The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card could not compete with graphics cards on the market at the time in terms of performance or functionality, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would attract enough interest to gain some profit and attract investors for the next card, since it was expected to cost around US$2,000,000 to start the production of a specialized ASIC design. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, some six years after the project began and 3 years after the appearance of the first prototypes.
Full specifications will be published and open-source device drivers will be released. All RTL will be released. Source code to the device drivers and BIOS will be released under the MIT and BSD licenses. The RTL (in Verilog) used for the FPGA and the RTL used for the ASIC are planned to be released under the GNU General Public License (GPL).
It has 256 MiB of DDR RAM, is passively cooled, and follows the DDC, EDID, DPMS and VBE VESA standards. TV-out is also planned.
Versioning schema
Versioning schema for OGD1 will go like this:
{Root Number} – {Video Memory}{Video Output Interfaces}{Special Options e.g.: A1 OGA firmware installed}
OGD1 components
Main components of OGD1 graphics car
|
https://en.wikipedia.org/wiki/Handel-C
|
Handel-C is a high-level programming language which targets low-level hardware, most commonly used in the programming of FPGAs. It is a rich subset of C, with non-standard extensions to control hardware instantiation with an emphasis on parallelism. Handel-C is to hardware design what the first high-level programming languages were to programming CPUs. Unlike many other design languages that target a specific architecture Handel-C can be compiled to a number of design languages and then synthesised to the corresponding hardware. This frees developers to concentrate on the programming task at hand rather than the idiosyncrasies of a specific design language and architecture.
Additional features
The subset of C includes all common C language features necessary to describe complex algorithms. As in many embedded C compilers, floating-point data types were omitted. Floating-point arithmetic is supported through external libraries that are very efficient.
Parallel programs
In order to facilitate a way to describe parallel behavior some of the CSP keywords are used, along with the general file structure of Occam.
For example:
par {
++c;
a = d + e;
b = d + e;
}
Channels
Channels provide a mechanism for message passing between parallel threads. Channels can be defined as asynchronous or synchronous (with or without an inferred storage element respectively). A thread writing to a synchronous channel will be immediately blocked until the corresponding listening thread is ready to receive the message. Likewise the receiving thread will block on a read statement until the sending thread executes the next send. Thus they may be used as a means of synchronizing threads.
par {
chan int a; // declare a synchronous channel
int x;
// begin sending thread
seq (i = 0; i < 10; i++) {
a ! i; // send the values 0 to 9 sequentially into the channel
}
// begin receiving thread
seq (j = 0; j < 10; j++) {
a ? x; // pe
|
https://en.wikipedia.org/wiki/ODMRP
|
In wireless networking, On-Demand Multicast Routing Protocol is a protocol for routing multicast and unicast traffic throughout Ad hoc wireless mesh networks.
ODMRP creates routes on demand, rather than proactively creating routes as OLSR does. This suffers from a route acquisition delay, although it helps reduce network traffic in general. To help reduce the problem of this delay, some implementations send the first data packet along with the route discovery packet.
Because some links may be asymmetric, the path from one node to another is not necessarily the same as the reverse path of these nodes.
See also
AODV
List of ad hoc routing protocols
Mesh Networks
External links
IETF Draft The latest draft specification published by the IETF
Original publication The first paper presenting ODMRP.
Wireless networking
Routing
Routing algorithms
Ad hoc routing protocols
|
https://en.wikipedia.org/wiki/Joel%20Spencer
|
Joel Spencer (born April 20, 1946) is an American mathematician. He is a combinatorialist who has worked on probabilistic methods in combinatorics and on Ramsey theory. He received his doctorate from Harvard University in 1970, under the supervision of Andrew Gleason. He is currently a professor at the Courant Institute of Mathematical Sciences of New York University. Spencer's work was heavily influenced by Paul Erdős, with whom he coauthored many papers (giving him an Erdős number of 1).
In 1963, while studying at the Massachusetts Institute of Technology, Spencer became a Putnam Fellow. In 1984 Spencer received a Lester R. Ford Award. He was an Erdős Lecturer at Hebrew University of Jerusalem in 2001. In 2012 he became a fellow of the American Mathematical Society.
He was elected as a fellow of the Society for Industrial and Applied Mathematics in 2017, "for contributions to discrete mathematics and theory of computing, particularly random graphs and networks, Ramsey theory, logic, and randomized algorithms". In 2021 he received the Leroy P. Steele Prize for Mathematical Exposition with his coauthor Noga Alon for their book The Probabilistic Method.
Selected publications
Probabilistic methods in combinatorics, with Paul Erdős, New York: Academic Press, 1974.
Ramsey theory, with Bruce L. Rothschild and Ronald L. Graham, New York: Wiley, 1980; 2nd ed., 1990.
Ten lectures on the probabilistic method, Philadelphia: Society for Industrial and Applied Mathematics, 1987; 2nd ed., 1994.
The strange logic of random graphs, Berlin: Springer-Verlag, 2001.
The probabilistic method, with Noga Alon, New York: Wiley, 1992; 2nd ed., 2000; 3rd ed., 2008.
Deterministic random walks on regular trees, American Mathematical Society, New York, 2008.
Asymptopia, with Laura Florescu, American Mathematical Society, 2014.
See also
Packing in a hypergraph
References
External links
Joel Spencer's Website
1946 births
20th-century American mathematicians
21st-century Amer
|
https://en.wikipedia.org/wiki/Playout
|
In broadcasting, channel playout is the generation of the source signal of a radio or television channel produced by a broadcaster, coupled with the transmission of this signal for primary distribution or direct-to-audience distribution via any network. Such radio or television distribution networks include terrestrial broadcasting (analogue or digital radio), cable networks, satellites (either for primary distribution intended for cable television headends or for direct reception, DTH / DBS), IPTV, OTT Video, point-to-point transport over managed networks or the public Internet, etc.
The television channel playout happens in master control room (MCR) in a playout area, which can be either situated in the central apparatus room or in purposely built playout centres, which can be owned by a broadcaster or run by an independent specialist company that has been contracted to handle the playout for a number of channels from different broadcasters.
Some of the larger playout centres in Europe, Southeast Asia and the United States handle well in excess of 50 radio and television "feeds". Feeds will often consist of several different versions of a core service, often different language versions or with separately scheduled content, such as local opt outs for news or promotions.
Playout systems
Centralcasting is multi-channel playout that generally uses broadcast automation systems with broadcast programming applications. These systems generally work in a similar way, controlling video servers, video tape recorder (VTR) devices, Flexicarts, audio mixing consoles, vision mixers and video routers, and other devices using a serial communications 9-Pin Protocol (RS-232 or RS-422). This provides deterministic control, enabling frame accurate playback, Instant replay or video switching. Many systems consist of a front end operator interface on a separate platform to the controllers – e.g. a Windows GUI will present a friendly easy to use method of editing a playlist, but actua
|
https://en.wikipedia.org/wiki/Andrew%20M.%20Gleason
|
Andrew Mattei Gleason (November 4, 1921 – October 17, 2008) was an American mathematician who made fundamental contributions to widely varied areas of mathematics, including the solution of Hilbert's fifth problem, and was a leader in reform and innovation in teaching at all levels.
name="mactutor"></ref> Gleason's theorem in quantum logic and the Greenwood–Gleason graph, an important example in Ramsey theory, are named for him.
As a young World War II naval officer, Gleason broke German and Japanese military codes. After the war he spent his entire academic career at Harvard University, from which he retired in 1992. His numerous academic and scholarly leadership posts included chairmanship of the Harvard Mathematics Department and the Harvard Society of Fellows, and presidency of the American Mathematical Society. He continued to advise the United States government on cryptographic security, and the Commonwealth of Massachusetts on education for children, almost until the end of his life.
Gleason won the Newcomb Cleveland Prize in 1952 and the Gung–Hu Distinguished Service Award of the American Mathematical Society in 1996. He was a member of the National Academy of Sciences and of the American Philosophical Society, and held the Hollis Chair of Mathematics and Natural Philosophy at Harvard.
He was fond of saying that proofs "really aren't there to convince you that something is true; they're there to show you why it is true." The Notices of the American Mathematical Society called him "one of the quiet giants of twentieth-century mathematics, the consummate professor dedicated to scholarship, teaching, and service in equal measure."
Biography
Gleason was born in Fresno, California, the youngest of three children;
his father Henry Gleason was a botanist and a member of the Mayflower Society, and his mother was the daughter of Swiss-American winemaker Andrew Mattei.
His older brother Henry Jr. became a linguist.
He grew up in Bronxville, New York, where his
|
https://en.wikipedia.org/wiki/List%20of%20spreadsheet%20software
|
The following is a list of spreadsheets.
Free and open-source software
Cloud and on-line spreadsheets
Collabora Online Calc — Enterprise-ready LibreOffice.
EtherCalc (successor to SocialCalc, which is based on wikiCalc)
LibreOffice Online Calc
ONLYOFFICE - Community Server Edition
Sheetster – "Community Edition" is available under the Affero GPL
Simple Spreadsheet
Tiki Wiki CMS Groupware has included a spreadsheet since 2004 and migrated to jQuery.sheet in 2010.
Spreadsheets that are parts of suites
Apache OpenOffice Calc — for MS Windows, Linux and the Apple Macintosh. Started as StarOffice, later as OpenOffice.org. It has not received a major update since 2014 and security fixes have not been prompt.
Collabora Online Calc — Enterprise-ready LibreOffice, included with Online, Mobile and Desktop apps
Gnumeric — for Linux. Started as the GNOME desktop spreadsheet. Reasonably lightweight but has very advanced features.
KSpread — following the fork of the Calligra Suite from KOffice in mid-2010, superseded by KCells in KOffice and Sheets in the Calligra Suite.
LibreOffice Calc — developed for MS Windows, Linux, BSD and Apple Macintosh (Mac) operating systems by The Document Foundation. The Document Foundation was formed in mid-2010 by several large organisations such as Google, Red Hat, Canonical (Ubuntu) and Novell along with the OpenOffice.org community (developed by Sun) and various OpenOffice.org forks, notably Go-oo. Go-oo had been the "OpenOffice" used in Ubuntu and elsewhere. Started as StarOffice in the late 1990s, it became OpenOffice under Sun and then LibreOffice in mid-2010. The Document Foundation works with external organisations such as NeoOffice and Apache Foundation to help drive all three products forward.
NeoOffice Calc — for Mac. Started as an OpenOffice.org port to Mac, but by using the Mac-specific Aqua user interface, instead of the more widely used X11 windowing server, it aimed to be far more stable than the normal ports of
|
https://en.wikipedia.org/wiki/West%20Virginia%20Broadband
|
West Virginia Broadband is a wireless community network located in Braxton County, West Virginia, operated by local volunteers and coordinated by the Gilmer-Braxton Research Zone. The effort gained attention through a National Public Radio story and posts by MuniWireless and SmartMobs bloggers detailing how modified off-the-shelf Wi-Fi adapters were used to connect 7 communities with wireless internet for a total cost of little more than 4,000 US dollars. The research group now coordinates wireless technology training throughout the United States.
References
Wireless network organizations
Community networks
Braxton County, West Virginia
|
https://en.wikipedia.org/wiki/Electronic%20system-level%20design%20and%20verification
|
Electronic system level (ESL) design and verification is an electronic design methodology, focused on higher abstraction level concerns. The term Electronic System Level or ESL Design was first defined by Gartner Dataquest, an EDA-industry-analysis firm, on February 1, 2001. It is defined in ESL Design and Verification as: "the utilization of appropriate abstractions in order to increase comprehension about a system, and to enhance the probability of a successful implementation of functionality in a cost-effective manner."
The basic premise is to model the behavior of the entire system using a high-level language such as C, C++, or using graphical "model-based" design tools. Newer languages are emerging that enable the creation of a model at a higher level of abstraction including general purpose system design languages like SysML as well as those that are specific to embedded system design like SMDL and SSDL. Rapid and correct-by-construction implementation of the system can be automated using EDA tools such as high-level synthesis and embedded software tools, although much of it is performed manually today. ESL can also be accomplished through the use of SystemC as an abstract modeling language.
ESL is an established approach at many of the world’s leading System-on-a-chip (SoC) design companies, and is being used increasingly in system design. From its genesis as an algorithm modeling methodology with 'no links to implementation', ESL is evolving into a set of complementary methodologies that enable embedded system design, verification, and debugging through to the hardware and software implementation of custom SoC, system-on-FPGA, system-on board, and entire multi-board systems.
Design and verification are two distinct disciplines within this methodology. Some practitioners keep the two elements separate, while others advocate closer integration between design and verification.
Design
Whether ESL or other systems, design refers to "the concurrent d
|
https://en.wikipedia.org/wiki/CD%20publishing
|
CD publishing is the use of CD duplication systems to create a large number of unique discs. For instance, storing a unique serial number on each copy of a software application disc would be considered CD publishing.
The term CD publishing is believed to have been coined by the Rimage Corporation as part of a marketing program which referred to CD-R discs as "digital paper." Automated disc production and printing systems, such as those made by Rimage, can be shared on a computer network much like an office printer to facilitate the creation of unique discs. This is the root of both the digital paper and CD publishing terms.
The extension into CD publishing is a distinct advantage of CD duplication systems over traditional CD replication - where large quantities of identical discs must be made.
External links
Understanding CD-R & CD-RW: Duplication, Replication, and Publishing @ the Optical Storage Technology Association
Computer storage media
Optical disc authoring
|
https://en.wikipedia.org/wiki/Insyde%20Software
|
Insyde Software is a company that specializes in UEFI system firmware and engineering support services, primarily for OEM and ODM computer and component device manufacturers. It is listed on the Gre Tai Market of Taiwan and headquartered in Taipei, with offices in Westborough, Massachusetts, and Portland, Oregon. The market capitalization of the company's common shares is currently around $115M.
Overview
The company's product portfolio includes InsydeH2O BIOS (Insyde Software's implementation of the Intel Platform Innovation Framework for UEFI/EFI), BlinkBoot, a UEFI-based boot loader for enabling Internet of Things devices, and Supervyse, which is a full-featured systems management/BMC firmware for providing out-of-band remote management for server computers.
Insyde Software was formed when it purchased the BIOS assets of SystemSoft Corporation (NASDAQ:SYSF) in October, 1998. Initially
Insyde Software was a privately held company that included investments from Intel Pacific Inc., China Development Industrial Bank, Professional Computer Technology Limited (PCT), company management and selected employees. At that time, Insyde Software's management team consisted of Jeremy Wang, Chairman (also the Chairman of PCT); Jonathan Joseph, President (a former founder of SystemSoft); Hansen Liou, the General Manager of Taiwan Operations and Asia-Pacific Sales, and Stephen Gentile, the Vice President of Marketing.
Shortly after the initial investment, the company was introduced by Intel to a new BIOS coding architecture called EFI (now UEFI) and the two companies began working together on it. In 2001, the two companies entered into a joint development agreement and Insyde’s first shipment of the technology occurred in October 2003 as InsydeH2O UEFI BIOS. Since that time, UEFI has become the mainstay of Insyde’s business.
On 23 January 2003, Insyde Software announced its initial public offering on the GreTai Securities Market (GTSM) based in Taipei, Taiwan
|
https://en.wikipedia.org/wiki/Cypress%20PSoC
|
PSoC (programmable system on a chip) is a family of microcontroller integrated circuits by Cypress Semiconductor. These chips include a CPU core and mixed-signal arrays of configurable integrated analog and digital peripherals.
History
In 2002, Cypress began shipping commercial quantities of the PSoC 1. To promote the PSoC, Cypress sponsored a "PSoC Design Challenge" in Circuit Cellar magazine in 2002 and 2004.
In April 2013, Cypress released the fourth generation, PSoC 4. The PSoC 4 features a 32-bit ARM Cortex-M0 CPU, with programmable analog blocks (operational amplifiers and comparators), programmable digital blocks (PLD-based UDBs), programmable routing and flexible GPIO (route any function to any pin), a serial communication block (for SPI, UART, I²C), a timer/counter/PWM block and more.
PSoC is used in devices as simple as Sonicare toothbrushes and Adidas sneakers, and as complex as the TiVo set-top box. One PSoC implements capacitive sensing for the touch-sensitive scroll wheel on the Apple iPod click wheel.
In 2014, Cypress extended the PSoC 4 family by integrating a Bluetooth Low Energy radio along with a PSoC 4 Cortex-M0-based SoC in a single, monolithic die.
In 2016, Cypress released PSoC 4 S-Series, featuring ARM Cortex-M0+ CPU.
Overview
A PSoC integrated circuit is composed of a core, configurable analog and digital blocks, and programmable routing and interconnect. The configurable blocks in a PSoC are the biggest difference from other microcontrollers.
PSoC has three separate memory spaces: paged SRAM for data, Flash memory for instructions and fixed data, and I/O registers for controlling and accessing the configurable logic blocks and functions. The device is created using SONOS technology.
PSoC resembles an ASIC: blocks can be assigned a wide range of functions and interconnected on-chip. Unlike an ASIC, there is no special manufacturing process required to create the custom configuration — only startup code that is created by Cypress'
|
https://en.wikipedia.org/wiki/Concurrent%20Haskell
|
Concurrent Haskell extends Haskell 98 with explicit concurrency. Its two main underlying concepts are:
A primitive type MVar α implementing a bounded/single-place asynchronous channel, which is either empty or holds a value of type α.
The ability to spawn a concurrent thread via the forkIO primitive.
Built atop this is a collection of useful concurrency and synchronisation abstractions such as unbounded channels, semaphores and sample variables.
Haskell threads have very low overhead: creation, context-switching and scheduling are all internal to the Haskell runtime. These Haskell-level threads are mapped onto a configurable number of OS-level threads, usually one per processor core.
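A minimal sketch of these two primitives working together, spawning a thread with forkIO and handing its result back through an MVar:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box (sum [1 .. 100 :: Integer]))
  result <- takeMVar box   -- blocks until the forked thread fills the MVar
  print result             -- 5050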
Software Transactional Memory
The software transactional memory (STM) extension to GHC reuses the process forking primitives of Concurrent Haskell. STM however:
avoids MVars in favour of TVars.
introduces the retry and orElse primitives, allowing alternative atomic actions to be composed together.
STM monad
The STM monad is an implementation of software transactional memory in Haskell. It is implemented in GHC, and allows for mutable variables to be modified in transactions.
Traditional approach
Consider a banking application as an example, and a transaction in it—the transfer function, which takes money from one account, and puts it into another account. In the IO monad, this might look like:
type Account = IORef Integer
transfer :: Integer -> Account -> Account -> IO ()
transfer amount from to = do
fromVal <- readIORef from -- (A)
toVal <- readIORef to
writeIORef from (fromVal - amount)
writeIORef to (toVal + amount)
This causes problems in concurrent situations where multiple transfers might be taking place on the same account at the same time. If there were two transfers transferring money from account from, and both calls to transfer ran line (A) before either of them had written their new values, it is possible that money would be put into the o
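Using the STM monad, the same transfer can instead be written as a single atomic transaction, so the two reads and two writes can no longer interleave with another transfer. A sketch (assuming GHC's stm package, Control.Concurrent.STM):

import Control.Concurrent.STM

type Account = TVar Integer

-- The whole block runs atomically; if another committed transaction has
-- touched these TVars in the meantime, this transaction is retried.
transfer :: Integer -> Account -> Account -> IO ()
transfer amount from to = atomically $ do
  fromVal <- readTVar from
  toVal   <- readTVar to
  writeTVar from (fromVal - amount)
  writeTVar to   (toVal + amount)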
|
https://en.wikipedia.org/wiki/PocketMail
|
PocketMail was a very small and inexpensive mobile computer, with a built-in acoustic coupler, developed by PocketScience.
History
PocketMail was developed by the company PocketScience and used technology developed by NASA. This was the first ever mass-market mobile email. The hardware cost around US$100 and the service was initially US$9.95 per month for unlimited use. Later the monthly fee increased. After the company made a reference hardware design, leading consumer electronics manufacturers Audioxo, Sharp, JVC, and others made their own PocketMail devices. Later a PocketMail dongle was created for the PalmPilot. PocketMail users were given a custom email address or able to synch up PocketMail with their existing email account (including AOL accounts). Although actually a computer, its main function was email. Its main advantages were that it was simple, and that it worked with any phone, even outside the United States. It was a low-cost personal digital assistant (PDA) with an inbuilt acoustic coupler which allowed users to send and receive email while connected to a normal telephone, thus allowing use outside of mobile phone range, or without the need to be signed up with a mobile telephone provider. Popularity of the PocketMail peaked around 2000, when the company stopped investing in new technology development.
In Australia, the company known as PocketMail stopped marketing the PocketMail service in 2007, changed its name to Adavale Resources Limited, and now owns uranium mining prospects in Queensland and South Australia.
References
Websites
Dan's Data Review: http://www.dansdata.com/pocketmail.htm
TechCrunch: Nostalgiamatic: The Sharp TM-20 with PocketMail
Government Computer News: With PocketMail, e-mail access is phone call away
JVC's PocketMail offers e-mail without computer or modem
InfoWorld Review of Sharp PocketMail device
Cracked.com's list of "The 5 Most Ridiculously Awful Computers Ever Made
Mobile computers
Modems
Email
|
https://en.wikipedia.org/wiki/Gene
|
In biology, the word gene (from Greek, meaning generation or birth or gender) can have several different meanings. The Mendelian gene is a basic unit of heredity, and the molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes.
During gene expression, the DNA is first copied into RNA. The RNA can be directly functional or be the intermediate template for a protein that performs a function. (Some viruses have an RNA genome so the genes are made of RNA that may function directly without being copied into RNA. This is an exception to the strict definition of a gene described above.)
The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. These genes make up different DNA sequences called genotypes. Genotypes along with environmental and developmental factors determine what the phenotypes will be. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, and some are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life.
A gene can acquire mutations in its sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of the gene, which may cause different phenotypic traits. Usage of the term "having a gene" (e.g., "good genes," "hair color gene") typically refers to containing a different allele of the same, shared gene. Genes evolve due to natural selection / survival of the fittest and genetic drift of the alleles.
The term gene was introduced by Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1909. It is inspired by the Ancient Greek: γόνος, gonos, that means offspring and procreation
|
https://en.wikipedia.org/wiki/Persona%20%28user%20experience%29
|
A persona (also user persona, customer persona, buyer persona) in user-centered design and marketing is a personalized fictional character created to represent a user type that might use a site, brand, or product in a similar way. Personas represent the similarities of consumer groups or segments. They are based on demographic and behavioural personal information collected from users, qualitative interviews, and participant observation. Personas are one of the outcomes of market segmentation, where marketers use the results of statistical analysis and qualitative observations to draw profiles, giving them names and personalities to paint a picture of a person that could exist in real life. The term persona is used widely in online and technology applications as well as in advertising, where other terms such as pen portraits may also be used.
Personas are useful in considering the goals, desires, and limitations of brand buyers and users in order to help to guide decisions about a service, product or interaction space such as features, interactions, and visual design of a website. Personas may be used as a tool during the user-centered design process for designing software. They can introduce interaction design principles to things like industrial design and online marketing.
A user persona is a representation of the goals and behavior of a hypothesized group of users. In most cases, personas are synthesized from data collected from interviews or surveys with users. They are captured in short page descriptions that include behavioral patterns, goals, skills, attitudes, with a few fictional personal details to make the persona a realistic character. In addition to Human-Computer Interaction (HCI), personas are also widely used in sales, advertising, marketing and system design. Personas provide common behaviors, outlooks, and potential objections of people matching a given persona.
History
Within software design, Alan Cooper, a noted pioneer software developer, p
|
https://en.wikipedia.org/wiki/Sodium%20ascorbate
|
Sodium ascorbate is one of a number of mineral salts of ascorbic acid (vitamin C). The molecular formula of this chemical compound is C6H7NaO6. As the sodium salt of ascorbic acid, it is known as a mineral ascorbate. It has not been demonstrated to be more bioavailable than any other form of vitamin C supplement.
Sodium ascorbate normally provides 131 mg of sodium per 1,000 mg of ascorbic acid (1,000 mg of sodium ascorbate contains 889 mg of ascorbic acid and 111 mg of sodium).
As a food additive, it has the E number E301 and is used as an antioxidant and an acidity regulator. It is approved for use as a food additive in the EU, USA, Australia, and New Zealand.
In in vitro studies, sodium ascorbate has been found to produce cytotoxic effects in various malignant cell lines, which include melanoma cells that are particularly susceptible.
Production
Sodium ascorbate is produced by dissolving ascorbic acid in water and adding an equivalent amount of sodium bicarbonate in water. After cessation of effervescence, the sodium ascorbate is precipitated by the addition of isopropanol.
References
External links
The Bioavailability of Different Forms of Vitamin C
Ascorbates
Food additives
Organic sodium salts
Vitamers
Vitamin C
E-number additives
|
https://en.wikipedia.org/wiki/Potassium%20ascorbate
|
Potassium ascorbate is a compound with formula KC6H7O6. It is the potassium salt of ascorbic acid (vitamin C) and a mineral ascorbate. As a food additive, it has E number E303, INS number 303. Although it is not a permitted food additive in the UK, USA and the EU, it is approved for use in Australia and New Zealand. According to some studies, it has shown a strong antioxidant activity and antitumoral properties.
References
Ascorbates
Potassium compounds
Food additives
E-number additives
|
https://en.wikipedia.org/wiki/Disjunctive%20sequence
|
A disjunctive sequence is an infinite sequence (over a finite alphabet of characters) in which every finite string appears as a substring. For instance, the binary Champernowne sequence
0 1 00 01 10 11 000 001 010 011 100 101 110 111 1000 ...,
formed by concatenating all binary strings in shortlex order, clearly contains all the binary strings and so is disjunctive. (The spaces above are not significant and are present solely to make clear the boundaries between strings.) The complexity function of a disjunctive sequence S over an alphabet of size k is p_S(n) = k^n.
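A short Haskell sketch that generates a prefix of this sequence by concatenating all binary strings of lengths 1, 2, 3, ... in shortlex order (the function name is ours):

import Control.Monad (replicateM)

-- All binary strings of each length up to maxLen, in shortlex order, concatenated.
champernowneBinary :: Int -> String
champernowneBinary maxLen = concat [s | n <- [1 .. maxLen], s <- replicateM n "01"]

main :: IO ()
main = putStrLn (champernowneBinary 3)
-- prints 0100011011000001010011100101110111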
Any normal sequence (a sequence in which each string of equal length appears with equal frequency) is disjunctive, but the converse is not true. For example, letting 0^n denote the string of length n consisting of all 0s, consider the sequence
obtained by splicing exponentially long strings of 0s into the shortlex ordering of all binary strings. Most of this sequence consists of long runs of 0s, and so it is not normal, but it is still disjunctive.
A disjunctive sequence is recurrent but never uniformly recurrent/almost periodic.
Examples
The following result can be used to generate a variety of disjunctive sequences:
If a_1, a_2, a_3, ... is a strictly increasing infinite sequence of positive integers such that lim_{n→∞} (a_{n+1} / a_n) = 1,
then for any positive integer m and any integer base b ≥ 2, there is an a_n whose expression in base b starts with the expression of m in base b.
(Consequently, the infinite sequence obtained by concatenating the base-b expressions for a_1, a_2, a_3, ... is disjunctive over the alphabet {0, 1, ..., b−1}.)
Two simple cases illustrate this result:
a_n = n^k, where k is a fixed positive integer. (In this case, lim_{n→∞} (a_{n+1} / a_n) = lim_{n→∞} ((n+1)^k / n^k) = lim_{n→∞} (1 + 1/n)^k = 1.)
E.g., using base-ten expressions, the sequences
123456789101112... (k = 1, positive natural numbers),
1491625364964... (k = 2, squares),
182764125216343... (k = 3, cubes),
etc.,
are disjunctive on {0,1,2,3,4,5,6,7,8,9}.
a_n = p_n, where p_n is the nt
|
https://en.wikipedia.org/wiki/Overflow%20flag
|
In computer processors, the overflow flag (sometimes called the V flag) is usually a single bit in a system status register used to indicate when an arithmetic overflow has occurred in an operation, indicating that the signed two's-complement result would not fit in the number of bits used for the result. Some architectures may be configured to automatically generate an exception on an operation resulting in overflow.
As an example, suppose we add 127 and 127 using 8-bit registers. 127+127 is 254, but using 8-bit arithmetic the result would be 1111 1110 binary, which is the two's complement encoding of −2, a negative number. A negative sum of positive operands (or vice versa) is an overflow. The overflow flag would then be set so the program can be aware of the problem and mitigate this or signal an error. The overflow flag is thus set when the most significant bit (here considered the sign bit) is changed by adding two numbers with the same sign (or subtracting two numbers with opposite signs). Overflow cannot occur when the signs of the two addition operands are different (or the signs of the two subtraction operands are the same).
When binary values are interpreted as unsigned numbers, the overflow flag is meaningless and normally ignored. One of the advantages of two's complement arithmetic is that the addition and subtraction operations do not need to distinguish between signed and unsigned operands. For this reason, most computer instruction sets do not distinguish between signed and unsigned operands, generating both (signed) overflow and (unsigned) carry flags on every operation, and leaving it to following instructions to pay attention to whichever one is of interest.
Internally, the overflow flag is usually generated by an exclusive or of the internal carry into and out of the sign bit.
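A minimal sketch of that carry-based rule for 8-bit addition (the function name is ours; Word8 is used as the raw bit pattern):

import Data.Bits (shiftR, xor, (.&.))
import Data.Word (Word8)

-- Signed overflow of an 8-bit addition: XOR of the carry into and the
-- carry out of the sign bit (bit 7).
addOverflows :: Word8 -> Word8 -> Bool
addOverflows a b = carryIn `xor` carryOut == 1
  where
    carryIn  = ((a .&. 0x7f) + (b .&. 0x7f)) `shiftR` 7               -- carry into bit 7
    carryOut = if (fromIntegral a + fromIntegral b :: Int) > 255 then 1 else 0

main :: IO ()
main = do
  print (addOverflows 127 127)   -- True:  127 + 127 wraps to -2 as a signed byte
  print (addOverflows 100 27)    -- False: 100 + 27 = 127 still fits
  print (addOverflows 200 200)   -- False: (-56) + (-56) = -112 fits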
Bitwise operations (and, or, xor, not, rotate) do not have a notion of signed overflow, so the defined value varies on different processor architectures. Some processors clear the
|
https://en.wikipedia.org/wiki/Q%20%28number%20format%29
|
The Q notation is a way to specify the parameters of a binary fixed point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part.
A number of other notations have been used for the same purpose.
Definition
Texas Instruments version
The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits.
By default, the notation describes a signed binary fixed-point format, with the unscaled integer being stored in two's complement format, used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus the total number w of bits used is 1 + m + n.
For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2^−12.
In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1 (exclusive).
The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored. Thus Q12 means a signed integer with any number of bits, that is implicitly multiplied by 2^−12.
The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16 − 1)/2^15 = +1.999969482421875.
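A minimal sketch of the Q3.12 interpretation described above (the helper names are ours): the stored 16-bit integer is simply the real value scaled by 2^12.

import Data.Int (Int16)

-- Q3.12: one sign bit, 3 integer bits, 12 fraction bits in an Int16.
toQ3_12 :: Double -> Int16
toQ3_12 x = round (x * 4096)       -- 4096 = 2^12

fromQ3_12 :: Int16 -> Double
fromQ3_12 q = fromIntegral q / 4096

main :: IO ()
main = do
  let q = toQ3_12 (-2.5)
  print q               -- -10240
  print (fromQ3_12 q)   -- -2.5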
ARM version
A variant of the Q notation has been in use by ARM.
|
https://en.wikipedia.org/wiki/Mobile%20web
|
The mobile web comprises mobile browser-based World Wide Web services accessed from handheld mobile devices, such as smartphones or feature phones, through a mobile or other wireless network.
History and development
Traditionally, the World Wide Web has been accessed via fixed-line services on laptops and desktop computers. However, the web has become more accessible from portable and wireless devices. An early 2010 report by the ITU (International Telecommunication Union) said that, at then-current growth rates, web access by people on the go via laptops and smart mobile devices was likely to exceed web access from desktop computers within the following five years. In January 2014, mobile internet use exceeded desktop use in the United States. The shift to mobile web access has accelerated since 2007 with the rise of larger multitouch smartphones, and since 2010 with the rise of multitouch tablet computers. Both platforms provide better Internet access, screens, and mobile browsers or application-based user web experiences than previous generations of mobile devices. Web designers may work separately on such pages, or pages may be automatically converted, as in Mobile Wikipedia. Faster speeds, smaller feature-rich devices, and a multitude of applications continue to drive explosive growth for mobile internet traffic. The 2017 Virtual Network Index (VNI) report produced by Cisco Systems forecasts that by 2021 there will be 5.5 billion global mobile users (up from 4.9 billion in 2016). The same report forecasts that average access speeds will roughly triple from 6.8 Mbit/s to 20 Mbit/s in that period, with video comprising the bulk of the traffic (78%).
The distinction between mobile web applications and native applications is anticipated to become increasingly blurred, as mobile browsers gain direct access to the hardware of mobile devices (including accelerometers and GPS chips), and the speed and abilities of browser-based applicatio
|
https://en.wikipedia.org/wiki/Double%20subscript%20notation
|
In engineering, double-subscript notation is notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically.
Electronic usage
IEEE standard 255-1963, "Letter Symbols for Semiconductor Devices", defined eleven original quantity symbols expressed as abbreviations.
This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this:
VCB represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying "the voltage drop from C to B", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121.
ICE would in turn represent the current from C to E. In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying "the current in the direction going from C to E".
Power supply pins on integrated circuits utilize the same letters for denoting what kind of voltage the pin would receive. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise respectively with other subscripted letters. The format used is the same as for notations described above, though without the connotation of VCC meaning
|
https://en.wikipedia.org/wiki/Chinese%20Academy%20of%20Engineering
|
The Chinese Academy of Engineering (CAE, ) is the national academy of the People's Republic of China for engineering. It was established in 1994 and is an institution of the State Council of China. The CAE and the Chinese Academy of Sciences are often referred to together as the "Two Academies". Its current president is Li Xiaohong.
Since the establishment of the CAE, entrusted by the relevant ministries and commissions, the academy has offered consultancy to the State on major programs, planning, guidelines, and policies. At the request of various ministries of the central government as well as local governments, the academy has organized its members to carry out surveys at the forefront of engineering science and technology and to put forward strategic opinions and proposals. These entrusted projects have played an important role in maximizing the participation of the members in the macro decision-making of the State. In the meantime, the members, based on the experience and perspective accumulated over their careers and in combination with international trends in the development of engineering science and technology, have regularly and actively put forward their opinions and suggestions.
List of presidents
Zhu Guangya (1994–1998)
Song Jian (1998–2002)
Xu Kuangdi (2002–2010)
Zhou Ji (2010–2018)
Li Xiaohong (2018–present)
Structure
The CAE is composed of elected members with the highest honor in the community of engineering and technological sciences of the nation.
The General Assembly of the CAE is the highest decision-making body of the academy and is held during the first week of June biennially.
Membership
Membership of the Chinese Academy of Engineering is the highest academic title in engineering science and technology in China. It is a lifelong honor, and new members must be elected by existing members.
The academy consists of members, senior members and foreign members, who are distinguished and recognized for their respective field of engineering.
As of January 2020, the academy has 920 Chinese
|
https://en.wikipedia.org/wiki/FO4
|
In digital electronics, Fan-out of 4 is a measure of time used in digital CMOS technologies: the gate delay of a component with a fan-out of 4.
Fan out = Cload / Cin, where
Cload = total MOS gate capacitance driven by the logic gate under consideration
Cin = the MOS gate capacitance of the logic gate under consideration
As a delay metric, one FO4 is the delay of an inverter, driven by an inverter 4x smaller than itself, and driving an inverter 4x larger than itself. Both conditions are necessary since input signal rise/fall time affects the delay as well as output loading.
FO4 is generally used as a delay metric because such a load is generally seen in case of tapered buffers driving large loads, and approximately in any logic gate of a logic path sized for minimum delay. Also, for most technologies the optimum fanout for such buffers generally varies from 2.7 to 5.3.
A fan out of 4 is the answer to the canonical problem stated as follows:
Given a fixed size inverter, small in comparison to a fixed large load, minimize the delay in driving the large load. After some math, it can be shown that the minimum delay is achieved when the load is driven by a chain of N inverters, each successive inverter ~4x larger than the previous; N ~ log4(Cload/Cin).
In the absence of parasitic capacitances (drain diffusion capacitance and wire capacitance), the result is "a fan out of e" (now N ~ ln(Cload/Cin)).
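A back-of-the-envelope calculation of this sizing rule can be written directly from the formula above; the capacitance numbers in the following C sketch are arbitrary illustration values.

#include <math.h>
#include <stdio.h>

/* Size an inverter chain driving a large load using N ~ log4(Cload/Cin).
   Link with -lm.  All values are in arbitrary, consistent units. */
int main(void)
{
    double c_in   = 1.0;     /* input capacitance of the first (fixed) inverter */
    double c_load = 256.0;   /* large fixed load to be driven                    */

    double ratio  = c_load / c_in;
    int    stages = (int)lround(log(ratio) / log(4.0)); /* N ~ log4(Cload/Cin)   */
    double fanout = pow(ratio, 1.0 / stages);           /* actual per-stage fan-out */

    printf("total ratio = %.0f, stages = %d, per-stage fan-out = %.2f\n",
           ratio, stages, fanout);
    for (int i = 0; i <= stages; ++i)
        printf("stage %d input capacitance: %.2f\n", i, c_in * pow(fanout, i));
    return 0;
}

For the 256:1 ratio chosen here the rule gives four stages with a per-stage fan-out of exactly 4, i.e. capacitances 1, 4, 16, 64, 256.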
If the load itself is not large, then using a fan out of 4 scaling in successive logic stages does not make sense. In these cases, minimum sized transistors may be faster.
Because scaled technologies are inherently faster (in absolute terms), circuit performance can be more fairly compared using the fan out of 4 as a metric. For example, given two 64-bit adders, one implemented in a 0.5 µm technology and the other in 90 nm technology, it would be unfair to say the 90 nm adder is better from a circuits and architecture standpoint just because it has less latency. Th
|
https://en.wikipedia.org/wiki/Chorography
|
Chorography (from χῶρος khōros, "place" and γράφειν graphein, "to write") is the art of describing or mapping a region or district, and by extension such a description or map. This term derives from the writings of the ancient geographer Pomponius Mela and Ptolemy, where it meant the geographical description of regions. However, its resonances of meaning have varied at different times. Richard Helgerson states that "chorography defines itself by opposition to chronicle. It is the genre devoted to place, and chronicle is the genre devoted to time". Darrell Rohl prefers a broad definition of "the representation of space or place".
Ptolemy's definition
In his text of the Geographia (2nd century CE), Ptolemy defined geography as the study of the entire world, but chorography as the study of its smaller parts—provinces, regions, cities, or ports. Its goal was "an impression of a part, as when one makes an image of just an ear or an eye"; and it dealt with "the qualities rather than the quantities of the things that it sets down". Ptolemy implied that it was a graphic technique, comprising the making of views (not simply maps), since he claimed that it required the skills of a draftsman or landscape artist, rather than the more technical skills of recording "proportional placements". Ptolemy's most recent English translators, however, render the term as "regional cartography".
Renaissance revival
Ptolemy's text was rediscovered in the west at the beginning of the fifteenth century, and the term "chorography" was revived by humanist scholars. John Dee in 1570 regarded the practice as "an underling, and a twig of Geographie", by which the "plat" [plan or drawing] of a particular place would be exhibited to the eye.
The term also came to be used, however, for written descriptions of regions. These regions were extensively visited by the writer, who then combined local topographical description, summaries of the historical sources, and local knowledge and stories, into a
|
https://en.wikipedia.org/wiki/PfSense
|
pfSense is a firewall/router computer software distribution based on FreeBSD. The open source pfSense Community Edition (CE) and pfSense Plus are installed on a physical computer or a virtual machine to make a dedicated firewall/router for a network. It can be configured and upgraded through a web-based interface, and requires no knowledge of the underlying FreeBSD system to manage.
Overview
The pfSense project began in 2004 as a fork of the m0n0wall project by Chris Buechler and Scott Ullrich. Its first release was in October 2006. The name derives from the fact that the software uses the packet-filtering tool, PF.
Notable functions of pfSense include traffic shaping, VPNs using IPsec or PPTP, captive portal, stateful firewall, network address translation, 802.1q support for VLANs, and dynamic DNS. pfSense can be installed on hardware with an x86-64 processor architecture. It can also be installed on embedded hardware using Compact Flash or SD cards, or as a virtual machine.
WireGuard protocol support
In February 2021, pfSense CE 2.5.0 and pfSense Plus 21.02 added support for a kernel WireGuard implementation. Support for WireGuard was temporarily removed in March 2021 after implementation issues were discovered by WireGuard founder Jason Donenfeld. The July 2021 release of pfSense CE 2.5.2 version re-included WireGuard.
See also
Comparison of firewalls
List of router and firewall distributions
References
Further reading
Mastering pfSense, Second Edition Birmingham, UK: Packt Publishing, 2018. . By David Zientara.
Security: Manage Network Security With pfSense Firewall [Video] Birmingham, UK: Packt, 2018. . By Manuj Aggarwal.
External links
2004 software
BSD software
Firewall software
Free routing software
FreeBSD
Gateway/routing/firewall distribution
Operating system distributions bootable from read-only media
Products introduced in 2004
Routers (computing)
Wireless access points
|
https://en.wikipedia.org/wiki/Oval%20%28projective%20plane%29
|
In projective geometry an oval is a point set in a plane that is defined by incidence properties. The standard examples are the nondegenerate conics. However, a conic is only defined in a pappian plane, whereas an oval may exist in any type of projective plane. In the literature, there are many criteria which imply that an oval is a conic, but there are many examples, both infinite and finite, of ovals in pappian planes which are not conics.
As mentioned, in projective geometry an oval is defined by incidence properties, but in other areas, ovals may be defined to satisfy other criteria, for instance, in differential geometry by differentiability conditions in the real plane.
The higher dimensional analog of an oval is an ovoid in a projective space.
A generalization of the oval concept is an abstract oval, which is a structure that is not necessarily embedded in a projective plane. Indeed, there exist abstract ovals which can not lie in any projective plane.
Definition of an oval
In a projective plane a set Ω of points is called an oval, if:
Any line l meets Ω in at most two points, and
For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}.
When |l ∩ Ω| = 0 the line l is an exterior line (or passant), when |l ∩ Ω| = 1 it is a tangent line, and when |l ∩ Ω| = 2 it is a secant line.
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line).
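This characterization can be checked computationally for a small case. The following C sketch verifies that the point set {(t : t² : 1) : t ∈ GF(7)} together with (0 : 1 : 0), a conic in PG(2,7), has 8 = 7 + 1 points with no three collinear; the choice of the prime 7 is arbitrary.

#include <stdio.h>

/* Check the finite-plane oval criterion for a conic in PG(2,P):
   P + 1 points, no three on a common line (zero determinant mod P). */
#define P 7
#define N (P + 1)

static int det3(const int m[3][3])
{
    int d = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    return ((d % P) + P) % P;          /* reduce mod P, keep non-negative */
}

int main(void)
{
    int pts[N][3];
    for (int t = 0; t < P; ++t) {      /* affine conic points (t, t^2, 1) */
        pts[t][0] = t; pts[t][1] = (t * t) % P; pts[t][2] = 1;
    }
    pts[P][0] = 0; pts[P][1] = 1; pts[P][2] = 0;   /* the point at infinity */

    for (int i = 0; i < N; ++i)
        for (int j = i + 1; j < N; ++j)
            for (int k = j + 1; k < N; ++k) {
                int m[3][3];
                for (int c = 0; c < 3; ++c) {
                    m[0][c] = pts[i][c];
                    m[1][c] = pts[j][c];
                    m[2][c] = pts[k][c];
                }
                if (det3(m) == 0) {    /* three collinear points found */
                    printf("not an oval\n");
                    return 1;
                }
            }
    printf("%d points, no three collinear: an oval in PG(2,%d)\n", N, P);
    return 0;
}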
A set of points in an affine plane satisfying the above definition is called an affine oval.
An affine oval is always a projective oval in the projective closure (adding a line at infinity) of the underlying affine plane.
An oval can also be considered as a special quadratic set.
Examples
Conic sections
In any pappian projective plane there exist nondegenerate projective conic sections
and any nondegenerate projective conic sectio
|
https://en.wikipedia.org/wiki/Distributed%20constraint%20optimization
|
Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.
Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described on some variables with predefined domains, and have to be assigned to the same values by the different agents.
Problems defined with this framework can be solved by any of the algorithms that are designed for it.
The framework was used under different names in the 1980s. The first known usage with the current name is in 1990.
Definitions
DCOP
The main ingredients of a DCOP problem are agents and variables. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP is a tuple ⟨A, V, D, f, α⟩, where:
A is the set of agents, {a1, …, a|A|}.
V is the set of variables, {v1, …, v|V|}.
D is the set of variable domains, where each Dj ∈ D is a finite set containing the possible values of variable vj.
If Dj contains only two values (e.g. 0 or 1), then vj is called a binary variable.
f is the cost function. It is a function that maps every possible partial assignment to a cost. Usually, only few values of f are non-zero, and it is represented as a list of the tuples that are assigned a non-zero value. Each such tuple is called a constraint. Each constraint in this set is a function assigning a real value to each possible assignment of the variables. Some special kinds of constraints are:
Unary constraints - constraints on a single variable, i.e., f : Dj → ℝ for some vj ∈ V.
Binary constraints - constraints on two variables, i.e., f : Dj1 × Dj2 → ℝ for some vj1, vj2 ∈ V.
α is the ownership function. It is a function α : V → A mapping each variable to its associated agent. α(vj) = ai means that variable vj "belongs" to agent ai. This implies that it is agent ai's responsibility
|
https://en.wikipedia.org/wiki/Games%20People%20Play%20%28book%29
|
Games People Play: The Psychology of Human Relationships is a 1964 book by psychiatrist Eric Berne. The book was a bestseller at the time of its publication, despite drawing academic criticism for some of the psychoanalytic theories it presented. It popularized Berne's model of transactional analysis among a wide audience, and has been considered one of the first pop psychology books.
Background
The author Eric Berne was a psychiatrist specializing in psychotherapy who began developing alternate theories of interpersonal relationship dynamics in the 1950s. He sought to explain recurring patterns of interpersonal conflicts that he observed, which eventually became the basis of transactional analysis. After being rejected by a local psychoanalytic institute, he focused on writing about his own theories. In 1961, he published Transactional Analysis in Psychotherapy. That book was followed by Games People Play, in 1964. Berne did not intend for Games People Play to explore all aspects of transactional analysis, viewing it instead as an introduction to some of the concepts and patterns he identified. He borrowed money from friends and used his own savings to publish the book.
Summary
In the first half of the book, Berne introduces his theory of transactional analysis as a way of interpreting social interactions. He proposes that individuals encompass three roles or ego states, known as the Parent, the Adult, and the Child, which they switch between. He postulates that while Adult to Adult interactions are largely healthy, dysfunctional interactions can arise when people take on mismatched roles such as Parent and Child or Child and Adult.
The second half of the book catalogues a series of "mind games" identified by Berne, in which people interact through a patterned and predictable series of "transactions" based on these mismatched roles. He states that although these interactions may seem plausible, they are actually a way to conceal hidden motivations under scripte
|
https://en.wikipedia.org/wiki/Comparison%20of%20application%20virtualization%20software
|
Application virtualization software refers to both application virtual machines and software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions.
Comparison of virtual machines
JavaScript machines not included. See List of ECMAScript engines to find them.
The table here summarizes elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation.
Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine often called the memory machine. Use of these three methods is motivated by different tradeoffs in virtual machines vs physical machines, such as ease of interpreting, compiling, and verifying for security.
Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machines (JVM), are involved with addresses in such a way as to require safe automatic memory management by allowing the virtual machine to trace pointer references, and disallow machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which allows safe automatic memory management), while also allowing an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries
|
https://en.wikipedia.org/wiki/Denjoy%27s%20theorem%20on%20rotation%20number
|
In mathematics, the Denjoy theorem gives a sufficient condition for a diffeomorphism of the circle to be topologically conjugate to a diffeomorphism of a special kind, namely an irrational rotation. Denjoy proved the theorem in the course of his topological classification of homeomorphisms of the circle. He also gave an example of a C1 diffeomorphism with an irrational rotation number that is not conjugate to a rotation.
Statement of the theorem
Let ƒ: S1 → S1 be an orientation-preserving diffeomorphism of the circle whose rotation number θ = ρ(ƒ) is irrational. Assume that it has positive derivative ƒ′(x) > 0 that is a continuous function with bounded variation on the interval [0,1). Then ƒ is topologically conjugate to the irrational rotation by θ. Moreover, every orbit is dense and every nontrivial interval I of the circle intersects its forward image ƒ°q(I), for some q > 0 (this means that the non-wandering set of ƒ is the whole circle).
Complements
If ƒ is a C2 map, then the hypothesis on the derivative holds; however, for any irrational rotation number Denjoy constructed an example showing that this condition cannot be relaxed to C1, continuous differentiability of ƒ.
Vladimir Arnold showed that the conjugating map need not be smooth, even for an analytic diffeomorphism of the circle. Later Michel Herman proved that nonetheless, the conjugating map of an analytic diffeomorphism is itself analytic for "most" rotation numbers, forming a set of full Lebesgue measure, namely, for those that are badly approximable by rational numbers. His results are even more general and specify differentiability class of the conjugating map for Cr diffeomorphisms with any r ≥ 3.
See also
Circle map
References
Kornfeld, Sinai, Fomin, Ergodic theory.
External links
John Milnor, Denjoy Theorem
Dynamical systems
Diffeomorphisms
Theorems in topology
Theorems in dynamical systems
|
https://en.wikipedia.org/wiki/Lebesgue%27s%20density%20theorem
|
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set , the "density" of A is 0 or 1 at almost every point in . Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible.
Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in an ε-neighborhood of a point x in Rn as
dε(x) = μ(A ∩ Bε(x)) / μ(Bε(x)),
where Bε(x) denotes the closed ball of radius ε centered at x.
Lebesgue's density theorem asserts that for almost every point x of A the density
d(x) = limε→0 dε(x)
exists and is equal to 0 or 1.
In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and , then there are always points of Rn where the density is neither 0 nor 1.
For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible.
The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem.
Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion.
See also
References
Hallard T. Croft. Three lattice-point problems of Steinhaus. Quart. J. Math. Oxford (2), 33:71-83, 1982.
Theorems in measure theory
Integral calculus
|
https://en.wikipedia.org/wiki/Locally%20integrable%20function
|
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
Definition
Standard definition
Let Ω be an open set in the Euclidean space Rn and f : Ω → ℂ be a Lebesgue measurable function. If f is such that
∫K |f| dx < +∞,
i.e. its Lebesgue integral is finite on all compact subsets K of Ω, then f is called locally integrable. The set of all such functions is denoted by L1loc(Ω):
L1loc(Ω) = {f : Ω → ℂ measurable : f|K ∈ L1(K) for all compact K ⊂ Ω},
where f|K denotes the restriction of f to the set K.
The classical definition of a locally integrable function involves only measure-theoretic and topological concepts and can be carried over abstractly to complex-valued functions on a topological measure space: however, since the most common application of such functions is to distribution theory on Euclidean spaces, all the definitions in this and the following sections deal explicitly only with this important case.
An alternative definition
Let Ω be an open set in the Euclidean space Rn. Then a function f : Ω → ℂ such that
∫Ω |f φ| dx < +∞
for each test function φ ∈ C∞c(Ω) is called locally integrable, and the set of such functions is denoted by L1loc(Ω). Here C∞c(Ω) denotes the set of all infinitely differentiable functions with compact support contained in Ω.
This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space, developed by the Nicolas Bourbaki school, and has also been adopted by several other authors. This "distribution theoretic" definition is eq
|
https://en.wikipedia.org/wiki/Hazard%20%28logic%29
|
In digital logic, a hazard is an undesirable effect caused by either a deficiency in the system or external influences in both synchronous and asynchronous circuits. Logic hazards are manifestations of a problem in which changes in the input variables do not change the output correctly due to some form of delay caused by logic elements (NOT, AND, OR gates, etc.). This results in the logic not performing its function properly. The three most common kinds of hazards are usually referred to as static, dynamic and function hazards.
Hazards are a temporary problem, as the logic circuit will eventually settle to the desired function. Therefore, in synchronous designs, it is standard practice to register the output of a circuit before it is used in a different clock domain or routed out of the system, so that hazards do not cause any problems. If that is not the case, however, it is imperative that hazards be eliminated, as they can have an effect on other connected systems.
Static hazards
A static hazard is a change of a signal state twice in a row when the signal is expected to stay constant. When one input signal changes, the output changes momentarily before stabilizing to the correct value. There are two types of static hazards:
Static-1 Hazard: the output is currently 1 and after the inputs change, the output momentarily changes to 0,1 before settling on 1
Static-0 Hazard: the output is currently 0 and after the inputs change, the output momentarily changes to 1,0 before settling on 0
In properly formed two-level AND-OR logic based on a Sum Of Products expression, there will be no static-0 hazards (but may still have static-1 hazards). Conversely, there will be no static-1 hazards in an OR-AND implementation of a Product Of Sums expression (but may still have static-0 hazards).
The most commonly used method to eliminate static hazards is to add redundant logic (consensus terms in the logic expression).
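As a generic illustration of the consensus-term technique (not the specific circuit of the example below), the following C sketch checks exhaustively that adding the consensus term B·C to F = A·B + A'·C leaves the logic function unchanged; the redundant term only keeps the output high while A switches, which is what removes the static-1 hazard.

#include <stdio.h>

/* Exhaustive check that F = A*B + !A*C equals F + the consensus term B*C
   for every input combination, i.e. the added term is logically redundant. */
int main(void)
{
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int c = 0; c <= 1; ++c) {
                int f_original = (a & b) | (!a & c);
                int f_covered  = (a & b) | (!a & c) | (b & c);
                printf("A=%d B=%d C=%d  F=%d  F+consensus=%d\n",
                       a, b, c, f_original, f_covered);
                if (f_original != f_covered)
                    return 1;   /* never happens: consensus theorem */
            }
    return 0;
}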
Example of a static hazard
Consider an i
|
https://en.wikipedia.org/wiki/X.PC
|
X.PC is a deprecated communications protocol developed by McDonnell-Douglas for connecting a personal computer to its Tymnet packet-switched public data telecommunications network. It is a subset of X.25, a CCITT standard for packet-switched networks. It is a full-duplex, asynchronous and error-correcting network protocol that supports up to 15 simultaneous channels. It maintains automatic error correction during any communications session between two or more computers.
X.PC was originally developed to enable connections up to 9600 baud. Unlike MNP, a competing standard proposed by Microcom, X.PC was placed in the public domain for royalty-free usage. MNP, on the other hand, initially required a $2,500 licensing fee. Microcom battled X.PC for acceptance in the marketplace, and eventually released MNP 1 through 3 into the public domain to compete.
At the time, several modem manufacturers supported MNP in their products, Microcom and Racal-Vadic being major examples. Several companies announced support for X.PC, including Hayes, but none of the companies announcing support offered it in their modems. X.PC quickly disappeared, and Microcom would go on to release MNP 4 and 5 into the public domain as well.
References
External links
http://www.mactech.com/articles/mactech/Vol.02/02.01/ProtocolStandards/
Network protocols
X.25
|
https://en.wikipedia.org/wiki/XPDL
|
The XML Process Definition Language (XPDL) is a format standardized by the Workflow Management Coalition (WfMC) to interchange business process definitions between different workflow products, i.e. between different modeling tools and management suites.
XPDL defines an XML schema for specifying the declarative part of workflow / business process.
XPDL is designed to exchange the process definition, both the graphics and the semantics of a workflow business process. XPDL is currently the best file format for exchange of BPMN diagrams; it has been designed specifically to store all aspects of a BPMN diagram. XPDL contains elements to hold graphical information, such as the X and Y position of the nodes, as well as executable aspects which would be used to run a process. This distinguishes XPDL from BPEL which focuses exclusively on the executable aspects of the process. BPEL does not contain elements to represent the graphical aspects of a process diagram.
It is possible to say that XPDL is the XML Serialization of BPMN.
History
The Workflow Management Coalition, founded in August 1993, began by defining the Workflow Reference Model (ultimately published in 1995) that outlined the five key interfaces that a workflow management system must have. Interface 1 was for defining the business process, which includes two aspects: a process definition expression language and a programmatic interface to transfer the process definition to/from the workflow management system.
The first revision of a process definition expression language was called Workflow Process Definition Language (WPDL) which was published in 1998. This process meta-model contained all the key concepts required to support workflow automation expressed using URL Encoding. Interoperability demonstrations were held to confirm the usefulness of this language as a way to communicate process models.
By 1998, the first standards based on XML began to appear. The Workflow Management Coalition Working Gr
|
https://en.wikipedia.org/wiki/Adaptive%20sort
|
A sorting algorithm falls into the adaptive sort family if it takes advantage of existing order in its input. It benefits from the presortedness in the input sequence – or a limited amount of disorder for various definitions of measures of disorder – and sorts faster. Adaptive sorting is usually performed by modifying existing sorting algorithms.
Motivation
Comparison-based sorting algorithms have traditionally aimed at the optimal worst-case time bound of O(n log n). Adaptive sort takes advantage of the existing order of the input to try to achieve better times, so that the time taken by the algorithm to sort is a smoothly growing function of the size of the sequence and the disorder in the sequence. In other words, the more presorted the input is, the faster it should be sorted.
This is an attractive feature for a sorting algorithm because nearly sorted sequences are common in practice. Thus, the performance of existing sort algorithms can be improved by taking into account the existing order in the input.
Note that most sorting algorithms that are optimal in the worst case, notably heap sort and merge sort, do not take existing order within their input into account, although this deficiency is easily rectified in the case of merge sort by checking whether the last element of the lefthand group is less than (or equal to) the first element of the righthand group, in which case a merge operation may be replaced by simple concatenation – a modification that is well within the scope of making an algorithm adaptive.
Examples
A classic example of an adaptive sorting algorithm is Straight Insertion Sort. In this sorting algorithm, we scan the input from left to right, repeatedly finding the position of the current item and inserting it into an array of previously sorted items.
In pseudo-code form, the Straight Insertion Sort algorithm could look something like this (array X is zero-based):
procedure Straight Insertion
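For comparison, the same algorithm written out in C (a minimal sketch; the array contents in main are arbitrary demonstration values):

#include <stdio.h>

/* Straight insertion sort: adaptive because the inner while loop exits
   immediately for elements that are already in order, giving O(n) time
   on an already-sorted input and O(n^2) in the worst case. */
static void straight_insertion_sort(int x[], int n)
{
    for (int i = 1; i < n; ++i) {
        int current = x[i];
        int j = i - 1;
        while (j >= 0 && x[j] > current) {  /* shift larger items right */
            x[j + 1] = x[j];
            --j;
        }
        x[j + 1] = current;                 /* insert into its position */
    }
}

int main(void)
{
    int x[] = {3, 1, 4, 1, 5, 9, 2, 6};     /* arbitrary demonstration values */
    int n = (int)(sizeof x / sizeof x[0]);
    straight_insertion_sort(x, n);
    for (int i = 0; i < n; ++i)
        printf("%d ", x[i]);
    printf("\n");
    return 0;
}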
|
https://en.wikipedia.org/wiki/Basketball%20statistics
|
Statistics in basketball are kept to evaluate a player's or a team's performance.
Examples
Examples of basketball statistics include:
GM, GP; GS: games played; games started
PTS: points
FGM, FGA, FG%: field goals made, attempted and percentage
FTM, FTA, FT%: free throws made, attempted and percentage
3FGM, 3FGA, 3FG%: three-point field goals made, attempted and percentage
REB, OREB, DREB: rebounds, offensive rebounds, defensive rebounds
AST: assists
STL: steals
BLK: blocks
TO: turnovers
TD: triple double
EFF: efficiency: NBA's efficiency rating: (PTS + REB + AST + STL + BLK − ((FGA − FGM) + (FTA − FTM) + TO))
PF: personal fouls
MIN: minutes
AST/TO: assist to turnover ratio
PER: Player Efficiency Rating: John Hollinger's Player Efficiency Rating
PIR: Performance Index Rating: Euroleague's and Eurocup's Performance Index Rating: (Points + Rebounds + Assists + Steals + Blocks + Fouls Drawn) − (Missed Field Goals + Missed Free Throws + Turnovers + Shots Rejected + Fouls Committed)
Averages per game are denoted by *PG (e.g. BLKPG or BPG, STPG or SPG, APG, RPG and MPG). Sometimes a player's statistics are divided by minutes played and multiplied by 48 minutes (as if he had played the entire game), denoted by * per 48 min. or *48M.
A player who makes double digits in a game in any two of the PTS, REB, AST, STL, and BLK statistics is said to make a double double; in three statistics, a triple double; in four statistics, a quadruple double. A quadruple double is extremely rare (and has only occurred four times in the NBA). There is also a 5x5, when a player records at least a 5 in each of the 5 statistics.
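The EFF formula listed above and the double-double / triple-double count can both be computed mechanically from a box-score line; the following C sketch does so for a made-up stat line (the struct and function names are illustrative).

#include <stdio.h>

/* Hypothetical box-score line; the numbers in main are made up. */
struct stat_line {
    int pts, reb, ast, stl, blk, tov;
    int fgm, fga, ftm, fta;
};

/* NBA EFF: (PTS + REB + AST + STL + BLK) - ((FGA - FGM) + (FTA - FTM) + TO) */
static int eff(const struct stat_line *s)
{
    return (s->pts + s->reb + s->ast + s->stl + s->blk)
         - ((s->fga - s->fgm) + (s->fta - s->ftm) + s->tov);
}

/* Counts how many of PTS, REB, AST, STL, BLK are in double digits:
   2 = double double, 3 = triple double, 4 = quadruple double. */
static int double_digit_categories(const struct stat_line *s)
{
    int cats[5] = { s->pts, s->reb, s->ast, s->stl, s->blk };
    int n = 0;
    for (int i = 0; i < 5; ++i)
        if (cats[i] >= 10)
            ++n;
    return n;
}

int main(void)
{
    struct stat_line game = { 27, 12, 11, 2, 1, 4, 10, 19, 6, 8 };
    printf("EFF = %d, double-digit categories = %d\n",
           eff(&game), double_digit_categories(&game));
    return 0;
}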
The NBA also posts to the statistics section of its Web site a simple composite efficiency statistic, denoted EFF and derived by the formula, ((Points + Rebounds + Assists + Steals + Blocks) − ((Field Goals Attempted − Field Goals Made) + (Free Throws Attempted − Free Throws Made) + Turnovers)). While conveniently distilling most of a player's key statist
|
https://en.wikipedia.org/wiki/Ewan%20Birney
|
John Frederick William Birney (known as Ewan Birney) (born 6 December 1972) is joint director of EMBL's European Bioinformatics Institute (EMBL-EBI), in Hinxton, Cambridgeshire and deputy director general of the European Molecular Biology Laboratory (EMBL). He also serves as non-executive director of Genomics England, chair of the Global Alliance for Genomics and Health (GA4GH) and honorary professor of bioinformatics at the University of Cambridge. Birney has made significant contributions to genomics, through his development of innovative bioinformatics and computational biology tools. He previously served as an associate faculty member at the Wellcome Trust Sanger Institute.
Education
Birney was educated at Eton College as an Oppidan Scholar. Before going to university, Birney completed a gap year internship at Cold Spring Harbor Laboratory supervised by James Watson and Adrian Krainer.
Birney completed his Bachelor of Arts degree in Biochemistry at the University of Oxford in 1996, where he was an undergraduate student at Balliol College, Oxford. He completed his PhD at the Sanger Institute, supervised by Richard Durbin while he was a postgraduate student at St John's College, Cambridge. His doctoral research used dynamic programming, finite-state machines and probabilistic automatons for sequence alignment.
While he was a student he completed internships in the office of the Mayor of Baltimore and also in financial services on valuation of options for the Swiss Bank Corporation.
Research and career
From 2000 to 2003, Birney organised a scientific wager and sweepstake known as GeneSweep, for the genomics community, taking bets on estimates of the total number of genes (and noncoding DNA) in the human genome.
Birney is one of the founders of the Ensembl genome browser and other databases, and has played a role in the sequencing of the Human Genome in 2000 and the analysis of genome function in the ENCODE project. He has played a role in annotating the genom
|
https://en.wikipedia.org/wiki/Thermodynamic%20diagrams
|
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor.
Overview
Especially in meteorology they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification.
By estimating the amount of energy received from solar radiation, it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds, and the conditions for soaring flight during the day.
The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and prescribes a closed curve within the diagram the area enclosed by this curve is proportional to the energy which has been gained or released by the air.
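For a pressure–volume (PV) diagram, for example, this area–energy equivalence is simply the cyclic work integral: the net work done by the system over a closed cycle equals the signed area enclosed by the curve,
W = ∮ p dV.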
Types of thermodynamic diagrams
General purpose diagrams include:
PV diagram
T–s diagram
h–s (Mollier) diagram
Psychrometric chart
Cooling curve
Indicator diagram
Saturation vapor curve
Thermodynamic surface
Specific to weather services, there are mainly three different types of thermodynamic diagrams used:
Skew-T log-P diagram
Tephigram
Emagram
All three diagrams are derived from the physical P–alpha diagram which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram s
|
https://en.wikipedia.org/wiki/Burning%20Ship%20fractal
|
The Burning Ship fractal, first described and created by Michael Michelitsch and Otto E. Rössler in 1992, is generated by iterating the function
zn+1 = (|Re(zn)| + i |Im(zn)|)² + c
in the complex plane which will either escape or remain bounded. The difference between this calculation and that for the Mandelbrot set is that the real and imaginary components are set to their respective absolute values before squaring at each iteration. The mapping is non-analytic because its real and imaginary parts do not obey the Cauchy–Riemann equations.
Implementation
The below pseudocode implementation hardcodes the complex operations for Z. Consider implementing complex number operations to allow for more dynamic and reusable code. Note that the typical images of the Burning Ship fractal display the ship upright: the actual fractal, and that produced by the below pseudocode, is inverted along the x-axis.
for each pixel (x, y) on the screen, do:
x := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
y := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
zx := x // zx represents the real part of z
zy := y // zy represents the imaginary part of z
iteration := 0
max_iteration := 100
while (zx*zx + zy*zy < 4 and iteration < max_iteration) do
xtemp := zx*zx - zy*zy + x
zy := abs(2*zx*zy) + y // abs returns the absolute value
zx := xtemp
iteration := iteration + 1
if iteration = max_iteration then // Belongs to the set
return insideColor
return (max_iteration / iteration) × color
References
External links
About properties and symmetries of the Burning Ship fractal, featured by Theory.org
Burning Ship Fractal, Description and C source code.
Burning Ship with its Mset of higher powers and Julia Sets
Burningship, Video,
Fractal webpage includes the first representations and the original paper cited above on the Burning Ship fractal.
3D representati
|
https://en.wikipedia.org/wiki/Ovoid%20%28projective%20geometry%29
|
In projective geometry an ovoid is a sphere-like point set (surface) in a projective space of dimension d ≥ 3. Simple examples in a real projective space are hyperspheres (quadrics). The essential geometric properties of an ovoid are:
Any line intersects it in at most 2 points,
The tangents at a point cover a hyperplane (and nothing more), and
It contains no lines.
Property 2) excludes degenerated cases (cones,...). Property 3) excludes ruled surfaces (hyperboloids of one sheet, ...).
An ovoid is the spatial analog of an oval in a projective plane.
An ovoid is a special type of a quadratic set.
Ovoids play an essential role in constructing examples of Möbius planes and higher dimensional Möbius geometries.
Definition of an ovoid
In a projective space of dimension d ≥ 3 a set 𝒪 of points is called an ovoid, if
(1) Any line g meets 𝒪 in at most 2 points.
In the case of |g ∩ 𝒪| = 0 the line g is called a passing (or exterior) line, if |g ∩ 𝒪| = 1 it is a tangent line, and if |g ∩ 𝒪| = 2 it is a secant line.
(2) At any point P ∈ 𝒪 the tangent lines through P cover a hyperplane, the tangent hyperplane (i.e., a projective subspace of dimension d − 1).
(3) 𝒪 contains no lines.
From the viewpoint of the hyperplane sections, an ovoid is a rather homogeneous object, because
For an ovoid 𝒪 and a hyperplane ε which contains at least two points of 𝒪, the subset 𝒪 ∩ ε is an ovoid (or an oval, if d = 3) within the hyperplane ε.
For finite projective spaces of dimension d ≥ 3 (i.e., the point set is finite, the space is pappian), the following result is true:
If 𝒪 is an ovoid in a finite projective space of dimension d ≥ 3, then d = 3.
(In the finite case, ovoids exist only in 3-dimensional spaces.)
In a finite projective space of order n > 2 (i.e. any line contains exactly n + 1 points) and dimension d = 3 any point set 𝒪 is an ovoid if and only if |𝒪| = n² + 1 and no three points are collinear (on a common line).
Replacing the word projective in the definition of an ovoid by affine, gives the definition of an affine ovoid.
If for an (projective) ovoid there is a su
|
https://en.wikipedia.org/wiki/2-valued%20morphism
|
In mathematics, a 2-valued morphism is a homomorphism that sends a Boolean algebra B onto the two-element Boolean algebra 2 = {0,1}. It is essentially the same thing as an ultrafilter on B and, in a different way, also the same thing as a maximal ideal of B. 2-valued morphisms have also been proposed as a tool for unifying the language of physics.
2-valued morphisms, ultrafilters and maximal ideals
Suppose B is a Boolean algebra.
If s : B → 2 is a 2-valued morphism, then the set of elements of B that are sent to 1 is an ultrafilter on B, and the set of elements of B that are sent to 0 is a maximal ideal of B.
If U is an ultrafilter on B, then the complement of U is a maximal ideal of B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0.
If M is a maximal ideal of B, then the complement of M is an ultrafilter on B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0.
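For a finite sanity check of this correspondence, take B to be the power set of {0, …, 7} encoded as 8-bit masks; evaluation at a fixed point x is then a 2-valued morphism, the sets mapped to 1 form the principal ultrafilter at x, and the sets mapped to 0 form the complementary maximal ideal. The following C sketch (all names are illustrative) verifies the homomorphism property exhaustively.

#include <stdio.h>
#include <stdint.h>

/* B = power set of {0,...,7} as 8-bit masks; s_x(A) = 1 iff x is in A. */
static int s(uint8_t set, int x) { return (set >> x) & 1; }

int main(void)
{
    const int x = 3;                 /* arbitrary fixed point */
    int ok = 1;
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b) {
            ok &= (s(a & b, x) == (s(a, x) && s(b, x)));   /* preserves meet */
            ok &= (s(a | b, x) == (s(a, x) || s(b, x)));   /* preserves join */
            ok &= (s((uint8_t)~a, x) == !s(a, x));         /* preserves complement */
        }
    printf("homomorphism check: %s\n", ok ? "passed" : "failed");
    return 0;
}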
Physics
If the elements of B are viewed as "propositions about some object", then a 2-valued morphism on B can be interpreted as representing a particular "state of that object", namely the one where the propositions of B which are mapped to 1 are true, and the propositions mapped to 0 are false. Since the morphism conserves the Boolean operators (negation, conjunction, etc.), the set of true propositions will not be inconsistent but will correspond to a particular maximal conjunction of propositions, denoting the (atomic) state. (The true propositions form an ultrafilter, the false propositions form a maximal ideal, as mentioned above.)
The transition between two states s1 and s2 of B, represented by 2-valued morphisms, can then be represented by an automorphism f from B to B, such that s2 o f = s1.
The possible states of different objects defined in this way can be conceived as representing potential events. The set of events can then be structured in the same way as i
|
https://en.wikipedia.org/wiki/List%20of%20old-growth%20forests
|
This is a list of areas of existing old-growth forest which include at least of old growth. Ecoregion information from "Terrestrial Ecoregions of the World".
(NB: The terms "old growth" and "virgin" may have various definitions and meanings throughout the world. See old-growth forest for more information.)
Africa
Australia
In Australia, the 1992 National Forest Policy Statement (NFPS) made specific provision for the protection of old growth forests. The NFPS initiated a process for undertaking assessments of forests for conservation values, including old growth values. A working group of state and Australian Government agencies took the NFPS definition into consideration in developing a definition that was accepted by all governments (JANIS 1997).
In 2008, only a relatively small area (15%) of Australia's forests (mostly tall, wet forests) had been assessed for old-growth values.
Of the of forest in Australia assessed for their old-growth status, (22%) is classified as old-growth. Almost half of Australia's identified old-growth forest is in NSW, mostly on public land. More than 73% of Australia's identified old-growth forests are in formal or informal nature conservation reserves.
In 2001, Western Australia became the first state in Australia to cease logging in old-growth forests.
The term "old-growth forests" is rarely used in New Zealand; instead, "the bush" is used to refer to native forests. There are large contiguous areas of forest cover that are protected areas.
Eurasia
North America
Canada
United States
Central America
Caribbean
South America
See also
List of oldest trees
Old-Growth Forest Network
Notes
References
Lists of forests
Forestry-related lists
|
https://en.wikipedia.org/wiki/ATI%20Avivo
|
ATI Avivo is a set of hardware and low level software features present on the ATI Radeon R520 family of GPUs and all later ATI Radeon products. ATI Avivo was designed to offload video decoding, encoding, and post-processing from a computer's CPU to a compatible GPU. ATI Avivo compatible GPUs have lower CPU usage when a player and decoder software that support ATI Avivo is used. ATI Avivo has been long superseded by Unified Video Decoder (UVD) and Video Coding Engine (VCE).
Background
The GPU wars between ATI and NVIDIA have resulted in GPUs with ever-increasing processing power since the early 2000s. To parallel this increase in speed and power, both GPU makers needed to increase video quality as well; in 3D graphics applications the focus on increasing quality has mainly fallen on anti-aliasing and anisotropic filtering. However, both companies also recognized that video quality on the PC needed improvement, and the video APIs provided by both had not seen many improvements over a few generations of GPUs. Therefore, ATI decided to revamp its GPUs' video processing capability with ATI Avivo, in order to compete with NVIDIA's PureVideo API.
Around the time of release of the latest-generation Radeon HD series, the successor, ATI Avivo HD, was announced; it was present on all Radeon HD 2600 and 2400 video cards, available from July 2007, after NVIDIA announced a similar hardware acceleration solution, PureVideo HD.
In 2011 Avivo was renamed to AMD Media Codec Package, an optional component of the AMD Catalyst software. The last version was released in August 2012. As of 2013, the package is no longer offered by AMD.
Features
ATI Avivo
During capturing, ATI Avivo amplifies the source and automatically adjusts its brightness and contrast. ATI Avivo implements a 12-bit transform to reduce data loss during conversion; it also utilizes a motion adaptive 3D comb filter, automatic color control, automatic gain control, hardware noise reduction and edge enhan
|
https://en.wikipedia.org/wiki/Groff%20%28software%29
|
groff ( ) (also called GNU troff) is a typesetting system that creates formatted output when given plain text mixed with formatting commands. It is the GNU replacement for the troff and nroff text formatters.
Groff contains a large number of helper programs, preprocessors, and postprocessors including eqn, tbl, pic and soelim. There are also several macro packages included that duplicate, expand on the capabilities of, or outright replace the standard troff macro packages.
Development of new groff features is active, and groff is an important part of free, open source, and UNIX-derived operating systems such as Linux and 4.4BSD derivatives — notably because troff macros are used to create man pages, the standard form of documentation on Unix and Unix-like systems.
OpenBSD has replaced groff with mandoc in the base install, since their 4.9 release, as has macOS Ventura.
History
groff is an original implementation written primarily in C++ by James Clark and is modeled after ditroff, including many extensions. The first version, 0.3.1, was released June 1990. The first stable version, 1.04, was announced in November 1991. groff was developed as free software to provide an easily obtained replacement for the standard AT&T troff/nroff package, which at the time was proprietary, and was not always available even on branded UNIX systems. In 1999, Werner Lemberg and Ted Harding took over maintenance of groff.
See also
TeX
Desktop publishing
Notes
References
External links
groff mailing list archive (searchable)
Groff Forum, hosted by Nabble, archiving the groff mailing list into a searchable forum (sadly none of the emails are visible today).
gives background and examples of troff, including the GNU roff implementation.
Home page of mom macros
GNU Project software
groff
Free typesetting software
|
https://en.wikipedia.org/wiki/Method%20of%20matched%20asymptotic%20expansions
|
In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.
Method overview
In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small areas in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers, and as boundary or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain.
An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the "inner solution," and the other is the "outer solution," named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained.
A simple example
Consider the boundary value problem
where is a
|
https://en.wikipedia.org/wiki/Key%20stretching
|
In cryptography, key stretching techniques are used to make a possibly weak key, typically a password or passphrase, more secure against a brute-force attack by increasing the resources (time and possibly space) it takes to test each possible key. Passwords or passphrases created by humans are often short or predictable enough to allow password cracking, and key stretching is intended to make such attacks more difficult by complicating a basic step of trying a single password candidate. Key stretching also improves security in some real-world applications where the key length has been constrained, by mimicking a longer key length from the perspective of a brute-force attacker.
There are several ways to perform key stretching. One way is to apply a cryptographic hash function or a block cipher repeatedly in a loop. For example, in applications where the key is used for a cipher, the key schedule in the cipher may be modified so that it takes a specific length of time to perform. Another way is to use cryptographic hash functions that have large memory requirements – these can be effective in frustrating attacks by memory-bound adversaries.
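A minimal sketch of the "hash in a loop" approach, assuming OpenSSL's SHA256 is available (link with -lcrypto); the iteration count, salt handling and names here are purely illustrative, and a real application should prefer an established construction such as PBKDF2, bcrypt, scrypt or Argon2.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Naive key stretching: hash the salted password, then re-hash the digest
   a large, fixed number of times so that each guess costs the attacker
   the same amount of work. */
static void stretch_key(const char *password, const char *salt,
                        unsigned iterations,
                        unsigned char out[SHA256_DIGEST_LENGTH])
{
    unsigned char buf[SHA256_DIGEST_LENGTH];
    char seed[256];

    snprintf(seed, sizeof seed, "%s%s", salt, password);
    SHA256((const unsigned char *)seed, strlen(seed), buf);
    for (unsigned i = 1; i < iterations; ++i)
        SHA256(buf, sizeof buf, buf);   /* each step depends on the previous one */
    memcpy(out, buf, SHA256_DIGEST_LENGTH);
}

int main(void)
{
    unsigned char key[SHA256_DIGEST_LENGTH];
    stretch_key("correct horse battery staple", "per-user-salt", 100000, key);
    for (size_t i = 0; i < sizeof key; ++i)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}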
Process
Key stretching algorithms depend on an algorithm which receives an input key and then expends considerable effort to generate a stretched cipher (called an enhanced key) mimicking randomness and longer key length. The algorithm must have no known shortcut, so the most efficient way to relate the input and cipher is to repeat the key stretching algorithm itself. This compels brute-force attackers to expend the same effort for each attempt. If this added effort compares to a brute-force key search of all keys with a certain key length, then the input key may be described as stretched by that same length.
Key stretching leaves an attacker with two options:
Attempt possible combinations of the enhanced key, but this is infeasible if the enhanced key is sufficiently long and unpredictable (i.e., the algorithm mimics
|
https://en.wikipedia.org/wiki/Message%20Signaled%20Interrupts
|
Message Signaled Interrupts (MSI) are a method of signaling interrupts, using special in-band messages to replace traditional out-of-band signals on dedicated interrupt lines. While message signaled interrupts are more complex to implement in a device, they have some significant advantages over pin-based out-of-band interrupt signalling, such as improved interrupt handling performance. This is in contrast to traditional interrupt mechanisms, such as the legacy interrupt request (IRQ) system.
Message signaled interrupts are supported in PCI bus since its version 2.2, and in later available PCI Express bus. Some non-PCI architectures also use message signaled interrupts.
Overview
Traditionally, a device has an interrupt line (pin) which it asserts when it wants to signal an interrupt to the host processing environment. This traditional form of interrupt signalling is an out-of-band form of control signalling since it uses a dedicated path to send such control information, separately from the main data path. MSI replaces those dedicated interrupt lines with in-band signalling, by exchanging special messages that indicate interrupts through the main data path. In particular, MSI allows the device to write a small amount of interrupt-describing data to a special memory-mapped I/O address, and the chipset then delivers the corresponding interrupt to a processor.
A common misconception with MSI is that it allows the device to send data to a processor as part of the interrupt. The data that is sent as part of the memory write transaction is used by the chipset to determine which interrupt to trigger on which processor; that data is not available for the device to communicate additional information to the interrupt handler.
As an example, PCI Express does not have separate interrupt pins at all; instead, it uses special in-band messages to allow pin assertion or deassertion to be emulated. Some non-PCI architectures also use MSI; as another example, HP GSC devices do no
|
https://en.wikipedia.org/wiki/Parable%20of%20the%20Workers%20in%20the%20Vineyard
|
The Parable of the Workers in the Vineyard (also called the Parable of the Laborers in the Vineyard or the Parable of the Generous Employer) is a parable of Jesus which appears in chapter 20 of the Gospel of Matthew in the New Testament. It is not included in the other canonical gospels. It has been described as a difficult parable to interpret.
Text
Interpretations
The parable has often been interpreted to mean that even those who are converted late in life earn equal rewards along with those converted early, and also that people who convert early in life need not feel jealous of those later converts. An alternative interpretation identifies the early laborers as Jews, some of whom resent the late-comers (Gentiles) being welcomed as equals in God's Kingdom. Both of these interpretations are discussed in Matthew Henry's 1706 Commentary on the Bible.
An alternative interpretation is that all Christians can be identified with the eleventh-hour workers. Arland J. Hultgren writes: "While interpreting and applying this parable, the question inevitably arises: Who are the eleventh-hour workers in our day? We might want to name them, such as deathbed converts or persons who are typically despised by those who are longtime veterans and more fervent in their religious commitment. But it is best not to narrow the field too quickly. At a deeper level, we are all the eleventh-hour workers; to change the metaphor, we are all honored guests of God in the kingdom. It is not really necessary to decide who the eleventh-hour workers are. The point of the parable—both at the level of Jesus and the level of Matthew's Gospel—is that God saves by grace, not by our worthiness. That applies to all of us."
Some commentators have used the parable to justify the principle of a "living wage", though generally conceding that this is not the main point of the parable. An example is John Ruskin in the 19th century, who quoted the parable in the title of his book Unto This Last. Ruskin did
|
https://en.wikipedia.org/wiki/Balsam%20of%20Peru
|
Balsam of Peru or Peru balsam, also known and marketed by many other names, is a balsam derived from a tree known as Myroxylon balsamum var. pereirae; it is found in El Salvador, where it is an endemic species.
Balsam of Peru is used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties. It has a sweet scent. In some instances, balsam of Peru is listed on the ingredient label of a product under one of its various names, but mandatory labeling conventions may not require it to be listed by name at all.
It can cause allergic reactions, with numerous large surveys identifying it as being in the "top five" allergens most commonly causing patch test reactions. It may cause inflammation, redness, swelling, soreness, itching, and blisters, including allergic contact dermatitis, stomatitis (inflammation and soreness of the mouth or tongue), cheilitis (inflammation, rash, or painful erosion of the lips, oropharyngeal mucosa, or angles of the mouth), pruritus, hand eczema, generalized or resistant plantar dermatitis, rhinitis, and conjunctivitis.
Harvesting and processing
Balsam of Peru is obtained by using rags to soak up the resin after strips of bark are removed from the trunk of Myroxylon balsamum var. pereirae, boiling the rags and letting the balsam sink in water. The balsam is an aromatic dark-brown oily fluid.
Composition
Balsam of Peru contains 25 or so different substances, including cinnamein, cinnamic acid, cinnamyl cinnamate, benzyl benzoate, benzoic acid, and vanillin. It also contains cinnamyl alcohol, cinnamaldehyde, farnesol, and nerolidol. A minority of it, approximately 30–40%, consists of resins or esters of unknown composition.
Uses
Balsam of Peru is used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties.
In some cases, it is listed on the ingredient label of a product by one
|
https://en.wikipedia.org/wiki/Cardinal%20point%20%28optics%29
|
In Gaussian optics, the cardinal points consist of three pairs of points located on the optical axis of a rotationally symmetric, focal, optical system. These are the focal points, the principal points, and the nodal points; there are two of each. For ideal systems, the basic imaging properties such as image size, location, and orientation are completely determined by the locations of the cardinal points; in fact, only four points are necessary: the two focal points and either the principal points or the nodal points. The only ideal system that has been achieved in practice is a plane mirror; however, the cardinal points are widely used to approximate the behavior of real optical systems. Cardinal points provide a way to analytically simplify an optical system with many components, allowing the imaging characteristics of the system to be approximately determined with simple calculations.
Explanation
The cardinal points lie on the optical axis of an optical system. Each point is defined by the effect the optical system has on rays that pass through that point, in the paraxial approximation. The paraxial approximation assumes that rays travel at shallow angles with respect to the optical axis, so that sin θ ≈ θ, tan θ ≈ θ, and cos θ ≈ 1. Aperture effects are ignored: rays that do not pass through the aperture stop of the system are not considered in the discussion below.
Focal points and planes
The front focal point of an optical system, by definition, has the property that any ray that passes through it will emerge from the system parallel to the optical axis. The rear (or back) focal point of the system has the reverse property: rays that enter the system parallel to the optical axis are focused such that they pass through the rear focal point.
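As a hypothetical worked example (not part of the article), the sketch below locates the focal points and principal planes of a paraxial system from its ray transfer (ABCD) matrix. The two-thin-lens system, the sign conventions (distances positive to the right, equal refractive index on both sides so the nodal points coincide with the principal points), and all variable names are assumptions of the sketch:

```python
# Ray vectors are [height, angle]; the system matrix maps the input plane
# (first surface) to the output plane (last surface).

def mul(m, n):
    """Product of two 2x2 matrices, m @ n."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def propagation(d):
    return [[1.0, d], [0.0, 1.0]]

def cardinal_points(system):
    """Cardinal-point locations from a system matrix [[A, B], [C, D]]."""
    (A, B), (C, D) = system
    f = -1.0 / C              # effective focal length
    rear_focal = -A / C       # rear focal point, measured from the last surface
    front_focal = D / C       # front focal point, measured from the first surface
    h_rear = (1.0 - A) / C    # rear principal plane H', from the last surface
    h_front = (D - 1.0) / C   # front principal plane H, from the first surface
    return f, front_focal, rear_focal, h_front, h_rear

# Two thin lenses: f1 = 100 mm and f2 = 50 mm, separated by 25 mm.
system = mul(thin_lens(50.0), mul(propagation(25.0), thin_lens(100.0)))
print(cardinal_points(system))   # effective focal length is 40 mm for this combination
```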
The front and rear (or back) focal planes are defined as the planes, perpendicular to the optic axis, which pass through the front and rear focal points. An object infinitely far from the optical system forms an image at the rear focal plane. For a
|
https://en.wikipedia.org/wiki/Sand%20bath
|
A sand bath is a common piece of laboratory equipment made from a container filled with heated sand. It is used to evenly heat another container, most often during a chemical reaction.
A sand bath is most commonly used in conjunction with a hot plate or heating mantle. A beaker is filled with sand or metal pellets (called shot) and is placed on the plate or mantle. The reaction vessel is then partially covered by sand or pellets. The sand or shot then conducts the heat from the plate to all sides of the reaction vessel.
This technique allows a reaction vessel to be heated throughout with minimal stirring, as opposed to heating the bottom of the vessel and waiting for convection to heat the remainder, cutting down on both the duration of the reaction and the possibility of side reactions that may occur at higher temperatures.
A variation on this theme is the water bath in which the sand is replaced with water. It can be used to keep a reaction vessel at the temperature of boiling water until all water is evaporated (see Standard enthalpy change of vaporization).
Sand baths are one of the oldest known pieces of laboratory equipment, having been used by the alchemists. In Arabic alchemy, a sand bath was known as a qadr. In Latin alchemy, a sand bath was called balneum siccum, balneum cineritium, or balneum arenosum.
See also
Heat bath
Water bath
Oil bath
Notes
References
External links
https://web.archive.org/web/20110604144037/http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=turn&entity=HistSciTech000900240229&isize=L
Laboratory equipment
Thermodynamics
Alchemical tools
|
https://en.wikipedia.org/wiki/Bookmark%20manager
|
A bookmark manager is any software program or feature designed to store, organize, and display web bookmarks. The bookmarks feature included in each major web browser is a rudimentary bookmark manager. More capable bookmark managers are available online as web apps, mobile apps, or browser extensions, and may display bookmarks as text links or graphical tiles (often depicting icons). Social bookmarking websites are bookmark managers. Start page browser extensions, new tab page browser extensions, and some browser start pages also have bookmark presentation and organization features, which are typically tile-based. Some more general programs, such as certain note-taking apps, have bookmark management functionality built in.
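As a purely illustrative sketch of the kind of record such software keeps (no particular product's data model is implied, and all field names are assumptions), a minimal bookmark store might associate each URL with a title and a set of tags:

```python
from dataclasses import dataclass, field

@dataclass
class Bookmark:
    url: str
    title: str
    tags: set[str] = field(default_factory=set)

class BookmarkStore:
    """Store bookmarks and retrieve them by tag."""
    def __init__(self):
        self._bookmarks: list[Bookmark] = []

    def add(self, url: str, title: str, *tags: str) -> None:
        self._bookmarks.append(Bookmark(url, title, set(tags)))

    def by_tag(self, tag: str) -> list[Bookmark]:
        return [b for b in self._bookmarks if tag in b.tags]

store = BookmarkStore()
store.add("https://example.org", "Example", "reference")
print(store.by_tag("reference"))
```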
See also
Bookmark destinations
Deep links
Home pages
Types of bookmark management
Enterprise bookmarking
Comparison of enterprise bookmarking platforms
Social bookmarking
List of social bookmarking websites
Other weblink-based systems
Search engine
Comparison of search engines with social bookmarking systems
Search engine results page
Web directory
Lists of websites
References
External links
Software engineering
|
https://en.wikipedia.org/wiki/Inflatable%20space%20structures
|
Inflatable space structures are structures which use pressurized air to maintain shape and rigidity. The technological approach has been employed since the early days of the space program, in satellites such as Echo and in the impact attenuation system that enabled the successful landing of the Pathfinder lander and rover on Mars in 1997. Inflatable structures are also candidates for large structures in space, given their low weight and hence easy transportability.
Application
Inflatable space structures use pressurized air or gas to maintain shape and rigidity. Notable examples of terrestrial inflatable structures include inflatable boats, and some military tents. The airships of the twentieth century are examples of the concept applied in the aviation environment.
NASA has investigated inflatable, deployable structures since the early 1950s. Concepts include inflatable satellites, booms, and antennas. Inflatable heatshields, decelerators, and airbags can be used for entry, descent and landing applications. Inflatable habitats, airlocks, and space stations are possible for in-space living spaces and surface exploration missions.
The Echo 1 satellite, launched in 1960, was a large inflatable satellite with a diameter of 30 meters, coated with a reflective material that allowed radio signals to be bounced off its surface. The satellite was sent to orbit in a flat-folded configuration and inflated once in orbit. The airbags used during the Mars Pathfinder descent and landing in 1997 are an example of the use of an inflatable system for impact attenuation.
Space Solar Power (SSP) solutions employing inflatable structures have been designed and qualified for space by NASA engineers.
NASA is testing a deployable heat shield in space as a secondary payload on the launch that will deliver NASA's JPSS-2 satellite in late 2022. The Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID) is designed to demonstrate aerobraking and re-entry from 18,000 miles per
|
https://en.wikipedia.org/wiki/Eddie%20Kidd%20Jump%20Challenge
|
Eddie Kidd Jump Challenge is a stunt bike video game for the Acorn Electron, BBC Micro, Commodore 64, MSX and ZX Spectrum, first released in 1984 and licensed by British stunt performer Eddie Kidd.
Gameplay
The player takes the role of Eddie Kidd and must make a series of jumps. Like the real Kidd, the player must start by jumping a BMX over oil barrels and work up to jumping cars on a motorbike.
The player starts by riding away from the jump to get a long enough run-up. They must then set the correct speed, selecting the gears correctly, to hit the ramp fast enough to clear the obstacles but not so fast as to miss the landing ramp. While in the air, the player can lean forward or back to land correctly.
Development and release
The game was first released in late 1984 for the ZX Spectrum published by Software Communications' Martech label. This version was ported to the MSX in 1985. A similar version was released for the BBC Micro and Acorn Electron and a modified version of the game (with a much more zoomed in camera angle and no on screen display) released for the Commodore 64, also in 1985.
The game cassette came with a sticker and numbered competition entry card which could be used to win prizes including BMX bikes, computers and TVs.
The game was reissued at a budget price as part of Mastertronic's Ricochet label in 1987.
Reception
Crash gave the game an overall score of 56%, concluding it is "a good simulation, but as a game not over exciting and not particularly addictive". The difficulty curve was criticised: the early BMX-based levels, which cannot be skipped, were described as "a doddle", and once the skill has been mastered the game holds no challenge. Clare Edgeley of Sinclair User agreed that having to replay the BMX section after failing the more advanced jumps "seems a waste of time" and gave a similar score of 6/10.
Computer and Video Games gave scores between 7/10 and 8/10, particularly praising the zoomed in graphics and improved sound
|
https://en.wikipedia.org/wiki/Stripline
|
In electronics, stripline is a transverse electromagnetic (TEM) transmission line medium invented by Robert M. Barrett of the Air Force Cambridge Research Centre in the 1950s. Stripline is the earliest form of planar transmission line.
Description
A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line. As shown in the diagram, the central conductor need not be equally spaced between the ground planes. In the general case, the dielectric material may be different above and below the central conductor.
To prevent the propagation of unwanted modes, the two ground planes must be shorted together. This is commonly achieved by a row of vias running parallel to the strip on each side.
Like coaxial cable, stripline is non-dispersive, and has no cutoff frequency. Good isolation between adjacent traces can be achieved more easily than with microstrip.
Stripline provides enhanced noise immunity and suppression of radiated RF emissions, at the expense of slower propagation speeds when compared to microstrip lines. The effective permittivity of a stripline equals the relative permittivity of the dielectric substrate, because the wave propagates only in the substrate. Hence striplines have a higher effective permittivity than microstrip lines, which in turn reduces the wave propagation speed (see also velocity factor) according to v = c/√εr, where c is the speed of light in vacuum and εr is the relative permittivity of the substrate.
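As a rough illustration, the sketch below evaluates a standard closed-form approximation for the characteristic impedance of a symmetric stripline with a zero-thickness strip, together with the propagation velocity implied by the substrate permittivity. The variable names (w for strip width, b for ground-plane spacing) and the FR-4 example values are assumptions of the sketch, and accurate design work would normally use an electromagnetic field solver:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def stripline_z0(w: float, b: float, er: float) -> float:
    """Approximate characteristic impedance (ohms) for strip width w and
    ground-plane spacing b (same units), relative permittivity er."""
    ratio = w / b
    # Effective width correction for narrow strips (w/b < 0.35)
    we_over_b = ratio if ratio >= 0.35 else ratio - (0.35 - ratio) ** 2
    return 30.0 * math.pi / math.sqrt(er) / (we_over_b + 0.441)

def phase_velocity(er: float) -> float:
    """Propagation speed in the substrate, v = c / sqrt(er)."""
    return C0 / math.sqrt(er)

# Example: 0.25 mm wide strip between ground planes 0.5 mm apart in FR-4 (er ~ 4.4)
print(stripline_z0(0.25, 0.5, 4.4), phase_velocity(4.4))
```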
History
Stripline, now used as a generic term, was originally a proprietary brand of Airborne Instruments Laboratory Inc. (AIL). The version as produced by AIL was essentially air insulated (air stripline) with just a thin layer of dielectric material - just enough to support the conducting strip. The conductor was printed on both sides of t
|
https://en.wikipedia.org/wiki/Encapsulation%20%28networking%29
|
Encapsulation is the computer-networking process of concatenating layer-specific headers or trailers with a service data unit (i.e. a payload) for transmitting information over computer networks. Deencapsulation (or de-encapsulation) is the reverse computer-networking process for receiving information; it removes from the protocol data unit (PDU) a previously concatenated header or trailer that an underlying communications layer transmitted.
Encapsulation and deencapsulation allow the design of modular communication protocols, so as to logically separate the function of each communications layer and abstract the structure of the communicated information from the other communications layers. These two processes are common features of computer-networking models and protocol suites, such as the OSI model and the Internet protocol suite. However, encapsulation and deencapsulation can also be used for malicious purposes, as in some tunneling protocols.
Each layer contributes a different function: the physical layer is responsible for physical transmission of the data, link-layer encapsulation allows local area networking, IP provides global addressing of individual computers, and TCP selects the process or application (i.e., the TCP or UDP port) that specifies the service, such as a Web or TFTP server.
For example, in the IP suite, the contents of a web page are encapsulated with an HTTP header, then by a TCP header, an IP header, and, finally, by a frame header and trailer. The frame is forwarded to the destination node as a stream of bits, where it is decapsulated into the respective PDUs and interpreted at each layer by the receiving node.
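A toy sketch of that layering is shown below: each layer prepends a short tagged placeholder header (not a real protocol header) on the way down, and the receiver strips the headers in reverse order on the way up. The layer names and header format are illustrative only; a real frame would also carry a trailer and use binary header fields:

```python
LAYERS = ["HTTP", "TCP", "IP", "ETH"]   # upper layer first

def encapsulate(payload: bytes) -> bytes:
    pdu = payload
    for layer in LAYERS:                  # application header first, frame header outermost
        pdu = f"[{layer}]".encode() + pdu # each layer wraps the SDU handed down to it
    return pdu

def deencapsulate(frame: bytes) -> bytes:
    pdu = frame
    for layer in reversed(LAYERS):        # receiving node strips headers from the outside in
        header = f"[{layer}]".encode()
        assert pdu.startswith(header), f"unexpected {layer} header"
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate(b"GET / HTTP/1.1\r\n\r\n")
assert deencapsulate(frame) == b"GET / HTTP/1.1\r\n\r\n"
```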
The result of encapsulation is that each lower layer provides a service to the layer or layers above it, while at the same time each layer communicates with its corresponding layer on the receiving node. These are known as adjacent-layer interaction and same-layer interaction, respectively.
In discussions of encapsulation, the more abstract layer is often called the upper-laye
|