https://en.wikipedia.org/wiki/Thrifty%20phenotype
|
Thrifty phenotype refers to the correlation between low birth weight of neonates and the increased risk of developing metabolic syndromes later in life, including type 2 diabetes and cardiovascular diseases. Although early life undernutrition is thought to be the key driving factor in the hypothesis, other environmental factors have been explored for their role in susceptibility, such as physical inactivity. Genes may also play a role in susceptibility to these diseases, as they may predispose individuals to factors that lead to increased disease risk.
Historical overview
The term thrifty phenotype was coined by Charles Nicholas Hales and David Barker in a study published in 1992. In their study, the authors reviewed the literature available at the time and addressed five central questions regarding the role of different factors in type 2 diabetes, on which they based their hypothesis. These questions included the following:
The role of beta cell deficiency in type 2 diabetes.
The extent to which beta cell deficiency contributes to glucose intolerance.
The role of major nutritional elements in fetal growth.
The role of abnormal amino acid supply in growth limited neonates.
The role of malnutrition in irreversibly defective beta cell growth.
From the review of the existing literature, they posited that poor nutritional status in fetal and early neonatal stages could hamper the development and proper functioning of the pancreatic beta cells by impacting structural features of islet anatomy, which could consequently make the individual more susceptible to the development of type 2 diabetes in later life. However, they did not exclude other causal factors such as obesity, ageing and physical inactivity as determining factors of type 2 diabetes.
In a later study, Barker et al. analyzed living patient data from Hertfordshire, UK, and found that men in their sixties having low birthweight (2.95 kg or less) were 10 times more likely to develop syndrome X (type 2 diabetes,
|
https://en.wikipedia.org/wiki/Ballyhoo%20%28video%20game%29
|
Ballyhoo is an interactive fiction video game designed by Jeff O'Neill and published by Infocom in 1985. The circus-themed game was released for ten systems, including DOS, Atari ST, and Commodore 64. Ballyhoo was labeled as "Standard" difficulty. It is Infocom's nineteenth game.
Plot
The player's character is bedazzled by the spectacle of the circus and the mystery of the performer's life. After attending a show of Tomas Munrab's "The Travelling Circus That Time Forgot", the player loiters near the tents instead of rushing through the exit. Maybe some clowns will practice a new act, or perhaps at least one of the trapeze artists will trip...
Instead, the player overhears a strange conversation. The circus' owner has hired a drunken, inept detective to find his daughter Chelsea, who has been kidnapped. Munrab is convinced that it was an outside job; surely his loyal employees would never betray him like this!
As the player begins to investigate the abduction, it soon becomes clear that the circus workers don't appreciate the intrusion. Their reactions range from indifference to hostility to attempted murder. In order to unravel the mystery, the player engages in a series of actions straight out of a circus fan's dream: dressing up as a clown, walking the high wire, and taming lions.
Release
Ballyhoo included the following physical items in the package:
An "Official Souvenir Program" from The Traveling Circus That Time Forgot describing each of the featured acts and listing common circus slang
A ticket to the circus
A toy balloon imprinted with the circus' name and logo (blue was the most common color, although a few orange or black ones were also shipped)
A trade card for "Dr. Nostrum's Extract", a fictitious patent medicine hailed as a "wondrous curative" containing 19% alcohol
Reception
Compute!'s Gazette in 1986 called Ballyhoo "richly evocative, often exasperating, and very clever". The magazine approved of the splendid feelies and "surprisingly flexible"
|
https://en.wikipedia.org/wiki/Micropolis%20Corporation
|
Micropolis Corporation (styled as MICROPΩLIS) was a disk drive company located in Chatsworth, California and founded in 1976. Micropolis initially manufactured high capacity (for the time) hard-sectored 5.25-inch floppy drives and controllers, later manufacturing hard drives using SCSI and ESDI interfaces.
History
Micropolis's first advance was to take the existing 48 tpi (tracks per inch) standard created by Shugart Associates, and double both the track density and the track recording density to get four times the total storage on a 5.25-inch floppy in the "MetaFloppy" series with quad density (Drives :1054, :1053, and :1043) around 1980. Micropolis pioneered 100 tpi density because of the attraction of an exact 100 tracks to the inch. "Micropolis-compatible" 5.25-inch 77-track diskette drives were also manufactured by Tandon (TM100-3M and TM100-4M). Such drives were used in a number of computers, such as Vector Graphic S-100 bus machines, the Durango F-85, and a few Commodore disk drives (8050, 8250, 8250LP and SFD-1001).
Micropolis later switched to 96 tpi when Shugart went to the 96 tpi standard, based on exact doubling of the 48 tpi standard. This allowed for backwards compatibility for reading by double stepping to read 48 tpi disks.
Micropolis entered the hard disk business with an 8-inch hard drive, following Seagate's lead (Seagate was the next company Alan Shugart founded after Shugart Associates was sold). They later followed with a 5.25-inch hard drive. Micropolis started to manufacture drives in Singapore in 1986. Manufacturing of 3.5-inch hard disks started in 1991.
Micropolis was one of the many hard drive manufacturers in the 1980s and 1990s that went out of business, merged, or closed their hard drive divisions as capacities increased, demand shifted, and profits became hard to find. While Micropolis was able to hold on longer than many of the others, it ultimately sold its hard drive business to Singapore Technology (ST) in
|
https://en.wikipedia.org/wiki/System%20Management%20BIOS
|
In computing, the System Management BIOS (SMBIOS) specification defines data structures (and access methods) that can be used to read management information produced by the BIOS of a computer. This eliminates the need for the operating system to probe hardware directly to discover what devices are present in the computer. The SMBIOS specification is produced by the Distributed Management Task Force (DMTF), a non-profit standards development organization. The DMTF estimates that two billion client and server systems implement SMBIOS.
The DMTF released version 3.6.0 of the specification on June 20, 2022.
SMBIOS was originally known as Desktop Management BIOS (DMIBIOS), since it interacted with the Desktop Management Interface (DMI).
History
Version 1 of the Desktop Management BIOS (DMIBIOS) specification was produced by Phoenix Technologies in or before 1996.
Version 2.0 of the Desktop Management BIOS specification was released on March 6, 1996 by American Megatrends (AMI), Award Software, Dell, Intel, Phoenix Technologies, and SystemSoft Corporation. It introduced 16-bit plug-and-play functions used to access the structures from Windows 95.
The last version to be published directly by vendors was 2.3 on August 12, 1998. The authors were American Megatrends, Award Software, Compaq, Dell, Hewlett-Packard, Intel, International Business Machines (IBM), Phoenix Technologies, and SystemSoft Corporation.
Circa 1999, the Distributed Management Task Force (DMTF) took ownership of the specification. The first version published by the DMTF was 2.3.1 on March 16, 1999. At approximately the same time Microsoft started to require that OEMs and BIOS vendors support the interface/data-set in order to have Microsoft certification.
Version 3.0.0, introduced in February 2015, added a 64-bit entry point, which can coexist with the previously defined 32-bit entry point.
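As a concrete illustration of that entry point, the sketch below reads the SMBIOS entry point that the Linux kernel exposes under sysfs and reports the specification version. This is an assumption-laden example, not part of the specification text: it presumes a Linux system (the file is usually root-only), and the field offsets follow the published "_SM_" (32-bit) and "_SM3_" (64-bit) entry-point layouts.

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

// Read the SMBIOS entry point exposed under /sys/firmware/dmi/tables/
// (usually root-only) and print the specification version. "_SM_" marks the
// 32-bit entry point, "_SM3_" the 64-bit one introduced with SMBIOS 3.0.
int main() {
    std::ifstream ep("/sys/firmware/dmi/tables/smbios_entry_point",
                     std::ios::binary);
    if (!ep) {
        std::fprintf(stderr, "entry point not readable (missing or not root)\n");
        return 1;
    }
    std::vector<unsigned char> buf((std::istreambuf_iterator<char>(ep)),
                                   std::istreambuf_iterator<char>());
    if (buf.size() >= 24 && std::memcmp(buf.data(), "_SM3_", 5) == 0) {
        // 64-bit entry point: version at offsets 7/8, table size at 12, address at 16.
        uint32_t table_size;
        uint64_t table_addr;
        std::memcpy(&table_size, &buf[12], sizeof table_size);
        std::memcpy(&table_addr, &buf[16], sizeof table_addr);
        std::printf("SMBIOS %u.%u (64-bit entry point), table %u bytes at 0x%llx\n",
                    (unsigned)buf[7], (unsigned)buf[8], (unsigned)table_size,
                    (unsigned long long)table_addr);
    } else if (buf.size() >= 31 && std::memcmp(buf.data(), "_SM_", 4) == 0) {
        // 32-bit entry point: major/minor version at offsets 6 and 7.
        std::printf("SMBIOS %u.%u (32-bit entry point)\n",
                    (unsigned)buf[6], (unsigned)buf[7]);
    } else {
        std::fprintf(stderr, "unrecognized entry point anchor\n");
        return 1;
    }
}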
Version 3.4.0 was released in August 2020.
Version 3.5.0 was released in September 2021.
Version 3.6.0
|
https://en.wikipedia.org/wiki/Parent%E2%80%93offspring%20conflict
|
Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while the selected offspring's chance of surviving increases.
POC occurs in sexually reproducing species and is based on a genetic conflict: Parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intended to provide even at their siblings' disadvantage.
However, POC is limited by the close genetic relationship between parent and offspring: If an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs in siblings are too high, such a gene might be selected against despite the benefit to the offspring.
The problem of specifying how an individual is expected to weigh a relative against itself has been examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient multiplied by the genetic relatedness of the recipient to the performer is greater than the cost to the performer of a social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger under half siblings (e.g., unrelated males father a female's successive
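Expressed compactly (a standard restatement of the rule rather than a quotation from Hamilton or Trivers), altruism toward a relative is favoured when
$$ rB > C, $$
where $B$ is the benefit to the recipient, $C$ the cost to the actor and $r$ their coefficient of relatedness. A parent is related to every offspring by $r = \tfrac{1}{2}$ and so values them equally, favouring an end to investment once the cost $C$ to future siblings exceeds the benefit $B$ to the current offspring; a full-sib offspring devalues that cost by $r = \tfrac{1}{2}$ and favours continued investment while $B > \tfrac{1}{2}C$. Conflict is therefore expected over the interval $B < C < 2B$, and it widens when siblings are only half-siblings ($r = \tfrac{1}{4}$).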
|
https://en.wikipedia.org/wiki/SCIgen
|
SCIgen is a paper generator that uses context-free grammar to randomly generate nonsense in the form of computer science research papers. Its original data source was a collection of computer science papers downloaded from CiteSeer. All elements of the papers are formed, including graphs, diagrams, and citations. Created by scientists at the Massachusetts Institute of Technology, its stated aim is "to maximize amusement, rather than coherence." Originally created in 2005 to expose the lack of scrutiny of submissions to conferences, the generator subsequently became used, primarily by Chinese academics, to create large numbers of fraudulent conference submissions, leading to the retraction of 122 SCIgen generated papers and the creation of detection software to combat its use.
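The generation mechanism itself is simple to sketch: starting from a top-level non-terminal, productions are chosen at random until only terminal words remain. The toy grammar below is invented for illustration and is not SCIgen's actual rule set; spacing around punctuation is deliberately crude.

#include <cstdlib>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Toy context-free grammar: non-terminals are UPPERCASE and are expanded
// recursively by picking a random production; anything else is a terminal.
std::map<std::string, std::vector<std::string>> grammar = {
    {"SENTENCE", {"We present TOOL , a ADJ framework for TOPIC .",
                  "TOOL demonstrates that TOPIC can be made ADJ ."}},
    {"TOOL",     {"Flobber", "Quark", "Zeta"}},
    {"ADJ",      {"scalable", "probabilistic", "metamorphic"}},
    {"TOPIC",    {"cache coherence", "the Turing machine", "e-commerce"}},
};

std::string expand(const std::string& symbol) {
    auto it = grammar.find(symbol);
    if (it == grammar.end()) return symbol;              // terminal word
    const std::vector<std::string>& rules = it->second;
    const std::string& rule = rules[std::rand() % rules.size()];
    std::istringstream tokens(rule);
    std::string token, out;
    while (tokens >> token) {                            // expand each token
        if (!out.empty()) out += ' ';
        out += expand(token);
    }
    return out;
}

int main() {
    std::srand(42);
    for (int i = 0; i < 3; ++i)
        std::cout << expand("SENTENCE") << "\n";
}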
Sample output
Opening abstract of Rooter: A Methodology for the Typical Unification of Access Points and Redundancy:
Prominent results
In 2005, a paper generated by SCIgen, Rooter: A Methodology for the Typical Unification of Access Points and Redundancy, was accepted as a non-reviewed paper to the 2005 World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) and the authors were invited to speak. The authors of SCIgen described their hoax on their website, and it soon received great publicity when picked up by Slashdot. WMSCI withdrew their invitation, but the SCIgen team went anyway, renting space in the hotel separately from the conference and delivering a series of randomly generated talks on their own "track". The organizer of these WMSCI conferences is Professor Nagib Callaos. From 2000 until 2005, the WMSCI was also sponsored by the Institute of Electrical and Electronics Engineers. The IEEE stopped granting sponsorship to Callaos from 2006 to 2008.
Submitting the paper was a deliberate attempt to embarrass WMSCI, which the authors claim accepts low-quality papers and sends unsolicited requests for submissions in bulk to academics. As the SCIgen website states:
Computi
|
https://en.wikipedia.org/wiki/Visual%20Prolog
|
Visual Prolog, previously known as PDC Prolog and Turbo Prolog, is a strongly typed object-oriented extension of Prolog. As Turbo Prolog, it was marketed by Borland but it is now developed and marketed by the Danish firm PDC that originally created it. Visual Prolog can build Microsoft Windows GUI-applications, console applications, DLLs (dynamic link libraries), and CGI-programs. It can also link to COM components and to databases by means of ODBC.
Visual Prolog contains a compiler which generates x86 and x86-64 machine code. Unlike standard Prolog, programs written in Visual Prolog are statically typed. This allows some errors to be caught at compile-time instead of run-time.
History
Hanoi example
In the Towers of Hanoi example, the Prolog inference engine figures out how to move a stack of any number of progressively smaller disks, one at a time, from the left pole to the right pole in the described way, by means of a center as transit, so that there's never a bigger disk on top of a smaller disk. The predicate hanoi takes an integer indicating the number of disks as an initial argument.
class hanoi
    predicates
        hanoi : (unsigned N).
end class hanoi

implement hanoi
    domains
        pole = left; center; right.
    clauses
        hanoi(N) :- move(N, left, center, right).

    class predicates
        move : (unsigned N, pole A, pole B, pole C).
    clauses
        move(0, _, _, _) :- !.
        move(N, A, B, C) :-
            move(N-1, A, C, B),
            stdio::writef("move a disc from % pole to the % pole\n", A, C),
            move(N-1, B, A, C).
end implement hanoi

goal
    console::init(),
    hanoi::hanoi(4).
Reception
Bruce F. Webster of BYTE praised Turbo Prolog in September 1986, stating that it was the first Borland product to excite him as much as Turbo Pascal did. He liked the user interface and low price, and reported that two BYU professors stated that it was superior to the Prolog they used at the university. While q
|
https://en.wikipedia.org/wiki/System%20equivalence
|
In the systems sciences, system equivalence is the behavior of a parameter or component of a system that is similar to the behavior of a parameter or component of a different system. Similarity means that, mathematically, the parameters and components will be indistinguishable from each other. Equivalence can be very useful in understanding how complex systems work.
Overview
Examples of equivalent systems are first- and second-order (in the independent variable) translational, electrical, torsional, fluidic, and caloric systems.
Equivalent systems can be used to change large and expensive mechanical, thermal, and fluid systems into a simple, cheaper electrical system. Then the electrical system can be analyzed to validate that the system dynamics will work as designed. This is a preliminary inexpensive way for engineers to test that their complex system performs the way they are expecting.
This testing is necessary when designing new complex systems that have many components. Businesses do not want to spend millions of dollars on a system that does not perform the way that they were expecting. Using the equivalent system technique, engineers can verify and prove to the business that the system will work. This lowers the risk factor that the business is taking on the project.
The following is a chart of equivalent variables for the different types of systems
{| class="wikitable"
|-
! System type
! Flow variable
! Effort variable
! Compliance
! Inductance
! Resistance
|-
| Mechanical
| dx/dt
| F = force
| spring (k)
| mass (m)
| damper (c)
|-
| Electrical
| i = current
| V = voltage
| capacitance (C)
| inductance (L)
| resistance (R)
|-
| Thermal
| qh = heat flow rate
| ∆T = change in temperature
| object (C)
| inductance (L)
| conduction and convection (R)
|-
| Fluid
| qm = mass flow rate,
qv = volume flow rate
| p = pressure, h = height
| tank (C)
| mass (m)
| valve or orifice (R)
|}
Flow variable: moves through the system
Effort variable: puts the system into action
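As an illustration of the table (using the standard mass-spring-damper and series RLC analogy rather than a specific example from the text), the force-driven mechanical system and the voltage-driven electrical circuit obey equations of the same form:
$$ m\frac{d^2x}{dt^2} + c\frac{dx}{dt} + kx = F(t) \qquad\longleftrightarrow\qquad L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{q}{C} = V(t), $$
so measuring the response of the cheap electrical circuit predicts the behaviour of the expensive mechanical system once the correspondences $m \leftrightarrow L$, $c \leftrightarrow R$, $k \leftrightarrow 1/C$, $F \leftrightarrow V$ and $x \leftrightarrow q$ are identified.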
|
https://en.wikipedia.org/wiki/Acid2
|
Acid2 is a webpage that tests web browsers' functionality in displaying aspects of HTML markup, CSS 2.1 styling, PNG images, and data URIs. The test page was released on 13 April 2005 by the Web Standards Project. The Acid2 test page will be displayed correctly in any application that follows the World Wide Web Consortium and Internet Engineering Task Force specifications for these technologies. These specifications are known as web standards because they describe how technologies used on the web are expected to function.
Acid2 tests for rendering flaws in web browsers and other applications that render HTML. Named after the acid test for gold, it was developed in the spirit of Acid1, a relatively narrow test of compliance with the Cascading Style Sheets 1.0 (CSS1) standard. As with Acid1, an application passes the test if the way it displays the test page matches a reference image.
Acid2 was designed with Microsoft Internet Explorer particularly in mind. The creators of Acid2 were dismayed that Internet Explorer did not follow web standards. It was prone to display web pages differently from other browsers, causing web developers to spend time tweaking their web pages. Acid2 challenged Microsoft to make Internet Explorer comply with web standards. On 31 October 2005, Safari 2.0.2 became the first browser to pass Acid2. Opera, Konqueror, Firefox, and others followed. With the release of Internet Explorer 8 on 19 March 2009, the latest versions of all major desktop web browsers now pass the test. Acid2 was followed by Acid3.
History
Acid2 was first proposed by Håkon Wium Lie, chief technical officer of Opera Software and creator of the widely used Cascading Style Sheets web standard. In a 16 March 2005 article on CNET, Lie expressed dismay that Microsoft Internet Explorer did not properly support web standards and hence was not completely interoperable with other browsers. He announced that Acid2 would be a challenge to Microsoft to design Internet Explorer 7, the
|
https://en.wikipedia.org/wiki/Degree%20of%20parallelism
|
The degree of parallelism (DOP) is a metric which indicates how many operations can be or are being simultaneously executed by a computer. It is used as an indicator of the complexity of algorithms, and is especially useful for describing the performance of parallel programs and multi-processor systems.
A program running on a parallel computer may utilize different numbers of processors at different times. For each time period, the number of processors used to execute a program is defined as the degree of parallelism. The plot of the DOP as a function of time for a given program is called the parallelism profile.
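A summary statistic often read off the parallelism profile is the average parallelism; with $t_i$ denoting the total time the program spends executing at a DOP of exactly $i$ (notation introduced here for illustration, not used in the excerpt), it is
$$ \bar{P} = \frac{\sum_i i\,t_i}{\sum_i t_i}. $$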
See also
Optical Multi-Tree with Shuffle Exchange
References
Instruction processing
Parallel computing
|
https://en.wikipedia.org/wiki/Convergent%20series
|
In mathematics, a series is the sum of the terms of an infinite sequence of numbers. More precisely, an infinite sequence $(a_1, a_2, a_3, \ldots)$ defines a series $S$ that is denoted
$$ S = a_1 + a_2 + a_3 + \cdots = \sum_{k=1}^{\infty} a_k. $$
The $n$th partial sum $S_n$ is the sum of the first $n$ terms of the sequence; that is,
$$ S_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^{n} a_k. $$
A series is convergent (or converges) if the sequence $(S_1, S_2, S_3, \ldots)$ of its partial sums tends to a limit; that means that, when adding one $a_k$ after the other in the order given by the indices, one gets partial sums that become closer and closer to a given number. More precisely, a series converges if there exists a number $\ell$ such that for every arbitrarily small positive number $\varepsilon$, there is a (sufficiently large) integer $N$ such that for all $n \geq N$,
$$ |S_n - \ell| < \varepsilon. $$
If the series is convergent, the (necessarily unique) number $\ell$ is called the sum of the series.
The same notation
$$ \sum_{k=1}^{\infty} a_k $$
is used for the series, and, if it is convergent, for its sum. This convention is similar to that which is used for addition: $a + b$ denotes the operation of adding $a$ and $b$ as well as the result of this addition, which is called the sum of $a$ and $b$.
Any series that is not convergent is said to be divergent or to diverge.
Examples of convergent and divergent series
The reciprocals of the positive integers produce a divergent series (harmonic series):
$$ \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots \to \infty. $$
Alternating the signs of the reciprocals of positive integers produces a convergent series (alternating harmonic series):
$$ \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2. $$
The reciprocals of prime numbers produce a divergent series (so the set of primes is "large"; see divergence of the sum of the reciprocals of the primes):
$$ \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots \to \infty. $$
The reciprocals of triangular numbers produce a convergent series:
$$ \frac{1}{1} + \frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15} + \cdots = 2. $$
The reciprocals of factorials produce a convergent series (see e):
$$ \frac{1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \cdots = e. $$
The reciprocals of square numbers produce a convergent series (the Basel problem):
$$ \frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}. $$
The reciprocals of powers of 2 produce a convergent series (so the set of powers of 2 is "small"):
$$ \frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2. $$
The reciprocals of powers of any n > 1 produce a convergent series:
$$ \frac{1}{1} + \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} + \cdots = \frac{n}{n-1}. $$
Alternating the signs of reciprocals of powers of 2 also produces a convergent series:
$$ \frac{1}{1} - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} + \cdots = \frac{2}{3}. $$
|
https://en.wikipedia.org/wiki/Serial%20analysis%20of%20gene%20expression
|
Serial Analysis of Gene Expression (SAGE) is a transcriptomic technique used by molecular biologists to produce a snapshot of the messenger RNA population in a sample of interest in the form of small tags that correspond to fragments of those transcripts. Several variants have been developed since, most notably a more robust version, LongSAGE, RL-SAGE and the most recent SuperSAGE. Many of these have improved the technique with the capture of longer tags, enabling more confident identification of a source gene.
Overview
Briefly, SAGE experiments proceed as follows:
The mRNA of an input sample (e.g. a tumour) is isolated and a reverse transcriptase and biotinylated primers are used to synthesize cDNA from mRNA.
The cDNA is bound to Streptavidin beads via interaction with the biotin attached to the primers, and is then cleaved using a restriction endonuclease called an anchoring enzyme (AE). The location of the cleavage site and thus the length of the remaining cDNA bound to the bead will vary for each individual cDNA (mRNA).
The cleaved cDNA downstream from the cleavage site is then discarded, and the remaining immobile cDNA fragments upstream from cleavage sites are divided in half and exposed to one of two adaptor oligonucleotides (A or B) containing several components in the following order upstream from the attachment site: 1) Sticky ends with the AE cut site to allow for attachment to cleaved cDNA; 2) A recognition site for a restriction endonuclease known as the tagging enzyme (TE), which cuts about 15 nucleotides downstream of its recognition site (within the original cDNA/mRNA sequence); 3) A short primer sequence unique to either adaptor A or B, which will later be used for further amplification via PCR.
After adaptor ligation, cDNA are cleaved using TE to remove them from the beads, leaving only a short "tag" of about 11 nucleotides of original cDNA (15 nucleotides minus the 4 corresponding to the AE recognition site).
The cleaved cDNA tags are th
|
https://en.wikipedia.org/wiki/Trustworthy%20computing
|
The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002.
History
Until 1995, there were restrictions on commercial traffic over the Internet.
On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives assigning "...the Internet this highest level of importance...", but Microsoft's Windows 95 was released without a web browser, as Microsoft had not yet developed one. The success of the web had caught them by surprise, but by mid-1995 they were testing their own web server, and on August 24, 1995, launched a major online service, MSN.
The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure and produced an important report in 1999, "Trust in Cyberspace". This report reviews the cost of un-trustworthy systems and identifies actions required for improvement.
Microsoft and Trustworthy Computing
Bill Gates launched Microsoft's "Trustworthy Computing" initiative with a January 15, 2002 memo, referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie. The move was reportedly prompted by the fact that they "...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks", such as Code Red, Nimda, Klez and Slammer.
Four areas were identified as the initiative's key areas: Security, Privacy, Reliability, and Business Integrity, and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having "...made a positive impact on the industry...".
The Trustworthy Computing campaign was t
|
https://en.wikipedia.org/wiki/Starch%20gelatinization
|
Starch gelatinization is a process of breaking down of intermolecular bonds of starch molecules in the presence of water and heat, allowing the hydrogen bonding sites (the hydroxyl hydrogen and oxygen) to engage more water. This irreversibly dissolves the starch granule in water. Water acts as a plasticizer.
Gelatinization Process
Three main processes happen to the starch granule: granule swelling, crystallite and double-helical melting, and amylose leaching.
Granule swelling: During heating, water is first absorbed in the amorphous space of starch, which leads to a swelling phenomenon.
Melting of double helical structures: Water then enters via amorphous regions into the tightly bound areas of double helical structures of amylopectin. At ambient temperatures these crystalline regions do not allow water to enter. Heat causes such regions to become diffuse: the amylose chains begin to dissolve and separate into an amorphous form, and the number and size of crystalline regions decrease. Under the microscope in polarized light, starch loses its birefringence and its extinction cross.
Amylose Leaching: Penetration of water thus increases the randomness in the starch granule structure, and causes swelling; eventually amylose molecules leach into the surrounding water and the granule structure disintegrates.
The gelatinization temperature of starch depends upon plant type, the amount of water present, pH, the types and concentrations of salt, sugar, fat and protein in the recipe, and the starch derivatisation technology used. Some types of unmodified native starches start swelling at 55 °C, other types at 85 °C. The gelatinization temperature of modified starch depends on, for example, the degree of cross-linking, acid treatment, or acetylation.
Gel temperature can also be modified by genetic manipulation of starch synthase genes. Gelatinization temperature also depends on the amount of damaged starch granules; these will swell faster. Damaged starch can be
|
https://en.wikipedia.org/wiki/Resurrection%20ecology
|
"Resurrection ecology" is an evolutionary biology technique whereby researchers hatch dormant eggs from lake sediments to study animals as they existed decades ago. It is a new approach that might allow scientists to observe evolution as it occurred, by comparing the animal forms hatched from older eggs with their extant descendants. This technique is particularly important because the live organisms hatched from egg banks can be used to learn about the evolution of behavioural, plastic or competitive traits that are not apparent from more traditional paleontological methods.
One such researcher in the field is W. Charles Kerfoot of Michigan Technological University whose results were published in the journal Limnology and Oceanography. He reported on success in a search for "resting eggs" of zooplankton that are dormant in Portage Lake on Michigan's Upper Peninsula. The lake has undergone a considerable amount of change over the last 100 years including flooding by copper mine debris, dredging, and eutrophication. Others have used this technique to explore the evolutionary effects of eutrophication, predation, and metal contamination. Resurrection ecology provided the best empirical example of the "Red Queen Hypothesis" in nature. Any organism that produces a resting stage can be used for resurrection ecology. However, the most frequently used organism is the water flea, Daphnia. This genus has well-established protocols for lab experimentation and usually asexually reproduces allowing for experiments on many individuals with the same genotype.
Although the more esoteric demonstration of natural selection is alone a valuable aspect of the study described, there is a clear ecological implication in the discovery that very old zooplankton eggs have survived in the lake: the potential still exists, if and when this environment is restored to something of a more pristine nature, for at least some of the original (pre-disturbance) inhabitants to re-establish populatio
|
https://en.wikipedia.org/wiki/Underground%20storage%20tank
|
An underground storage tank (UST) is, according to United States federal regulations, a storage tank, including any underground piping connected to the tank, that has at least 10 percent of its volume underground.
Definition & Regulation in U.S. federal law
"Underground storage tank" or "UST" means any one or combination of tanks including connected underground pipes that is used to contain regulated substances, and the volume of which including the volume of underground pipes is 10 percent or more beneath the surface of the ground. This does not include, among other things, any farm or residential tank of 1,100 gallons or less capacity used for storing motor fuel for noncommercial purposes, tanks for storing heating oil for consumption on the premises, or septic tanks.
USTs are regulated in the United States by the U.S. Environmental Protection Agency to prevent the leaking of petroleum or other hazardous substances and the resulting contamination of groundwater and soil. In 1984, U.S. Congress amended the Resource Conservation and Recovery Act to include Subtitle I: Underground Storage Tanks, calling on the U.S. Environmental Protection Agency (EPA) to regulate the tanks. In 1985, when the program was launched, there were more than 2 million tanks in the country and more than 750,000 owners and operators. The program was given 90 staff to oversee this responsibility. In September 1988, the EPA published initial underground storage tank regulations, including a 10-year phase-in period that required all operators to upgrade their USTs with spill prevention and leak detection equipment.
For USTs in service in the United States, the EPA and states collectively require tank operators to take financial responsibility for any releases or leaks associated with the operation of those below ground tanks. As a condition to keep a tank in operation a demonstrated ability to pay for any release must be shown via UST insurance, a bond, or some other ability to pay.
EPA updated UST and s
|
https://en.wikipedia.org/wiki/Damping
|
In physical systems, damping is the loss of energy of an oscillating system by dissipation. Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation. Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems, such as those that occur in biological systems and bikes (e.g. suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor in damping.
The damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next.
The damping ratio is a system parameter, denoted by $\zeta$ ("zeta"), that can vary from undamped ($\zeta = 0$), underdamped ($\zeta < 1$) through critically damped ($\zeta = 1$) to overdamped ($\zeta > 1$).
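For the mass-spring example above, a standard formulation (not quoted from this article) of the equation of motion and the damping ratio is
$$ m\ddot{x} + c\dot{x} + kx = 0, \qquad \zeta = \frac{c}{2\sqrt{km}}, $$
so $\zeta$ compares the actual damping coefficient $c$ with the critical value $c_c = 2\sqrt{km}$ at which the system just ceases to oscillate.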
The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised approach can be convenient in descr
|
https://en.wikipedia.org/wiki/Transfer%20length%20method
|
The Transfer Length Method or the "Transmission Line Model" (both abbreviated as TLM) is a technique used in semiconductor physics and engineering to determine the specific contact resistivity between a metal and a semiconductor. TLM has been developed because with the ongoing device shrinkage in microelectronics the relative contribution of the contact resistance at metal-semiconductor interfaces in a device could not be neglected any more and an accurate measurement method for determining the specific contact resistivity was required.
General description
The goal of the transfer length method (TLM) is the determination of the specific contact resistivity of a metal-semiconductor junction. To create a metal-semiconductor junction a metal film is deposited on the surface of a semiconductor substrate. The TLM is usually used to determine the specific contact resistivity when the metal-semiconductor junction shows ohmic behaviour. In this case the contact resistivity $\rho_c$ can be defined as the voltage difference across the interfacial layer between the deposited metal and the semiconductor substrate divided by the current density $J$, which is defined as the current $I$ divided by the interfacial area $A$ through which the current is passing:
$$ \rho_c = \frac{V_{\mathrm{m}} - V_{\mathrm{s}}}{J}, \qquad J = \frac{I}{A}. $$
In this definition of the specific contact resistivity, $V_{\mathrm{s}}$ refers to the voltage value just below the metal-semiconductor interfacial layer while $V_{\mathrm{m}}$ represents the voltage value just above the metal-semiconductor interfacial layer. There are two different methods of performing TLM measurements which are both introduced in the remainder of this section. One is called just transfer length method while the other is named circular transfer length method (c-TLM).
TLM
To determine the specific contact resistivity an array of rectangular metal pads is deposited on the surface of a semiconductor substrate as it is depicted in the image to the right. The definition of the rectangular pads can be done by utilizing photolithography while the metal depo
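For the rectangular-pad geometry, the quantity actually measured is the total resistance between two adjacent pads as a function of their separation $d$; a standard TLM analysis (summarised here with symbols not defined in the excerpt above) fits it to a straight line:
$$ R_{\mathrm{T}}(d) = \frac{R_{\mathrm{sh}}}{W}\,d + 2R_{\mathrm{c}}, \qquad L_{\mathrm{T}} = \sqrt{\frac{\rho_c}{R_{\mathrm{sh}}}}, \qquad R_{\mathrm{c}} \approx \frac{R_{\mathrm{sh}} L_{\mathrm{T}}}{W}, $$
where $R_{\mathrm{sh}}$ is the semiconductor sheet resistance, $W$ the pad width, $R_{\mathrm{c}}$ the contact resistance and $L_{\mathrm{T}}$ the transfer length. The slope and intercept of the fitted line then yield $R_{\mathrm{sh}}$ and $R_{\mathrm{c}}$, and from these the specific contact resistivity $\rho_c$.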
|
https://en.wikipedia.org/wiki/Delta-sigma%20modulation
|
Delta-sigma (ΔΣ; or sigma-delta, ΣΔ) modulation is an oversampling method for encoding signals into low bit depth digital signals at a very high sample-frequency as part of the process of delta-sigma analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Delta-sigma modulation achieves high quality by utilizing a negative feedback loop during quantization to the lower bit depth that continuously corrects quantization errors and moves quantization noise to higher frequencies well above the original signal's bandwidth. Subsequent low-pass filtering for demodulation easily removes this high frequency noise and time averages to achieve high accuracy in amplitude.
Both ADCs and DACs can employ delta-sigma modulation. A delta-sigma ADC (e.g. Figure 1 top) encodes an analog signal using high-frequency delta-sigma modulation and then applies a digital filter to demodulate it to a high-bit digital output at a lower sampling-frequency. A delta-sigma DAC (e.g. Figure 1 bottom) encodes a high-resolution digital input signal into a lower-resolution but higher sample-frequency signal that may then be mapped to voltages and smoothed with an analog filter for demodulation. In both cases, the temporary use of a low bit depth signal at a higher sampling frequency simplifies circuit design and takes advantage of the efficiency and high accuracy in time of digital electronics.
Primarily because of its cost efficiency and reduced circuit complexity, this technique has found increasing use in modern electronic components such as DACs, ADCs, frequency synthesizers, switched-mode power supplies and motor controllers. The coarsely-quantized output of a delta-sigma ADC is occasionally used directly in signal processing or as a representation for signal storage (e.g., Super Audio CD stores the raw output of a 1-bit delta-sigma modulator).
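The feedback idea described above can be sketched with a minimal first-order modulator (written here for illustration; practical converters use higher-order loops and dedicated hardware):

#include <cmath>
#include <cstdio>
#include <vector>

// First-order delta-sigma modulator: the integrator accumulates the
// difference ("delta") between the input and the previous 1-bit output; the
// sign of the accumulated error ("sigma") becomes the next output bit.
// Low-pass filtering / averaging the bit stream recovers the input amplitude.
std::vector<int> delta_sigma(const std::vector<double>& x) {
    std::vector<int> bits;
    double integrator = 0.0;
    double feedback = 0.0;                       // previous quantizer output
    for (double sample : x) {                    // samples assumed in [-1, +1]
        integrator += sample - feedback;         // delta, then sigma
        int bit = (integrator >= 0.0) ? 1 : -1;  // 1-bit quantizer
        feedback = bit;
        bits.push_back(bit);
    }
    return bits;
}

int main() {
    const double kPi = 3.14159265358979323846;
    std::vector<double> input;                   // heavily oversampled sine
    for (int n = 0; n < 256; ++n)
        input.push_back(0.5 * std::sin(2.0 * kPi * n / 256.0));
    std::vector<int> bits = delta_sigma(input);
    // Crude demodulation: a 16-sample moving average approximates the input.
    for (int n = 15; n < (int)bits.size(); n += 16) {
        double avg = 0.0;
        for (int k = n - 15; k <= n; ++k) avg += bits[k];
        std::printf("%+.3f\n", avg / 16.0);
    }
}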
Motivation
When transmitting an analog signal directly, all noise in the system and transmission is added to the analog signal, re
|
https://en.wikipedia.org/wiki/Social%20choice%20theory
|
Social choice theory or social choice is a theoretical framework for analysis of combining individual opinions, preferences, interests, or welfares to reach a collective decision or social welfare in some sense. Whereas choice theory is concerned with individuals making choices based on their preferences, social choice theory is concerned with how to translate the preferences of individuals into the preferences of a group. A non-theoretical example of a collective decision is enacting a law or set of laws under a constitution. Another example is voting, where individual preferences over candidates are collected to elect a person that best represents the group's preferences.
Social choice blends elements of welfare economics and public choice theory. It is methodologically individualistic, in that it aggregates preferences and behaviors of individual members of society. Using elements of formal logic for generality, analysis proceeds from a set of seemingly reasonable axioms of social choice to form a social welfare function (or constitution). Results uncovered the logical incompatibility of various axioms, as in Arrow's theorem, revealing an aggregation problem and suggesting reformulation or theoretical triage in dropping some axiom(s).
Overlap with public choice theory
"Public choice" and "social choice" are heavily overlapping fields of endeavor.
Social choice and public choice theory may overlap but are disjoint if narrowly construed. The Journal of Economic Literature classification codes place Social Choice under Microeconomics at JEL D71 (with Clubs, Committees, and Associations) whereas most Public Choice subcategories are in JEL D72 (Economic Models of Political Processes: Rent-Seeking, Elections, Legislatures, and Voting Behavior).
Social choice theory (and public choice theory) dates from Condorcet's formulation of the voting paradox, though it arguably goes back further to Ramon Llull's 1299 publication.
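Condorcet's voting paradox, mentioned above, is easy to reproduce: with three hypothetical voters whose rankings are rotations of one another, pairwise majority voting yields a cycle rather than a collective ranking. The sketch below uses invented ballots purely for illustration.

#include <array>
#include <cstdio>
#include <vector>

// Condorcet's voting paradox: every candidate beats another 2-to-1, so the
// majority relation cycles (A > B > C > A) and no collective ranking exists.
int main() {
    std::vector<std::array<char, 3>> ballots = {
        {'A', 'B', 'C'}, {'B', 'C', 'A'}, {'C', 'A', 'B'}};  // best to worst
    auto prefers = [&](char x, char y) {      // count voters ranking x above y
        int n = 0;
        for (const auto& b : ballots) {
            for (char c : b) {
                if (c == x) { ++n; break; }
                if (c == y) break;
            }
        }
        return n;
    };
    const char pairs[3][2] = {{'A', 'B'}, {'B', 'C'}, {'C', 'A'}};
    for (const auto& p : pairs)
        std::printf("%c beats %c: %d of 3 voters\n", p[0], p[1],
                    prefers(p[0], p[1]));
}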
Kenneth Arrow's Social Choice and Individua
|
https://en.wikipedia.org/wiki/Multigrid%20method
|
In numerical analysis, a multigrid method (MG method) is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners.
The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by a global correction of the fine grid solution approximation from time to time, accomplished by solving a coarse problem. The coarse problem, while cheaper to solve, is similar to the fine grid problem in that it also has short- and long-wavelength errors. It can also be solved by a combination of relaxation and appeal to still coarser grids. This recursive process is repeated until a grid is reached where the cost of direct solution there is negligible compared to the cost of one relaxation sweep on the fine grid. This multigrid cycle typically reduces all error components by a fixed amount bounded well below one, independent of the fine grid mesh size. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions.
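The cycle described above can be made concrete with a small geometric multigrid solver for the 1-D Poisson equation. This is an illustrative sketch only: the grid sizes, weighted-Jacobi smoother and transfer operators are standard textbook choices, not taken from this article.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

// Weighted-Jacobi relaxation for -u'' = f with mesh width h: cheap sweeps
// that mainly damp the short-wavelength components of the error.
void smooth(Vec& u, const Vec& f, double h, int sweeps) {
    const double w = 2.0 / 3.0;
    const int n = (int)u.size();
    for (int s = 0; s < sweeps; ++s) {
        Vec old = u;
        for (int i = 0; i < n; ++i) {
            double left  = (i > 0)     ? old[i - 1] : 0.0;  // boundary u(0) = 0
            double right = (i + 1 < n) ? old[i + 1] : 0.0;  // boundary u(1) = 0
            u[i] = (1 - w) * old[i] + w * 0.5 * (h * h * f[i] + left + right);
        }
    }
}

Vec residual(const Vec& u, const Vec& f, double h) {
    const int n = (int)u.size();
    Vec r(n);
    for (int i = 0; i < n; ++i) {
        double left  = (i > 0)     ? u[i - 1] : 0.0;
        double right = (i + 1 < n) ? u[i + 1] : 0.0;
        r[i] = f[i] - (2 * u[i] - left - right) / (h * h);
    }
    return r;
}

// One V-cycle: pre-smooth, restrict the residual to a coarser grid, solve the
// coarse error equation recursively, interpolate the correction back, post-smooth.
void vcycle(Vec& u, const Vec& f, double h) {
    const int n = (int)u.size();
    if (n == 1) { u[0] = f[0] * h * h / 2.0; return; }   // coarsest grid: direct solve
    smooth(u, f, h, 3);
    Vec r = residual(u, f, h);
    const int nc = (n - 1) / 2;                          // n = 2^k - 1 assumed
    Vec rc(nc), ec(nc, 0.0);
    for (int j = 0; j < nc; ++j)                         // full-weighting restriction
        rc[j] = 0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2];
    vcycle(ec, rc, 2 * h);                               // coarse-grid correction
    for (int j = 0; j < nc; ++j) {                       // linear interpolation back
        u[2 * j + 1] += ec[j];
        u[2 * j]     += 0.5 * ec[j];
        u[2 * j + 2] += 0.5 * ec[j];
    }
    smooth(u, f, h, 3);
}

int main() {
    const double kPi = 3.14159265358979323846;
    const int n = 127;                                   // 2^7 - 1 interior points
    const double h = 1.0 / (n + 1);
    Vec u(n, 0.0), f(n);
    for (int i = 0; i < n; ++i)                          // exact solution: sin(pi x)
        f[i] = kPi * kPi * std::sin(kPi * (i + 1) * h);
    for (int cycle = 1; cycle <= 8; ++cycle) {
        vcycle(u, f, h);
        Vec r = residual(u, f, h);
        double norm = 0.0;
        for (double ri : r) norm = std::max(norm, std::fabs(ri));
        std::printf("cycle %d  max residual %.3e\n", cycle, norm);
    }
}

Each V-cycle should reduce the residual by a roughly constant factor regardless of the grid size, which is the mesh-independent convergence the paragraph above describes.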
Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method. In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special
|
https://en.wikipedia.org/wiki/Gravity-based%20structure
|
A gravity-based structure (GBS) is a support structure held in place by gravity, most notably offshore oil platforms. These structures are often constructed in fjords due to their protected area and sufficient depth.
Offshore oil platforms
Prior to deployment, a study of the seabed must be done to ensure it can withstand the vertical load from the structure. It is then constructed with steel reinforced concrete into tanks or cells, some of which are used to control the buoyancy. When construction is complete, the structure is towed to its intended location.
Wind turbines
Early deployments of offshore wind power turbines used these structures. As of 2010, 14 of the world's offshore wind farms had some of their turbines supported by gravity-based structures. The deepest registered offshore wind farm with gravity-based structures is the Blyth Offshore Wind Farm, UK, with a depth of approx. 40 m.
See also
Offshore concrete structure
List of tallest oil platforms
Troll A platform
Gullfaks C
Hibernia (oil field)
References
Offshore engineering
Structural engineering
Oil platforms
|
https://en.wikipedia.org/wiki/Pharming%20%28genetics%29
|
Pharming, a portmanteau of "farming" and "pharmaceutical", refers to the use of genetic engineering to insert genes that code for useful pharmaceuticals into host animals or plants that would otherwise not express those genes, thus creating a genetically modified organism (GMO). Pharming is also known as molecular farming, molecular pharming or biopharming.
The products of pharming are recombinant proteins or their metabolic products. Recombinant proteins are most commonly produced using bacteria or yeast in a bioreactor, but pharming offers the advantage to the producer that it does not require expensive infrastructure, and production capacity can be quickly scaled to meet demand, at greatly reduced cost.
History
The first recombinant plant-derived protein (PDP) was human serum albumin, initially produced in 1990 in transgenic tobacco and potato plants. Open field growing trials of these crops began in the United States in 1992 and have taken place every year since. While the United States Department of Agriculture has approved planting of pharma crops in every state, most testing has taken place in Hawaii, Nebraska, Iowa, and Wisconsin.
In the early 2000s, the pharming industry was robust. Proof of concept has been established for the production of many therapeutic proteins, including antibodies, blood products, cytokines, growth factors, hormones, recombinant enzymes and human and veterinary vaccines. By 2003 several PDP products for the treatment of human diseases were under development by nearly 200 biotech companies, including recombinant gastric lipase for the treatment of cystic fibrosis, and antibodies for the prevention of dental caries and the treatment of non-Hodgkin's lymphoma.
However, in late 2002, just as ProdiGene was ramping up production of trypsin for commercial launch it was discovered that volunteer plants (left over from the prior harvest) of one of their GM corn products were harvested with the conventional soybean crop later plante
|
https://en.wikipedia.org/wiki/Parental%20investment
|
Parental investment, in evolutionary biology and evolutionary psychology, is any parental expenditure (e.g. time, energy, resources) that benefits offspring. Parental investment may be performed by both males and females (biparental care), females alone (exclusive maternal care) or males alone (exclusive paternal care). Care can be provided at any stage of the offspring's life, from pre-natal (e.g. egg guarding and incubation in birds, and placental nourishment in mammals) to post-natal (e.g. food provisioning and protection of offspring).
Parental investment theory, a term coined by Robert Trivers in 1972, predicts that the sex that invests more in its offspring will be more selective when choosing a mate, and the less-investing sex will have intra-sexual competition for access to mates. This theory has been influential in explaining sex differences in sexual selection and mate preferences, throughout the animal kingdom and in humans.
History
In 1859, Charles Darwin published On the Origin of Species. This introduced the concept of natural selection to the world, as well as related theories such as sexual selection. For the first time, evolutionary theory was used to explain why females are "coy" and males are "ardent" and compete with each other for females' attention. In 1930, Ronald Fisher wrote The Genetical Theory of Natural Selection, in which he introduced the modern concept of parental investment, introduced the sexy son hypothesis, and introduced Fisher's principle. In 1948, Angus John Bateman published an influential study of fruit flies in which he concluded that because female gametes are more costly to produce than male gametes, the reproductive success of females was limited by the ability to produce ova, and the reproductive success of males was limited by access to females. In 1972, Trivers continued this line of thinking with his proposal of parental investment theory, which describes how parental investment affects sexual behavior. He conclude
|
https://en.wikipedia.org/wiki/PDP-4
|
The PDP-4 was the successor to the Digital Equipment Corporation's PDP-1.
History
This 18-bit machine, first shipped in 1962, was a compromise: "with slower memory and different packaging" than the PDP-1, but priced at $65,000 - less than half the price of its predecessor. All later 18-bit PDP machines (7, 9 and 15) are based on a similar, but enlarged instruction set, more powerful than, but based on the same concepts as, the 12-bit PDP-5/PDP-8 series.
Approximately 54 were sold.
Hardware
The system's memory cycle is 8 microseconds, compared to 5 microseconds for the PDP-1.
The PDP-4 weighs about .
Mass storage
Both the PDP-1 and the PDP-4 were introduced as paper tape-based systems. The only use, if any, for IBM-compatible 200 BPI or 556 BPI magnetic tape was for data. The use of "mass storage" drums - not even a megabyte in capacity and non-removable - was an available option, but such drums were not in the spirit of the “personal” or serially shared systems that DEC offered.
It was in this setting that DEC introduced DECtape, initially called "MicroTape", for both the PDP-1 and PDP-4.
Software
DEC provided an editor, an assembler, and a FORTRAN II compiler. The assembler was different from that of the PDP-1 in two ways:
Unlike the PDP-1's assembler, it did not support macros.
It was a one-pass assembler; paper-tape input did not have to be read twice.
Photos
PDP-4
See also
Programmed Data Processor
References
DEC minicomputers
18-bit computers
Transistorized computers
Computer-related introductions in 1962
|
https://en.wikipedia.org/wiki/PDP-9
|
The PDP-9, the fourth of the five 18-bit minicomputers produced by Digital Equipment Corporation, was introduced in 1966. A total of 445 PDP-9 systems were produced, of which 40 were the compact, low-cost PDP-9/L units.
History
The 18-bit PDP systems preceding the PDP-9 were the PDP-1, PDP-4 and PDP-7. Its successor was the PDP-15.
Hardware
The PDP-9, which is "two metres wide and about 75cm deep," is approximately twice the speed of the PDP-7. It was built using discrete transistors, and has an optional integrated vector graphics terminal. The PDP-9 has a memory cycle time of 1 microsecond, and weighs about . The PDP-9/L has a memory cycle time of 1.5 microseconds, and weighs about .
It was DEC's first microprogrammed machine.
A typical configuration included:
300 cps Paper Tape Reader
50 cps Paper Tape Punch
10 cps Console Teleprinter, Model 33 KSR
Among the improvements of the PDP-9 over its PDP-7 predecessor were:
the addition of status flags for reader and punch errors, thus providing added flexibility and a means of error detection
an entirely new design for multi-level interrupts, called the Automatic Priority Interrupt (API) option
a more advanced form of memory management
User/university-based research projects for extending the PDP-9 included:
a hardware capability for floating-point arithmetic, at a time when machines in this price range used software
a PDP-9 controlled parallel computer
Software
The system came with a single-user keyboard monitor. DECsys provided an interactive, single-user, program development environment for Fortran and assembly language programs.
Both FORTRAN II and FORTRAN IV were implemented for the PDP-9.
MUMPS was originally developed on the PDP-7, and ran on several PDP-9s at the Massachusetts General Hospital.
Sales
The PDP-7, of which 120 were sold, was described as "highly successful". The PDP-9 sold 445 units. Both have submodels, the PDP-7A and the PDP-9/L, neither of which accounted for a substantial percent
|
https://en.wikipedia.org/wiki/PDP-5
|
The PDP-5 was Digital Equipment Corporation's first 12-bit computer, introduced in 1963.
History
An earlier 12-bit computer, named LINC, has been described as the first minicomputer and also "the first modern personal computer." It had 2,048 12-bit words, and the first LINC was built in 1962.
DEC's founder, Ken Olsen, had worked with both it and a still earlier computer, the 18-bit 64,000-word TX-0, at MIT's Lincoln Laboratory.
Neither of these machines was mass-produced.
Applicability
Although the LINC computer was intended primarily for laboratory use, the PDP-5's 12-bit system had a far wider range of use. Illustrating DEC's claim that "The success of the PDP-5 ... proved that a market for minicomputers did exist" are remarks such as:
"Data-processing computers have accomplished for mathematicians what the wheel did for transportation"
"Very reliable data was obtained with ..."
"A PDP-5 computer was used very successfully aboard Evergreen for ..."
all of which described the same PDP-5 used by the United States Coast Guard.
The architecture of the PDP-5 was specified by Alan Kotok and Gordon Bell; the principal logic designer was the young engineer Edson de Castro who went on later to found Data General.
Hardware
By contrast with the 4-cabinet PDP-1, the minimum configuration of the PDP-5 was a single 19-inch cabinet with "150 printed circuit board modules holding over 900 transistors." Additional cabinets were required to house many peripheral devices.
The minimum configuration weighed about .
The machine was offered with from 1,024 to 32,768 12-bit words of core memory. Addressing more than 4,096 words of memory required the addition of a Type 154 Memory Extension Control unit (in modern terms, a memory management unit); this allowed adding additional Type 155 4,096 word core memory modules.
Instruction set
Of the 12 bits in each word, exactly 3 were used for instruction op-codes.
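As a sketch of what a 3-bit opcode field implies, the layout below is the PDP-8-style memory-reference format commonly attributed to this family; it is assumed here for illustration rather than quoted from the excerpt.

#include <cstdio>

// Split a 12-bit memory-reference instruction word: bits 0-2 (most
// significant) are the opcode, bit 3 the indirect flag, bit 4 the
// current-page flag, and bits 5-11 a 7-bit offset within the page.
void decode(unsigned word) {
    unsigned op       = (word >> 9) & 07;     // 3-bit opcode
    unsigned indirect = (word >> 8) & 01;
    unsigned page     = (word >> 7) & 01;
    unsigned offset   =  word       & 0177;   // 7-bit page offset
    std::printf("op=%o indirect=%o page=%o offset=%03o\n",
                op, indirect, page, offset);
}

int main() {
    decode(01234);   // an arbitrary example word, written in octal
}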
The PDP-5's instruction set was later expanded in its successor, the PDP-8. The biggest c
|
https://en.wikipedia.org/wiki/PDP-15
|
The PDP-15 was the fifth and last of the 18-bit minicomputers produced by Digital Equipment Corporation. The PDP-1 was first delivered in December 1959 and the first PDP-15 was delivered in February 1970. More than 400 of these successors to the PDP-9 (and 9/L) were ordered within the first eight months.
In addition to operating systems, the PDP-15 has compilers for Fortran and ALGOL.
History
The 18-bit PDP systems preceding the PDP-15 were named PDP-1, PDP-4, PDP-7 and PDP-9.
The last PDP-15 was produced in 1979.
Hardware
The PDP-15 was DEC's only 18-bit machine constructed from TTL integrated circuits rather than discrete transistors, and, like every DEC 18-bit system, could be equipped with:
an optional X-Y (point-plot or vector graphics) display
a hardware floating-point option offering roughly a 10x speedup
up to 128K words of core main memory
Models
The PDP-15 models offered by DEC were:
PDP-15/10: a 4K-word paper-tape based system
PDP-15/20: 8K, added DECtape
PDP-15/30: 16K word, added memory protection and a foreground/background Monitor
PDP-15/35: Added a 524K-word fixed-head disk drive
PDP-15/40: 24K memory
PDP-15/50:
PDP-15/76: 15/40 plus PDP-11 frontend. The PDP-15/76 was a dual-processor system that shared memory with an attached PDP-11/05. The PDP-11 served as a peripheral processor and enabled use of Unibus peripherals.
Software
DECsys, RSX-15, and XVM/RSX were the operating systems supplied by DEC for the PDP-15. A batch processing monitor (BOSS-15: Batch Operating Software System) was also available.
DECsys
The first DEC-supplied mass-storage operating system available for the PDP-15 was DECsys, an interactive single-user system. This software was provided on a DECtape reel, of which copies were made for each user. The user then added to this copied DECtape, which thus served as storage for personal programs and data. A second DECtape was used as a scratch tape by the assembler and the Fortran compiler.
RSX-15
RSX-
|
https://en.wikipedia.org/wiki/Encyclopedia%20of%20World%20Problems%20and%20Human%20Potential
|
The Encyclopedia of World Problems and Human Potential (EWPHP) is a research work published by the Union of International Associations (UIA). It is available online since 2000, and was previously available as a CD-ROM and as a three-volume book.
The EWPHP began under the direction of Anthony Judge in 1972 and eventually came to comprise more than 100,000 entries and 700,000 links, as well as hundreds of pages of introductory notes and commentaries on problems, strategies, values, concepts of human development, and various intellectual resources.
Contributors and history
The project was originally conceived in 1972 by James Wellesley-Wesley, who provided financial support through the foundation Mankind 2000, and Anthony Judge, by whom the work was orchestrated.
Work on the first edition started with funds from Mankind 2000, matching those of the UIA. The publisher Klaus Saur, of Munich, provided funds, in conjunction with those from the UIA, for work on the 2nd, 3rd, and 4th editions. Seed funding for the third volume of the 4th edition was also provided on behalf of Mankind 2000. In the nineties, seed funding was provided, again on behalf of Mankind 2000, for computer equipment which subsequently allowed the UIA to develop a website and make available for free the 1994–1995 edition of the EWPHP databases. The UIA, on the initiative of Nadia McLaren, a consultant ecologist who has been a primary editor for the EWPHP, instigated two multi-partner projects funded by the European Union, with matching funds from the UIA. The work done through those two projects, Ecolynx: Information Context for Biodiversity Conservation (mainly) and Interactive Health Ecology Access Links, resulted in a fifth, web-based edition of the EWPHP in 2000. Two other individuals supported the project: Robert Jungk of Mankind 2000, and Christian de Laet of the UIA.
EWPHP began as a processing of documents gathered from entities profiled in the Yearbook of International Organizations. The Uni
|
https://en.wikipedia.org/wiki/List%20of%20finite%20simple%20groups
|
In mathematics, the classification of finite simple groups states that every finite simple group is cyclic, or alternating, or in one of 16 families of groups of Lie type, or one of 26 sporadic groups.
The list below gives all finite simple groups, together with their order, the size of the Schur multiplier, the size of the outer automorphism group, usually some small representations, and lists of all duplicates.
Summary
The following table is a complete list of the 18 families of finite simple groups and the 26 sporadic simple groups, along with their orders. Any non-simple members of each family are listed, as well as any members duplicated within a family or between families. (In removing duplicates it is useful to note that no two finite simple groups have the same order, except that the groups A8 = A3(2) and A2(4) both have order 20160, and that the group Bn(q) has the same order as Cn(q) for q odd, n > 2. The smallest of the latter pairs of groups are B3(3) and C3(3), which both have order 4585351680.)
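As a quick check of the order coincidence mentioned above (a worked calculation added here for concreteness, using the standard order formulas; it is not part of the original list): |A8| = 8!/2 = 20160, while A3(2) = PSL4(2) satisfies |PSL4(2)| = |GL4(2)| = (2^4 − 1)(2^4 − 2)(2^4 − 2^2)(2^4 − 2^3) = 15 · 14 · 12 · 8 = 20160 (over the two-element field GL4(2) = SL4(2) = PSL4(2)), and A2(4) = PSL3(4) satisfies |PSL3(4)| = (4^3 − 1)(4^3 − 4)(4^3 − 4^2) / ((4 − 1) · gcd(3, 4 − 1)) = 181440/9 = 20160.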
There is an unfortunate conflict between the notations for the alternating groups An and the groups of Lie type An(q). Some authors use various different fonts for An to distinguish them. In particular,
in this article we make the distinction by setting the alternating groups An in Roman font and the Lie-type groups An(q) in italic.
In what follows, n is a positive integer, and q is a positive power of a prime number p, with the restrictions noted. The notation (a,b) represents the greatest common divisor of the integers a and b.
Cyclic groups, Zp
Simplicity: Simple for p a prime number.
Order: p
Schur multiplier: Trivial.
Outer automorphism group: Cyclic of order p − 1.
Other names: Z/pZ, Cp
Remarks: These are the only simple groups that are not perfect.
Alternating groups, An, n > 4
Simplicity: Solvable for n < 5, otherwise simple.
Order: n!/2 when n > 1.
Schur multiplier: 2 for n = 5 or n > 7, 6 for n = 6 or 7; see Covering groups of the alt
|
https://en.wikipedia.org/wiki/Double-chance%20function
|
In software engineering, a double-chance function is a software design pattern with a strong application in cross-platform and scalable development.
Examples
Computer graphics
Consider a graphics API with functions to DrawPoint, DrawLine, and DrawSquare. It is easy to see that DrawLine can be implemented solely in terms of DrawPoint, and DrawSquare can in turn be implemented through four calls to DrawLine. If you were porting this API to a new architecture you would have a choice: implement three different functions natively (taking more time to implement, but likely resulting in faster code), or write DrawPoint natively, and implement the others as described above using common, cross-platform, code. An important example of this approach is the X11 graphics system, which can be ported to new graphics hardware by providing a very small number of device-dependent primitives, leaving higher level functions to a hardware-independent layer.
The double-chance function is an optimal method of creating such an implementation, whereby the first draft of the port can use the "fast to market, slow to run" version with a common DrawPoint function, while later versions can be modified as "slow to market, fast to run". Where the double-chance pattern scores high is that the base API includes the self-supporting implementation given here as part of the null driver, and all other implementations are extensions of this. Consequently, the first port is, in fact, the first usable implementation.
One typical implementation in C++ could be:
class CBaseGfxAPI {
public:
    virtual ~CBaseGfxAPI() {}
    virtual void DrawPoint(int x, int y) = 0; /* Abstract concept for the null driver */
    virtual void DrawLine(int x1, int y1, int x2, int y2) { /* DrawPoint() repeated */ }
    virtual void DrawSquare(int x1, int y1, int x2, int y2) { /* DrawLine() repeated */ }
};
class COriginalGfxAPI : public CBaseGfxAPI {
public:
    virtual void DrawPoint(int x, int y) { /* The only necessary native calls */ }
    /* Native DrawLine/DrawSquare overrides can be added here later for speed. */
};
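A minimal usage sketch (added here as an illustration and not part of the original article; DrawCrosshair is a hypothetical client function): because callers are written against the abstract base class, the "fast to market" port with only a native DrawPoint and the later "fast to run" port with native DrawLine and DrawSquare are interchangeable behind the same call site.
/* Hypothetical client code: it compiles against the abstract base only. */
void DrawCrosshair(CBaseGfxAPI& gfx) {
    gfx.DrawLine(0, 10, 20, 10);   /* falls back to repeated DrawPoint() calls    */
    gfx.DrawLine(10, 0, 10, 20);   /* until the port supplies a native DrawLine() */
}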
|
https://en.wikipedia.org/wiki/Microsoft%20Compiled%20HTML%20Help
|
Microsoft Compiled HTML Help is a Microsoft proprietary online help format, consisting of a collection of HTML pages, an index and other navigation tools. The files are compressed and deployed in a binary format with the extension .CHM, for Compiled HTML. The format is often used for software documentation.
It was introduced as the successor to Microsoft WinHelp with the release of Windows 95 OSR 2.5 and, consequently, Windows 98. Within the Windows NT family, CHM file support was introduced in Windows NT 4.0 and is still present in Windows 11. Although the format was designed by Microsoft, it has been successfully reverse-engineered and is now supported in many document viewer applications.
History
Microsoft has announced that they do not intend to add any new features to HTML Help.
File format
Help is delivered as a binary file with the .chm extension. It contains a set of HTML files, a hyperlinked table of contents, and an index file. The file format has been reverse-engineered and documentation of it is freely available.
The file starts with the bytes "ITSF" (in ASCII), for "Info-Tech Storage Format", which is the internal name given by Microsoft to the generic storage file format used with CHM files.
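As a small illustration of that signature (a sketch added here, not taken from the format documentation; looks_like_chm is a hypothetical helper and nothing beyond the four signature bytes is checked):
#include <cstdio>
#include <cstring>

/* Illustrative check only: returns true if the file begins with the ASCII bytes "ITSF". */
bool looks_like_chm(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    char magic[4] = {0};
    std::size_t n = std::fread(magic, 1, 4, f);
    std::fclose(f);
    return n == 4 && std::memcmp(magic, "ITSF", 4) == 0;
}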
CHM files support the following features:
Data compression (using LZX)
Built-in search engine
Ability to merge multiple .chm help files
Extended character support, although it does not fully support Unicode.
Use in Windows applications
The Microsoft Reader's .lit file format is a modification of the HTML Help CHM format. CHM files are sometimes used for e-books.
Sumatra PDF supports viewing CHM documents since version 1.9.
Various applications, such as HTML Help Workshop and 7-Zip can decompile CHM files. The hh.exe utility on Windows and the extract_chmLib utility (a component of chmlib) on Linux can also decompile CHM files.
Microsoft's HTML Help Workshop and Compiler generate CHM files from instructions stored in an HTML Help project. The file name
|
https://en.wikipedia.org/wiki/WinHelp
|
Microsoft WinHelp is a proprietary format for online help files that can be displayed by the Microsoft Help browser winhelp.exe or winhlp32.exe. The file format is based on Rich Text Format (RTF). It remained a popular Help platform from Windows 3.0 through Windows XP. WinHelp was removed in Windows Vista purportedly to discourage software developers from using the obsolete format and encourage use of newer help formats. Support for WinHelp files would eventually be removed entirely in Windows 10.
History
1990 – WinHelp 1.0 shipped with Windows 3.0.
1995 – WinHelp 4.0 shipped with Windows 95 / Windows NT.
2006 – Microsoft announced its intention to phase out WinHelp as a supported platform. WinHelp is not part of Windows Vista out of the box. WinHelp files come in 16-bit and 32-bit types. Vista treats these file types differently. When starting an application that uses the 32-bit .hlp format, Windows warns that the format is no longer supported. A downloadable viewer for 32-bit .hlp files is available from the Microsoft Download Center. The 16-bit WinHelp files continue to display in Windows Vista (32-bit only) without the viewer download.
January 9, 2009 – Microsoft announced the availability of Windows Help program (WinHlp32.exe) for Windows Server 2008 at the Microsoft Download Center.
October 14, 2009 – Microsoft announced the availability of Windows Help program (WinHlp32.exe) for Windows 7 and Windows Server 2008 R2 at the Microsoft Download Center.
October 26, 2012 – Microsoft announced the availability of Windows Help program (WinHlp32.exe) for Windows 8 at the Microsoft Download Center.
November 5, 2013 – Microsoft announced the availability of Windows Help program (WinHlp32.exe) for Windows 8.1 at the Microsoft Download Center.
July 15, 2015 – Microsoft completely removed Windows Help from Windows 10. Attempting to open a .hlp file simply brings users to a help page explaining that the format is no longer supported.
File format
A WinHelp file has a ".hlp" suffix. It
|
https://en.wikipedia.org/wiki/Silver%20staining
|
In pathology, silver staining is the use of silver to selectively alter the appearance of a target in microscopy of histological sections; in temperature gradient gel electrophoresis; and in polyacrylamide gels.
In traditional stained glass, silver stain is a technique to produce yellow to orange or brown shades (or green on a blue glass base), by adding a mixture containing silver compounds (notably silver nitrate), and firing lightly. It was introduced soon after 1800, and is the "stain" in the term "stained glass". Silver compounds are mixed with binding substances, applied to the surface of glass, and then fired in a furnace or kiln.
History
Camillo Golgi perfected silver staining for the study of the nervous system. Although the exact chemical mechanism by which this occurs is unknown, Golgi's method stains a limited number of cells at random in their entirety.
Silver staining was introduced by Kerenyi and Gallyas as a sensitive procedure to detect trace amounts of proteins in gels. The technique has been extended to the study of other biological macromolecules that have been separated in a variety of supports.
Classical Coomassie brilliant blue staining can usually detect a 50 ng protein band; silver staining typically increases the sensitivity about 50-fold.
Many variables can influence the color intensity, and every protein has its own staining characteristics; clean glassware, pure reagents, and water of the highest purity are the keys to successful staining.
Chemistry
Some cells are argentaffin. These reduce silver solution to metallic silver after formalin fixation. Other cells are argyrophilic. These reduce silver solution to metallic silver after being exposed to the stain that contains a reductant, for example hydroquinone or formalin.
Silver nitrate forms insoluble silver phosphate with phosphate ions; this method is known as the Von Kossa Stain. When subjected to a reducing agent, usually hydroquinone, it forms black elementary silver. This is use
|
https://en.wikipedia.org/wiki/Sublinear%20function
|
In linear algebra, a sublinear function (or functional, as is more often used in functional analysis), also called a quasi-seminorm or a Banach functional, on a vector space is a real-valued function with only some of the properties of a seminorm. Unlike seminorms, a sublinear function does not have to be nonnegative-valued and also does not have to be absolutely homogeneous. Seminorms are themselves abstractions of the more well known notion of norms, where a seminorm has all the defining properties of a norm except that it is not required to map non-zero vectors to non-zero values.
In functional analysis the name Banach functional is sometimes used, reflecting that they are most commonly used when applying a general formulation of the Hahn–Banach theorem.
The notion of a sublinear function was introduced by Stefan Banach when he proved his version of the Hahn-Banach theorem.
There is also a different notion in computer science, described below, that also goes by the name "sublinear function."
Definitions
Let X be a vector space over a field K, where K is either the real numbers ℝ or the complex numbers ℂ.
A real-valued function p : X → ℝ on X is called a sublinear function (or a sublinear functional if K = ℝ), and also sometimes called a quasi-seminorm or a Banach functional, if it has these two properties:
Positive homogeneity/Nonnegative homogeneity: p(rx) = r p(x) for all real r ≥ 0 and all x ∈ X.
This condition holds if and only if p(rx) ≤ r p(x) for all positive real r > 0 and all x ∈ X.
Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X.
This subadditivity condition requires p to be real-valued.
A function p is called positive or nonnegative if p(x) ≥ 0 for all x ∈ X, although some authors define positive to instead mean that p(x) ≠ 0 whenever x ≠ 0; these definitions are not equivalent.
It is a symmetric function if p(−x) = p(x) for all x ∈ X.
Every subadditive symmetric function is necessarily nonnegative.
A sublinear function on a real vector space is symmetric if and only if it is a seminorm.
A sublinear function on a real or complex vector space is a seminorm if and only if it is a balanced function or, equivalently, if and only if p(ux) ≤ p(x) for every unit length scalar u (satisfying |u| = 1) and every x ∈ X.
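Two concrete examples (added here as an illustration; they are not part of the original article): on X = ℝ, the identity map p(x) = x is sublinear (every linear functional is), yet it takes negative values, so a sublinear function need not be nonnegative; and p(x) = max(x, 0) is sublinear, since max(rx, 0) = r max(x, 0) for r ≥ 0 and max(x + y, 0) ≤ max(x, 0) + max(y, 0), but it is not symmetric, because p(−1) = 0 ≠ 1 = p(1), so it is not a seminorm.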
|
https://en.wikipedia.org/wiki/Managed%20code
|
Managed code is computer program code that requires and will execute only under the management of a Common Language Infrastructure (CLI); Virtual Execution System (VES); virtual machine, e.g. .NET, CoreFX, or .NET Framework; Common Language Runtime (CLR); or Mono. The term was coined by Microsoft.
Managed code is the compiler output of source code written in one of over twenty high-level programming languages, including C#, J# and Visual Basic .NET.
Terminology
The distinction between managed and unmanaged code is prevalent and only relevant when developing applications that interact with CLR implementations. Since many older programming languages have been ported to the CLR, the differentiation is needed to identify managed code, especially in a mixed setup. In this context, code that does not rely on the CLR is termed "unmanaged".
A source of confusion was created when Microsoft started connecting the .NET Framework with C++, and the choice of how to name the Managed Extensions for C++. It was first named Managed C++ and then renamed to C++/CLI. The creator of the C++ programming language and member of the C++ standards committee, Bjarne Stroustrup, even commented on this issue, "On the difficult and controversial question of what the CLI binding/extensions to C++ is to be called, I prefer C++/CLI as a shorthand for "The CLI extensions to ISO C++". Keeping C++ as part of the name reminds people what is the base language and will help keep C++ a proper subset of C++ with the C++/CLI extensions."
Uses
The Microsoft Visual C++ compiler can produce either managed code, running under the CLR, or unmanaged binaries, running directly on Windows.
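For illustration (a sketch under the assumption of the Visual C++ toolchain with the /clr switch; the function names are hypothetical), a single translation unit can mix the two kinds of code using the compiler's managed/unmanaged pragmas:
// Built with the Microsoft Visual C++ compiler, e.g.:  cl /clr mixed.cpp
#pragma unmanaged
int NativeAdd(int a, int b) { return a + b; }   // compiled to native machine code

#pragma managed
int ManagedAdd(int a, int b) { return a + b; }  // compiled to CIL and run by the CLR

int main() { return NativeAdd(1, 2) + ManagedAdd(3, 4); }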
Benefits of using managed code include programmer convenience (by increasing the level of abstraction, creating smaller models) and enhanced security guarantees, depending on the platform (including the VM implementation). There are many historical examples of code running on virtual machines, such as the language UCSD Pascal usin
|
https://en.wikipedia.org/wiki/Zoogeography
|
Zoogeography is the branch of the science of biogeography that is concerned with geographic distribution (present and past) of animal species.
As a multifaceted field of study, zoogeography incorporates methods of molecular biology, genetics, morphology, phylogenetics, and Geographic Information Systems (GIS) to delineate evolutionary events within defined regions of study around the globe. In the framework first proposed by Alfred Russel Wallace, widely regarded as the father of zoogeography, phylogenetic affinities can be quantified among zoogeographic regions, further elucidating the phenomena surrounding the geographic distribution of organisms and explaining the evolutionary relationships of taxa.
Advancements in molecular biology and evolutionary theory within zoological research have unraveled questions concerning speciation events and have expanded phylogenetic relationships amongst taxa. Integration of phylogenetics with GIS provides a means for communicating evolutionary origins through cartographic design. Related research linking phylogenetics and GIS has been conducted in areas of the southern Atlantic, Mediterranean, and Pacific Oceans. Recent innovations in DNA bar-coding, for example, have allowed for explanations of phylogenetic relationships within two families of marine venomous fishes, Scorpaenidae and Tetraodontidae, residing in the Andaman Sea. Continued efforts to understand the evolutionary divergence of species in the geologic time scale, based on fossil records for killifish (Aphanius and Aphanolebias) in locales of the Mediterranean and Paratethys areas, revealed climatological influences during the Miocene. Further development of research within zoogeography has expanded knowledge of the productivity of South Atlantic ocean regions and the distribution of organisms in analogous regions, providing both ecological and geographic data to supply a framework for the taxonomic relationships and evolutionary branching of benthic polychaetes.
Modern-day zoogeography also plac
|
https://en.wikipedia.org/wiki/Carvone
|
Carvone is a member of a family of chemicals called terpenoids. Carvone is found naturally in many essential oils, but is most abundant in the oils from seeds of caraway (Carum carvi), spearmint (Mentha spicata), and dill.
Uses
Both carvones are used in the food and flavor industry. R-(−)-Carvone is also used for air freshening products and, like many essential oils, oils containing carvones are used in aromatherapy and alternative medicine. S-(+)-Carvone has shown a suppressant effect against high-fat diet induced weight gain in mice.
Food applications
As the compound most responsible for the flavor of caraway, dill and spearmint, carvone has been used for millennia in food. Wrigley's Spearmint Gum and spearmint flavored Life Savers are major users of natural spearmint oil from Mentha spicata. Caraway seed is extracted with alcohol to make the European drink Kümmel.
Agriculture
S-(+)-Carvone is also used to prevent premature sprouting of potatoes during storage, being marketed in the Netherlands for this purpose under the name Talent.
Insect control
R-(−)-Carvone has been approved by the U.S. Environmental Protection Agency for use as a mosquito repellent.
Organic synthesis
Carvone is available inexpensively in both enantiomerically pure forms, making it an attractive starting material for the asymmetric total synthesis of natural products. For example, (S)-(+)-carvone was used to begin a 1998 synthesis of the terpenoid quassin:<ref>(a) Shing, T. K. M.; Jiang, Q; Mak, T. C. W. J. Org. Chem. 1998, 63, 2056-2057. (b) Shing, T. K. M.; Tang, Y. J. Chem. Soc. Perkin Trans. 1 1994, 1625.</ref>
Stereoisomerism and odor
Carvone forms two mirror-image forms, or enantiomers: R-(−)-carvone has a sweetish minty smell, like spearmint leaves. Its mirror image, S-(+)-carvone, has a spicy aroma with notes of rye, like caraway seeds. The fact that the two enantiomers are perceived as smelling different is evidence that olfactory receptors must respond more strongly to on
|
https://en.wikipedia.org/wiki/S3%20Savage
|
Savage was a product-line of PC graphics chipsets designed by S3.
Graphics Processors
Savage 3D
At the 1998 E3 Expo S3 introduced the first Savage product, Savage3D. Compared to its ViRGE-derived predecessor (Trio3D), Savage3D was a technological leap forward. Its innovative feature-set included the following:
"free" (single-cycle) trilinear-filtering
hardware motion-compensation and subpicture alpha-blending (MPEG-2 video)
integrated NTSC/PAL TV-encoder, (optional) Macrovision
S3 Texture Compression (S3TC)
multi-tap X/Y interpolating front-end (BITBLT) and back-end (overlay) video-scaler
Unfortunately for S3, deliveries of the Savage3D were hampered by poor manufacturing yields. Only one major board-vendor, Hercules, made any real effort to ship a Savage3D product. S3's yield problems forced Hercules to hand pick usable chips from the silicon wafers. Combined with poor drivers and the chip's lack of multitexturing support, the Savage3D failed in the market.
Savage 3D also dropped support for the S3D API from the S3 ViRGE predecessor.
In early 1999, S3 retired the Savage3D and released the Savage4 family. Many of the Savage3D's limitations were addressed by the Savage 4 chipset.
Savage4
Savage4 was an evolution of Savage 3D technology in many ways. S3 refined the chip, fixing hardware bugs and streamlining the chip for both cost reduction and performance. They added single-pass multi-texturing, meaning the board could sample 2 textures per pixel in one pass (not one clock cycle) through the rendering engine instead of halving its texture fillrate in dual-textured games like Savage 3D. Savage4 supported the then-new AGP 4X although at the older 3.3 voltage specification. It was manufactured on a 250 nm process, like Savage 3D. The graphics core was clocked at 125 MHz, with the board's SDRAM clocked at either 125 MHz or 143 MHz (Savage4 Pro). They could be equipped with 8-32 MiB memory. And while an integrated TV encoder was dropped, the DVD accelerat
|
https://en.wikipedia.org/wiki/Videocipher
|
VideoCipher is a brand name of analog scrambling and de-scrambling equipment for cable and satellite television, invented primarily to ensure that Television receive-only (TVRO) satellite equipment could receive TV programming only on a subscription basis.
The second version of Videocipher, Videocipher II, was the primary encryption scheme used by major cable TV programmers to prevent TVRO owners from receiving free television programming. It was especially notable due to the widespread compromise of its encryption scheme.
Background
Satellite providers
Through the first half of the 1980s, HBO, Cinemax and other premium television providers with analog satellite transponders faced a fast-growing market of TVRO equipment owners. Satellite television consumers could watch these services simply by pointing their dish at a satellite and tuning into the provider's transponder. Two open questions existed about this practice: whether the Communications Act of 1934 applied as a case of "unauthorized reception" by TVRO consumers, and to what extent it was legal for a service provider to encrypt their signals in an effort to prevent its reception.
The Cable Communications Policy Act of 1984 clarified all of these matters, making the following legal:
Reception of unencrypted satellite signals by a consumer
Reception of encrypted satellite signals by a consumer, when they have received authorization to legally decrypt it
This created a framework for the wide deployment of encryption on analog satellite signals. It further created a framework (and implicit mandate to provide) subscription services to TVRO consumers to allow legal decryption of those signals. HBO and Cinemax became the first two services to announce intent to encrypt their satellite feeds late in 1984.
Videocipher technology
Videocipher was invented in 1983 by Linkabit Corporation (later bought out by M/A-COM in 1985, operated as M/A-COM Linkabit). In the mid 1980s, M/A-COM began divesting divis
|
https://en.wikipedia.org/wiki/Human%20ecosystem
|
Human ecosystems are human-dominated ecosystems of the Anthropocene era. Conceptual models that treat them as complex cybernetic systems are increasingly used by ecological anthropologists and other scholars to examine the ecological aspects of human communities in a way that integrates multiple factors such as economics, sociopolitical organization, psychological factors, and physical factors related to the environment.
A human ecosystem has three central organizing concepts: the human environed unit (an individual or group of individuals), the environment, and the interactions and transactions between and within these components. The total environment includes three conceptually distinct but interrelated environments: the natural, the human-constructed, and the human-behavioral. These environments furnish the resources and conditions necessary for life and constitute a life-support system.
Further reading
Basso, Keith 1996 “Wisdom Sits in Places: Landscape and Language among the Western Apache.” Albuquerque: University of New Mexico Press.
Douglas, Mary 1999 “Implicit Meanings: Selected Essays in Anthropology.” London and New York: Routledge, Taylor & Francis Group.
Nadasdy, Paul 2003 “Hunters and Bureaucrats: Power, Knowledge, and Aboriginal-State Relations in the Southwest Yukon.” Vancouver and Toronto: UBC Press.
References
See also
Media ecosystem
Urban ecosystem
Total human ecosystem
Anthropology
Ecosystems
Environmental sociology
Social systems concepts
Systems biology
|
https://en.wikipedia.org/wiki/Rugae
|
In anatomy, rugae are a series of ridges produced by folding of the wall of an organ. Most commonly rugae refers to the gastric rugae of the internal surface of the stomach.
Function
A purpose of the gastric rugae is to allow for expansion of the stomach after the consumption of foods and liquids. This expansion increases the volume of the stomach to hold larger amounts of food. The folds also result in greater surface area, allowing the stomach to absorb nutrients more quickly.
Location
Rugae can appear in the following locations in humans:
Wrinkles of the labia and scrotum
Hard palate immediately behind the upper anterior teeth
Inside the urinary bladder
Vagina
Gallbladder
Inside the stomach
Inside the rectum
Difference between rugae and plicae
With few exceptions (e.g. the scrotum), rugae are only evident when an organ or tissue is deflated or relaxed. For example, rugae are evident within the stomach when it is deflated. However, when the stomach distends, the rugae unfold to allow for the increase in volume. On the other hand, plicae remain folded regardless of distension as is evident within the plicae of the small intestine walls.
References
Anatomy
Tissues (biology)
|
https://en.wikipedia.org/wiki/Yakov%20Sinai
|
Yakov Grigorevich Sinai (; born September 21, 1935) is a Russian–American mathematician known for his work on dynamical systems. He contributed to the modern metric theory of dynamical systems and connected the world of deterministic (dynamical) systems with the world of probabilistic (stochastic) systems. He has also worked on mathematical physics and probability theory. His efforts have provided the groundwork for advances in the physical sciences.
Sinai has won several awards, including the Nemmers Prize, the Wolf Prize in Mathematics and the Abel Prize. He has served as a professor of mathematics at Princeton University since 1993 and holds the position of Senior Researcher at the Landau Institute for Theoretical Physics in Moscow, Russia.
Biography
Yakov Grigorevich Sinai was born into a Russian Jewish academic family on September 21, 1935, in Moscow, Soviet Union (now Russia). His parents, Nadezda Kagan and Gregory Sinai, were both microbiologists. His grandfather, Veniamin Kagan, headed the Department of
Differential Geometry at Moscow State University and was a major influence on Sinai's life.
Sinai received his bachelor's and master's degrees from Moscow State University. In 1960, he earned his Ph.D., also from Moscow State; his adviser was Andrey Kolmogorov. Together with Kolmogorov, he showed that even for "unpredictable" dynamic systems, the level of unpredictability of motion can be described mathematically. In their idea, which became known as Kolmogorov–Sinai entropy, a system with zero entropy is entirely predictable, while a system with non-zero entropy has an unpredictability factor directly related to the amount of entropy.
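In symbols (a standard formulation added here for concreteness; it is not part of the original passage): for a measure-preserving transformation T of a probability space, the Kolmogorov–Sinai entropy is h(T) = sup_P lim_(n→∞) (1/n) H(P ∨ T^(−1)P ∨ … ∨ T^(−(n−1))P), where the supremum runs over all finite measurable partitions P and H denotes the Shannon entropy of a partition; the "entirely predictable" case described above corresponds to h(T) = 0.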
In 1963, Sinai introduced the idea of dynamical billiards, also known as "Sinai Billiards". In this idealized system, a particle bounces around inside a square boundary without loss of energy. Inside the square is a circular wall, off which the particle also bounces. He then proved that for most initial trajector
|
https://en.wikipedia.org/wiki/Christian%20views%20on%20cloning
|
Christians take multiple positions in the debate on the morality of human cloning. Since Dolly the sheep was successfully cloned on 5 July 1996, and the possibility of cloning humans became a reality, Christian leaders have been pressed to take an ethical stance on its morality. While many Christians tend to disagree with the practice, such as Roman Catholics and a majority of fundamentalist pastors, including Southern Baptists, the views taken by various other Christian denominations are diverse and often conflicting. It is hard to pinpoint any one, definite stance of the Christian religion, since there are so many Christian denominations and so few official statements from each of them concerning the morality of human cloning.
There are certain Protestant denominations that do not disagree with the acceptability of human cloning. Mary Seller, for example, a member of the Church of England's Board of Social Responsibility and a professor of developmental genetics, states, "Cloning, like all science, must be used responsibly. Cloning humans is not desirable. But cloning sheep has its uses." On the other hand, according to a survey of Christian fundamentalist pastors, responses indicated a "common account of human cloning as primarily reproductive in nature, proscribed by its violation of God's will and role." Many of these pastors held that the reason for this violation is rooted in the religiously motivated view that human cloning is an example of scientists "playing God." It is not only this that many Christians are concerned about, however; other concerns include whether the dignity of the human person is overlooked, as well as the role of the parents as co-creators. All of these things may contribute to why many fundamentalist Christian pastors see human reproductive cloning as simply "forbidden territory."
Understanding cloning
Some scientists do argue that the plurality of views comes from the differing understandings of what exactly human cloning is
|
https://en.wikipedia.org/wiki/Cue%20card
|
Cue cards, also known as note cards, are cards with words written on them that help actors and speakers remember what they have to say. They are typically used in television productions, where they can be held off-camera and are unseen by the audience. Cue cards are still used on many late-night talk shows, including The Tonight Show Starring Jimmy Fallon, Late Night with Seth Meyers and formerly on Conan, as well as on variety and sketch comedy shows like Saturday Night Live, due to the practice of last-minute script changes. Many other TV shows, including game and reality shows, still use cue cards because of their mobility, as a teleprompter only allows the actor or broadcaster to look directly into the camera.
History
Cue cards were originally used to aid aging actors. One early use was by John Barrymore in the late 1930s.
Cue cards, however, did not become widespread until 1949, when Barney McNulty, a CBS page and former military pilot, was asked to write ailing actor Ed Wynn's script lines on large sheets of paper to help him remember his script. McNulty volunteered for this duty because his training as a pilot had taught him to write very quickly and clearly. McNulty soon saw the necessity of this concept and formed the company "Ad-Libs". McNulty continued to be Bob Hope's personal cue card man until Hope stopped performing. McNulty, who died in 2000 at the age of 77, was known in Hollywood as the "Cue-Card King".
Marlon Brando was also a frequent user of cue cards, feeling that this helped bring realism and spontaneity to his performances, instead of giving the impression that he was merely reciting a writer's speech. During production of the film Last Tango in Paris, he had cue cards posted about the set, although director Bernardo Bertolucci declined his request to have lines written on actress Maria Schneider's rear end. Tony Mendez became a minor celebrity for his cue card work on the Late Show with David Letterman.
See also
Wally Feresten - cue card h
|
https://en.wikipedia.org/wiki/143%20%28number%29
|
143 (one hundred [and] forty-three) is the natural number following 142 and preceding 144.
In mathematics
143 is the sum of seven consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31). But this number is never the sum of an integer and its base 10 digits, making it a self number. It is also the product of a twin prime pair (11 × 13).
Every positive integer is the sum of at most 143 seventh powers (see Waring's problem).
143 is the difference in the first exception to the pattern shown below:
.
In the military
Vickers Type 143 was a British single-seat fighter biplane in 1929
United States Air Force 143d Airlift Wing airlift unit at Quonset Point, Rhode Island
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy in World War II
was a United States Navy patrol boat
was a United States Navy during the Cuban Missile Crisis
was a United States Navy during World War I
In transportation
London Buses route 143 is a Transport for London contracted bus route in London
Air Canada Flight 143 glided to a landing at the air force base at Gimli, Manitoba, after running out of fuel on July 23, 1983
Philippine Airlines Flight 143 exploded prior to takeoff on May 11, 1990, at Manila Airport
Bristol Type 143 was a British twin-engined monoplane aircraft of the Bristol Aeroplane Company
British Rail Class 143 diesel multiple unit, part of the Pacer family of trains introduced in 1985
East 143rd Street–St. Mary's Street station on the IRT Pelham Line of the New York City Subway
143rd Street station on Metra's SouthWest Service in Orland Park, Illinois
In media
Musicians Ray J and Bobby Brackins wrote the song "143"
On Mister Rogers' Neighborhood: "Transformations", 143 is used to mean "I love you": the 1 stands for the single letter of "I", the 4 for the four letters of "love", and the 3 for the three letters of "you". Reportedly, Fred Rogers maintained his weight at exactly
|
https://en.wikipedia.org/wiki/Universality%20%28dynamical%20systems%29
|
In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system. Systems display universality in a scaling limit, when a large number of interacting parts come together. The modern meaning of the term was introduced by Leo Kadanoff in the 1960s, but a simpler version of the concept was already implicit in the van der Waals equation and in the earlier Landau theory of phase transitions, which did not incorporate scaling correctly.
The term is slowly gaining a broader usage in several fields of mathematics, including combinatorics and probability theory, whenever the quantitative features of a structure (such as asymptotic behaviour) can be deduced from a few global parameters appearing in the definition, without requiring knowledge of the details of the system.
The renormalization group provides an intuitively appealing, albeit mathematically non-rigorous, explanation of universality. It classifies operators in a statistical field theory into relevant and irrelevant. Relevant operators are those responsible for perturbations to the free energy, the imaginary time Lagrangian, that will affect the continuum limit, and can be seen at long distances. Irrelevant operators are those that only change the short-distance details. The collection of scale-invariant statistical theories define the universality classes, and the finite-dimensional list of coefficients of relevant operators parametrize the near-critical behavior.
Universality in statistical mechanics
The notion of universality originated in the study of phase transitions in statistical mechanics. A phase transition occurs when a material changes its properties in a dramatic way: water, as it is heated, boils and turns into vapor; a magnet, when heated, loses its magnetism. Phase transitions are characterized by an order parameter, such as the density or the magnetization, that changes as a function of
|
https://en.wikipedia.org/wiki/J%C3%BCrgen%20Moser
|
Jürgen Kurt Moser (July 4, 1928 – December 17, 1999) was a German-American mathematician, honored for work spanning over four decades, including Hamiltonian dynamical systems and partial differential equations.
Life
Moser's mother Ilse Strehlke was a niece of the violinist and composer Louis Spohr. His father was the neurologist Kurt E. Moser (July 21, 1895 – June 25, 1982), who was born to the merchant Max Maync (1870–1911) and Clara Moser (1860–1934). The latter descended from 17th-century French Huguenot immigrants to Prussia. Jürgen Moser's parents lived in Königsberg, German Empire, and resettled in Stralsund, East Germany, as a result of the Second World War. Moser attended the Wilhelmsgymnasium (Königsberg) in his hometown, a high school specializing in mathematics and natural sciences education, from which David Hilbert had graduated in 1880. His older brother Friedrich Robert Ernst (Friedel) Moser (August 31, 1925 – January 14, 1945) served in the German Army and died in Schloßberg during the East Prussian offensive.
Moser married the biologist Dr. Gertrude C. Courant (Richard Courant's daughter, Carl Runge's granddaughter and great-granddaughter of Emil DuBois-Reymond) on September 10, 1955 and took up permanent residence in New Rochelle, New York in 1960, commuting to work in New York City. In 1980 he moved to Switzerland, where he lived in Schwerzenbach near Zürich. He was a member of the Akademisches Orchester Zürich. He was survived by his younger brother, the photographic printer and processor Klaus T. Moser-Maync from Northport, New York, his wife, Gertrude Moser from Seattle, their daughters, the theater designer Nina Moser from Seattle and the mathematician Lucy I. Moser-Jauslin from Dijon, and his stepson, the lawyer Richard D. Emery from New York City. Moser played the piano and the cello, performing chamber music since his childhood in the tradition of a musical family, where his father played the violin and his mother the piano. He was a lifelo
|
https://en.wikipedia.org/wiki/Contractible%20space
|
In mathematics, a topological space X is contractible if the identity map on X is null-homotopic, i.e. if it is homotopic to some constant map. Intuitively, a contractible space is one that can be continuously shrunk to a point within that space.
Properties
A contractible space is precisely one with the homotopy type of a point. It follows that all the homotopy groups of a contractible space are trivial. Therefore any space with a nontrivial homotopy group cannot be contractible. Similarly, since singular homology is a homotopy invariant, the reduced homology groups of a contractible space are all trivial.
For a topological space X the following are all equivalent:
X is contractible (i.e. the identity map is null-homotopic).
X is homotopy equivalent to a one-point space.
X deformation retracts onto a point. (However, there exist contractible spaces which do not strongly deformation retract to a point.)
For any path-connected space Y, any two maps f,g: Y → X are homotopic.
For any space Y, any map f: Y → X is null-homotopic.
The cone on a space X is always contractible. Therefore any space can be embedded in a contractible one (which also illustrates that subspaces of contractible spaces need not be contractible).
Furthermore, X is contractible if and only if there exists a retraction from the cone of X to X.
Every contractible space is path connected and simply connected. Moreover, since all the higher homotopy groups vanish, every contractible space is n-connected for all n ≥ 0.
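A standard illustration (added here; it is the usual textbook example rather than part of the original article): Euclidean space R^n is contractible, since H(x, t) = (1 − t)x defines a homotopy from the identity map (at t = 0) to the constant map at the origin (at t = 1); more generally, any convex, or even star-shaped, subset C of R^n is contractible via H(x, t) = (1 − t)x + t·x0 for a point x0 from which C is star-shaped.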
Locally contractible spaces
A topological space X is locally contractible at a point x if for every neighborhood U of x there is a neighborhood V of x contained in U such that the inclusion of V is null-homotopic in U. A space is locally contractible if it is locally contractible at every point. This definition is occasionally referred to as the "geometric topologist's locally contractible," though it is the most common usage of the term. In Hatcher's standard Algebraic Topology text,
|
https://en.wikipedia.org/wiki/Current-mode%20logic
|
Current mode logic (CML), or source-coupled logic (SCL), is a digital design style used both for logic gates and for board-level digital signaling of digital data.
The basic principle of CML is that current from a constant current generator is steered between two alternate paths depending on whether a logic zero or logic one is being represented. Typically, the generator is connected to the two sources of a pair of differential FETs, with the two paths being their two drains. The bipolar equivalent emitter-coupled logic (ECL) operates similarly, with the output being taken from the collectors of the BJT transistors.
As a differential PCB-level interconnect, it is intended to transmit data at speeds between 312.5 Mbit/s and 3.125 Gbit/s across standard printed circuit boards.
The transmission is point-to-point, unidirectional, and is usually terminated at the destination with 50 Ω resistors to Vcc on both differential lines. CML is frequently used in interfaces to fiber optic components. The principal difference between CML and ECL as a link technology is the output impedance of the driver stage: the emitter follower of ECL has a low resistance of around 5 Ω, whereas CML connects to the drains of the driving transistors, which have a high impedance, so the impedance of the pull up/down network (typically 50 Ω resistive) is the effective output impedance. Matching this drive impedance closely to the driven transmission line's characteristic impedance greatly reduces undesirable ringing.
CML signals have also been found useful for connections between modules. CML is the physical layer used in DVI, HDMI and FPD-Link III video links, the interfaces between a display controller and a monitor.
In addition, CML has been widely used in high-speed integrated systems, such as for serial data transceivers and frequency synthesizers in telecommunication systems.
Operation
The fast operation of CML circuits is mainly due to their lower output voltage swing compared to the s
|
https://en.wikipedia.org/wiki/Integrated%20device%20manufacturer
|
An integrated device manufacturer (IDM) is a semiconductor company which designs, manufactures, and sells integrated circuit (IC) products.
IDM is often used to refer to a company which handles semiconductor manufacturing in-house, compared to a fabless semiconductor company, which outsources production to a third-party semiconductor fabrication plant.
Examples of IDMs are Intel, Samsung, and Texas Instruments, examples of fabless companies are AMD, Nvidia, and Qualcomm, and examples of pure play foundries are GlobalFoundries, TSMC, and UMC.
Due to the dynamic nature of the semiconductor industry, the term IDM has become less accurate than when it was coined.
OSATs
The term OSATs means "outsourced semiconductor assembly and test providers". OSATs have dominated IC packaging and testing.
Fabless operations
The terms fabless (fabrication-less), foundry, and IDM are now used to describe the role a company has in a business relationship. For example, Freescale owns and operates fabrication facilities (fab) where it manufactures many chip product lines, as a traditional IDM would. Yet it is known to contract with merchant foundries for other products, as would fabless companies.
Manufacturers
Many electronic manufacturing companies engage in business that would qualify them as an IDM:
Analog Devices
ams AG
Belling
Changxin Memory Technologies
Cypress Semiconductor
CR Microelectronics
Fujitsu
Good-Ark Electronics
Hitachi
IBM
IM Flash Technologies
Infineon
Intersil
Intel
LSI Corporation
Matsushita
Maxim Integrated Products
Micron Technology
Mitsubishi
National Semiconductor
Nexperia
NXP (formerly Philips + Freescale Semiconductors)
OMMIC
ON Semiconductor
Pericom
Qorvo
Renesas (formerly NEC semiconductor)
Samsung
SK Hynix
STMicroelectronics
Sony
Texas Instruments
Tsinghua Unigroup
Toshiba
Reading
Understanding fabless IC technology By Jeorge S. Hurtarte, Evert A. Wolsheimer, Lisa M. Tafoya 1.4.1 Integrated device manufacturer Pag
|
https://en.wikipedia.org/wiki/Moore%27s%20second%20law
|
Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years. As of 2015, the price had already reached about 14 billion US dollars.
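Written as a formula (a formalization added here for concreteness; the four-year doubling period is the only input taken from the statement above), the law says that a fab costing C0 in a reference year t0 is expected to cost roughly C(t) = C0 · 2^((t − t0)/4) in year t; taking C0 ≈ 14 billion US dollars at t0 = 2015 would give about 28 billion by 2019 and about 56 billion by 2023, if the four-year doubling held exactly.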
Rock's law can be seen as the economic flip side to Moore's (first) law – that the number of transistors in a dense integrated circuit doubles every two years. The latter is a direct consequence of the ongoing growth of the capital-intensive semiconductor industry— innovative and popular products mean more profits, meaning more capital available to invest in ever higher levels of large-scale integration, which in turn leads to the creation of even more innovative products.
The semiconductor industry has always been extremely capital-intensive, with ever-dropping manufacturing unit costs. Thus, the ultimate limits to growth of the industry will constrain the maximum amount of capital that can be invested in new products; at some point, Rock's Law will collide with Moore's Law.
It has been suggested that fabrication plant costs have not increased as quickly as predicted by Rock's law – indeed plateauing in the late 1990s – and also that the fabrication plant cost per transistor (which has shown a pronounced downward trend) may be more relevant as a constraint on Moore's Law.
See also
Semiconductor device fabrication
Fabless manufacturing
Wirth's law, an analogous law about software complicating over time
Semiconductor consolidation
References
External links
Adages
Computer architecture statements
Computer industry
Computing culture
Electronic design
Temporal exponentials
Rules of thumb
Electronics industry
|
https://en.wikipedia.org/wiki/Nagle%27s%20algorithm
|
Nagle's algorithm is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It was defined by John Nagle while working for Ford Aerospace and was published in 1984 as a Request for Comments (RFC) titled Congestion Control in IP/TCP Internetworks.
The RFC describes what Nagle calls the "small-packet problem", where an application repeatedly emits data in small chunks, frequently only 1 byte in size. Since TCP packets have a 40-byte header (20 bytes for TCP, 20 bytes for IPv4), this results in a 41-byte packet for 1 byte of useful information, a huge overhead. This situation often occurs in Telnet sessions, where most keypresses generate a single byte of data that is transmitted immediately. Worse, over slow links, many such packets can be in transit at the same time, potentially leading to congestion collapse.
Nagle's algorithm works by combining a number of small outgoing messages and sending them all at once. Specifically, as long as there is a sent packet for which the sender has received no acknowledgment, the sender should keep buffering its output until it has a full packet's worth of output, thus allowing output to be sent all at once.
Algorithm
The RFC defines the algorithm as
inhibit the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
Where MSS is the maximum segment size, the largest segment that can be sent on this connection, and the window size is the currently acceptable window of unacknowledged data, this can be written in pseudocode as
if there is new data to send then
    if the window size ≥ MSS and available data is ≥ MSS then
        send complete MSS segment now
    else
        if there is unconfirmed data still in the pipe then
            enqueue data in the buffer until an acknowledge is received
        else
            send data immediately
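The same decision can be sketched in C++ along the following lines (an illustrative rendering of the pseudocode above, not an actual TCP implementation; the struct and field names are assumptions):
#include <cstddef>
#include <string>

// Illustrative sketch only; the bookkeeping fields are assumed, not taken from any real stack.
struct NagleSender {
    std::size_t mss;            // maximum segment size for this connection
    std::size_t window;         // currently usable send window
    std::size_t unacked_bytes;  // data sent but not yet acknowledged
    std::string buffer;         // small writes coalesced here

    // Returns true if the buffered data should be put on the wire now.
    bool on_new_data(const std::string& data) {
        buffer += data;
        if (window >= mss && buffer.size() >= mss)
            return true;        // a full MSS segment is available: send it
        if (unacked_bytes > 0)
            return false;       // wait for an ACK and keep coalescing
        return true;            // idle connection: send the small packet immediately
    }
};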
|
https://en.wikipedia.org/wiki/Self-assembled%20monolayer
|
Self-assembled monolayers (SAM) of organic molecules are molecular assemblies formed spontaneously on surfaces by adsorption and organized into more or less large ordered domains. In some cases the molecules that form the monolayer do not interact strongly with the substrate; this is the case, for instance, for the two-dimensional supramolecular networks of perylenetetracarboxylic dianhydride (PTCDA) on gold or of porphyrins on highly oriented pyrolytic graphite (HOPG). In other cases the molecules possess a head group that has a strong affinity for the substrate and anchors the molecule to it. Such a SAM, consisting of a head group, tail and functional end group, is depicted in Figure 1. Common head groups include thiols, silanes, phosphonates, etc.
SAMs are created by the chemisorption of "head groups" onto a substrate from either the vapor or liquid phase followed by a slow organization of "tail groups". Initially, at small molecular density on the surface, adsorbate molecules form either a disordered mass of molecules or form an ordered two-dimensional "lying down phase", and at higher molecular coverage, over a period of minutes to hours, begin to form three-dimensional crystalline or semicrystalline structures on the substrate surface. The "head groups" assemble together on the substrate, while the tail groups assemble far from the substrate. Areas of close-packed molecules nucleate and grow until the surface of the substrate is covered in a single monolayer.
Adsorbate molecules adsorb readily because they lower the surface free energy of the substrate and are stable due to the strong chemisorption of the "head groups." These bonds create monolayers that are more stable than the physisorbed bonds of Langmuir–Blodgett films. A trichlorosilane-based "head group", for example in an FDTS molecule, reacts with a hydroxyl group on a substrate and forms a very stable covalent bond [R-Si-O-substrate] with an energy of 452 kJ/mol. Thiol-metal bonds are on the o
|
https://en.wikipedia.org/wiki/Rubberhose%20%28file%20system%29
|
In computing, rubberhose (also known by its development codename Marutukku) is a deniable encryption archive containing multiple file systems whose existence can only be verified using the appropriate cryptographic key.
Name and history
The project was originally named Rubberhose, as it was designed to be resistant to attacks by people willing to use torture on those who knew the encryption keys. This is a reference to the rubber-hose cryptanalysis euphemism.
It was written in 1997–2000 by Julian Assange, Suelette Dreyfus, and Ralf Weinmann.
Technical
The following paragraphs are extracts from the project's documentation:
Status
Rubberhose is not actively maintained, although it is available for Linux kernel 2.2, NetBSD and FreeBSD. The latest version available, still in alpha stage, is v0.8.3.
See also
Rubber-hose cryptanalysis
Key disclosure law
StegFS
VeraCrypt hidden volumes
References
External links
Marutukku.org documentation and downloads
Cryptographic software
Works by Julian Assange
Steganography
|
https://en.wikipedia.org/wiki/Agile%20manufacturing
|
Agile manufacturing is a term applied to an organization that has created the processes, tools, and training to enable it to respond quickly to customer needs and market changes while still controlling costs and quality. It is mostly related to lean manufacturing.
An enabling factor in becoming an agile manufacturer has been the development of manufacturing support technology that allows the marketers, the designers and the production personnel to share a common database of parts and products, to share data on production capacities and problems—particularly where small initial problems may have larger downstream effects. It is a general proposition of manufacturing that the cost of correcting quality issues increases as the problem moves downstream, so that it is cheaper to correct quality problems at the earliest possible point in the process.
Agile manufacturing is seen as the next step after lean manufacturing in the evolution of production methodology. The key difference between the two is like that between a thin person and an athletic person, agile being the latter: an organization can be neither, either one, or both. In manufacturing theory, being both is often referred to as leagile.
According to Martin Christopher, when companies have to decide what to be, they have to look at the customer order cycle (COC) (the time the customers are willing to wait) and the lead time for getting supplies. If the supplier has a short lead time, lean production is possible. If the COC is short, agile production is beneficial.
Agile manufacturing is an approach to manufacturing which is focused on meeting the needs of customers while maintaining high standards of quality and controlling the overall costs involved in the production of a particular product. This approach is geared towards companies working in a highly competitive environment, where small variations in performance and product delivery can make a huge difference in the long term to a company's survival and reputation among consumers.
This
|
https://en.wikipedia.org/wiki/Drift%20mining
|
Drift mining is either the mining of an ore deposit by underground methods, or the working of coal seams accessed by adits driven into the surface outcrop of the coal bed. A drift mine is an underground mine in which the entry or access is above water level and generally on the slope of a hill, driven horizontally into the ore seam.
Drift is a more general mining term, meaning a near-horizontal passageway in a mine, following the bed (of coal, for instance) or vein of ore. A drift may or may not intersect the ground surface. A drift follows the vein, as distinguished from a crosscut that intersects it, or a level or gallery, which may do either. All horizontal or subhorizontal development openings made in a mine have the generic name of drift. These are simply tunnels made in the rock, with a size and shape depending on their use—for example, haulage, ventilation, or exploration.
Historical US drift mining (coal)
Colorado
The Boulder-Weld Coal Field beneath Marshall Mesa in Boulder, Colorado was drift mined from 1863 to 1939. Measurements in 2003, 2005, and 2022 showed that the mine has an active coal-seam fire. It was investigated as a possible cause of the 2021 Marshall Fire.
Illinois
Argyle Lake State Park's website says the Argyle Hollow (occupied by a lake since 1948) has been rich in coal, clay and limestone resources. Historically, individuals commonly opened and dug their own "drift mines" to supplement their income. In Appalachia, small coal mining operations such as these were known as "country bank" or "farmer" coal mines, and usually produced only small quantities for local use.
Indiana
The Lusk Mine, now in Turkey Run State Park, was in operation from the late 1800s through the late 1920s. Too small for commercial operation, the mine probably provided coal for the Lusk family and later for the park.
Kentucky
In 1820 the first commercial mine in Kentucky, known as the "McLean drift bank" opened near the Green River and Paradise in Muhlenber
|
https://en.wikipedia.org/wiki/Remote%20backup%20service
|
A remote, online, or managed backup service, sometimes marketed as cloud backup or backup-as-a-service, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing.
Online backup systems are typically built around a client software program that runs on a given schedule. Some systems run once a day, usually at night while computers aren't in use. Other, newer cloud backup services run continuously to capture changes to user systems in near real time. The online backup system typically collects, compresses, encrypts, and transfers the data to the remote backup service provider's servers or off-site hardware.
There are many products on the market – all offering different feature sets, service levels, and types of encryption. Providers of this type of service frequently target specific market segments. High-end LAN-based backup systems may offer services such as Active Directory, client remote control, or open file backups. Consumer online backup companies frequently have beta software offerings and/or free-trial backup services with fewer live support options.
History
In the mid-1980s, the computer industry was in a great state of change with modems at speeds of 1200 to 2400 baud, making transfers of large amounts of data slow (1 MB in 72 minutes). While faster modems and more secure network protocols were in development, tape backup systems gained in popularity. During that same period the need for an affordable, reliable online backup system was becoming clear, especially for businesses with critical data.
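For scale (a rough reconstruction of that figure, added here and assuming asynchronous framing of 10 bits per byte, i.e. 8 data bits plus start and stop bits): 1 MB ≈ 1,000,000 bytes ≈ 10,000,000 bits, which at 2,400 bit/s takes about 4,170 seconds, roughly 70 minutes, consistent with the quoted 72 minutes once protocol overhead and retransmissions are allowed for.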
More online/remote backup services came into existence during the heyday of the dot-com boom in the late 1990s. The initial years of these large industry service providers were about capturing market share and understanding the importance and the role that thes
|
https://en.wikipedia.org/wiki/EDonkey%20network
|
The eDonkey Network (also known as the eDonkey2000 network or eD2k) is a decentralized, mostly server-based, peer-to-peer file sharing network created in 2000 by US developers Jed McCaleb and Sam Yagan that is best suited to share big files among users, and to provide long term availability of files. Like most sharing networks, it is decentralized, as there is no central hub for the network; also, files are not stored on a central server but are exchanged directly between users based on the peer-to-peer principle.
The server part of the network is proprietary freeware. There are two families of server software for the eD2k network: the original one from MetaMachine, written in C++, closed-source and proprietary, and no longer maintained; and eserver, written in C, also closed-source and proprietary, although available free of charge and for several operating systems and computer architectures. The eserver family is currently in active development and support, and almost all eD2k servers as of 2008 run this server software.
There are many programs that act as the client part of the network. Most notably, eDonkey2000, the original client by MetaMachine, closed-source but freeware, and no longer maintained but very popular in its day; and eMule, a free program for Windows written in Visual C++ and licensed under the GNU GPL.
The original eD2k protocol has been extended by subsequent releases of both eserver and eMule programs, generally working together to decide what new features the eD2k protocol should support. However, the eD2k protocol is not formally documented (especially in its current extended state), and it can be said that in practice the eD2k protocol is what eMule and eserver do together when running, and also how eMule clients communicate among themselves. As eMule is open source, its code is freely available for peer-review of the workings of the protocol. Examples of eD2k protocol extensions are "peer exchange among clients", "protocol obfuscation"
|
https://en.wikipedia.org/wiki/Comparison%20of%20display%20technology
|
This is a comparison of various properties of different display technologies.
General characteristics
Major technologies are CRT, LCD and its derivatives (Quantum dot display, LED backlit LCD, WLCD, OLCD), Plasma, and OLED and its derivatives (Transparent OLED, PMOLED, AMOLED). An emerging technology is Micro LED, while cancelled and now-obsolete technologies include SED and FED.
Temporal characteristics
Different display technologies have vastly different temporal characteristics, leading to perceptual differences for motion, flicker, etc.
The figure shows a sketch of how different technologies present a single white/grey frame. Time and intensity are not to scale. Notice that some have a fixed intensity, while the illuminated period is variable. This is a kind of pulse-width modulation. Others can vary the actual intensity in response to the input signal.
Single-chip DLPs use a kind of "chromatic multiplexing" in which each color is presented serially. The intensity is varied by modulating the "on" time of each pixel within the time-span of one color. Multi-chip DLPs are not represented in this sketch, but would have a curve identical to the plasma display.
LCDs have a constant (backlit) image, where the intensity is varied by blocking the light shining through the panel.
CRTs use an electron beam, scanning the display, flashing a lit image. If interlacing is used, a single full-resolution image results in two "flashes". The physical properties of the phosphor are responsible for the rise and decay curves.
Plasma displays modulate the "on" time of each sub-pixel, similar to DLP.
Movie theaters use a mechanical shutter to illuminate the same frame 2 or 3 times, increasing the flicker frequency to make it less perceptible to the human eye.
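As a minimal illustration of the pulse-width modulation used conceptually by DLP and plasma panels described above, the sketch below maps a grey level to an on/off duty cycle and shows the intensity the eye would integrate over a frame. The slot count and bit depth are arbitrary choices, not parameters of any particular display.

```python
# Sketch of pulse-width modulation: the pixel is fully on or fully off, and the
# perceived intensity is the fraction of the frame time spent "on".
# Frame slot count and bit depth here are arbitrary illustrative choices.

def pwm_waveform(level, bits=8, slots=256):
    """Return an on/off sequence whose duty cycle encodes `level` (0..2**bits - 1)."""
    on_slots = round(level / (2**bits - 1) * slots)
    return [1] * on_slots + [0] * (slots - on_slots)

def perceived_intensity(waveform):
    """The eye integrates over the frame, so perceived intensity ~ mean on-time."""
    return sum(waveform) / len(waveform)

for level in (0, 64, 128, 255):
    print(level, round(perceived_intensity(pwm_waveform(level)), 3))
```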
Research
Researchers announced a display that uses silicon metasurface pixels that do not require polarized light and require half the energy. It employs a transparent conductive oxide as a heater that can quickly change the pixe
|
https://en.wikipedia.org/wiki/66%20block
|
A 66 block is a type of punch-down block used to connect sets of wires in a telephone system. They have been manufactured in four common configurations: A, B, E and M. The A and B styles have the clip rows on 0.25" centers, while E and M have the clip rows on 0.20" centers. The A blocks have 25 slotted holes on the left side for positioning the incoming building cable, with a 50-slot fanning strip on the right side for distribution cables; they have been obsolete for many years. The B and M styles have a 50-slot fanning strip on both sides. The B style is used mainly in distribution panels where several destinations (often 1A2 key telephones) need to connect to the same source. The M blocks are often used to connect a single instrument to such a distribution block. The E style has 5 columns of 10 two-clip rows and is used for transitioning from the 25-pair distribution cable to a 25-pair RJ21-style female ribbon connector.
66 blocks are designed to terminate 20 through 26 AWG insulated solid copper wire or 18 & 19 gauge skinned solid copper wire. The 66 series connecting block, introduced in the Bell System in 1962, was the first terminating device with insulation displacement connector technology. The term 66 block reflects its Western Electric model number.
The 25-pair standard non-split 66 block contains 50 rows; each row has two (E), four (M) or six (A and B) columns of clips that are electrically bonded. The 25-pair "split 50" 66 block is the industry standard for easy termination of voice cabling, and is a standard network termination used by telephone companies, generally on commercial properties. Each row contains four (M) or six (B) clips, but the left-side clips are electrically isolated from the right-side clips. Smaller versions also exist with fewer rows for smaller-scale use, such as residential.
66 E blocks are available pre-assembled with an RJ-21 female connector that accepts a quick connection to a 25-pair cable with a male end. These connections are typica
|
https://en.wikipedia.org/wiki/110%20block
|
A 110 block is a type of punch-down block used to terminate runs of on-premises wiring in a structured cabling system. The designation 110 is also used to describe a type of insulation displacement contact (IDC) connector used to terminate twisted pair cables, which uses a punch-down tool similar to the type used for the older 66 block.
Usage
Telephone distribution
Early residential telephone systems used simple screw terminals to join cables to sockets in a tree topology. These screw-terminal blocks have been slowly replaced by 110 blocks and connectors. Modern homes usually have phone service entering the house to a single 110 block, whence it is distributed by on-premises wiring to outlet boxes throughout the home in star topology. At the outlet box, cables are attached to ports with insulation-displacement contacts (IDCs), and those ports fit into special faceplates.
In commercial settings, this style of home run or star topology wiring was already in use on 66 blocks in telecom closets and switchrooms. The 110 block has been slowly replacing the 66 block, especially for data communications usage.
Computer networks
The 110 style insulation-displacement connection is often used at both ends of networking cables such as Category 5, Category 5e, Category 6, and Category 6A cables which run through buildings, as shown in the image. In switch rooms, 110 blocks are often built into the backs of patch panels to terminate cable runs. At the other end, 110 connections may be used with keystone modules that are attached to wall plates. 110 blocks are preferred over 66 blocks in high-speed networks because they introduce less crosstalk and many are certified for use in Category 5, Category 6 and Category 6A wiring systems.
Individual Category 5e and better-rated 8P8C jacks (keystone and patch panel) with IDC connectors commonly use the same punchdown 'teeth' dimensions and tools as a full size 110 block.
Advantages
110 style blocks allow a much higher density of te
|
https://en.wikipedia.org/wiki/Ex%20vivo
|
Ex vivo (Latin: "out of the living") literally means that which takes place outside an organism. In science, ex vivo refers to experimentation or measurements done in or on tissue from an organism in an external environment with minimal alteration of natural conditions.
A primary advantage of using ex vivo tissues is the ability to perform tests or measurements that would otherwise not be possible or ethical in living subjects. Tissues may be removed in many ways, including in part, as whole organs, or as larger organ systems.
Examples of ex vivo specimen use include:
bioassays;
using cancerous cell lines, like DU145 for prostate cancer, in drug testing of anticancer agents;
measurements of physical, thermal, electrical, mechanical, optical and other tissue properties, especially in various environments that may not be life-sustaining (for example, at extreme pressures or temperatures);
realistic models for surgical procedure development;
investigations into the interaction of different energy types with tissues; or
as phantoms in imaging technique development.
The term ex vivo means that the samples to be tested have been extracted from the organism. The term in vitro (lit. "within the glass") means the samples to be tested are obtained from a repository. In the case of cancer cells, a strain that would produce favorable results is selected and then grown to produce a control sample and the number of samples required for the tests. These two terms are not synonymous even though the testing in both cases is "within the glass".
In cell biology, ex vivo procedures often involve living cells or tissues taken from an organism and cultured in a laboratory apparatus, usually under sterile conditions with no alterations, for up to 24 hours to obtain sufficient cells for the experiments. Experiments generally start after 24 hours of incubation. Using living cells or tissue from the same organism are still considered to be ex vivo. One widely performed ex vivo study is
|
https://en.wikipedia.org/wiki/List%20of%20entomology%20journals
|
The following is a list of entomological journals and magazines:
References
Lists of academic journals
Zoology-related lists
Entomology journals
|
https://en.wikipedia.org/wiki/Eventual%20consistency
|
Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventual consistency, also called optimistic replication, is widely deployed in distributed systems and has origins in early mobile computing projects. A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence. Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent.
Eventually-consistent services are often classified as providing BASE semantics (basically-available, soft-state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability). In chemistry, a base is the opposite of an acid, which helps in remembering the acronym. Rough definitions of each term in BASE are:
Basically available: reading and writing operations are available as much as possible (using all nodes of a database cluster), but might not be consistent (the write might not persist after conflicts are reconciled, and the read might not get the latest write)
Soft-state: without consistency guarantees, after some amount of time, we only have some probability of knowing the state, since it might not yet have converged
Eventually consistent: If we execute some writes and then the system functions long enough, we can know the state of the data; any further reads of that data item will return the same value
Eventual consistency is sometimes criticized as increasing the complexity of distributed software applications. This is partly because eventual consistency is purely a liveness guarantee (reads eventually return the same value) and does not guarantee safety: an eventually consistent system can return any value before it converges.
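As an illustration of replica convergence, the sketch below uses last-writer-wins timestamps with pairwise anti-entropy merges. This is one of several possible reconciliation strategies and is not prescribed by the model itself; the keys, values, and timestamps are invented for the example.

```python
# Minimal sketch of eventual consistency via last-writer-wins anti-entropy.
# Each replica stores (value, timestamp) per key; gossip rounds merge states,
# and once updates stop, all replicas converge to the same value.

class Replica:
    def __init__(self):
        self.store = {}                      # key -> (value, timestamp)

    def write(self, key, value, ts):
        current = self.store.get(key)
        if current is None or ts > current[1]:
            self.store[key] = (value, ts)

    def read(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

    def merge(self, other):
        """Anti-entropy: adopt any entry with a newer timestamp."""
        for key, (value, ts) in other.store.items():
            self.write(key, value, ts)

a, b, c = Replica(), Replica(), Replica()
a.write("x", "v1", ts=1)                     # accepted at replica a only
b.write("x", "v2", ts=2)                     # a later, conflicting write at b
print(a.read("x"), b.read("x"), c.read("x")) # v1 v2 None -- not yet consistent

for r1, r2 in [(a, b), (b, c), (c, a)]:      # a few gossip rounds
    r1.merge(r2); r2.merge(r1)
print(a.read("x"), b.read("x"), c.read("x")) # v2 v2 v2 -- converged
```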
Conflic
|
https://en.wikipedia.org/wiki/Thermal%20hydraulics
|
Thermal hydraulics (also called thermohydraulics) is the study of hydraulic flow in thermal fluids. The area can be mainly divided into three parts: thermodynamics, fluid mechanics, and heat transfer, but they are often closely linked to each other. A common example is steam generation in power plants and the associated energy transfer to mechanical motion and the change of states of the water while undergoing this process. Thermal-hydraulic analysis can determine important parameters for reactor design such as plant efficiency and coolability of the system.
The common adjectives are "thermohydraulic", "thermal-hydraulic" and "thermalhydraulic".
Thermodynamic analysis
In the thermodynamic analysis, all states defined in the system are assumed to be in thermodynamic equilibrium; each state has mechanical, thermal, and phase equilibrium, and there is no macroscopic change with respect to time. For the analysis of the system, the first law and second law of thermodynamics can be applied.
In power plant analysis, a series of states can comprise a cycle. In this case, each state represents the condition at the inlet/outlet of an individual component. Examples of components are the pump, compressor, turbine, reactor, and heat exchanger. By considering the constitutive equation for the given type of fluid, the thermodynamic state at each point can be analyzed. As a result, the thermal efficiency of the cycle can be defined.
Examples of the cycle include the Carnot cycle, Brayton cycle, and Rankine cycle. Based on the simple cycle, modified or combined cycle also exists.
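As a minimal illustration of such a cycle analysis, the sketch below computes the Carnot limit between two reservoir temperatures and a first-law cycle efficiency from heat added and rejected. The temperatures and heat values are illustrative placeholders, not data from the text.

```python
# Sketch: thermal efficiency of ideal cycles from the first and second laws.
# Temperatures and heat values below are illustrative placeholders.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound set by the second law for any cycle between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

def first_law_efficiency(q_in, q_out):
    """For any closed cycle, net work = q_in - q_out, so eta = w_net / q_in."""
    return (q_in - q_out) / q_in

# e.g. steam at ~560 C rejecting heat to a ~30 C condenser
print(round(carnot_efficiency(833.0, 303.0), 3))       # ~0.636 theoretical limit
# a plant absorbing 2800 kJ/kg in the boiler and rejecting 1900 kJ/kg
print(round(first_law_efficiency(2800.0, 1900.0), 3))  # ~0.321 actual cycle
```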
Thermo-Hydraulic Improvement Parameter (THIP)
Sahu et al.[6] observe that the Thermo-hydraulic Parameter (THP) is less sensitive to the Friction Factor Improvement Factor (FFER). The deviation between the terms (fR/fS) and (fR/fS)^0.33 has been found to be 48% to 64% over the studied range of roughness and other parameters, with Reynolds numbers (Re) of 2,900 to 14,000.
Therefore, in order to eval
|
https://en.wikipedia.org/wiki/Kad%20network
|
The Kad network is a peer-to-peer (P2P) network which implements the Kademlia P2P overlay protocol. The majority of users on the Kad Network are also connected to servers on the eDonkey network, and Kad Network clients typically query known nodes on the eDonkey network in order to find an initial node on the Kad network.
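Kademlia lookups are organized around an XOR distance between node and keyword IDs. The following is a minimal, illustrative sketch of that metric and a closest-node selection; the 8-bit IDs and node list are invented for the example (real Kad IDs are 128-bit).

```python
# Minimal sketch of Kademlia's XOR distance metric, the basis of the Kad overlay.
# Real Kad node and keyword IDs are 128-bit; 8-bit IDs are used here for brevity.

def xor_distance(id_a: int, id_b: int) -> int:
    """Kademlia defines distance as the bitwise XOR of the two IDs."""
    return id_a ^ id_b

def closest_nodes(target: int, node_ids, k: int = 3):
    """Return the k known nodes closest to a target ID, as used in iterative lookups."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b00010110, 0b01110001, 0b10110010, 0b11001100, 0b00011111]
target = 0b00010000                      # e.g. the hash of a search keyword
print([bin(n) for n in closest_nodes(target, nodes)])
```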
Usage
The Kad network uses a UDP-based protocol to:
Find sources for eD2k hashes.
Search for eD2k hashes based on keywords in the file name.
Find comments and ratings for files (hashes).
Provide buddy services for firewalled (Low ID) nodes.
Store locations, comments and (keywords out of) filenames.
Note that the Kad network is not used to actually transfer files across the P2P network. Instead, when a file transfer is initiated, clients connect directly to each other (using the standard public IP network). This traffic is susceptible to blocking/shaping/tracking by an ISP or any other opportunistic middle-man.
As with all decentralized networks, the Kad network requires no official or common servers. As such, it cannot be disabled by shutting down a given subset of key nodes. While the decentralization of the network prevents a simple shut-down, traffic analysis and deep packet inspection will more readily identify the traffic as P2P due to the high variable-destination packet throughput. The large packet volume typically causes a reduction in available CPU and/or network resources usually associated with P2P traffic.
Clients
Client search
The Kad network supports searching of files by name and a number of secondary characteristics such as size, extension, bit-rate, and more. Features vary based on client used.
Major clients
Only a few major clients currently support the Kad network implementation. However, they comprise over 80% of the user base and are probably closer to 95% of ed2k installations.
eMule: An open source Windows client which is the most popular, with 80% of network users. It also runs on Linux using the Wine librar
|
https://en.wikipedia.org/wiki/Pelorus%20%28instrument%29
|
In marine navigation, a pelorus is a reference tool for maintaining bearing of a vessel at sea. It is a "simplified compass" without a directive element, suitably mounted and provided with vanes to permit observation of relative bearings.
The instrument was named for one Pelorus, said to have been the pilot for Hannibal, circa 203 BC.
Ancient use
Harold Gatty described the use of a pelorus by Polynesians before the use of a compass. In equatorial waters the nightly course of stars overhead is nearly uniform during the year. This regularity simplified navigation for the Polynesians using a pelorus, or dummy compass:
Reading from North to South, in their rising and setting positions, these stars are:
{| class="wikitable"
|+ Stars
|- valign="top"
! Point
! Star
|- valign="top"
| N || Polaris
|- valign="top"
| NbE || "the Guards" (Ursa Minor)
|- valign="top"
| NNE || Alpha Ursa Major
|- valign="top"
| NEbN ||Alpha Cassiopeiae
|- valign="top"
| NE || Capella
|- valign="top"
| NEbE || Vega
|- valign="top"
| ENE || Arcturus
|- valign="top"
| EbN || the Pleiades
|- valign="top"
| E || Altair
|- valign="top"
| EbS || Orion's belt
|- valign="top"
| ESE || Sirius
|- valign="top"
| SEbE || Beta Scorpionis
|- valign="top"
| SE || Antares
|- valign="top"
| SEbS || Alpha Centauri
|- valign="top"
| SSE || Canopus
|- valign="top"
| SbE || Achernar
|- valign="top"
| S || Southern Cross
|}
The true positions of these stars only approximate their theoretical equidistant rhumbs on the sidereal compass. Over time, the elaboration of the pelorus points led to the modern compass rose.
Modern use
In appearance and use, a pelorus resembles a compass or compass repeater, with sighting vanes or a sighting telescope attached, but it has no directive properties. That is, it remains at any relative direction to which it is set. It is generally used by setting 000° at the lubber's line. Relative bearings are then observed. They can be converted to bearings true, magnetic, gr
|
https://en.wikipedia.org/wiki/Copycat%20%28software%29
|
Copycat is a model of analogy making and human cognition based on the concept of the parallel terraced scan, developed in 1988 by Douglas Hofstadter, Melanie Mitchell, and others at the Center for Research on Concepts and Cognition, Indiana University Bloomington. The original Copycat was written in Common Lisp and is bitrotten (as it relies on now-outdated graphics libraries for Lucid Common Lisp); however, Java and Python ports exist. The latest version in 2018 is a Python3 port by Lucas Saldyt and J. Alan Brogan.
Description
Copycat produces answers to such problems as "abc is to abd as ijk is to what?" (abc:abd :: ijk:?). Hofstadter and Mitchell consider analogy making as the core of high-level cognition, or high-level perception, as Hofstadter calls it, basic to recognition and categorization.
High-level perception emerges from the spreading activity of many independent processes, called codelets, running in parallel, competing or cooperating. They create and destroy temporary perceptual constructs, probabilistically trying out variations to eventually produce an answer. The codelets rely on an associative network, slipnet, built on pre-programmed concepts and their associations (a long-term memory). The changing activation levels of the concepts make a conceptual overlap with neighboring concepts.
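As an illustration of the spreading-activation idea, the toy sketch below models a slipnet-like network as a weighted graph whose node activations leak to neighbours and decay over time. The concepts, weights, and update rule are invented for the example and are not Copycat's actual parameters.

```python
# Toy sketch of a slipnet-style associative network with spreading activation.
# Concepts, link weights, and the decay/spread rates are illustrative only.

links = {                               # weighted associations (long-term memory)
    "a": {"b": 0.9, "letter": 0.5},
    "b": {"a": 0.9, "c": 0.9, "letter": 0.5},
    "c": {"b": 0.9, "letter": 0.5},
    "letter": {"a": 0.5, "b": 0.5, "c": 0.5},
}
activation = {concept: 0.0 for concept in links}
activation["a"] = 1.0                   # a codelet has just noticed an 'a'

def spread(activation, links, rate=0.2, decay=0.9):
    """One step: each concept leaks activation to its neighbours, then decays."""
    incoming = {c: 0.0 for c in activation}
    for source, nbrs in links.items():
        for target, weight in nbrs.items():
            incoming[target] += rate * weight * activation[source]
    return {c: min(1.0, decay * activation[c] + incoming[c]) for c in activation}

for _ in range(3):
    activation = spread(activation, links)
print({c: round(v, 2) for c, v in activation.items()})
```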
Copycat's architecture is tripartite, consisting of a slipnet, a working area (also called workspace, similar to blackboard systems), and the coderack (with the codelets). The slipnet is a network composed of nodes, which represent permanent concepts, and weighted links, which are relations, between them. It differs from traditional semantic networks, since the effective weight associated with a particular link may vary through time according to the activation level of specific concepts (nodes). The codelets build structures in the working area and modify activations in the slipnet accordingly (bottom-up processes), and the current state of slipnet determines prob
|
https://en.wikipedia.org/wiki/Shirley%20Corriher
|
Shirley O. Corriher (born February 23, 1935) is an American biochemist and author of CookWise: The Hows and Whys of Successful Cooking, winner of a James Beard Foundation award, and BakeWise: The Hows and Whys of Successful Baking. CookWise shows how scientific insights can be applied to traditional cooking, while BakeWise applies the same idea to baking. Some compare Corriher's approach to that of Harold McGee (whom Corriher thanks as her "intellectual hero" in the "My Gratitude and Thanks" section of Cookwise) and Alton Brown. She has made a number of appearances as a food consultant on Brown's show Good Eats and has released a DVD, Shirley O. Corriher's Kitchen Secrets Revealed.
Personal life
After graduating from Vanderbilt University in 1959, she and her husband opened a boys' school in Atlanta, Georgia, where she was responsible for cooking three meals a day for 30 boys. By 1970 the school had grown to 140 students. That same year she divorced her husband and left the school; she took up cooking to support herself and her three children.
Corriher lives with her current husband, Arch, in Atlanta.
Books
See also
Harold McGee
Alton Brown
References
External links
Newsletter of the Food, Agriculture, and Nutrition Division of the Special Libraries Association Vol 32 No 1, August 2000, PDF which contains Corriher's Compendium of Ingredient and Cooking Problems - An expanded collection of typical problems in different food areas
1935 births
American women biochemists
American food writers
Women food writers
Women cookbook writers
Living people
Molecular gastronomy
Vanderbilt University alumni
Writers from Atlanta
James Beard Foundation Award winners
|
https://en.wikipedia.org/wiki/Inverse%20semigroup
|
In group theory, an inverse semigroup (occasionally called an inversion semigroup) S is a semigroup in which every element x in S has a unique inverse y in S in the sense that x = xyx and y = yxy, i.e. a regular semigroup in which every element has a unique inverse. Inverse semigroups appear in a range of contexts; for example, they can be employed in the study of partial symmetries.
(The convention followed in this article will be that of writing a function on the right of its argument, e.g. x f rather than f(x), and
composing functions from left to right—a convention often observed in semigroup theory.)
Origins
Inverse semigroups were introduced independently by Viktor Vladimirovich Wagner in the Soviet Union in 1952, and by Gordon Preston in the United Kingdom in 1954. Both authors arrived at inverse semigroups via the study of partial bijections of a set: a partial transformation α of a set X is a function from A to B, where A and B are subsets of X. Let α and β be partial transformations of a set X; α and β can be composed (from left to right) on the largest domain upon which it "makes sense" to compose them:
dom(αβ) = [im α ∩ dom β]α−1,
where α−1 denotes the preimage under α. Partial transformations had already been studied in the context of pseudogroups. It was Wagner, however, who was the first to observe that the composition of partial transformations is a special case of the composition of binary relations. He recognised also that the domain of composition of two partial transformations may be the empty set, so he introduced an empty transformation to take account of this. With the addition of this empty transformation, the composition of partial transformations of a set becomes an everywhere-defined associative binary operation. Under this composition, the collection of all partial one-one transformations of a set X forms an inverse semigroup, called the symmetric inverse semigroup (or monoid) on X, with inverse the functional inverse defined from image to domain (equivalently, the convers
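As a concrete illustration of the symmetric inverse semigroup, the sketch below represents partial one-one transformations of a finite set as Python dictionaries, composes them left to right as in the convention above, and inverts them by reversing the pairs. The specific maps are arbitrary examples.

```python
# Sketch: partial bijections on a finite set, composed left to right (x(ab) = (xa)b),
# with the inverse obtained by reversing the pairs.

def compose(alpha, beta):
    """Left-to-right composition: defined where alpha's image meets beta's domain."""
    return {x: beta[alpha[x]] for x in alpha if alpha[x] in beta}

def inverse(alpha):
    """Partial bijections are one-one, so reversing the pairs gives the inverse."""
    return {y: x for x, y in alpha.items()}

alpha = {1: 2, 2: 3}          # a partial bijection of X = {1, 2, 3, 4}
beta = {3: 4}

print(compose(alpha, beta))                                      # {2: 4}: only 2 -> 3 -> 4 survives
print(compose(compose(alpha, inverse(alpha)), alpha) == alpha)   # a a^-1 a == a -> True
```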
|
https://en.wikipedia.org/wiki/Laser%20trimming
|
Laser trimming is the manufacturing process of using a laser to adjust the operating parameters of an electronic circuit.
One of the most common applications uses a laser to burn away small portions of resistors, raising their resistance value. The burning operation can be conducted while the circuit is being tested by automatic test equipment, leading to optimum final values for the resistor(s) in the circuit.
The resistance value of a film resistor is defined by its geometric dimensions (length, width, height) and the resistor material. A lateral cut in the resistor material by the laser narrows or lengthens the current flow path and increases the resistance value. The same effect is obtained whether the laser changes a thick-film or a thin-film resistor on a ceramic substrate or an SMD-resistor on a SMD circuit. The SMD-resistor is produced with the same technology and may be laser trimmed as well.
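The geometric dependence described above can be illustrated with the familiar film-resistance relation R = ρL/(Wt): narrowing the effective width of the current path raises the resistance. The material resistivity and dimensions in the sketch below are illustrative placeholders, not values from any particular process.

```python
# Sketch of why a lateral laser cut raises a film resistor's value:
# R = rho * L / (W * t), so reducing the effective width W increases R.
# Resistivity and dimensions below are illustrative placeholders.

def film_resistance(rho_ohm_m, length_m, width_m, thickness_m):
    return rho_ohm_m * length_m / (width_m * thickness_m)

rho = 2.0e-5          # ohm*m, a nominal thick-film material
L, W, t = 2.0e-3, 1.0e-3, 10e-6

r_before = film_resistance(rho, L, W, t)
# a cut removing 30% of the width over the whole length gives a simple
# estimate of how much the trimmed value rises
r_after = film_resistance(rho, L, 0.7 * W, t)

print(round(r_before, 1), round(r_after, 1))   # 4.0 -> ~5.7 ohms
```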
Trimmable chip capacitors are built up as multilayer plate capacitors. Vaporizing the top layer with a laser decreases the capacitance by reducing the area of the top electrode.
Passive trim is the adjustment of a resistor to a given value. If the trimming adjusts the whole circuit output such as output voltage, frequency, or switching threshold, this is called active trim. During the trim process, the corresponding parameter is measured continuously and compared to the programmed nominal value. The laser stops automatically when the value reaches the nominal value.
Trimming LTCC resistances in a pressure chamber
One type of passive trimmer uses a pressure chamber to enable resistor trimming in a single run. The LTCC boards are contacted by test probes on the assembly side and trimmed with a laser beam from the resistor side. This trimming method requires no contact points between the resistances, because the fine pitch adapter contacts the component on the opposite side of where the trimming occurs. Therefore, the LTCC can be arranged more compactly and less exp
|
https://en.wikipedia.org/wiki/Semantics%20encoding
|
A semantics encoding is a translation between formal languages. For programmers, the most familiar form of encoding is the compilation of a programming language into machine code or byte-code. Conversion between document formats are also forms of encoding. Compilation of TeX or LaTeX documents to PostScript are also commonly encountered encoding processes. Some high-level preprocessors such as OCaml's Camlp4 also involve encoding of a programming language into another.
Formally, an encoding of a language A into language B is a mapping of all terms of A into B. If there is a satisfactory encoding of A into B, B is considered at least as powerful (or at least as expressive) as A.
Properties
An informal notion of translation is not sufficient to help determine expressivity of languages, as it permits trivial encodings such as mapping all elements of A to the same element of B. Therefore, it is necessary to determine the definition of a "good enough" encoding. This notion varies with the application.
Commonly, an encoding is expected to preserve a number of properties.
Preservation of compositions
soundness For every n-ary operator op of A, there exists an n-ary operator op' of B such that [[op(e1, ..., en)]] = op'([[e1]], ..., [[en]]), where [[e]] denotes the encoding of a term e
completeness For every n-ary operator of A, there exists an n-ary operator of B such that
(Note: as far as the author is aware of, this criterion of completeness is never used.)
Preservation of compositions is useful insofar as it guarantees that components can be examined either separately or together without "breaking" any interesting property. In particular, in the case of compilations, this soundness guarantees the possibility of proceeding with separate compilation of components, while completeness guarantees the possibility of de-compilation.
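A toy illustration of a compositional encoding follows: a tiny arithmetic language A is translated operator by operator into a prefix-notation language B, so that the encoding of op(e1, ..., en) is built from the encodings of the ei. Both languages and the operator table are invented for the example.

```python
# Toy illustration of a compositional (sound) encoding: translating a tiny
# arithmetic language A into a prefix-notation language B operator by operator,
# so that encode(op(e1, ..., en)) = op'(encode(e1), ..., encode(en)).

# Language A terms: numbers, or tuples (operator, operand, ..., operand)
OP_TABLE = {"plus": "add", "times": "mul"}      # operators of A -> operators of B

def encode(term):
    if isinstance(term, (int, float)):          # atoms map to atoms
        return str(term)
    op, *args = term
    return f"({OP_TABLE[op]} {' '.join(encode(a) for a in args)})"

term_a = ("plus", 1, ("times", 2, 3))
print(encode(term_a))        # (add 1 (mul 2 3)) -- the structure of A is preserved
```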
Preservation of reductions
This assumes the existence of a notion of reduction on both language A and language B. Typically, in the case of a programming language, reduction is the relation which models the execution of a
|
https://en.wikipedia.org/wiki/ATM%20Adaptation%20Layer%205
|
ATM Adaptation Layer 5 (AAL5) is an ATM adaptation layer used to send variable-length packets up to 65,535 octets in size across an Asynchronous Transfer Mode (ATM) network.
Unlike most network frames, which place control information in the header, AAL5 places control information in an 8-octet trailer at the end of the packet. The AAL5 trailer contains a 16-bit length field, a 32-bit cyclic redundancy check (CRC) and two 8-bit fields labeled UU and CPI that are currently unused.
Each AAL5 packet is divided into an integral number of ATM cells and reassembled into a packet before delivery to the receiving host. This process is known as Segmentation and Reassembly (see below). The last cell contains padding to ensure that the entire packet is a multiple of 48 octets long. The final cell contains up to 40 octets of data, followed by padding bytes and the 8-octet trailer. In other words, AAL5 places the trailer in the last 8 octets of the final cell where it can be found without knowing the length of the packet; the final cell is identified by a bit in the ATM header (see below), and the trailer is always in the last 8 octets of that cell.
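The padding arithmetic described above can be sketched as follows: payload plus pad plus the 8-octet trailer must total a multiple of 48 octets. The CRC in this sketch uses zlib's CRC-32 purely as a stand-in and does not reproduce the exact AAL5 CRC conventions, so it is illustrative only.

```python
# Sketch of AAL5 segmentation arithmetic: the CPCS-PDU (payload + pad + 8-octet
# trailer) must be a multiple of 48 octets so it fills an integral number of cells.
# The CRC here uses zlib's CRC-32 as a stand-in; exact AAL5 CRC conventions
# (bit ordering, initial/final values) are not reproduced.
import struct
import zlib

CELL_PAYLOAD = 48      # octets of AAL data per ATM cell
TRAILER_LEN = 8        # UU (1) + CPI (1) + Length (2) + CRC-32 (4)

def aal5_pdu(payload: bytes) -> bytes:
    pad_len = (-(len(payload) + TRAILER_LEN)) % CELL_PAYLOAD
    padded = payload + b"\x00" * pad_len
    trailer_no_crc = struct.pack("!BBH", 0, 0, len(payload))   # UU, CPI, length
    crc = zlib.crc32(padded + trailer_no_crc) & 0xFFFFFFFF     # illustrative CRC
    return padded + trailer_no_crc + struct.pack("!I", crc)

pdu = aal5_pdu(b"x" * 100)
cells = [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]
print(len(pdu), len(cells))        # 144 octets -> 3 cells, trailer in the last 8
```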
Convergence, segmentation, and reassembly
When an application sends data over an ATM connection using AAL5, the host delivers a block of data to the AAL5 interface. AAL5 generates a trailer, divides the information into 48-octet pieces, and transfers each piece across the ATM network in a single cell. On the receiving end of the connection, AAL5 reassembles incoming cells into a packet, checks the CRC to ensure that all pieces arrived correctly, and passes the resulting block of data to the host software. The process of dividing a block of data into cells and regrouping them is known as ATM segmentation and reassembly (SAR).
By separating the functions of segmentation and reassembly from cell transport, AAL5 follows the layering principle. The ATM cell transfer layer is classified as "machine-to-machine" because the layering pri
|
https://en.wikipedia.org/wiki/Superabundant%20number
|
In mathematics, a superabundant number is a certain kind of natural number. A natural number n is called superabundant precisely when, for all m < n:
σ(m)/m < σ(n)/n,
where σ denotes the sum-of-divisors function (i.e., the sum of all positive divisors of n, including n itself). The first few superabundant numbers are 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ... For example, the number 5 is not a superabundant number because for n = 1, 2, 3, 4, and 5, the sigma is 1, 3, 4, 7, and 6, and 7/4 > 6/5.
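A brute-force sketch of this definition, practical only for small n, reproduces the first few terms listed above.

```python
# Brute-force check of the definition: n is superabundant when sigma(n)/n beats
# sigma(m)/m for every m < n. Only practical for small n.

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def superabundant_up_to(limit):
    best, found = 0.0, []
    for n in range(1, limit + 1):
        ratio = sigma(n) / n
        if ratio > best:
            best, found = ratio, found + [n]
    return found

print(superabundant_up_to(150))   # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120]
```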
Superabundant numbers were defined by . Unknown to Alaoglu and Erdős, about 30 pages of Ramanujan's 1915 paper "Highly Composite Numbers" were suppressed. Those pages were finally published in The Ramanujan Journal 1 (1997), 119–153. In section 59 of that paper, Ramanujan defines generalized highly composite numbers, which include the superabundant numbers.
Properties
Alaoglu and Erdős (1944) proved that if n is superabundant, then there exist a k and a1, a2, ..., ak such that
n = p1^a1 · p2^a2 ⋯ pk^ak,
where pi is the i-th prime number, and
a1 ≥ a2 ≥ ⋯ ≥ ak ≥ 1.
That is, they proved that if n is superabundant, the prime decomposition of n has non-increasing exponents (the exponent of a larger prime is never more than that of a smaller prime) and that all primes up to pk are factors of n. In particular, any superabundant number is an even integer, and it is a multiple of the k-th primorial.
In fact, the last exponent ak is equal to 1 except when n is 4 or 36.
Superabundant numbers are closely related to highly composite numbers. Not all superabundant numbers are highly composite numbers. In fact, only 449 superabundant and highly composite numbers are the same. For instance, 7560 is highly composite but not superabundant. Conversely, 1163962800 is superabundant but not highly composite.
Alaoglu and Erdős observed that all superabundant numbers are highly abundant.
Not all superabundant numbers are Harshad numbers. The first exception is the 105th superabundant number, 149602080797769600. The digit sum is 81, but 81 does not divide evenly into this superabundant number.
Superabundant numbers are also of interest in connection with the
|
https://en.wikipedia.org/wiki/Colossally%20abundant%20number
|
In number theory, a colossally abundant number (sometimes abbreviated as CA) is a natural number that, in a particular, rigorous sense, has many divisors. Particularly, it is defined by a ratio between the sum of an integer's divisors and that integer raised to a power higher than one. For any such exponent, whichever integer has the highest ratio is a colossally abundant number. It is a stronger restriction than that of a superabundant number, but not strictly stronger than that of an abundant number.
Formally, a number n is said to be colossally abundant if there is an ε > 0 such that for all k > 1,
σ(n)/n^(1+ε) ≥ σ(k)/k^(1+ε),
where σ denotes the sum-of-divisors function.
The first 15 colossally abundant numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800 are also the first 15 superior highly composite numbers, but neither set is a subset of the other.
History
Colossally abundant numbers were first studied by Ramanujan and his findings were intended to be included in his 1915 paper on highly composite numbers. Unfortunately, the publisher of the journal to which Ramanujan submitted his work, the London Mathematical Society, was in financial difficulties at the time and Ramanujan agreed to remove aspects of the work to reduce the cost of printing. His findings were mostly conditional on the Riemann hypothesis and with this assumption he found upper and lower bounds for the size of colossally abundant numbers and proved that what would come to be known as Robin's inequality (see below) holds for all sufficiently large values of n.
The class of numbers was reconsidered in a slightly stronger form in a 1944 paper of Leonidas Alaoglu and Paul Erdős in which they tried to extend Ramanujan's results.
Properties
Colossally abundant numbers are one of several classes of integers that try to capture the notion of having many divisors. For a positive integer n, the sum-of-divisors function σ(n) gives the sum of all those numbers that divide n, i
|
https://en.wikipedia.org/wiki/Highly%20abundant%20number
|
In number theory, a highly abundant number is a natural number with the property that the sum of its divisors (including itself) is greater than the sum of the divisors of any smaller natural number.
Highly abundant numbers and several similar classes of numbers were first introduced by Pillai (1943), and early work on the subject was done by Alaoglu and Erdős (1944). Alaoglu and Erdős tabulated all highly abundant numbers up to 10^4, and showed that the number of highly abundant numbers less than any N is at least proportional to log² N.
Formal definition and examples
Formally, a natural number n is called highly abundant if and only if, for all natural numbers m < n,
σ(n) > σ(m),
where σ denotes the sum-of-divisors function. The first few highly abundant numbers are
1, 2, 3, 4, 6, 8, 10, 12, 16, 18, 20, 24, 30, 36, 42, 48, 60, ... .
For instance, 5 is not highly abundant because σ(5) = 5+1 = 6 is smaller than σ(4) = 4 + 2 + 1 = 7, while 8 is highly abundant because σ(8) = 8 + 4 + 2 + 1 = 15 is larger than all previous values of σ.
The only odd highly abundant numbers are 1 and 3.
Relations with other sets of numbers
Although the first eight factorials are highly abundant, not all factorials are highly abundant. For example,
σ(9!) = σ(362880) = 1481040,
but there is a smaller number with larger sum of divisors,
σ(360360) = 1572480,
so 9! is not highly abundant.
Alaoglu and Erdős noted that all superabundant numbers are highly abundant, and asked whether there are infinitely many highly abundant numbers that are not superabundant. This question was later answered affirmatively.
Despite the terminology, not all highly abundant numbers are abundant numbers. In particular, none of the first seven highly abundant numbers (1, 2, 3, 4, 6, 8, and 10) is abundant. Along with 16, the ninth highly abundant number, these are the only highly abundant numbers that are not abundant.
7200 is the largest powerful number that is also highly abundant: all larger highly abundant numbers have a prime factor that divides them only o
|
https://en.wikipedia.org/wiki/Superior%20highly%20composite%20number
|
In number theory, a superior highly composite number is a natural number which, in a particular rigorous sense, has many divisors. Particularly, it is defined by a ratio between the number of divisors an integer has and that integer raised to some positive power. For any possible exponent, whichever integer has the highest ratio is a superior highly composite number. It is a stronger restriction than that of a highly composite number, which is defined as having more divisors than any smaller positive integer.
The first 10 superior highly composite numbers and their factorization are listed.
For a superior highly composite number n there exists a positive real number ε such that for all natural numbers k smaller than n we have
d(n)/n^ε ≥ d(k)/k^ε
and for all natural numbers k larger than n we have
d(n)/n^ε > d(k)/k^ε,
where d(n), the divisor function, denotes the number of divisors of n. The term was coined by Ramanujan (1915).
For example, the number with the most divisors per square root of the number itself is 12; this can be demonstrated using some highly composite numbers near 12.
120 is another superior highly composite number because it has the highest ratio of divisor count to the number itself raised to the 0.4 power.
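A short sketch can verify these two examples by evaluating d(n)/n^ε over nearby highly composite numbers; the candidate list is deliberately small, so this is an illustration rather than a proof.

```python
# Sketch: evaluate d(n)/n**eps over small highly composite candidates,
# showing that 12 maximizes the ratio at eps = 0.5 and 120 at eps = 0.4.

def d(n):
    """Number of divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

candidates = [2, 4, 6, 12, 24, 36, 48, 60, 120, 180, 240, 360]

for eps in (0.5, 0.4):
    best = max(candidates, key=lambda n: d(n) / n**eps)
    print(eps, best, round(d(best) / best**eps, 3))
# 0.5 -> 12 (ratio ~1.732), 0.4 -> 120 (ratio ~2.357)
```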
The first 15 superior highly composite numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800 are also the first 15 colossally abundant numbers, which meet a similar condition based on the sum-of-divisors function rather than the number of divisors. Neither set, however, is a subset of the other.
Properties
All superior highly composite numbers are highly composite. This is easy to prove: if there is some number k that has the same number of divisors as n but is less than n itself (i.e. d(k) = d(n), but k < n), then d(k)/k^ε > d(n)/n^ε for all positive ε, so if a number n is not highly composite, it cannot be superior highly composite.
An effective construction of the set of all superior highly composite numbers is given by the following monotonic mapping from the positive real number
|
https://en.wikipedia.org/wiki/TippingPoint
|
TippingPoint Technologies was an American computer hardware and software company active between 1999 and 2015. Its focus was on network security products, particularly intrusion prevention systems for networks. In 2015, it was acquired by Trend Micro.
History
The company was founded in January 1999 under the name Shbang! in Texas.
Its co-founders were John F. McHale, Kent A. Savage (first chief executive), and Kenneth A. Kalinoski. Its business was to develop and sell Internet appliances.
In May 1999, the company changed its name to Netpliance and in November they released the i-Opener, a low-cost computer intended for browsing the World Wide Web. The hardware was sold at a loss, and costs were recouped through a subscription service plan. When the device was found to be easily modded to avoid the service plan, Netpliance changed the terms of sale to charge a termination fee. In 2001, the Federal Trade Commission fined the company $100,000 for inaccurate advertising and unfair billing of customers.
In 2002, the company discontinued operations of its internet appliance business and renamed itself TippingPoint. CEO Savage was replaced by chairman of the board McHale. McHale stepped down in 2004, but remained chairman of the board. The position was filled by Kip McClanahan, former CEO of BroadJump.
In January 2005, TippingPoint was acquired by the network equipment company 3Com for $442 million, operating as a division of 3Com led by James Hamilton (TippingPoint President), later replaced by Alan Kessler. 3Com itself was subsequently acquired by computer manufacturer Hewlett-Packard in April 2010 for approximately $2.7 billion.
On Oct 21, 2015, TippingPoint was acquired by Trend Micro for approximately $300 million.
Technology
The TippingPoint NGIPS is a network Intrusion Prevention System (IPS) that deals with IT threat protection. It combines application-level security with user awareness and inbound/outbound messaging inspection capabilities, to protect the user'
|
https://en.wikipedia.org/wiki/Object-based%20language
|
The term object-based language may be used in a technical sense to describe any programming language that uses the idea of encapsulating state and operations inside objects. Object-based languages need not support inheritance or subtyping, but those that do are also termed object-oriented. Object-based languages that do not support inheritance or subtyping are usually not considered to be true object-oriented languages.
Examples of object-oriented languages, in rough chronological order, include Simula, Smalltalk, C++ (whose object model is based on Simula's), Objective-C (whose object model is based on Smalltalk's), Eiffel, Xojo (formerly REALbasic), Python, Ruby, Java, Visual Basic .NET, C#, and Fortran 2003. Examples of languages that are object-based but not object-oriented are early versions of Ada, Visual Basic (VB), and Fortran 90. These languages all support the definition of an object as a data structure, but lack polymorphism and inheritance.
In practice, the term object-based is usually applied to those object-based languages that are not also object-oriented, although all object-oriented languages are also object-based, by definition. Instead, the terms object-based and object-oriented are normally used as mutually exclusive alternatives, rather than as categories that overlap.
Sometimes, the term object-based is applied to prototype-based programming languages, true object-oriented languages that lack classes, but in which objects instead inherit their code and data directly from other template objects. An example of a commonly used prototype-based scripting language is JavaScript.
Both object-based and object-oriented languages (whether class-based or prototype-based) may be statically type-checked. Statically checking prototype-based languages can be difficult, because these languages often allow objects to be dynamically extended with new behavior, and even to have their parent object (from which they inherit) changed, at runtime.
References
P
|
https://en.wikipedia.org/wiki/The%20Fat%20Duck
|
The Fat Duck is a fine dining restaurant in Bray, Berkshire, England, owned by the chef Heston Blumenthal. Housed in a 16th-century building, the Fat Duck opened on 16 August 1995. Although it originally served food similar to a French bistro, it soon acquired a reputation for precision and invention, and has been at the forefront of many modern culinary developments, such as food pairing, flavour encapsulation and multi-sensory cooking.
The number of staff in the kitchen increased from four when the Fat Duck first opened to 42, resulting in a ratio of one kitchen staff member per customer. The Fat Duck gained its first Michelin star in 1999, its second in 2002 and its third in 2004, making it one of eight restaurants in the United Kingdom to earn three Michelin stars. It lost its three stars in 2016, as it was under renovation, preventing it from being open for assessment. It regained all three stars in the following year.
The Fat Duck is known for its fourteen-course tasting menu featuring dishes such as nitro-scrambled egg and bacon ice cream, an Alice in Wonderland-inspired mock turtle soup involving a bouillon packet made up to look like a fob watch dissolved in tea, and a dish called Sound of the Sea which includes an audio element. It has an associated laboratory where Blumenthal and his team develop new dish concepts. In 2009, the Fat Duck suffered from the largest norovirus outbreak ever documented at a restaurant, with more than 400 diners falling ill.
Description
The Fat Duck is located on the high street of Bray, Berkshire. The owner, Heston Blumenthal, has owned the premises since it opened at the location in 1995. It is not the only Michelin three-star restaurant in Bray, the other being Michel Roux's restaurant the Waterside Inn. As of 2023, it is one of eight restaurants in the United Kingdom with three Michelin stars.
The Fat Duck has fourteen tables, and can seat 42 diners. It has a high proportion of chefs working, 42, equating to one chef p
|
https://en.wikipedia.org/wiki/Minimum%20mean%20square%20error
|
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), which is a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term MMSE more specifically refers to estimation with quadratic loss function. In such case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is cumbersome to calculate, the form of the MMSE estimator is usually constrained to be within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. It has given rise to many popular estimators such as the Wiener–Kolmogorov filter and Kalman filter.
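As an illustration, in the jointly Gaussian scalar case the (linear) MMSE estimator is the posterior mean x_hat = mu_x + (C_xy/C_yy)(y − mu_y). The sketch below uses invented prior and noise variances and compares the resulting mean square error with that of using the observation directly.

```python
# Sketch of the linear MMSE estimator in a jointly Gaussian toy problem:
# observe y = x + noise and estimate x by the posterior mean
#   x_hat = mu_x + (C_xy / C_yy) * (y - mu_y).
# All means and variances below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
mu_x, var_x, var_n = 1.0, 4.0, 1.0          # prior mean/variance, noise variance

x = mu_x + np.sqrt(var_x) * rng.standard_normal(10_000)
y = x + np.sqrt(var_n) * rng.standard_normal(10_000)

c_xy = var_x                                 # Cov(x, y) = Var(x): noise is independent
c_yy = var_x + var_n                         # Var(y)
x_hat = mu_x + (c_xy / c_yy) * (y - mu_x)    # mu_y = mu_x here

mse_mmse = np.mean((x - x_hat) ** 2)
mse_raw = np.mean((x - y) ** 2)              # using y directly as the estimate
print(round(mse_mmse, 3), round(mse_raw, 3)) # MMSE ~ var_x*var_n/(var_x+var_n) = 0.8 < 1.0
```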
Motivation
The term MMSE more specifically refers to estimation in a Bayesian setting with quadratic cost function. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when a new observation is made available; or the statistics of an actual random signal such as speech. This is in contrast to the non-Bayesian approach like minimum-variance unbiased estimator (MVUE) where absolutely nothing is assumed to be known about the parameter in advance and which does not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly on Bayes theorem, it allows us to make better posterior estimates as more observations become available. Thus unlike non-Bayesian approach where parameters of interest are assumed to be deterministic, but unknown constants, the Bayesian estimator seeks to estimate a par
|
https://en.wikipedia.org/wiki/Parabolic%20cylinder%20function
|
In mathematics, the parabolic cylinder functions are special functions defined as solutions to the differential equation
d²f/dz² + (ãz² + b̃z + c̃)f = 0.
This equation is found when the technique of separation of variables is used on Laplace's equation when expressed in parabolic cylindrical coordinates.
The above equation may be brought into two distinct forms (A) and (B) by completing the square and rescaling z, called H. F. Weber's equations:
(A)   d²f/dz² − (z²/4 + a)f = 0
and
(B)   d²f/dz² + (z²/4 − a)f = 0.
If f(a, z) is a solution, then so are
f(a, −z), f(−a, iz) and f(−a, −iz).
If f(a, z) is a solution of equation (A), then f(−ia, ze^(iπ/4)) is a solution of (B), and, by symmetry,
f(−ia, −ze^(iπ/4)), f(ia, −ze^(−iπ/4)) and f(ia, ze^(−iπ/4)) are also solutions of (B).
Solutions
There are independent even and odd solutions of the form (A). These are given by (following the notation of Abramowitz and Stegun (1965)):
y1(a; z) = exp(−z²/4) ₁F₁(a/2 + 1/4; 1/2; z²/2)
and
y2(a; z) = z exp(−z²/4) ₁F₁(a/2 + 3/4; 3/2; z²/2),
where ₁F₁ is the confluent hypergeometric function.
Other pairs of independent solutions may be formed from linear combinations of the above solutions. One such pair is based upon their behavior at infinity:
where
The function U(a, z) approaches zero for large values of |z| with |arg z| < π/2, while V(a, z) diverges for large values of positive real z.
and
For half-integer values of a, these (that is, U and V) can be re-expressed in terms of Hermite polynomials; alternatively, they can also be expressed in terms of Bessel functions.
The functions U and V can also be related to the functions Dν(z) (a notation dating back to Whittaker (1902)) that are themselves sometimes called parabolic cylinder functions:
U(a, z) = D−a−1/2(z).
The function Dν(z) was introduced by Whittaker and Watson as a solution of the equation above that remains bounded as z → +∞. It can be expressed in terms of confluent hypergeometric functions as
Power series for this function have been obtained by Abadir (1993).
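For numerical work, SciPy exposes Whittaker's function Dν through scipy.special.pbdv. The sketch below checks, via central differences, that the returned values satisfy y'' + (ν + 1/2 − z²/4)y = 0, a standard form of the equation Dν solves; the order and sample points are arbitrary choices.

```python
# Numerical check that SciPy's parabolic cylinder function D_v (scipy.special.pbdv)
# satisfies y'' + (v + 1/2 - z**2/4) * y = 0, using a central-difference second
# derivative. The order v and the sample points are arbitrary choices.
import numpy as np
from scipy.special import pbdv

v, h = 1.5, 1e-4
zs = np.linspace(-2.0, 2.0, 9)

def D(z):
    return pbdv(v, z)[0]          # pbdv returns (D_v(z), D_v'(z))

residuals = []
for z in zs:
    y = D(z)
    y_pp = (D(z + h) - 2 * y + D(z - h)) / h**2      # finite-difference y''
    residuals.append(y_pp + (v + 0.5 - z**2 / 4) * y)

print(max(abs(r) for r in residuals))  # small (~1e-6 or less): the ODE is satisfied
```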
References
Special hypergeometric functions
Special functions
|
https://en.wikipedia.org/wiki/Biostimulation
|
Biostimulation involves the modification of the environment to stimulate existing bacteria capable of bioremediation. This can be done by the addition of various forms of rate-limiting nutrients and electron acceptors, such as phosphorus, nitrogen, oxygen, or carbon (e.g. in the form of molasses). Alternatively, remediation of halogenated contaminants in anaerobic environments may be stimulated by adding electron donors (organic substrates), thus allowing indigenous microorganisms to use the halogenated contaminants as electron acceptors. Additives are usually added to the subsurface through injection wells, although injection well technology for biostimulation purposes is still emerging. Removal of the contaminated material is also an option, albeit an expensive one. Biostimulation can be enhanced by bioaugmentation. This process, overall, is referred to as bioremediation and is an EPA-approved method for reversing the presence of oil or gas spills. While biostimulation is usually associated with remediation of hydrocarbon or high production volume chemical spills, it is also potentially useful for treatment of less frequently encountered contaminant spills, such as pesticides, particularly herbicides.
The primary advantage of biostimulation is that bioremediation will be undertaken by already present native microorganisms that are well-suited to the subsurface environment, and are well distributed spatially within the subsurface. The primary disadvantage is that the delivery of additives in a manner that allows the additives to be readily available to subsurface microorganisms is based on the local geology of the subsurface. Tight, impermeable subsurface lithology (tight clays or other fine-grained material) make it difficult to spread additives throughout the affected area. Fractures in the subsurface create preferential pathways in the subsurface which additives preferentially follow, preventing even distribution o
|
https://en.wikipedia.org/wiki/Transcend%20Information
|
Transcend Information, Inc. () is a Taiwanese company headquartered in Taipei, Taiwan that manufactures and distributes memory products. Transcend deals in over 2,000 products including memory modules, flash memory cards, USB flash drives, portable hard drives, multimedia products, solid-state drives, dashcams, body cameras, personal cloud storage, card readers and accessories.
It has offices in the United States, Germany, the Netherlands, United Kingdom, Japan, Hong Kong, China and South Korea. It was the first Taiwanese memory module manufacturer to receive ISO 9001 certification.
History
Transcend Information, Inc. was founded in 1988 by Chung-Won Shu (Peter), with its headquarters in Taipei, Taiwan. Today, Transcend has become a global brand of digital storage, multimedia and industrial products with 13 offices worldwide. The design, development, and manufacture of all its products are done in-house, as are its marketing and sales.
Product lines
Transcend manufactures memory modules, flash memory cards, USB flash drives, card readers, external hard drives, solid state drives and industrial-grade products.
Memory cards – SD / CF / micro SD
USB flash drives – USB 3.0 / USB 2.0
External hard drives – Rugged series / Classic series / 3.5" Desktop storage
Solid-state drives – 2.5" SSDs / M.2 SSDs / mSATA SSDs / Portable SSDs
Card readers – USB 3.0 / USB 2.0 / USB Type - C
Dashcams
Body cameras
Personal cloud storage
Apple solutions - Lightning / USB 3.1 flash drives / SSD upgrade kit / Expansion cards for MacBook Air and MacBook Pro / Portable hard drive and portable SSD for Mac
Multimedia products - Digital media players / DVD writers
Memory modules - For Desktop / Notebook / Server
Embedded solutions
Flash: SSD / CF / SD / MMC / eMMC / SATA Module / PATA Module / USB Module
DRAM: Industrial temperature / Workstation / Server / Standard / Low voltage / Low profile
JetFlash
JetFlash are a series of flash based USB drives designed and manufac
|
https://en.wikipedia.org/wiki/Megaminx
|
The Megaminx or Mégaminx (, ) is a dodecahedron-shaped puzzle similar to the Rubik's Cube. It has a total of 50 movable pieces to rearrange, compared to the 20 movable pieces of the Rubik's Cube.
History
The Megaminx, or Magic Dodecahedron, was invented by several people independently and produced by several different manufacturers with slightly different designs. Uwe Mèffert eventually bought the rights to some of the patents and continues to sell it in his puzzle shop under the Megaminx moniker. It is also known by the name Hungarian Supernova, invented by Dr. Christoph Bandelow. His version came out first, shortly followed by Meffert's Megaminx. The proportions of the two puzzles are slightly different.
Speed-solving the Megaminx became an official World Cube Association event in 2003, with the first official single-solve record set by American Grant Tregay with a time of 2 minutes 12.82 seconds during the World Rubik's Games Championship in Canada. The first sub-minute solve in official competition was achieved by Japanese solver Takumi Yoshida with a time of 59.33s at the January 2009 Amagasaki Open, and the first sub-30-second single solve was achieved by Peruvian solver Juan Pablo Huanqui at a 2017 Santiago event with a time of 29.93 seconds. The current world record time for a Megaminx solve is 24.44 seconds, set by Leandro Martín López of Argentina on 23 July 2023
Description
The Megaminx is made in the shape of a dodecahedron, and has 12 faces and center pieces, 20 corner pieces, and 30 edge pieces. The face centers each have a single color, which identifies the color of that face in the solved state. The edge pieces have two colors, and the corner pieces have three. Each face contains a center piece, 5 corner pieces and 5 edge pieces. The corner and edge pieces are shared with adjacent faces. The face centers can only rotate in place, but the other pieces can be permuted by twisting the face layer around the face center.
There are two main versions
|
https://en.wikipedia.org/wiki/Wirehog
|
Wirehog was a friend-to-friend file sharing program that was linked to Facebook and allowed people to transfer files directly between computers.
History
Wirehog was created by Andrew McCollum, Mark Zuckerberg, Adam D'Angelo, and Sean Parker during their development of the Facebook social networking website in Palo Alto in the summer and fall of 2004. The only way to join Wirehog was through an invitation from a member and although it was originally planned as an integrated feature of Facebook, it could also be used by friends who were not registered on Facebook. Wirehog was launched in October 2004, and taken down in January 2006. Its target audience at the time was the same as the campus-only file-sharing service i2hub that had launched earlier that year. i2hub was gaining a lot of traction and growing rapidly. In an interview with The Harvard Crimson, Zuckerberg said, "I think Wirehog will probably spread in the same way that thefacebook did."
The software was described by its creators as "an HTTP file transfer system using dynamic DNS and NAT traversal to make your personal computer addressable, routable and easily accessible". The client allowed users to both access data stored on their home computer from a remote location and let friends exchange files between each other's computers. In ways, Wirehog was a project comparable to Alex Pankratov's Hamachi VPN, the open-source OneSwarm private network, or the darknet RetroShare software.
Until at least July 2005, Facebook officially endorsed the p2p client, saying on their website:
"Wirehog is a social application that lets friends exchange files of any type with each other over the web. Facebook and Wirehog are integrated so that Wirehog knows who your friends are in order to make sure that only people in your network can see your files. Facebook certifies that it is okay to enter your facebook email address and password into Wirehog for the purposes of this integration."
The Wirehog software was written in
|
https://en.wikipedia.org/wiki/Negative%20luminescence
|
Negative luminescence is a physical phenomenon by which an electronic device emits less thermal radiation when an electric current is passed through it than it does in thermal equilibrium (current off). When viewed by a thermal camera, an operating negative luminescent device looks colder than its environment.
Physics
Negative luminescence is most readily observed in semiconductors. Incoming infrared radiation is absorbed in the material by the creation of an electron–hole pair. An electric field is used to remove the electrons and holes from the region before they have a chance to recombine and re-emit thermal radiation. This effect occurs most efficiently in regions of low charge carrier density.
Negative luminescence has also been observed in semiconductors in orthogonal electric and magnetic fields. In this case, the junction of a diode is not necessary and the effect can be observed in bulk material. A term that has been applied to this type of negative luminescence is galvanomagnetic luminescence.
Negative luminescence might appear to be a violation of Kirchhoff's law of thermal radiation. This is not true, as the law only applies in thermal equilibrium.
Another term that has been used to describe negative luminescent devices is "Emissivity switch", as an electric current changes the effective emissivity.
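As a simplified grey-body illustration (a total-radiation Stefan–Boltzmann picture that ignores band limits and reflected background, assumed here purely for exposition): a thermal camera that assumes unit emissivity converts the measured radiant exitance M into an apparent temperature, so driving the effective emissivity below its equilibrium value makes the device read colder than its true temperature T:

M = \varepsilon_{\text{eff}}\, \sigma T^{4}, \qquad T_{\text{app}} = \left(\frac{M}{\sigma}\right)^{1/4} = \varepsilon_{\text{eff}}^{1/4}\, T < T \quad \text{for } \varepsilon_{\text{eff}} < 1.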
History
This effect was first seen by Russian physicists in the 1960s at the A. F. Ioffe Physico-Technical Institute in Leningrad, Russia. Subsequently, it was studied in semiconductors such as indium antimonide (InSb), germanium (Ge) and indium arsenide (InAs) by workers in West Germany, Ukraine (Institute of Semiconductor Physics, Kyiv), Japan (Chiba University) and the United States. It was first observed in the mid-infrared (3-5 µm wavelength) in the more convenient diode geometry, using InSb heterostructure diodes, by workers at the Defence Research Agency, Great Malvern, UK (now QinetiQ). These British workers later demonstrated LWIR band (8-12 µm) negative
|
https://en.wikipedia.org/wiki/Nintendo%20Tumbler%20Puzzle
|
The Nintendo Tumbler Puzzle, also known in English as the Ten Billion Barrel (it was originally marketed under a Japanese name), is a mathematical and mechanical puzzle. It is one of many mechanical toys invented by Gunpei Yokoi at Nintendo. It was released in 1980; its patent expired in March 1995 due to non-payment of a maintenance fee.
Overview
The puzzle consists of a cylinder of transparent plastic divided into six levels, within a black plastic frame. The frame consists of upper and lower discs that are joined through the middle of the cylinder.
The top and bottom levels of the cylinder form a single piece, but between them are two rotatable pieces each two levels high. Each of the four central levels is divided into five chambers each containing a colored ball. The top and bottom levels have only three chambers, containing either three balls or three parts of the frame depending on the relative position of frame and cylinder.
The balls in three of the five resulting columns of chambers can be moved up or down one level by raising or lowering the frame relative to the transparent cylinder.
The object is to sort the balls, so that each of the five columns contains balls of a single color.
Cameos
As a tribute to the late creator of the puzzle and former Metroid series producer, Gunpei Yokoi, the puzzle has a small cameo appearance in Metroid Prime for the GameCube. A large-scale version appears in the Phazon Mines in which Samus Aran uses the Morph Ball to interact with it by rotating the levels and climbing the side of it with magnetic rails.
The puzzle appears in Animal Crossing: New Leaf, as one of the prizes from Redd during the fireworks displays throughout August.
It is an easter egg in The Legend of Zelda: Majora's Mask 3D.
In WarioWare Gold, it appears as one of the microgames, requiring the matching of 4 of its marbles.
References
External links
http://www.jaapsch.net/puzzles/nintendo.htm (photos and solution)
http://blog.beforemario.com/2011/09/nintendo
|
https://en.wikipedia.org/wiki/Revetment
|
A revetment in stream restoration, river engineering or coastal engineering is a facing of impact-resistant material (such as stone, concrete, sandbags, or wooden piles) applied to a bank or wall in order to absorb the energy of incoming water and protect it from erosion. River or coastal revetments are usually built to preserve the existing uses of the shoreline and to protect the slope.
In architecture generally, it means a retaining wall. In military engineering it is a structure formed to secure an area from artillery, bombing, or stored explosives.
Freshwater revetments
Many revetments are used to line the banks of freshwater rivers, lakes, and man-made reservoirs, especially to prevent damage during periods of floods or heavy seasonal rains (see riprap). Many materials may be used: wooden piles, loose-piled boulders or concrete shapes, or more solid banks.
Concrete revetments are the most common type of infrastructure used to control the Mississippi River. Extensive concrete matting has been placed in river bends between Cairo, Illinois and the Gulf of Mexico to slow the natural erosion that would otherwise frequently change small parts of the river's course.
Revetments as coastal defence
Revetments are used as a low-cost solution for coastal erosion defense in areas where crashing waves may otherwise deplete the coastline.
Wooden revetments are made of planks laid against wooden frames so that they disrupt the force of the water. Although once popular, the use of wooden revetments has largely been replaced by modern concrete-based defense structures such as tetrapods. In the 1730s, wooden revetments protecting dikes in the Netherlands were phased out due to the spread of shipworm infestations.
Dynamic revetments use gravel or cobble-sized rocks to mimic a natural cobble beach for the purpose of reducing wave energy and stopping or slowing coastal erosion.
Unlike solid structures, dynamic revetments are designed to allow wave action to rearra
|
https://en.wikipedia.org/wiki/History%20of%20genetics
|
The history of genetics dates from the classical era with contributions by Pythagoras, Hippocrates, Aristotle, Epicurus, and others. Modern genetics began with the work of the Augustinian friar Gregor Johann Mendel. His work on pea plants, published in 1866, provided the initial evidence that, on its rediscovery in 1900, helped to establish the theory of Mendelian inheritance.
In ancient Greece, Hippocrates suggested that all organs of the body of a parent gave off invisible "seeds" (miniaturised components) that were transmitted during sexual intercourse and combined in the mother's womb to form a baby. In the early modern period, William Harvey's book On Animal Generation contradicted Aristotle's theories of genetics and embryology.
The 1900 rediscovery of Mendel's work by Hugo de Vries, Carl Correns and Erich von Tschermak led to rapid advances in genetics. By 1915 the basic principles of Mendelian genetics had been studied in a wide variety of organisms — most notably the fruit fly Drosophila melanogaster. Led by Thomas Hunt Morgan and his fellow "drosophilists", geneticists developed the Mendelian model, which was widely accepted by 1925. Alongside experimental work, mathematicians developed the statistical framework of population genetics, bringing genetic explanations into the study of evolution.
With the basic patterns of genetic inheritance established, many biologists turned to investigations of the physical nature of the gene. In the 1940s and early 1950s, experiments pointed to DNA as the portion of chromosomes (and perhaps other nucleoproteins) that held genes. A focus on new model organisms such as viruses and bacteria, along with the discovery of the double helical structure of DNA in 1953, marked the transition to the era of molecular genetics.
In the following years, chemists developed techniques for sequencing both nucleic acids and proteins, while many others worked out the relationship between these two forms of biological molecules and disc
|
https://en.wikipedia.org/wiki/Laplace%E2%80%93Beltrami%20operator
|
In differential geometry, the Laplace–Beltrami operator is a generalization of the Laplace operator to functions defined on submanifolds in Euclidean space and, even more generally, on Riemannian and pseudo-Riemannian manifolds. It is named after Pierre-Simon Laplace and Eugenio Beltrami.
For any twice-differentiable real-valued function f defined on Euclidean space Rn, the Laplace operator (also known as the Laplacian) takes f to the divergence of its gradient vector field, which is the sum of the n pure second derivatives of f with respect to each vector of an orthonormal basis for Rn. Like the Laplacian, the Laplace–Beltrami operator is defined as the divergence of the gradient, and is a linear operator taking functions into functions. The operator can be extended to operate on tensors as the divergence of the covariant derivative. Alternatively, the operator can be generalized to operate on differential forms using the divergence and exterior derivative. The resulting operator is called the Laplace–de Rham operator (named after Georges de Rham).
Details
The Laplace–Beltrami operator, like the Laplacian, is the (Riemannian) divergence of the (Riemannian) gradient:
\Delta f = \operatorname{div}(\operatorname{grad} f).
An explicit formula in local coordinates is possible.
Suppose first that M is an oriented Riemannian manifold. The orientation allows one to specify a definite volume form on M, given in an oriented coordinate system x^i by
\operatorname{vol}_n := \sqrt{|g|}\; dx^1 \wedge \cdots \wedge dx^n,
where |g| := |\det(g_{ij})| is the absolute value of the determinant of the metric tensor, the dx^i are the 1-forms forming the dual frame to the frame
\partial_i := \frac{\partial}{\partial x^i}
of the tangent bundle, and \wedge is the wedge product.
The divergence of a vector field X on the manifold is then defined as the scalar function \operatorname{div} X with the property
(\operatorname{div} X)\, \operatorname{vol}_n := L_X \operatorname{vol}_n,
where L_X is the Lie derivative along the vector field X. In local coordinates, one obtains
\operatorname{div} X = \frac{1}{\sqrt{|g|}}\, \partial_i\!\left(\sqrt{|g|}\, X^i\right),
where here and below the Einstein notation is implied, so that the repeated index i is summed over.
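Combining this coordinate expression for the divergence with the gradient (defined next) gives the standard local-coordinate formula for the Laplace–Beltrami operator, recorded here for reference:
(\operatorname{grad} f)^i = g^{ij}\, \partial_j f, \qquad \Delta f = \operatorname{div}(\operatorname{grad} f) = \frac{1}{\sqrt{|g|}}\, \partial_i\!\left(\sqrt{|g|}\; g^{ij}\, \partial_j f\right).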
The gradient of a scalar function ƒ is the vector field grad f that may be defined through the in
|
https://en.wikipedia.org/wiki/Tensor%20density
|
In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field), except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.
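For example, with one common sign convention (the opposite convention, taken with respect to the Jacobian of the new coordinates, is also widespread), a scalar density \rho of weight W transforms under a change of coordinates x \to \bar{x} as
\bar{\rho}(\bar{x}) = \left[\det\!\left(\frac{\partial x^{i}}{\partial \bar{x}^{j}}\right)\right]^{W} \rho(x),
so that, for instance, the square root of the absolute value of the metric determinant, \sqrt{|g|}, behaves as an (even) scalar density of weight +1.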
Motivation
In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients, such as
\mathbf{v} = v^1 \mathbf{e}_1 + v^2 \mathbf{e}_2 + v^3 \mathbf{e}_3,
where \mathbf{v} is a vector in 3-dimensional Euclidean space and the \mathbf{e}_i are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector remains fixed in physical space. More generally, if an algebraic object represents a geometric object, but is expressed in terms of a particular basis, then it is necessary, when the basis is changed, to also change the representation. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlyi
|
https://en.wikipedia.org/wiki/PCM%20adaptor
|
A PCM adaptor is a device that encodes digital audio as video for recording on a videocassette recorder. The adaptor can also decode the video signal back into digital audio for playback. This digital audio system was used for mastering early compact discs.
Operation
High-quality pulse-code modulation (PCM) audio requires a significantly larger bandwidth than a regular analog audio signal. For example, a 16-bit PCM signal requires an analog bandwidth of about 1-1.5 MHz compared to about 15-20 kHz of analog bandwidth required for an analog audio signal. A standard analog audio recorder cannot meet this requirement. One solution arrived at in the early 1980s was to use a videotape recorder, which is capable of recording signals with higher bandwidths.
A means of converting digital audio into a video format was necessary. Such an audio recording system includes two devices: the PCM adaptor, which converts audio into pseudo-video, and the videocassette recorder. A PCM adaptor performs an analog-to-digital conversion, producing a series of binary digits, which in turn are coded and modulated into a black-and-white video signal, appearing as a vibrating checkerboard pattern that can then be recorded as a video signal.
Most video-based PCM adaptors record audio at 14 or 16 bits per sample, with a sampling frequency of 44.1 kHz for PAL or monochrome NTSC, or 44.056 kHz for color NTSC. Some of the earlier models, such as the Sony PCM-100, recorded 16 bits per sample, but used only 14 of the bits for the audio, with the remaining 2 bits used for error correction for the case of dropouts or other anomalies being present on the videotape.
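As an illustrative back-of-the-envelope check (the figures of 3 stereo samples per usable video line, 294 active lines per field for PAL and 245 for NTSC are assumptions drawn from common accounts of these adaptors, not from this article), the video geometry reproduces the sampling rates quoted above:

public class CdSampleRate {
    public static void main(String[] args) {
        int samplesPerLine = 3;                                   // stereo samples stored per usable line

        int pal = 294 * 50 * samplesPerLine;                      // 294 lines/field x 50 fields/s = 44,100 Hz
        int ntscMono = 245 * 60 * samplesPerLine;                 // 245 lines/field x 60 fields/s = 44,100 Hz
        double ntscColor = 245 * (60 / 1.001) * samplesPerLine;   // colour NTSC fields at 60/1.001 Hz ~ 44,056 Hz

        System.out.printf("PAL: %d Hz, NTSC mono: %d Hz, NTSC colour: %.0f Hz%n",
                pal, ntscMono, ntscColor);
    }
}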
Sampling frequency
The use of video for the PCM adaptor helps to explain the choice of sampling frequency for the CD, because the number of video lines, frame rate and bits per line end up dictating the sampling frequency one can achieve. A sampling frequency of 44.1 kHz was thus adopted for the compact disc, as at the time, th
|
https://en.wikipedia.org/wiki/Embedded%20Java
|
Embedded Java refers to versions of the Java programming language that are designed for embedded systems. Since 2010, embedded Java implementations have moved closer to standard Java and are now virtually identical to the Java Standard Edition. Since Java 9, customization of the Java runtime through modularization has removed the need for specialized Java profiles targeting embedded devices.
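For example, a minimal module declaration (the module name below is hypothetical) depends only on java.base; a tool such as jlink can then assemble a trimmed runtime image containing just the modules an embedded application actually needs, which is the modularization mechanism referred to above.

// module-info.java -- hypothetical minimal module for an embedded application.
// Only java.base is needed (it is required implicitly), so a jlink-built
// runtime image can omit every other platform module.
module com.example.embeddedapp {
    // no 'requires' clauses beyond the implicit java.base
}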
History
Although some differences once existed between embedded Java and traditional PC-based Java, the only difference now is that embedded Java code in these embedded systems is mainly stored in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and Java software components running on large systems can now run directly, with no recompilation at all, on design-to-cost mass-production devices (such as consumer, industrial, white-goods, healthcare, metering, and smart-market devices generally).
CORE embedded Java API for a unified Embedded Java ecosystem
In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. These providers share the same eight packages of pre-written classes. The packages (java.lang, java.io, java.util, ...) form the CORE Embedded Java API, which embedded programmers must rely on in order to make any worthwhile use of the Java language.
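As an illustration (the class and method names below are invented for this sketch), a component written against only the CORE packages compiles and runs unchanged on any runtime that provides them:

import java.util.ArrayList;
import java.util.List;

// Relies only on java.lang and java.util from the CORE Embedded Java API,
// so the same compiled class can run on any conforming embedded runtime.
public class SensorLog {
    private final List<Double> readings = new ArrayList<>();

    public void add(double value) {
        readings.add(value);
    }

    public double average() {
        if (readings.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (double r : readings) {
            sum += r;
        }
        return sum / readings.size();
    }

    public static void main(String[] args) {
        SensorLog log = new SensorLog();
        log.add(21.5);
        log.add(22.0);
        System.out.println("average = " + log.average());
    }
}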
Old distinctions between the SE embedded API and the ME embedded API from Oracle
Java SE embedded is based on desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux ARM, x86, or Power ISA, and Windows XP and Windows XP Embedded architectures.
Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It is designed to be used on systems with at least 8 MB of RAM, and can work on Linux ARM, PowerPC, or MIPS architecture.
See also
Excelsio
|
https://en.wikipedia.org/wiki/Telecommunication%20control%20unit
|
A telecommunication control unit (TCU), line control unit, or terminal control unit (although terminal control unit may also refer to a terminal cluster controller) is a front-end processor for mainframes and some minicomputers that supports the attachment of one or more telecommunication lines. TCUs free the main processor from handling the data coming in and out of RS-232 ports. A TCU can support multiple terminals, sometimes hundreds. Many TCUs support RS-232 where required, as well as other serial interfaces.
The advent of ubiquitous TCP/IP has reduced the need for telecommunications control units.
See also
Terminal access controller
IBM 270x
IBM 3705 Communications Controller
IBM 3720
IBM 3745
References
Mainframe computers
Networking hardware
|
https://en.wikipedia.org/wiki/Big%20Brutus
|
Big Brutus is the nickname of the Bucyrus-Erie model 1850-B electric shovel, which was the second largest of its type in operation in the 1960s and 1970s. Big Brutus is the centerpiece of a mining museum in West Mineral, Kansas, United States, where it was used in coal strip-mining operations. The shovel was designed to dig deep enough to unearth relatively shallow coal seams, which would themselves be mined with smaller equipment.
Description
Fabrication of Big Brutus was completed in May 1963, after which it was shipped on 150 railroad cars to be assembled in Kansas. It operated until 1974, when it became uneconomical to mine coal at the site. At that time it was considered too big to move and was left in place.
Big Brutus, while not the largest electric shovel ever built, is the largest electric shovel still in existence. The Captain, roughly triple the size of Big Brutus, was the largest shovel and one of the largest land-based mobile machines ever built, exceeded only by some dragline and bucket-wheel excavators. It was scrapped in 1992 after suffering extensive damage from an hours-long internal fire.
Museum
The Pittsburg & Midway Coal Mining Company donated Big Brutus in 1984 as the core of a mining museum which opened in 1985. In 1987, the American Society of Mechanical Engineers designated Big Brutus a Regional Historic Mechanical Engineering Landmark. It was listed on the National Register of Historic Places in 2018.
The museum offers tours and camping.
Fatal accident
On January 16, 2010, Mark Mosley, a 49-year-old dentist from Lowell, Arkansas, died attempting to BASE jump from the top of the boom. Climbing the boom had been prohibited years earlier; after the accident, the attraction's board of directors considered additional restrictions on climbing. During the investigation of the accident, examiner Tom Dolphin determined that Mosley had accidentally fallen off the boom while preparing to jump.
See also
The Silver Spade
Bucket wheel excavator
Dragline
|
https://en.wikipedia.org/wiki/Solidum%20Systems
|
Solidum Systems was a fabless semiconductor company founded by Feliks Welfeld and Misha Nossik in Ottawa, Ontario, Canada, in 1997. The company developed a series of rule-based network classification semiconductor devices. Some of their devices could be found in systems that supported 10 Gbit/s interfaces.
Solidum was acquired in October 2002 by Integrated Device Technology. IDT closed the Ottawa offices supporting the product in March 2009.
Misha Nossik was also the second chairman of the Network Processing Forum. The NPF also released the Look-Aside Interface, an important specification for Network Search Elements such as Solidum's devices.
Products
Solidum produced a set of traffic classification devices called the PAX.port 1100, PAX.port 1200, and PAX.port 2500.
The classifier chips were used in network switches and load balancers.
External links
Packet Description Language introduced (archived)
1999 Packet Processing introduction (archived)
2001 2nd round financing
2002 NPF names Misha Nossik Chairman
Companies established in 1997
Defunct networking companies
Fabless semiconductor companies
Companies based in Ottawa
Semiconductor companies of Canada
Defunct computer companies of Canada
|
https://en.wikipedia.org/wiki/Jonathan%20Steuer
|
Jonathan Steuer (born December 3, 1965, in Wisconsin) is a pioneer in online publishing.
Steuer led the launch teams of a number of early and influential online publishing ventures, including Cyborganic, a pioneering online/offline community; HotWired, the first ad-supported web magazine; and c|net's online operations. Steuer's article "Defining virtual realities: Dimensions determining telepresence" is widely cited in academic and industry literature. Originally published in 1992 in the Journal of Communication 42, 73-9, it has been reprinted in Communication in the Age of Virtual Reality (1995), F. Biocca & M. R. Levy (Eds.).
Steuer's vividness and interactivity matrix from that article appeared in Wired circa 1995 and has been particularly influential in shaping the discourse by defining virtual reality in terms of human experience rather than technological hardware, and by setting out vividness and interactivity as the axial dimensions of that experience. Steuer's work as a scholar, architect, and instigator of new media is documented in multiple independent published works.
Steuer has been a consultant and senior executive for a number of other online media startups: CNet, ZDTV, Sawyer Media Systems and Scient.
Steuer has an AB in philosophy from Harvard University, and a PhD in communication theory & research from Stanford University. There, his doctoral dissertation concerned Vividness and Source of Evaluation as Determinants of Social Responses Toward Mediated Representations of Agency.
Personal life
He is married to Marjorie Ingall. A longtime resident of the Bay Area, Steuer today resides in New York City.
References
Further reading
Some books and print articles that discuss Steuer's role in the web publishing industry that emerged in San Francisco in the 1990s:
Yoshihiro Kaneda, Net Voice in the City, ASCII Corporation, Japan, 1997, pp. 88–107
Robert H. Reid, Architects of the Web, John Wiley & Sons, New York, 1997
|
https://en.wikipedia.org/wiki/Keyswitch
|
A keyswitch is a type of small switch used for keys on keyboards.
Key switch is also used to describe a switch operated by a key, usually found in burglar alarm circuits. A car ignition switch is also of this type.
Computer keyboards
|