https://en.wikipedia.org/wiki/Polarization%20identity
|
In linear algebra, a branch of mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space.
If a norm arises from an inner product then the polarization identity can be used to express this inner product entirely in terms of the norm. The polarization identity shows that a norm can arise from at most one inner product; however, there exist norms that do not arise from any inner product.
The norm associated with any inner product space satisfies the parallelogram law: ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².
In fact, as observed by John von Neumann, the parallelogram law characterizes those norms that arise from inner products.
Given a normed space (H, ‖·‖), the parallelogram law holds for ‖·‖ if and only if there exists an inner product ⟨·, ·⟩ on H such that ‖x‖² = ⟨x, x⟩ for all x in H, in which case this inner product is uniquely determined by the norm via the polarization identity.
Polarization identities
Any inner product on a vector space induces a norm by the equation ‖x‖ = √⟨x, x⟩.
The polarization identities reverse this relationship, recovering the inner product from the norm.
Every inner product satisfies: ‖x + y‖² = ‖x‖² + ‖y‖² + 2 Re⟨x, y⟩.
Solving for Re⟨x, y⟩ gives the formula Re⟨x, y⟩ = ½(‖x + y‖² − ‖x‖² − ‖y‖²). If the inner product is real then ⟨x, y⟩ = Re⟨x, y⟩ and this formula becomes a polarization identity for real inner products.
Real vector spaces
If the vector space is over the real numbers then the polarization identities are:
⟨x, y⟩ = ¼(‖x + y‖² − ‖x − y‖²)
⟨x, y⟩ = ½(‖x + y‖² − ‖x‖² − ‖y‖²)
⟨x, y⟩ = ½(‖x‖² + ‖y‖² − ‖x − y‖²)
These various forms are all equivalent by the parallelogram law: 2‖x‖² + 2‖y‖² = ‖x + y‖² + ‖x − y‖².
This further implies that the Lᵖ class is not a Hilbert space whenever p ≠ 2, as the parallelogram law is not satisfied. For a counterexample, take the indicator functions of any two disjoint subsets of positive measure of the domain and check that the parallelogram law fails for the Lᵖ norm.
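As a quick numerical illustration (a sketch using numpy, not from the article), the first identity above can be checked for the Euclidean norm, which is induced by the ordinary dot product:

```python
import numpy as np

# Check <x, y> = (||x + y||^2 - ||x - y||^2) / 4 for the Euclidean norm,
# whose inner product is the dot product. The vectors are random illustrations.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

lhs = x @ y
rhs = (np.linalg.norm(x + y) ** 2 - np.linalg.norm(x - y) ** 2) / 4
print(np.isclose(lhs, rhs))  # True
```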
Complex vector spaces
For vector spaces over the complex numbers, the above formulas are not quite correct because they do not describe the imaginary part of the (complex) inner product.
However, an analogous expression does ensure that both real and imaginary
|
https://en.wikipedia.org/wiki/216%20%28number%29
|
216 (two hundred [and] sixteen) is the natural number following 215 and preceding 217. It is a cube, and is often called Plato's number, although it is not certain that this is the number intended by Plato.
In mathematics
216 is the cube of 6, and the sum of three cubes: 216 = 6³ = 3³ + 4³ + 5³.
It is the smallest cube that can be represented as a sum of three positive cubes, making it the first nontrivial example for Euler's sum of powers conjecture. It is, moreover, the smallest number that can be represented as a sum of any number of distinct positive cubes in more than one way. It is a highly powerful number: the product of the exponents in its prime factorization is larger than the product of exponents of any smaller number.
Because there is no way to express it as the sum of the proper divisors of any other integer, it is an untouchable number. Although it is not a semiprime, the three closest numbers on either side of it are, making it the middle number between twin semiprime-triples, the smallest number with this property. Zhi-Wei Sun has conjectured that every natural number other than 216 can be written as a triangular number or as the sum of a triangular number and a prime; 216 itself admits no such representation, and if the conjecture is true it is the only number for which this is not possible.
There are 216 ordered pairs of four-element permutations whose products generate all the other permutations on four elements. There are also 216 fixed hexominoes, the polyominoes made from 6 squares, joined edge-to-edge. Here "fixed" means that rotations or mirror reflections of hexominoes are considered to be distinct shapes.
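These cube facts are easy to verify computationally; the sketch below (the helper name and the bound on the number of terms are illustrative choices) finds both representations of 216 as a sum of distinct positive cubes:

```python
from itertools import combinations

# 216 = 6^3 = 3^3 + 4^3 + 5^3
assert 6 ** 3 == 216 and 3 ** 3 + 4 ** 3 + 5 ** 3 == 216

def distinct_cube_sums(n, max_terms=6):
    """Return all ways to write n as a sum of distinct positive cubes."""
    cubes = [k ** 3 for k in range(1, round(n ** (1 / 3)) + 1)]
    return [c for r in range(1, max_terms + 1)
            for c in combinations(cubes, r) if sum(c) == n]

print(distinct_cube_sums(216))  # [(216,), (27, 64, 125)]: two ways
```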
In other fields
216 is one common interpretation of Plato's number, a number described in vague terms by Plato in the Republic; other interpretations have also been proposed.
There are 216 colors in the web-safe color palette, a 6 × 6 × 6 color cube.
In the game of checkers, there are 216 different positions that can be reached by the first three moves.
The proto-Kab
|
https://en.wikipedia.org/wiki/Name%E2%80%93value%20pair
|
A name–value pair, also called an attribute–value pair, key–value pair, or field–value pair, is a fundamental data representation in computing systems and applications. Designers often desire an open-ended data structure that allows for future extension without modifying existing code or data. In such situations, all or part of the data model may be expressed as a collection of 2-tuples in the form <attribute name, value> with each element being an attribute–value pair. Depending on the particular application and the implementation chosen by programmers, attribute names may or may not be unique.
Examples of use
Some of the applications where information is represented as name-value pairs are:
E-mail, in RFC 2822 headers
Query strings, in URLs
Optional elements in network protocols, such as IP, where they often appear as TLV (type–length–value) triples
Bibliographic information, as in BibTeX and Dublin Core metadata
Element attributes in SGML, HTML and XML
Some kinds of database systems – namely a key–value database
OpenStreetMap map data
Windows registry entries
Environment variables
Use in computer languages
Some computer languages implement name–value pairs, or more frequently collections of attribute–value pairs, as standard language features. Most of these implement the general model of an associative array: an unordered list of unique attributes with associated values. As a result, they are not fully general; they cannot be used, for example, to implement electronic mail headers (which are ordered and non-unique).
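For instance (a minimal Python sketch with invented header values), a unique-key associative array silently drops repeated fields, while an ordered list of pairs preserves order and duplicates as e-mail headers require:

```python
# A dict keeps only the last value for a repeated name:
headers_dict = {}
headers_dict["Received"] = "from relay1.example.com"
headers_dict["Received"] = "from relay2.example.com"  # first value lost
print(len(headers_dict))  # 1

# An ordered list of (name, value) tuples keeps order and duplicates:
headers = [
    ("Received", "from relay1.example.com"),
    ("Received", "from relay2.example.com"),
    ("Subject", "Hello"),
]
print([v for n, v in headers if n == "Received"])  # both values preserved
```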
In some applications, a name–value pair has a value that contains a nested collection of attribute–value pairs. Some data serialization formats such as JSON support arbitrarily deep nesting.
Other data representations are restricted to one level of nesting, such as INI file's section/name/value.
See also
Attribute (computing)
Entity–attribute–value model
Key–value database
Query string
Data modeling
|
https://en.wikipedia.org/wiki/PAST%20storage%20utility
|
PAST is a large-scale, distributed, persistent storage system based on the Pastry peer-to-peer overlay network.
See also
Pastry (DHT) (PAST section)
External links
A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. 18th ACM SOSP'01, Lake Louise, Alberta, Canada, October 2001.
PAST homepages: freepastry.org, Microsoft Research
http://www.cse.lehigh.edu/~brian/course/advanced-networking/reviews/weber-past.html
http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=990064 IEEE
Distributed data storage
|
https://en.wikipedia.org/wiki/Singularity%20%28operating%20system%29
|
Singularity is an experimental operating system developed by Microsoft Research between July 9, 2003, and February 7, 2015. It was designed as a high dependability OS in which the kernel, device drivers, and application software were all written in managed code. Internal security uses type safety instead of hardware memory protection.
Operation
The lowest-level x86 interrupt dispatch code is written in assembly language and C. Once this code has done its job, it invokes the kernel, whose runtime system and garbage collector are written in Sing# (an extended version of Spec#, itself an extension of C#) and run in unprotected mode. The hardware abstraction layer is written in C++ and runs in protected mode. There is also some C code to handle debugging. The computer's basic input/output system (BIOS) is invoked during the 16-bit real mode bootstrap stage; once in 32-bit mode, Singularity never invokes the BIOS again, but invokes device drivers written in Sing#. During installation, Common Intermediate Language (CIL) opcodes are compiled into x86 opcodes using the Bartok compiler.
Security design
Singularity is a microkernel operating system. Unlike most historic microkernels, its components execute in the same address space (process), which contains software-isolated processes (SIPs). Each SIP has its own data and code layout, and is independent from other SIPs. These SIPs behave like normal processes, but avoid the cost of task-switches.
Protection in this system is provided by a set of rules called invariants that are verified by static program analysis. For example, the memory invariant states that there must be no cross-references (or memory pointers) between two SIPs; communication between SIPs occurs via higher-order communication channels managed by the operating system. Invariants are checked during installation of the application. (In Singularity, installation is managed by the operating system.)
Most of the invariants rely on the use of safer memory-man
|
https://en.wikipedia.org/wiki/Fetch-and-add
|
In computer science, the fetch-and-add (FAA) CPU instruction atomically increments the contents of a memory location by a specified value.
That is, fetch-and-add performs the operation
increment the value at address x by a, where x is a memory location and a is some value, and return the original value at x,
in such a way that if this operation is executed by one process in a concurrent system, no other process will ever see an intermediate result.
Fetch-and-add can be used to implement concurrency control structures such as mutex locks and semaphores.
Overview
The motivation for having an atomic fetch-and-add is that operations that appear in programming languages as
x = x + a
are not safe in a concurrent system, where multiple processes or threads are running concurrently (either in a multi-processor system, or preemptively scheduled onto some single-core systems). The reason is that such an operation is actually implemented as multiple machine instructions:
load x into a register;
add a to the register;
store the register value back into x.
When one process is doing x = x + a and another is doing x = x + b concurrently, there is a race condition. They might both fetch the old value of x and operate on it, then both store their results, with the effect that one overwrites the other and the stored value becomes either x + a or x + b, not x + a + b as might be expected.
In uniprocessor systems with no kernel preemption supported, it is sufficient to disable interrupts before accessing a critical section. However, in multiprocessor systems (even with interrupts disabled) two or more processors could be attempting to access the same memory at the same time. The fetch-and-add instruction allows any processor to atomically increment a value in memory, preventing such multiple processor collisions.
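The following sketch emulates fetch-and-add semantics. Python exposes no such CPU primitive, so the atomicity is simulated with a lock; the class and the counts are illustrative only:

```python
import threading

class AtomicCounter:
    """Fetch-and-add emulated with a lock (Python has no FAA instruction)."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, inc):
        with self._lock:          # the lock stands in for hardware atomicity
            old = self.value
            self.value += inc
            return old            # the original value is returned

counter = AtomicCounter()

def worker():
    for _ in range(100_000):
        counter.fetch_and_add(1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 400000: no updates lost to the race described above
```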
Maurice Herlihy (1991) proved that fetch-and-add has a finite consensus number, in contrast to the compare-and-swap operation. The fetch-and-add operation can solve the wait-free consensus problem for no more than two concurrent processes.
Im
|
https://en.wikipedia.org/wiki/Managed%20retreat
|
Managed retreat involves the purposeful, coordinated movement of people and buildings away from risks. This may involve the movement of a person, infrastructure (e.g., building or road), or community. It can occur in response to a variety of hazards such as flood, wildfire, or drought.
Politicians, insurers and residents are increasingly paying attention to managed retreat from low-lying coastal areas because of the threat of sea-level rise due to climate warming. Trends in climate change predict substantial sea-level rises worldwide, causing damage to human infrastructure through coastal erosion and putting communities at risk of severe coastal flooding.
The type of managed retreat proposed depends on the location and type of natural hazard, and on local policies and practices for managed retreat. In the United Kingdom, managed realignment through removal of flood defences is often a response to sea-level rise exacerbated by local subsidence. In the United States, managed retreat often occurs through voluntary acquisition and demolition or relocation of at-risk properties by government. In the Global South, relocation may occur through government programs. Some low-lying countries, facing inundation due to sea-level rise, are planning for the relocation of their populations, such as Kiribati planning for "Migration with Dignity".
Managed realignment
In the United Kingdom, the main reason for managed realignment is to improve coastal stability, essentially replacing artificial 'hard' coastal defences with natural 'soft' coastal landforms. According to University of Southampton researchers Matthew M. Linham and Robert J. Nicholls, "one of the biggest drawbacks of managed realignment is that the option requires land to be yielded to the sea." One of its benefits is that it can help protect land further inland by creating natural spaces that act as buffers to absorb water or dampen the force of waves.
Managed realignment has also been used to mitigate for loss of
|
https://en.wikipedia.org/wiki/Globus%20Cassus
|
Globus Cassus is an art project and book by Swiss architect and artist Christian Waldvogel presenting a conceptual transformation of planet Earth into a much bigger, hollow, artificial world with an ecosphere on its inner surface. It was the Swiss contribution to the 2004 Venice Architecture Biennale and was awarded the gold medal in the category "Most beautiful books of the World" at the Leipzig Book Fair in 2005. It consists of a meticulous description of the transformation process, a narrative of its construction, and suggestions on the organizational workings on Globus Cassus.
Waldvogel described it as an "open source" art project and stated that anyone could contribute designs and narratives to it on the project wiki. As of August 2012, the Globus Cassus wiki is no longer operational.
Properties
The proposed megastructure would incorporate all of Earth's matter. Sunlight would enter through two large windows, and gravity would be simulated by the centrifugal effect. Humans would live on two vast regions that face each other and that are connected through the empty center. The hydrosphere and atmosphere would be retained on its inside. The ecosphere would be restricted to the equatorial zones, while at the low-gravity tropic zones a thin atmosphere would allow only for plantations. The polar regions would have neither gravity nor atmosphere and would therefore be used for storage of raw materials and microgravity production processes.
Geometric structure
Globus Cassus has the form of a compressed geodesic icosahedron with two diagonal openings. Along the edges of the icosahedron run the skeleton beams, the gaps between the beams contain a shell and, where there are windows, inward-curving domes.
Building material
Earth's crust, mantle and core are gradually excavated, transported outwards and then transformed to larger strength and reduced density. While the crust is mined from open pits in the continents' centers, magma and the liquid mantle are pumped acr
|
https://en.wikipedia.org/wiki/Astringent
|
An astringent (sometimes called adstringent) is a chemical that shrinks or constricts body tissues. The word derives from the Latin adstringere, which means "to bind fast". Calamine lotion, witch hazel, and yerba mansa, a Californian plant, are astringents, as are the powdered leaves of the myrtle.
Astringency, the dry, puckering or numbing mouthfeel caused by the tannins in unripe fruits, lets the fruit mature by deterring eating. Ripe fruits and fruit parts including blackthorn (sloe berries), Aronia chokeberry, chokecherry, bird cherry, rhubarb, quince, jabuticaba and persimmon fruits (especially when unripe), banana skins (or unripe bananas), cashew fruits and acorns are astringent. Citrus fruits, like lemons, are somewhat astringent. Tannins, being a kind of polyphenol, bind salivary proteins and make them precipitate and aggregate, producing a rough, "sandpapery", or dry sensation in the mouth. The tannins in some teas, coffee, and red grape wines like Cabernet Sauvignon and Merlot produce mild astringency.
Squirrels, wild boars, and insects can eat astringent food as their tongues are able to handle the taste.
In Ayurveda, astringent is the sixth taste (after sweet, sour, salty, pungent, bitter) represented by "air and earth".
Smoking tobacco is also reported to have an astringent effect.
In a scientific study, astringency was still detectable by subjects who had local anesthesia applied to their taste nerves, but not when both these and the trigeminal nerves were disabled.
Uses
In medicine, astringents cause constriction or contraction of mucous membranes and exposed tissues and are often used internally to reduce discharge of blood serum and mucous secretions. This can happen with a sore throat, hemorrhages, diarrhea, and peptic ulcers. Externally applied astringents, which cause mild coagulation of skin proteins, dry, harden, and protect the skin. People with acne are often advised to use astringents if they have oily skin. Mild astringents relieve s
|
https://en.wikipedia.org/wiki/Tom%20West
|
Joseph Thomas West III (November 22, 1939 – May 19, 2011) was an American technologist. West is notable for being the key figure in the Pulitzer Prize winning non-fiction book The Soul of a New Machine.
West began his career in computer design at RCA, after seven years at the Smithsonian Astrophysical Observatory, a job he had taken right out of college. He started working for Data General in 1974. He became the head of Data General's Eclipse group and then became the lead on the Eagle project, building a machine officially named the Eclipse MV/8000. After the publication of Soul of a New Machine, West was sent to Japan by Data General where he helped design DG-1, the first full-screen laptop. His last project in 1996, a thin Web server, was intended to be an internet-ready machine. West retired as Chief Technologist in 1998.
Personal life
West was married to Elizabeth West in 1965; they divorced in 1994. The couple had two daughters, Katherine West and librarian Jessamyn West. West married Cindy Woodward (his former assistant at Data General) in 2001; the couple divorced in 2011. West died at the age of 71 in his Westport, Massachusetts home of an apparent heart attack. His nephew, Christopher Schwarz, is a former editor of Popular Woodworking magazine, author of The Anarchist's Toolchest, and co-founder of Lost Art Press; West's death prompted Schwarz to "leave the magazine and do my own thing".
References
Further reading
Twenty-year retrospective of The Soul of a New Machine, with "where are they now?" segments on the people involved and on Data General.
1996 interview with West.
A decade after the events described in The Soul of a New Machine, West, still at Data General, briefly appears in this MV/9500 corporate announcement video.
Data General
Computer hardware engineers
1939 births
2011 deaths
Amherst College alumni
|
https://en.wikipedia.org/wiki/Environmental%20biotechnology
|
Environmental biotechnology is biotechnology that is applied to and used to study the natural environment. Environmental biotechnology could also imply that one tries to harness biological processes for commercial uses and exploitation. The International Society for Environmental Biotechnology defines environmental biotechnology as "the development, use and regulation of biological systems for remediation of contaminated environments (land, air, water), and for environment-friendly processes (green manufacturing technologies and sustainable development)".
Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process".
Significance for agriculture, food security, climate change mitigation and adaptation and the MDGs
The IAASTD has called for the advancement of small-scale agro-ecological farming systems and technology in order to achieve food security, climate change mitigation, climate change adaptation and the realisation of the Millennium Development Goals. Environmental biotechnology has been shown to play a significant role in agroecology in the form of zero waste agriculture and most significantly through the operation of over 15 million biogas digesters worldwide.
Significance towards industrial biotechnology
Consider the effluent of a starch plant that has mixed with a local water body such as a lake or pond. We find huge deposits of starch, which are not easily taken up for degradation by microorganisms, with a few exceptions. Microorganisms from the polluted site are screened for genomic changes that allow them to degrade or utilize the starch better than other microbes of the same genus. The modified genes are then identified. The resultant genes are cloned into industrially significant microorg
|
https://en.wikipedia.org/wiki/Histogenesis
|
Histogenesis is the formation of different tissues from undifferentiated cells. These cells are constituents of three primary germ layers, the endoderm, mesoderm, and ectoderm. The science of the microscopic structures of the tissues formed within histogenesis is termed histology.
Germ layers
A germ layer is a collection of cells formed during animal embryogenesis. Germ layers are typically pronounced within vertebrate organisms; however, animals more complex than sponges (eumetazoans and agnotozoans) produce two or three primary tissue layers. Animals with radial symmetry, such as cnidarians, produce two layers, called the ectoderm and endoderm; they are diploblastic. Animals with bilateral symmetry produce a third layer in between, called the mesoderm, making them triploblastic. Germ layers eventually give rise to all of an animal's tissues and organs through a process called organogenesis.
Endoderm
The endoderm is one of the germ layers formed during animal embryogenesis. Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm. Initially, the endoderm consists of flattened cells, which subsequently become columnar...
Mesoderm
The mesoderm germ layer forms in the embryos of animals more complex than cnidarians, making them triploblastic. During gastrulation, some of the cells migrating inward to form the endoderm form an additional layer between the endoderm and the ectoderm. A theory suggests that this key innovation evolved hundreds of millions of years ago and led to the evolution of nearly all large, complex animals. The formation of a mesoderm led to the formation of a coelom. Organs formed inside a coelom can freely move, grow, and develop independently of the body wall while fluid cushions and protects them from shocks.
Ectoderm
The ectoderm is the start of a tissue that covers the body surfaces. It emerges first and forms from the outermost
|
https://en.wikipedia.org/wiki/Proportional%20control
|
Proportional control, in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value (setpoint, SP) and the measured value (process variable, PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control concept is more complex than an on–off control system such as a bi-metallic domestic thermostat, but simpler than a proportional–integral–derivative (PID) control system used in something like an automobile cruise control. On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output to the controlling device, such as a control valve, at a level which avoids instability, but applies correction as fast as practicable by applying the optimum quantity of proportional gain.
A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation, e.g. temperature control, as it requires an error to generate a proportional output. To overcome this the PI controller was devised, which uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output.
Theory
In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the multiplication product of the error signal and the proportional gain.
This can be mathematically expressed as
P_out = K_p e(t) + p_0
where
p_0: Controller output with zero error.
P_out: Output of the proportional control
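A minimal simulation (parameter values invented for illustration) exhibits both the proportional law and the residual offset discussed above:

```python
# Proportional control of a toy first-order process, showing the
# residual SP - PV offset that pure P control leaves (p_0 = 0 here).
Kp = 2.0            # proportional gain (assumed)
setpoint = 10.0     # SP
pv = 0.0            # PV, the process variable
dt, tau = 0.1, 1.0  # time step and process time constant (assumed)

for _ in range(200):
    error = setpoint - pv
    output = Kp * error              # P_out = K_p e(t)
    pv += dt * (output - pv) / tau   # process relaxes toward the output

print(round(pv, 3))  # settles near 6.667 = SP * Kp / (1 + Kp), not at 10
```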
|
https://en.wikipedia.org/wiki/Wafer-scale%20integration
|
Wafer-scale integration (WSI) is a rarely used system of building very-large integrated circuit (commonly called a "chip") networks from an entire silicon wafer to produce a single "super-chip". Combining large size and reduced packaging, WSI was expected to lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the state of the art when WSI was being developed.
Overview
In the normal integrated circuit manufacturing process, a single large cylindrical crystal (boule) of silicon is produced and then cut into disks known as wafers. The wafers are then cleaned and polished in preparation for the fabrication process. A photolithographic process is used to pattern the surface, defining where material should be deposited on top of the wafer and where it should not. The desired material is deposited and the photographic mask is removed for the next layer. From then on, the wafer is repeatedly processed in this fashion, putting layer after layer of circuitry on the surface.
Multiple copies of these patterns are deposited on the wafer in a grid fashion across the surface of the wafer. After all the possible locations are patterned, the wafer surface appears like a sheet of graph paper, with grid lines delineating the individual chips. Each of these grid locations is tested for manufacturing defects by automated equipment. Those locations that are found to be defective are recorded and marked with a dot of paint (this process is referred to as "inking a die" and more modern wafer fabrication techniques no longer require physical markings to identify defective die). The wafer is then sawed apart to cut out the individual chips. Those defective chips are thrown away, or recycled, while the working chips are placed into packaging and re-tested for any damage that might occur during the packaging process.
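A rough sketch of the yield arithmetic behind this testing step, using a simple Poisson defect model with invented numbers, shows why a single wafer-sized device is much harder to produce than many small dies:

```python
import math

defect_density = 0.5   # defects per cm^2 (assumed)
die_area = 1.0         # cm^2 per conventional die (assumed)
wafer_area = 700.0     # cm^2, roughly a 300 mm wafer

# Poisson model: a die works only if it contains zero defects.
die_yield = math.exp(-defect_density * die_area)
print(f"per-die yield: {die_yield:.1%}")                  # ~60.7%

# A wafer-scale design must instead tolerate every defect on the wafer:
print(f"expected defects per wafer: {defect_density * wafer_area:.0f}")  # ~350
```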
Flaws on the surface of the wafers and problems during the layering/depositing process a
|
https://en.wikipedia.org/wiki/USA%20Biolympiad
|
The USA Biolympiad (USABO), formerly called the USA Biology Olympiad before January 1, 2020, is a national competition sponsored by the Center for Excellence in Education to select the competitors for the International Biology Olympiad. Each year, twenty National Finalists gather at a nationally recognized institution for a two-week training camp. From the program's inception through 2009, the camp was held at George Mason University; from 2010 through 2015, the camp was held at Purdue University. It was then hosted at Marymount University for 2016 and 2017. As of 2018, it is being held at University of California, San Diego. At the end of the two weeks, four students are selected to represent the United States at the International Biology Olympiad.
History
The USA Biolympiad was first started in 2002, with nearly 10,000 students competing annually. Ever since the CEE (Center for Excellence in Education) started to administer the USABO exam, all four members of Team USA were awarded gold medals at the International Biology Olympiad in 2004, 2007, 2008, 2009, 2011, 2012, 2013, 2015, and 2017. The US National Team accrued the most medals and won the IBO outright in 2011, 2013, 2015, and 2017, due in part to the rigorous selection process students undergo to compete in the IBO for the US. The USABO exam was held online in 2020 and 2021 due to the COVID-19 pandemic.
Organization and examination structure
USABO finalists are selected in two rounds of tests. The open exam is a 50-minute multiple-choice exam open to all high school students, who register through their school or an authorized USABO center. This exam is normally administered during the first week of February, though the exact test time may vary from year to year. During this round, there is no penalty for wrong answers, but students may only select one of the four or five choices for each question.
The top 500 students on the open exam, or roughly 10% of all participating st
|
https://en.wikipedia.org/wiki/Zaslavskii%20map
|
The Zaslavskii map is a discrete-time dynamical system introduced by George M. Zaslavsky. It is an example of a dynamical system that exhibits chaotic behavior. The Zaslavskii map takes a point (x_n, y_n) in the plane and maps it to a new point:
x_{n+1} = (x_n + ν(1 + μ y_n) + ε ν μ cos(2π x_n)) mod 1
and
y_{n+1} = e^{−r} (y_n + ε cos(2π x_n)),
where mod is the modulo operator with real arguments and μ = (1 − e^{−r})/r. The map depends on four constants ν, μ, ε and r. Russell (1980) gives a Hausdorff dimension of 1.39, but Grassberger (1983) questions this value based on difficulties measuring the correlation dimension.
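A direct iteration of the map (the parameter values are arbitrary illustrations; the update rule follows the equations above):

```python
import math

nu, eps, r = 0.2, 2.3, 3.0          # illustrative constants
mu = (1 - math.exp(-r)) / r

def zaslavskii(x, y):
    x_new = (x + nu * (1 + mu * y)
             + eps * nu * mu * math.cos(2 * math.pi * x)) % 1.0
    y_new = math.exp(-r) * (y + eps * math.cos(2 * math.pi * x))
    return x_new, y_new

x, y = 0.1, 0.0
for _ in range(10):
    x, y = zaslavskii(x, y)
print(x, y)
```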
See also
List of chaotic maps
Chaotic maps
|
https://en.wikipedia.org/wiki/Kaplan%E2%80%93Yorke%20map
|
The Kaplan–Yorke map is a discrete-time dynamical system. It is an example of a dynamical system that exhibits chaotic behavior. The Kaplan–Yorke map takes a point (x_n, y_n) in the plane and maps it to a new point given by
x_{n+1} = 2x_n mod 1
y_{n+1} = αy_n + cos(4πx_n),
where mod is the modulo operator with real arguments. The map depends on only the one constant α.
Calculation method
Due to roundoff error, successive applications of the modulo operator will yield zero after some ten or twenty iterations when implemented as a floating point operation on a computer. It is better to implement the following equivalent algorithm:
a_{n+1} = 2a_n mod b
x_{n+1} = a_{n+1}/b,
where a_n and b are computational integers. It is also best to choose b to be a large prime number in order to get many different values of x_n.
Another way to avoid having the modulo operator yield zero after a short number of iterations is
which will still eventually return zero, albeit after many more iterations.
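A sketch of the integer variant described above (the seed, α, and the prime modulus b are illustrative choices):

```python
import math

alpha = 0.2
b = 2_147_483_647        # a large prime modulus (2^31 - 1)
a = 123_456_789          # integer seed a_0
x, y = a / b, 0.0

for _ in range(20):
    y = alpha * y + math.cos(4 * math.pi * x)  # y_{n+1} from x_n
    a = (2 * a) % b                            # a_{n+1} = 2 a_n mod b, exact
    x = a / b                                  # x_{n+1} = a_{n+1} / b

print(x, y)  # the integer recurrence never collapses to zero
```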
Chaotic maps
|
https://en.wikipedia.org/wiki/American%20and%20British%20English%20spelling%20differences
|
Despite the various English dialects spoken from country to country and within different regions of the same country, there are only slight regional variations in English orthography, the two most notable variations being British and American spelling. Many of the differences between American and British/Commonwealth English date back to a time before spelling standards were developed. For instance, some spellings seen as "American" today were once commonly used in Britain, and some spellings seen as "British" were once commonly used in the United States.
A "British standard" began to emerge following the 1755 publication of Samuel Johnson's A Dictionary of the English Language, and an "American standard" started following the work of Noah Webster and, in particular, his An American Dictionary of the English Language, first published in 1828. Webster's efforts at spelling reform were somewhat effective in his native country, resulting in certain well-known patterns of spelling differences between the American and British varieties of English. However, English-language spelling reform has rarely been adopted otherwise. As a result, modern English orthography varies only minimally between countries and is far from phonemic in any country.
Historical origins
In the early 18th century, English spelling was inconsistent. These differences became noticeable after the publication of influential dictionaries. Today's British English spellings mostly follow Johnson's A Dictionary of the English Language (1755), while many American English spellings follow Webster's An American Dictionary of the English Language ("ADEL", "Webster's Dictionary", 1828).
Webster was a proponent of English spelling reform for reasons both philological and nationalistic. In A Companion to the American Revolution (2008), John Algeo notes: "it is often assumed that characteristically American spellings were invented by Noah Webster. He was very influential in popularizing certain spellings in A
|
https://en.wikipedia.org/wiki/One-hot
|
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold. In statistics, dummy variables represent a similar technique for representing categorical data.
Applications
Digital circuitry
One-hot encoding is often used for indicating the state of a state machine. When using binary encoding, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder, as the state machine is in the nth state if, and only if, the nth bit is high.
A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.
An address decoder converts from binary to one-hot representation.
A priority encoder converts from one-hot representation to binary.
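Outside hardware, the same encoding is commonly applied to categorical data; a minimal Python sketch:

```python
# One-hot encode a categorical value against a fixed category list.
states = ["red", "green", "blue"]

def one_hot(value, categories):
    return [1 if c == value else 0 for c in categories]

print(one_hot("green", states))  # [0, 1, 0]: exactly one bit is high
```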
Comparison with other encoding methods
Advantages
Determining the state has a low and constant cost of accessing one flip-flop
Changing the state has the constant cost of accessing two flip-flops
Easy to design and modify
Easy to detect illegal states
Takes advantage of an FPGA's abundant flip-flops
Using a one-hot implementation typically allows a sta
|
https://en.wikipedia.org/wiki/Coding%20by%20exception
|
Coding by exception is an accidental complexity in a software system in which the program handles specific errors that arise with unique exceptions. When an issue arises in a software system, an error is raised, tracing the issue back to where it was caught and, if applicable, to where the problem originated. Exceptions can be used to handle the error while the program is running and avoid crashing the system. Exceptions should be generalized and cover the numerous errors that can arise. Using exceptions to handle specific errors so that the program can continue is called coding by exception. This anti-pattern can quickly degrade software performance and maintainability. Executing code even after the exception is raised resembles the goto statement found in many languages, which is also considered poor practice.
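An illustrative sketch of the anti-pattern (all names invented): specific exceptions steer normal control flow, and each new failure mode grows the handler list, where one generalized handler would do:

```python
# Anti-pattern: one ad-hoc handler per specific error.
def load_settings(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""            # one special case...
    except PermissionError:
        return ""            # ...and another...
    except IsADirectoryError:
        return ""            # ...the list grows with every new failure mode

# More maintainable: handle the generalized error once.
def load_settings_generalized(path, default=""):
    try:
        with open(path) as f:
            return f.read()
    except OSError:          # covers all of the specific cases above
        return default
```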
See also
Accidental complexity
Creeping featurism
Test-driven development
Anti-patterns
|
https://en.wikipedia.org/wiki/Hypergeometric%20function
|
In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a,b;c;z) is a special function represented by the hypergeometric series, that includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE). Every second-order linear ODE with three regular singular points can be transformed into this equation.
For systematic lists of some of the many thousands of published identities involving the hypergeometric function, see the standard reference works. There is no known system for organizing all of the identities; indeed, there is no known algorithm that can generate all identities; a number of different algorithms are known that generate different series of identities. The theory of the algorithmic discovery of identities remains an active research topic.
History
The term "hypergeometric series" was first used by John Wallis in his 1655 book Arithmetica Infinitorum.
Hypergeometric series were studied by Leonhard Euler, but the first full systematic treatment was given by Carl Friedrich Gauss (1813).
Studies in the nineteenth century included those of Ernst Kummer, and the fundamental characterisation by Bernhard Riemann of the hypergeometric function by means of the differential equation it satisfies.
Riemann showed that the second-order differential equation for 2F1(z), examined in the complex plane, could be characterised (on the Riemann sphere) by its three regular singularities.
The cases where the solutions are algebraic functions were found by Hermann Schwarz (Schwarz's list).
The hypergeometric series
The hypergeometric function is defined for |z| < 1 by the power series
2F1(a, b; c; z) = Σ_{n=0}^∞ [(a)_n (b)_n / (c)_n] z^n / n!
It is undefined (or infinite) if c equals a non-positive integer. Here (q)_n is the (rising) Pochhammer symbol, which is defined by:
(q)_0 = 1, (q)_n = q(q + 1)⋯(q + n − 1) for n ≥ 1.
The series terminates if either a or b is a nonpositive integer, in which case the function reduces to a polynomial:
2F1(−m, b; c; z) = Σ_{n=0}^m (−1)^n (m choose n) [(b)_n / (c)_n] z^n.
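For |z| < 1 the series can be summed directly; the sketch below (function name invented) checks it against the classical identity 2F1(1, 1; 2; z) = −ln(1 − z)/z:

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Sum the Gauss series term by term; valid for |z| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z  # term ratio
        total += term
    return total

z = 0.5
print(hyp2f1(1, 1, 2, z))     # 1.3862943611...
print(-math.log(1 - z) / z)   # 1.3862943611... (they agree)
```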
For complex arguments z with |z| ≥ 1 it can be analytically continued along any path in the complex plane that avoids the
|
https://en.wikipedia.org/wiki/Digital%20room%20correction
|
Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system.
History
The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs.
Operation
The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations for each of the loudspeakers. Then, computer software is used to compute a FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. Most DRC systems allow the operator to control the added delay through
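A toy sketch of the frequency-domain construction of such a filter (the synthetic impulse response and the regularization constant are invented; this is not a production DRC algorithm):

```python
import numpy as np

h = np.zeros(256)
h[0], h[40] = 1.0, 0.5            # toy room response: direct sound + one echo

H = np.fft.rfft(h, 1024)
eps = 1e-2                        # regularization to avoid boosting deep nulls
C = np.conj(H) / (np.abs(H) ** 2 + eps)   # approximately 1/H where H is strong
c = np.fft.irfft(C, 1024)         # correction FIR taps

# The corrected chain (filter then room) should approximate a clean impulse:
out = np.convolve(c, h)
print(np.argmax(np.abs(out)))     # energy concentrated in a single early tap
```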
|
https://en.wikipedia.org/wiki/Television%20transmitter
|
A television transmitter is a transmitter that is used for terrestrial (over-the-air) television broadcasting. It is an electronic device that radiates radio waves that carry a video signal representing moving images, along with a synchronized audio channel, which is received by television receivers ('televisions' or 'TVs') belonging to a public audience, which display the image on a screen. A television transmitter, together with the broadcast studio which originates the content, is called a television station. Television transmitters must be licensed by governments, and are restricted to a certain frequency channel and power level. They transmit on frequency channels in the VHF and UHF bands. Since radio waves of these frequencies travel by line of sight, they are limited by the horizon to reception distances of 40–60 miles depending on the height of transmitter station.
Television transmitters use one of two different technologies: analog, in which the picture and sound are transmitted by analog signals modulated onto the radio carrier wave, and digital in which the picture and sound are transmitted by digital signals. The original television technology, analog television, began to be replaced in a transition beginning in 2006 in many countries with digital television (DTV) systems. These transmit pictures in a new format called HDTV (high-definition television) which has higher resolution and a wider screen aspect ratio than analog. DTV makes more efficient use of scarce radio spectrum bandwidth, as several DTV channels can be transmitted in the same bandwidth as a single analog channel. In both analog and digital television, different countries use several incompatible modulation standards to add the video and audio signals to the radio carrier wave.
The principles of analog transmitters are summarized here, as they are typically more complex than digital transmitters due to the multiplexing of the VSB (video) and FM (audio) modulation stages.
Types of transmitters
The
|
https://en.wikipedia.org/wiki/Broadcast%20transmitter
|
A broadcast transmitter is an electronic device which radiates radio waves modulated with information content intended to be received by the general public. Examples are a radio broadcasting transmitter which transmits audio (sound) to broadcast radio receivers (radios) owned by the public, or a television transmitter, which transmits moving images (video) to television receivers (televisions). The term often includes the antenna which radiates the radio waves, and the building and facilities associated with the transmitter. A broadcasting station (radio station or television station) consists of a broadcast transmitter along with the production studio which originates the broadcasts. Broadcast transmitters must be licensed by governments, and are restricted to specific frequencies and power levels. Each transmitter is assigned a unique identifier consisting of a string of letters and numbers called a callsign, which must be used in all broadcasts.
Exciter
In broadcasting and telecommunication, the part which contains the oscillator, modulator, and sometimes audio processor, is called the "exciter". Most transmitters use the heterodyne principle, so they also have frequency conversion units. Confusingly, the high-power amplifier which the exciter then feeds into is often called the "transmitter" by broadcast engineers. The final output is given as transmitter power output (TPO), although this is not what most stations are rated by.
Effective radiated power (ERP) is used when calculating station coverage, even for most non-broadcast stations. It is the TPO, minus any attenuation or radiated loss in the line to the antenna, multiplied by the gain (magnification) which the antenna provides toward the horizon. This antenna gain is important, because achieving a desired signal strength without it would result in an enormous electric utility bill for the transmitter, and a prohibitively expensive transmitter. For most large stations in the VHF- and UHF-range, the t
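The ERP computation is simple decibel arithmetic; the numbers below are invented for illustration:

```python
tpo_kw = 10.0            # transmitter power output (assumed)
line_loss_db = 1.5       # feedline attenuation (assumed)
antenna_gain_db = 6.0    # antenna gain toward the horizon (assumed)

net_gain_db = antenna_gain_db - line_loss_db
erp_kw = tpo_kw * 10 ** (net_gain_db / 10)
print(f"ERP = {erp_kw:.1f} kW")  # about 28.2 kW radiated from a 10 kW TPO
```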
|
https://en.wikipedia.org/wiki/Marine%20life
|
Marine life, sea life, or ocean life is the plants, animals, and other organisms that live in the salt water of seas or oceans, or the brackish water of coastal estuaries. At a fundamental level, marine life affects the nature of the planet. Marine organisms, mostly microorganisms, produce oxygen and sequester carbon. Marine life in part shape and protect shorelines, and some marine organisms even help create new land (e.g. coral building reefs).
Most life forms evolved initially in marine habitats. By volume, oceans provide about 90% of the living space on the planet. The earliest vertebrates appeared in the form of fish, which live exclusively in water. Some of these evolved into amphibians, which spend portions of their lives in water and portions on land. One group of amphibians evolved into reptiles and mammals and a few subsets of each returned to the ocean as sea snakes, sea turtles, seals, manatees, and whales. Plant forms such as kelp and other algae grow in the water and are the basis for some underwater ecosystems. Plankton forms the general foundation of the ocean food chain, particularly phytoplankton which are key primary producers.
Marine invertebrates exhibit a wide range of modifications to survive in poorly oxygenated waters, including breathing tubes as in mollusc siphons. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals (e.g. dolphins, whales, otters, and seals) need to surface periodically to breathe air.
More than 242,000 marine species have been documented, and perhaps two million marine species are yet to be documented. An average of 2,332 new species are described per year.
Marine species range in size from the microscopic, like phytoplankton, which can be as small as 0.02 micrometres, to huge cetaceans like the blue whale, the largest known animal. Marine microorganisms, including protists and bacteria and their associated viruses, have been var
|
https://en.wikipedia.org/wiki/Widlar%20current%20source
|
A Widlar current source is a modification of the basic two-transistor current mirror that incorporates an emitter degeneration resistor for only the output transistor, enabling the current source to generate low currents using only moderate resistor values.
The Widlar circuit may be used with bipolar transistors, MOS transistors, and even vacuum tubes. An example application is the 741 operational amplifier, and Widlar used the circuit as a part in many designs.
This circuit is named after its inventor, Bob Widlar, and was patented in 1967.
DC analysis
Figure 1 is an example Widlar current source using bipolar transistors, where the emitter resistor R2 is connected to the output transistor Q2, and has the effect of reducing the current in Q2 relative to Q1. The key to this circuit is that the voltage drop across the resistor R2 subtracts from the base-emitter voltage of transistor Q2, thereby turning this transistor off compared to transistor Q1. This observation is expressed by equating the base voltage expressions found on either side of the circuit in Figure 1 as:
V_BE1 = V_BE2 + (I_B2 + I_C2) R_2,
where β2 is the beta-value of the output transistor, which is not the same as that of the input transistor, in part because the currents in the two transistors are very different. The variable I_B2 is the base current of the output transistor, and V_BE refers to base-emitter voltage. This equation implies (using the Shockley diode equation):
I_C2 (1 + 1/β2) R_2 = V_T ln[(I_C1/I_C2)(I_S2/I_S1)]    (Eq. 1)
where V_T is the thermal voltage.
This equation makes the approximation that the currents are both much larger than the scale currents, IS1 and IS2; an approximation valid except for current levels near cut off. In the following, the scale currents are assumed to be identical; in practice, this needs to be specifically arranged.
Design procedure with specified currents
To design the mirror, the output current must be related to the two resistor values R1 and R2. A basic observation is that the output transistor is in active mode only so long as its collector
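As an illustrative design check (component values invented; Eq. 1 simplified by letting β2 → ∞ and taking identical scale currents), the output current can be found numerically:

```python
import math

VT = 0.025               # thermal voltage, ~25 mV at room temperature
Vcc, Vbe = 5.0, 0.65     # supply and nominal base-emitter drop (assumed)
R1, R2 = 4.35e3, 1.0e3   # resistor choices (assumed)
Ic1 = (Vcc - Vbe) / R1   # reference current, about 1 mA

# Solve Ic2 * R2 = VT * ln(Ic1 / Ic2) for Ic2 by bisection
# (the left side grows and the right side shrinks as Ic2 increases).
lo, hi = 1e-9, Ic1
for _ in range(60):
    mid = (lo + hi) / 2
    if mid * R2 < VT * math.log(Ic1 / mid):
        lo = mid
    else:
        hi = mid
print(f"Ic2 = {lo * 1e6:.0f} uA")  # tens of microamps from a ~1 mA reference
```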
|
https://en.wikipedia.org/wiki/Craig%20interpolation
|
In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.
Example
In propositional logic, let
.
Then tautologically implies . This can be verified by writing in conjunctive normal form:
.
Thus, if holds, then holds. In turn, tautologically implies . Because the two propositional variables occurring in occur in both and , this means that is an interpolant for the implication .
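The defining property of an interpolant is easy to check by brute force in the propositional case; the formulas below are a small made-up example, not the ones from the article:

```python
from itertools import product

# phi = P and Q,  psi = P or R,  candidate interpolant rho = P.
# rho mentions only P, the variable shared by phi and psi.
def implies(f, g, nvars=3):
    return all(not f(*v) or g(*v)
               for v in product([False, True], repeat=nvars))

phi = lambda P, Q, R: P and Q
psi = lambda P, Q, R: P or R
rho = lambda P, Q, R: P

print(implies(phi, rho), implies(rho, psi))  # True True: rho interpolates
```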
Lyndon's interpolation theorem
Suppose that S and T are two first-order theories. As notation, let S ∪ T denote the smallest theory including both S and T; the signature of S ∪ T is the smallest one containing the signatures of S and T. Also let S ∩ T be the intersection of the languages of the two theories; the signature of S ∩ T is the intersection of the signatures of the two languages.
Lyndon's theorem says that if S ∪ T is unsatisfiable, then there is an interpolating sentence ρ in the language of S ∩ T that is true in all models of S and false in all models of T. Moreover, ρ has the stronger property that every relation symbol that has a positive occurrence in ρ has a positive occurrence in some formula of S and a negative occurrence in some formula of T, and every relation symbol with a negative occurrence in ρ has a negative occurrence in some formula of S and a positive occurrence in some formula of T.
Proof o
|
https://en.wikipedia.org/wiki/Void%20type
|
The void type, in several programming languages derived from C and Algol68, is the return type of a function that returns normally, but does not provide a result value to its caller. Usually such functions are called for their side effects, such as performing some task or writing to their output parameters. The usage of the void type in such context is comparable to procedures in Pascal and syntactic constructs which define subroutines in Visual Basic. It is also similar to the unit type used in functional programming languages and type theory. See Unit type#In programming languages for a comparison.
C and C++ also support the pointer to void type (specified as void *), but this is an unrelated notion. Variables of this type are pointers to data of an unspecified type, so in this context (but not the others) void * acts roughly like a universal or top type. A program can convert a pointer to any type of data (except a function pointer) to a pointer to void and back to the original type without losing information, which makes these pointers useful for polymorphic functions. The C language standard does not guarantee that the different pointer types have the same size or alignment.
In C and C++
A function with void result type ends either by reaching the end of the function or by executing a return statement with no returned value. The void type may also replace the argument list of a function prototype to indicate that the function takes no arguments. Note that in all of these situations, void is not a type qualifier on any value. Despite the name, this is semantically similar to an implicit unit type, not a zero or bottom type (which is sometimes confusingly called the "void type"). Unlike a real unit type which is a singleton, the void type lacks a way to represent its value and the language does not provide any way to declare an object or represent a value with type void.
In the earliest versions of C, functions with no specific result defaulted to a return t
|
https://en.wikipedia.org/wiki/Internet%20Locator%20Server
|
An Internet Locator Server (abbreviated ILS) is a server that acts as a directory for Microsoft NetMeeting clients. An ILS is not necessary within a local area network and some wide area networks in the Internet because one participant can type in the IP address of the other participant's host and call them directly. An ILS becomes necessary when one participant is trying to contact a host who has a private IP address internal to a local area network that is inaccessible to the outside world, or when the host is blocked by a firewall. An ILS is also useful when a participant has a different IP address during each session, e.g., assigned by the Dynamic Host Configuration Protocol.
There are two main approaches to using Internet Locator Servers: use a public server on the Internet, or run and use a private server.
Private Internet Locator Server
The machine running an Internet Locator Server must have a public IP address.
If the network running an Internet Locator Server has a firewall, it is usually necessary to run the server in the demilitarized zone of the network.
Microsoft Windows includes an Internet Locator Server. It can be installed in the Control Panel using Add/Remove Windows Components, under "Networking Services" (Site Server ILS Services).
The Internet Locator Server (ILS) included in Microsoft Windows 2000 offers service on port 1002, while the latest version of NetMeeting requests service from port 389. The choice of 1002 was to avoid conflict with Windows 2000's domain controllers, which use LDAP and Active Directory on port 389, as well as Microsoft Exchange Server 2000, which uses port 389. If the server is running neither Active Directory nor Microsoft Exchange Server, the Internet Locator Server's port can be changed to 389 using the following command at a command prompt:
ILSCFG [servername] /port 389
Additional firewall issues
Internet Location Servers do not address two other issues with using NetMeeting behind a firewall. First,
|
https://en.wikipedia.org/wiki/Packet%20Storm
|
Packet Storm Security is an information security website offering current and historical computer security tools, exploits, and security advisories. It is operated by a group of security enthusiasts that publish new security information and offer tools for educational and testing purposes.
Overview
The site was originally created by Ken Williams who sold it in 1999 to Kroll O'Gara and just over a year later, it was given back to the security community. While at Kroll O'Gara, Packet Storm awarded Mixter $10,000 in a whitepaper contest dedicated to the mitigation of distributed denial of service attacks. Today, they offer a suite of consulting services and the site is referenced in hundreds of books.
In 2013, Packet Storm launched a bug bounty program to buy working exploits that would be given back to the community for their own testing purposes. Later that year, they worked with a security researcher to help expose a large scale shadow profile issue with the popular Internet site Facebook. After Facebook claimed that only 6 million people were affected, additional testing by Packet Storm exposed that the numbers were not accurately reported.
References
External links
Computer security organizations
Computer network security
Internet properties established in 1998
|
https://en.wikipedia.org/wiki/Stable%20polynomial
|
In the context of the characteristic polynomial of a differential equation or difference equation, a polynomial is said to be stable if either:
all its roots lie in the open left half-plane, or
all its roots lie in the open unit disk.
The first condition provides stability for continuous-time linear systems, and the second case relates to stability
of discrete-time linear systems. A polynomial with the first property is called at times a Hurwitz polynomial and with the second property a Schur polynomial. Stable polynomials arise in control theory and in the mathematical theory
of differential and difference equations. A linear, time-invariant system (see LTI system theory) is said to be BIBO stable if every bounded input produces bounded output. A linear system is BIBO stable if its characteristic polynomial is stable. The denominator of its transfer function is required to be Hurwitz stable if the system is in continuous time and Schur stable if it is in discrete time. In practice, stability is determined by applying any one of several stability criteria.
Properties
The Routh–Hurwitz theorem provides an algorithm for determining if a given polynomial is Hurwitz stable, which is implemented in the Routh–Hurwitz and Liénard–Chipart tests.
To test if a given polynomial P (of degree d) is Schur stable, it suffices to apply this theorem to the transformed polynomial
Q(z) = (z − 1)^d P((z + 1)/(z − 1))
obtained after the Möbius transformation z ↦ (z + 1)/(z − 1), which maps the left half-plane to the open unit disc: P is Schur stable if and only if Q is Hurwitz stable and P(1) ≠ 0. For higher degree polynomials the extra computation involved in this mapping can be avoided by testing the Schur stability by the Schur-Cohn test, the Jury test or the Bistritz test.
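Independently of these algebraic criteria, stability can also be checked numerically by approximating the roots directly; a minimal Python sketch using numpy (the function names are ours, not standard):
import numpy as np
def is_hurwitz_stable(coeffs):
    # coeffs are listed highest degree first, e.g. [1, 3, 2] encodes z^2 + 3z + 2
    return all(r.real < 0 for r in np.roots(coeffs))
def is_schur_stable(coeffs):
    return all(abs(r) < 1 for r in np.roots(coeffs))
print(is_hurwitz_stable([1, 3, 2]))    # roots -1 and -2: True
print(is_schur_stable([1, 0, -0.25]))  # roots +0.5 and -0.5: True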
Necessary condition: a Hurwitz stable polynomial (with real coefficients) has coefficients of the same sign (either all positive or all negative).
Sufficient condition: a polynomial f(z) = a_0 + a_1 z + ⋯ + a_d z^d with (real) coefficients such that
|a_d| > |a_0| + |a_1| + ⋯ + |a_{d−1}|
is Schur stable.
Product rule: Two polynomials f and g are stable (of th
|
https://en.wikipedia.org/wiki/Substructure%20%28mathematics%29
|
In mathematical logic, an (induced) substructure or (induced) subalgebra is a structure whose domain is a subset of that of a bigger structure, and whose functions and relations are restricted to the substructure's domain. Some examples of subalgebras are subgroups, submonoids, subrings, subfields, subalgebras of algebras over a field, or induced subgraphs. Shifting the point of view, the larger structure is called an extension or a superstructure of its substructure.
In model theory, the term "submodel" is often used as a synonym for substructure, especially when the context suggests a theory of which both structures are models.
In the presence of relations (i.e. for structures such as ordered groups or graphs, whose signature is not functional) it may make sense to relax the conditions on a subalgebra so that the relations on a weak substructure (or weak subalgebra) are at most those induced from the bigger structure. Subgraphs are an example where the distinction matters, and the term "subgraph" does indeed refer to weak substructures. Ordered groups, on the other hand, have the special property that every substructure of an ordered group which is itself an ordered group, is an induced substructure.
Definition
Given two structures A and B of the same signature σ, A is said to be a weak substructure of B, or a weak subalgebra of B, if
the domain of A is a subset of the domain of B,
f^A = f^B|_{A^n} for every n-ary function symbol f in σ, and
R^A ⊆ R^B ∩ A^n for every n-ary relation symbol R in σ.
A is said to be a substructure of B, or a subalgebra of B, if A is a weak subalgebra of B and, moreover,
R^A = R^B ∩ A^n for every n-ary relation symbol R in σ.
If A is a substructure of B, then B is called a superstructure of A or, especially if A is an induced substructure, an extension of A.
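For finite structures, these conditions can be checked mechanically. The following Python sketch uses an ad hoc encoding of our own (domain as a set, each function as a dict from argument tuples to values, each relation as a set of tuples); it is an illustration, not a standard library:
def is_substructure(A, B, weak=False):
    dom_A, funcs_A, rels_A = A
    dom_B, funcs_B, rels_B = B
    if not dom_A <= dom_B:
        return False
    # each function of A must be the restriction of B's function, with A's domain
    # closed under it (we assume fA is defined on all tuples over dom_A)
    for name, fA in funcs_A.items():
        fB = funcs_B[name]
        if any(fA[args] != fB[args] or fA[args] not in dom_A for args in fA):
            return False
    for name, rA in rels_A.items():
        induced = {t for t in rels_B[name] if set(t) <= dom_A}
        if not (rA <= induced if weak else rA == induced):
            return False
    return True
# ({0, 1}, <=) is an induced substructure of ({0, 1, 2}, <=)
A = ({0, 1}, {}, {"<=": {(0, 0), (0, 1), (1, 1)}})
B = ({0, 1, 2}, {}, {"<=": {(i, j) for i in range(3) for j in range(3) if i <= j}})
print(is_substructure(A, B))  # True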
Example
In the language consisting of the binary functions + and ×, binary relation <, and constants 0 and 1, the structure (Q, +, ×, <, 0, 1) is a substructure of (R, +, ×,
|
https://en.wikipedia.org/wiki/PICMG
|
PICMG, or PCI Industrial Computer Manufacturers Group, is a consortium of over 140 companies. Founded in 1994, the group was originally formed to adapt PCI technology for use in high-performance telecommunications, military, and industrial computing applications, but its work has grown to include newer technologies. PICMG is distinct from the similarly named and adjacently-focused PCI Special Interest Group (PCI-SIG).
PICMG currently focuses on developing and implementing specifications and guidelines for open standards-based computer architectures using a wide variety of interconnects.
Background
PICMG is a standards development organization in the embedded computing industry. Members work collaboratively to develop new specifications and enhancements to existing ones. Members benefit from participating in standards development, gaining early access to leading-edge technology, and forging relationships with thought leaders and suppliers in the industry.
The original PICMG mission was to provide extensions to the PCI standard developed by PCI-SIG for a range of applications. The organization's collaborations eventually expanded to include a variety of interconnect technologies for industrial computing and telecommunications. PICMG's specifications are used in a wide variety of industries including industrial automation, military, aerospace, telecommunications, medical, gaming, transportation, physics/research, test and measurement, energy, drone/robotics, and general embedded computing.
In 2011, PICMG completed its transfer of assets from the Communications Platforms Trade Association (CP-TA). Since 2006, CP-TA had been a collaboration of communications vendors, developing interoperability testing requirements, methodologies, and procedures based on open specifications from PICMG, The Linux Foundation, and the Service Availability Forum. PICMG has continued the educational and marketing outreach formerly conducted by members of the CP-TA marketing work group.
t
|
https://en.wikipedia.org/wiki/Twelf
|
Twelf is an implementation of the logical framework LF developed by Frank Pfenning and Carsten Schürmann at Carnegie Mellon University. It is used for logic programming and for the formalization of programming language theory.
Introduction
At its simplest, a Twelf program (called a "signature") is a collection of declarations of type families (relations) and constants that inhabit those type families. For example, the following is the standard definition of the natural numbers, with z standing for zero and s the successor operator.
nat : type.
z : nat.
s : nat -> nat.
Here nat is a type, and z and s are constant terms. As a dependently typed system, types can be indexed by terms, which allows the definition of more interesting type families. Here is a definition of addition:
plus : nat -> nat -> nat -> type.
plus_zero : {M:nat} plus M z M.
plus_succ : {M:nat} {N:nat} {P:nat}
plus M (s N) (s P)
<- plus M N P.
The type family plus is read as a relation between three natural numbers M, N and P, such that M + N = P. We then give the constants that define the relation: the constant plus_zero indicates that M + 0 = M. The quantifier {M:nat} can be read as "for all M of type nat".
The constant plus_succ defines the case for when the second argument is the successor of some other number N (see pattern matching). The result is the successor of P, where P is the sum of M and N. This recursive call is made via the subgoal plus M N P, introduced with <-. The arrow can be understood operationally as Prolog's :-, or as logical implication ("if M + N = P, then M + (s N) = (s P)"), or most faithfully to the type theory, as the type of the constant plus_succ ("when given a term of type plus M N P, return a term of type plus M (s N) (s P)").
Twelf features type reconstruction and supports implicit parameters, so in practice, one usually does not need to explicitly write the quantifiers {M:nat} (etc.) above.
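A query can then be posed with a %solve declaration, which asks Twelf to search for a derivation of the given type (a sketch of common usage; the derivation name d is arbitrary):
%solve d : plus (s z) (s z) P.
Logic-programming search instantiates P to s (s z), i.e. 1 + 1 = 2.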
These simple examples do not display LF's higher-order features, nor any of its theorem checking capabilities. See the Twelf distribution for its included examples.
Uses
Twelf is used
|
https://en.wikipedia.org/wiki/ICT%201301
|
The ICT 1301 and its smaller derivative ICT 1300 were early business computers from International Computers and Tabulators. Typical of mid-sized machines of the era, they used core memory, drum storage and punched cards, but they were unusual in that they were based on decimal logic instead of binary.
Description
The 1301 was the main machine in the line. Its main memory came in increments of 400 words of 48 bits (12 decimal digits or 12 four-bit binary values, 0-15) plus two parity bits. The maximum size was 4,000 words. It was the first ICT machine to use core memory.
Backing store was magnetic drum and optionally one-inch-, half-inch- or quarter-inch-wide magnetic tape. Input was from 80-column punched cards and optionally 160-column punched cards and punched paper tape. Output was to 80-column punched cards, line printer, and optionally to punched paper tape.
The machine ran at a clock speed of 1 MHz and its arithmetic logic unit (ALU) operated on data in a serial-parallel fashion—the 48-bit words were processed sequentially four bits at a time. A simple addition took 21 clock cycles; hardware multiplication averaged 170 clock cycles per digit; and division was performed in software.
A typical 1301 requires 700 square feet (65 square metres) of floor space and weighs about . It consumes about 13 kVA of three-phase electric power. The electronics consist of over 4,000 printed circuit boards each with many germanium diodes (mainly OA5), germanium transistors (mainly Mullard GET872), resistors, capacitors, inductors, and a handful of thermionic valves and a few dozen relays operated when buttons were pressed. Integrated circuits were not available commercially at the time.
History
The 1301 was designed by an ICT and GEC joint subsidiary, Computer Developments Limited (CDL) at GEC's Coventry site formed in 1956. CDL was taken over by ICT, but the 1301 was built at the GEC site as ICT lacked the manufacturing capability at that time.
The computer was announce
|
https://en.wikipedia.org/wiki/Pre-algebra
|
Prealgebra is a common name for a course in middle school mathematics in the United States, usually taught in the 7th or 8th grade. Its objective is to prepare students for the study of algebra, which is usually taught in the 8th and 9th grades.
As an intermediate stage after arithmetic, prealgebra helps students pass specific conceptual barriers. Students are introduced to the idea that an equals sign, rather than just being the answer to a question as in basic arithmetic, means that two sides are equivalent and can be manipulated together. They also learn how numbers, variables, and words can be used in the same ways.
Subjects
Subjects taught in a prealgebra course may include:
Review of natural number arithmetic
Types of numbers such as integers, fractions, decimals and negative numbers
Ratios and percents
Factorization of natural numbers
Properties of operations such as associativity and distributivity
Simple (integer) roots and powers
Rules of evaluation of expressions, such as operator precedence and use of parentheses
Basics of equations, including rules for invariant manipulation of equations
Understanding of variable manipulation
Manipulation and plotting in the standard 4-quadrant Cartesian coordinate plane
Powers in scientific notation (example: 340,000,000 in scientific notation is 3.4 × 10^8)
Identifying probability
Solving square roots
Pythagorean Theorem
Prealgebra may include subjects from geometry, especially to further the understanding of algebra in applications to area and volume.
Prealgebra may also include subjects from statistics to identify probability and interpret data.
Proficiency in prealgebra is an indicator of college success. It can also be taught as a remedial course for college students.
See also
Precalculus
Mathematics education in the United States
References
Elementary mathematics
Algebra education
|
https://en.wikipedia.org/wiki/Scientific%20community%20metaphor
|
In computer science, the scientific community metaphor is a metaphor used to aid in understanding scientific communities. The first publications on the scientific community metaphor in 1981 and 1982 involved the development of a programming language named Ether that invoked procedural plans to process goals and assertions concurrently by dynamically creating new rules during program execution. Ether also addressed issues of conflict and contradiction with multiple sources of knowledge and multiple viewpoints.
Development
The scientific community metaphor builds on the philosophy, history and sociology of science. It was originally developed building on work in the philosophy of science by Karl Popper and Imre Lakatos. In particular, it initially made use of Lakatos' work on proofs and refutations. Subsequently, development has been influenced by the work of Geof Bowker, Michel Callon, Paul Feyerabend, Elihu M. Gerson, Bruno Latour, John Law, Karl Popper, Susan Leigh Star, Anselm Strauss, and Lucy Suchman.
In particular Latour's Science in Action had great influence. In the book, Janus figures make paradoxical statements about scientific development. An important challenge for the scientific community metaphor is to reconcile these paradoxical statements.
Qualities of scientific research
Scientific research depends critically on monotonicity, concurrency, commutativity, and pluralism to propose, modify, support, and oppose scientific methods, practices, and theories.
Quoting from Carl Hewitt, scientific community metaphor systems have characteristics of monotonicity, concurrency, commutativity, pluralism, skepticism and provenance.
monotonicity: Once something is published it cannot be undone. Scientists publish their results so they are available to all. Published work is collected and indexed in libraries. Scientists who change their mind can publish later articles contradicting earlier ones.
concurrency: Scientists can work concurrently, overlapping in
|
https://en.wikipedia.org/wiki/SIGMA%20%28verification%20service%29
|
SIGMA is an electronic verification service offered by Nielsen Media Research and is generally used for commercials, infomercials, video news releases, public service announcements, satellite media tours, and electronic press kits.
It operates by encoding the SIGMA encoder ID, date of encoding, and time of encoding in lines 20 and 22 of the video signal, which is outside of the area displayed on a normal television screen (this is similar to how closed captioning is transmitted).
On a professional video monitor with underscan capability activated or a computer display of the entire video frame, the SIGMA data will look like small, moving white lines at the top of the frame. Nielsen provides overnight reports of airplay in all television markets in the country.
Television technology
|
https://en.wikipedia.org/wiki/List%20of%20set%20theory%20topics
|
This page is a list of articles related to set theory.
Articles on individual set theory topics
Lists related to set theory
Glossary of set theory
List of large cardinal properties
List of properties of sets of reals
List of set identities and relations
Set theorists
Societies and organizations
Association for Symbolic Logic
The Cabal
Topics
Set theory
|
https://en.wikipedia.org/wiki/Biological%20soil%20crust
|
Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles including carbon fixation, nitrogen fixation and soil stabilization; they alter soil albedo and water relations and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing and other disturbances and can require long time periods to recover composition and function. Biological soil crusts are also known as biocrusts or as cryptogamic, microbiotic, microphytic, or cryptobiotic soils.
Natural history
Biology and composition
Biological soil crusts are most often composed of fungi, lichens, cyanobacteria, bryophytes, and algae in varying proportions. These organisms live in intimate association in the uppermost few millimeters of the soil surface, and are the biological basis for the formation of soil crusts.
Cyanobacteria
Cyanobacteria are the main photosynthetic component of biological soil crusts, in addition to other photosynthetic taxa such as mosses, lichens, and green algae. The most common cyanobacteria found in soil crusts belong to large filamentous species such as those in the genus Microcoleus. These species form bundled filaments that are surrounded by a gelatinous sheath of polysaccharides. These filaments bind soil particles throughout the uppermost soil layers, forming a 3-D net-like structure that holds the soil together in a crust. Other common cyanobacteria species are those in the genus Nostoc, which can also form sheaths and sheets of filaments that stabilize the soil. Some Nostoc species are also able to fix atmospheric nitrogen gas into bio-available forms such as ammonia.
Bryophytes
Bryophytes in soil crusts include moss
|
https://en.wikipedia.org/wiki/Barnes%20G-function
|
In mathematics, the Barnes G-function G(z) is a function that is an extension of superfactorials to the complex numbers. It is related to the gamma function, the K-function and the Glaisher–Kinkelin constant, and was named after mathematician Ernest William Barnes. It can be written in terms of the double gamma function.
Formally, the Barnes G-function is defined in the following Weierstrass product form:
G(1 + z) = (2π)^(z/2) exp(−(z + z²(1 + γ))/2) ∏_{k=1}^∞ (1 + z/k)^k exp(z²/(2k) − z)
where γ is the Euler–Mascheroni constant, exp(x) = e^x is the exponential function, and ∏ denotes multiplication (capital pi notation).
The integral representation, which may be deduced from the relation to the double gamma function, is
As an entire function, G is of order two, and of infinite type. This can be deduced from the asymptotic expansion given below.
Functional equation and integer arguments
The Barnes G-function satisfies the functional equation
G(z + 1) = Γ(z) G(z)
with normalisation G(1) = 1. Note the similarity between the functional equation of the Barnes G-function and that of the Euler gamma function:
Γ(z + 1) = z Γ(z)
The functional equation implies that G takes the following values at integer arguments:
G(n) = 0 for n = 0, −1, −2, …, and G(n) = 0!·1!·⋯·(n − 2)! for n = 1, 2, …
(in particular, G(0) = 0)
and thus
G(n) = (Γ(n))^(n−1) / K(n)
where Γ denotes the gamma function and K denotes the K-function. The functional equation uniquely defines the Barnes G-function if the convexity condition
d³/dx³ log G(x) ≥ 0 for x > 0
is added. Additionally, the Barnes G-function satisfies a duplication formula.
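The integer values above can be checked directly from the superfactorial product; a minimal Python sketch (the function name is ours):
from math import factorial
def barnes_g(n):
    # at a positive integer n, G(n) = 0! * 1! * ... * (n-2)!
    if n <= 0:
        return 0  # G vanishes at the non-positive integers
    result = 1
    for i in range(n - 1):  # i runs over 0 .. n-2
        result *= factorial(i)
    return result
print([barnes_g(n) for n in range(1, 7)])  # [1, 1, 1, 2, 12, 288]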
Characterisation
Similar to the Bohr–Mollerup theorem for the gamma function, the Barnes G-function can be characterised as the unique solution of its functional equation that satisfies an appropriate asymptotic growth condition.
Value at 1/2
G(1/2) = 2^(1/24) · e^(1/8) · π^(−1/4) · A^(−3/2)
where A is the Glaisher–Kinkelin constant.
Reflection formula
The difference equation for the G-function, in conjunction with the functional equation for the gamma function, can be used to obtain the following reflection formula for the Barnes G-function (originally proved by Hermann Kinkelin):
log G(1 − z) = log G(1 + z) − z log 2π + ∫_0^z πx cot(πx) dx
The integral on the right-hand side can be evaluated in terms of the Clausen function (of order 2), as is shown below:
The proof of this result hinges on the follow
|
https://en.wikipedia.org/wiki/Avalanche%20%28P2P%29
|
Avalanche is the name of a proposed peer-to-peer (P2P) network created by Pablo Rodriguez and Christos Gkantsidis at Microsoft, which claims to offer improved scalability and bandwidth efficiency compared to existing P2P systems.
The proposed system works in a similar way to BitTorrent, but aims to improve on some of its shortcomings. Like BitTorrent, Avalanche splits the file to be distributed into small blocks. However, rather than peers simply transmitting the blocks, they transmit random linear combinations of the blocks along with the random coefficients of each combination, a technique known as 'network coding'. This technique removes the need for each peer to have complex knowledge of block distribution across the network (an aspect of BitTorrent-like protocols which the paper claims does not scale very well).
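The idea can be sketched in a few lines of Python; the prime field and the interface below are illustrative choices of ours, not details of Avalanche (which, like most practical systems, would use a field such as GF(2^8)):
import random
P = 257  # a small prime field for the sketch
def encode(blocks):
    # emit one coded packet: random coefficients plus the corresponding
    # linear combination of all blocks, component-wise mod P
    coeffs = [random.randrange(P) for _ in blocks]
    packet = [sum(c * blk[i] for c, blk in zip(coeffs, blocks)) % P
              for i in range(len(blocks[0]))]
    return coeffs, packet
A receiver that has gathered as many linearly independent packets as there are blocks recovers the file by Gaussian elimination over the same field.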
Bram Cohen, the creator of BitTorrent, criticized the proposed Avalanche system in a post to his blog. He mentions inaccuracies in the paper's analysis of the BitTorrent protocol (some of it being based on a 4-years-out-of-date version of the protocol which used an algorithm that "sucks") and describes the paper as "garbage."
References
External links
Avalanche: File Swarming with Network Coding, Avalanche official home page
File sharing networks
Microsoft Research
|
https://en.wikipedia.org/wiki/IBM%20SSEC
|
The IBM Selective Sequence Electronic Calculator (SSEC) was an electromechanical computer built by IBM. Its design was started in late 1944 and it operated from January 1948 to August 1952. It had many of the features of a stored-program computer, and was the first operational machine able to treat its instructions as data, but it was not fully electronic.
Although the SSEC proved useful for several high-profile applications, it soon became obsolete. As the last large electromechanical computer ever built, its greatest success was the publicity it provided for IBM.
History
During World War II, International Business Machines Corporation (IBM) funded and built an Automatic Sequence Controlled Calculator (ASCC) for Howard H. Aiken at Harvard University. The machine, formally dedicated in August 1944, was widely known as the Harvard Mark I. The President of IBM, Thomas J. Watson Sr., did not like Aiken's press release that gave no credit to IBM for its funding and engineering effort. Watson and Aiken decided to go their separate ways, and IBM began work on a project to build their own larger and more visible machine.
Astronomer Wallace John Eckert of Columbia University provided specifications for the new machine; the project budget of almost $1 million was an immense amount for the time.
Francis "Frank" E. Hamilton (1898–1972) supervised the construction of both the ASCC and the SSEC. Robert Rex Seeber Jr. was also hired away from the Harvard group, and became known as the chief architect of the new machine.
After the basic design was ready in December 1945, modules were manufactured in IBM's facility at Endicott, New York, under Director of Engineering John McPherson.
Construction
The February 1946 announcement of the fully electronic ENIAC energized the project.
The new machine, called the IBM Selective Sequence Electronic Calculator (SSEC), was ready to be installed by August 1947.
Watson called such machines calculators because computer then referred to humans e
|
https://en.wikipedia.org/wiki/Sequential%20game
|
In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. The other players must have some information about the first player's choice; otherwise the difference in time would have no strategic effect. Sequential games are governed by the time axis and represented in the form of decision trees.
Sequential games with perfect information can be analysed mathematically using combinatorial game theory.
Decision trees are the extensive form of dynamic games that provide information on the possible ways that a given game can be played. They show the sequence in which players act and the number of times that they can each make a decision. Decision trees also provide information on what each player knows or does not know at the point in time they decide on an action to take. Payoffs for each player are given at the terminal nodes of the tree. Extensive form representations were introduced by von Neumann and further developed by Kuhn in the earliest years of game theory, between 1910 and 1930.
Repeated games are an example of sequential games. Players perform a stage game and the results will determine how the game continues. At every new stage, both players will have complete information on how the previous stages had played out. A discount rate between the values of 0 and 1 is usually taken into account when considering the payoff of each player. Repeated games illustrate the psychological aspect of games, such as trust and revenge, when each player makes a decision at every stage game based on how the game has been played out so far.
Unlike sequential games, simultaneous games do not have a time axis so players choose their moves without being sure of the other players' decisions. Simultaneous games are usually represented in the form of payoff matrices. One example of a simultaneous game is rock-paper-scissors, where each player draws at the same time not knowing whether their opponent will choose rock, paper, or scissors. Extensive form
|
https://en.wikipedia.org/wiki/Backward%20induction
|
Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying the optimal action at that moment. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation (i.e. for every possible information set) at every point in time. Backward induction was first used in 1875 by Arthur Cayley, who discovered the method while trying to solve the Secretary problem.
In the mathematical optimization method of dynamic programming, backward induction is one of the main methods for solving the Bellman equation. In game theory, backward induction is a method used to compute subgame perfect equilibria in sequential games. The only difference is that optimization involves just one decision maker, who chooses what to do at each point of time, whereas game theory analyzes how the decisions of several players interact. That is, by anticipating what the last player will do in each situation, it is possible to determine what the second-to-last player will do, and so on. In the related fields of automated planning and scheduling and automated theorem proving, the method is called backward search or backward chaining. In chess it is called retrograde analysis.
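As a minimal illustration, the following Python sketch solves a finite-horizon job-search problem of the kind described below by working backwards from the final period; the wages, the offer probability, and the rule that a rejected offer is lost are all illustrative assumptions, not details from the text:
def job_search_values(horizon, wage_good, wage_bad, p_good):
    # values[t] = expected payoff of being unemployed at the start of period t
    values = [0.0] * (horizon + 2)
    for t in range(horizon, 0, -1):  # backward induction: last period first
        remaining = horizon - t + 1
        accept_good = wage_good * remaining
        # a bad offer is accepted only if it beats waiting one more period
        bad_case = max(wage_bad * remaining, values[t + 1])
        values[t] = p_good * accept_good + (1 - p_good) * bad_case
    return values[1:horizon + 1]
print(job_search_values(10, 100, 44, 0.5))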
Backward induction has been used to solve games as long as the field of game theory has existed. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person games by backward induction in their Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.
Decision making
Optimal-stopping problem
An unemployed person who will be able to work for ten more years t = 1,2,...,10 may be offered a 'good' job that pays $100, or a 'bad' job that pa
|
https://en.wikipedia.org/wiki/Kelvin%E2%80%93Voigt%20material
|
A Kelvin–Voigt material, also called a Voigt material, is the simplest model of a viscoelastic material showing typical rubbery properties. It is purely elastic on long timescales (slow deformation), but shows additional resistance to fast deformation. It is named after the British physicist and engineer Lord Kelvin and German physicist Woldemar Voigt.
Definition
The Kelvin-Voigt model, also called the Voigt model, is represented by a purely viscous damper and purely elastic spring connected in parallel as shown in the picture.
If, instead, we connect these two elements in series we get a model of a Maxwell material.
Since the two components of the model are arranged in parallel, the strains in each component are identical:
ε = ε_D = ε_S
where the subscript D indicates the stress–strain in the damper and the subscript S indicates the stress–strain in the spring. Similarly, the total stress will be the sum of the stress in each component:
σ = σ_D + σ_S
From these equations we get that in a Kelvin–Voigt material, stress σ, strain ε and their rates of change with respect to time t are governed by equations of the form:
σ(t) = E ε(t) + η dε(t)/dt
or, in dot notation:
σ = E ε + η ε̇
where E is a modulus of elasticity and η is the viscosity. The equation can be applied either to the shear stress or normal stress of a material.
Effect of a sudden stress
If we suddenly apply some constant stress σ0 to a Kelvin–Voigt material, then the deformations would approach the deformation for the pure elastic material, σ0/E, with the difference decaying exponentially:
ε(t) = (σ0/E)(1 − e^(−t/τ))
where t is time and τ = η/E is the retardation time.
If we free the material at time t1, then the elastic element would retard the material back until the deformation becomes zero. The retardation obeys the following equation:
ε(t > t1) = ε(t1) e^(−(t − t1)/τ)
The picture shows the dependence of the dimensionless deformation εE/σ0 on dimensionless time t/τ. In the picture the stress on the material is loaded at time t = 0, and released at the later dimensionless time t1/τ.
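This loading-and-recovery behaviour is easy to evaluate numerically; a short Python sketch (the modulus, viscosity, and load values are illustrative):
import numpy as np
E, eta, sigma0, t1 = 1.0, 0.5, 1.0, 3.0   # illustrative values
tau = eta / E                              # retardation time
t = np.linspace(0.0, 6.0, 601)
eps = np.where(
    t <= t1,
    (sigma0 / E) * (1.0 - np.exp(-t / tau)),                             # creep under load
    (sigma0 / E) * (1.0 - np.exp(-t1 / tau)) * np.exp(-(t - t1) / tau),  # recovery after release
)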
Since all the deformation is reversible (though not suddenly) the
|
https://en.wikipedia.org/wiki/Induction%20puzzles
|
Induction puzzles are logic puzzles, which are examples of multi-agent reasoning, where the solution evolves along with the principle of induction.
A puzzle's scenario always involves multiple players with the same reasoning capability, who go through the same reasoning steps. According to the principle of induction, a solution to the simplest case makes the solution of the next more complicated case obvious. Once the simplest case of the induction puzzle is solved, the whole puzzle follows by induction.
Typical tell-tale features of these puzzles include any puzzle in which each participant has a given piece of information (usually as common knowledge) about all other participants but not themselves. Also, usually, some kind of hint is given to suggest that the participants can trust each other's intelligence — they are capable of theory of mind (that "every participant knows modus ponens" is common knowledge). Also, the inaction of a participant is a non-verbal communication of that participant's lack of knowledge, which then becomes common knowledge to all participants who observed the inaction.
The muddy children puzzle is the most frequently appearing induction puzzle in the scientific literature on epistemic logic. It is a variant of the well-known wise men and cheating wives/husbands puzzles.
Hat puzzles are induction puzzle variations that date back to as early as 1961. In many variations, hat puzzles are described in the context of prisoners. In other cases, hat puzzles are described in the context of wise men.
Muddy Children Puzzle
Description
There is a set of attentive children. They think perfectly logically. The children consider it possible to have a muddy face. None of the children can determine the state of their own face themselves. But, every child knows the state of all other children's faces. A custodian tells the children that at least one of them has a muddy face. The children are each told that they should step forward if th
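The round-by-round reasoning can be simulated with a small possible-worlds model; the Python encoding below is a sketch of our own, not a standard formulation:
from itertools import product
def muddy_children(actual):
    # actual: tuple of 0/1 flags (1 = muddy); the announcement removes the all-clean world
    n = len(actual)
    worlds = [w for w in product((0, 1), repeat=n) if any(w)]
    round_no = 0
    while True:
        round_no += 1
        def knows(i, w):
            # child i knows its state if all worlds agreeing with what it sees fix w[i]
            seen = [v for v in worlds
                    if all(v[j] == w[j] for j in range(n) if j != i)]
            return len({v[i] for v in seen}) == 1
        if any(knows(i, actual) for i in range(n)):
            return round_no
        # nobody stepped forward: discard worlds in which somebody would have known
        worlds = [w for w in worlds if not any(knows(i, w) for i in range(n))]
print(muddy_children((1, 1, 0)))  # the two muddy children step forward in round 2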
|
https://en.wikipedia.org/wiki/List%20of%20optical%20disc%20authoring%20software
|
This is a list of optical disc authoring software.
Open source
Multi-platform
cdrtools, a comprehensive command line-based set of tools for creating and burning CDs, DVDs and Blu-rays
cdrkit, a fork of cdrtools by the Debian project
cdrdao, open source software for authoring and ripping of CDs in Disk-At-Once mode
DVDStyler, a GUI-based DVD authoring tool
libburnia, a collection of command line-based tools and libraries for burning discs
Linux and Unix
Brasero, a GNOME disc burning utility
dvd+rw-tools, a package for DVD and Blu-ray writing on Unix and Unix-like systems
K3b, the KDE disc authoring program
Nautilus, the GNOME file manager (includes basic disc burning capabilities)
Serpentine, the GNOME audio CD burning utility
Xfburn, the Xfce disc burning program
X-CD-Roast
Windows
InfraRecorder (based on cdrkit and cdrtools)
DVD Flick (ImgBurn is included)
Freeware
Windows
CDBurnerXP
ImgBurn
Ashampoo Burning Studio
DeepBurner Free
DVD Decrypter
DVD Shrink
macOS
Disco
Commercial proprietary
macOS
Adobe Encore
DVD Studio Pro
MacTheRipper
Roxio Toast
Linux
Nero Linux
Windows
Adobe Encore
Alcohol 120%
Ashampoo Burning Studio
AVS Video Editor
Blindwrite
CDRWIN
CloneCD
CloneDVD
DeepBurner
DiscJuggler
Roxio Creator
MagicISO
Nero Burning ROM
Netblender
SEBAS
TMPGEnc Authoring Works 7
UltraISO
See also
Comparison of disc authoring software
Optical disc authoring software
Optical disc authoring
|
https://en.wikipedia.org/wiki/Alfred%20Tauber
|
Alfred Tauber (5 November 1866 – 26 July 1942) was an Austrian Empire-born Austrian mathematician, known for his contribution to mathematical analysis and to the theory of functions of a complex variable: he is the eponym of an important class of theorems with applications ranging from mathematical and harmonic analysis to number theory. He was murdered in the Theresienstadt concentration camp.
Life and academic career
Born in Pressburg, Kingdom of Hungary, Austrian Empire (now Bratislava, Slovakia), he began studying mathematics at Vienna University in 1884, obtained his Ph.D. in 1889, and his habilitation in 1891.
Starting from 1892, he worked as chief mathematician at the Phönix insurance company until 1908, when he became an a.o. professor at the University of Vienna, though, already from 1901, he had been honorary professor at TU Vienna and director of its insurance mathematics chair. In 1933, he was awarded the Grand Decoration of Honour in Silver for Services to the Republic of Austria, and retired as emeritus extraordinary professor. However, he continued lecturing as a privatdozent until 1938, when he was forced to resign as a consequence of the "Anschluss". On 28–29 June 1942, he was deported with transport IV/2, č. 621 to Theresienstadt, where he was murdered on 26 July 1942.
Work
The bibliography appended to his obituary lists 35 publications, and a search performed on the "Jahrbuch über die Fortschritte der Mathematik" database also results in a list of 35 mathematical works authored by him, spanning the period from 1891 to 1940. However, Hlawka cites two papers on actuarial mathematics which do not appear in these two bibliographical lists, and Binder's bibliography of Tauber's works (1984, pp. 163–166), while listing 71 entries including the ones in the obituary bibliography and the two cited by Hlawka, does not include the short note, so the exact number of his works is not known. According to Hlawka, his scientific research can be divided into three areas: t
|
https://en.wikipedia.org/wiki/Limes%20Germanicus
|
The Limes Germanicus (Latin for Germanic frontier) is the name given in modern times to a line of frontier (limes) fortifications that bounded the ancient Roman provinces of Germania Inferior, Germania Superior and Raetia, dividing the Roman Empire and the unsubdued Germanic tribes from the years 83 to about 260 AD. The Limes used either a natural boundary such as a river or typically an earth bank and ditch with a wooden palisade and watchtowers at intervals. A system of linked forts was built behind the Limes.
The path of the limes changed over time following advances and retreats due to pressure from external threats. At its height, the Limes Germanicus stretched from the North Sea outlet of the Rhine to near Regensburg (Castra Regina) on the Danube. These two major rivers afforded natural protection from mass incursions into imperial territory, with the exception of a gap stretching roughly from Mogontiacum (Mainz) on the Rhine to Castra Regina.
The Limes Germanicus was divided into:
The Lower Germanic Limes, which extended from the North Sea at Katwijk in the Netherlands along the then main Lower Rhine branches (modern Oude Rijn, Leidse Rijn, Kromme Rijn, Nederrijn)
The Upper Germanic Limes started from the Rhine at Rheinbrohl (Neuwied (district)) across the Taunus mountains to the river Main (East of Hanau), then along the Main to Miltenberg, and from Osterburken (Neckar-Odenwald-Kreis) south to Lorch (in Ostalbkreis, Württemberg) in a nearly perfect straight line of more than 70 km;
The Rhaetian Limes extended east from Lorch to Eining (close to Kelheim) on the Danube.
The total length was . It included at least 60 forts and 900 watchtowers. The potentially weakest, hence most heavily guarded, part of the Limes was the aforementioned gap between the westward bend of the Rhine at modern-day Mainz and the main flow of the Danube at Regensburg. This land corridor between the two great rivers permitted movement of large groups of people without the need for water transport, hence the heavy conc
|
https://en.wikipedia.org/wiki/Duffing%20map
|
The Duffing map (also called the 'Holmes map') is a discrete-time dynamical system. It is an example of a dynamical system that exhibits chaotic behavior. The Duffing map takes a point (x_n, y_n) in the plane and maps it to a new point given by
x_{n+1} = y_n
y_{n+1} = −b x_n + a y_n − y_n^3
The map depends on the two constants a and b. These are usually set to a = 2.75 and b = 0.2 to produce chaotic behaviour. It is a discrete version of the Duffing equation.
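Iterating the map is straightforward; a minimal Python sketch using the chaotic parameter values from the text (the initial condition is an arbitrary choice of ours):
a, b = 2.75, 0.2
x, y = 0.1, 0.1  # arbitrary initial condition
orbit = []
for _ in range(10000):
    x, y = y, -b * x + a * y - y**3  # one step of the Duffing map
    orbit.append((x, y))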
References
External links
Duffing oscillator on Scholarpedia
Chaotic maps
|
https://en.wikipedia.org/wiki/Symplectic%20integrator
|
In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in nonlinear dynamics, molecular dynamics, discrete element methods, accelerator physics, plasma physics, quantum physics, and celestial mechanics.
Introduction
Symplectic integrators are designed for the numerical solution of Hamilton's equations, which read
dp/dt = −∂H/∂q,   dq/dt = ∂H/∂p
where q denotes the position coordinates, p the momentum coordinates, and H is the Hamiltonian.
The set of position and momentum coordinates are called canonical coordinates.
(See Hamiltonian mechanics for more background.)
The time evolution of Hamilton's equations is a symplectomorphism, meaning that it conserves the symplectic 2-form dp ∧ dq. A numerical scheme is a symplectic integrator if it also conserves this 2-form.
Symplectic integrators also might possess, as a conserved quantity, a Hamiltonian which is slightly perturbed from the original one (only true for a small class of simple cases). By virtue of these advantages, the SI scheme has been widely applied to the calculations of long-term evolution of chaotic Hamiltonian systems ranging from the Kepler problem to the classical and semi-classical simulations in molecular dynamics.
Most of the usual numerical methods, like the primitive Euler scheme and the classical Runge–Kutta scheme, are not symplectic integrators.
Methods for constructing symplectic algorithms
Splitting methods for separable Hamiltonians
A widely used class of symplectic integrators is formed by the splitting methods.
Assume that the Hamiltonian is separable, meaning that it can be written in the form
H(p, q) = T(p) + V(q)
This happens frequently in Hamiltonian mechanics, with T being the kinetic energy and V the potential energy.
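For such a separable Hamiltonian, the simplest splitting scheme is the first-order symplectic (semi-implicit) Euler method; a minimal Python sketch, with an interface of our own choosing:
def symplectic_euler(q, p, dVdq, dTdp, dt, steps):
    # alternate a momentum "kick" from the potential with a position "drift"
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dVdq(q)
        q = q + dt * dTdp(p)
        traj.append((q, p))
    return traj
# harmonic oscillator H = p^2/2 + q^2/2: the energy error stays bounded over long runs
orbit = symplectic_euler(1.0, 0.0, lambda q: q, lambda p: p, 0.1, 1000)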
For notational simplicity, let us introduce the symbol z = (q, p) to denote the canonical coordinates
including both the position and
|
https://en.wikipedia.org/wiki/Click-through%20rate
|
Click-through rate (CTR) is the ratio of clicks on a specific link to the number of times a page, email, or advertisement is shown. It is commonly used to measure the success of an online advertising campaign for a particular website, as well as the effectiveness of email campaigns.
Click-through rates for ad campaigns vary tremendously. The first online display ad, shown for AT&T on the website HotWired in 1994, had a 44% click-through rate. With time, the overall rate of users' clicks on webpage banner ads has decreased.
Purpose
The purpose of click-through rates is to measure the ratio of clicks to impressions of an online ad or email marketing campaign. Generally, the higher the CTR, the more effective the marketing campaign has been at bringing people to a website. Most commercial websites are designed to elicit some sort of action, whether it be to buy a book, read a news article, watch a music video, or search for a flight. People rarely visit websites with the intention of viewing advertisements, in the same way that few people watch television to view the commercials.
While marketers want to know the reaction of the web visitor, with current technology it is nearly impossible to quantify the emotional reaction to the site and the effect of that site on the firm's brand. In contrast, it is easy to determine the click-through rate, which measures the proportion of visitors who clicked on an advertisement that redirected them to another page. Forms of interaction with advertisements other than clicking are possible but rare; "click-through rate" is the most commonly used term to describe the efficacy of an advert.
Construction
The click-through rate of an advertisement is the number of times a click is made on the ad, divided by the number of times the ad is "served", that is, shown (also called impressions), expressed as a percentage:
CTR (%) = (number of clicks / number of impressions) × 100
For example, an ad served 1,000 times that receives 5 clicks has a click-through rate of 0.5%.
Online advertising
Click-through rates for banner ads have decreased over time. When banner ads first started to ap
|
https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli%20beam%20theory
|
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case corresponding to small deflections of a beam that is subjected to lateral loads only. By ignoring the effects of shear deformation and rotatory inertia, it is thus a special case of Timoshenko–Ehrenfest beam theory. It was first enunciated circa 1750, but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution.
Additional mathematical models have been developed, such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering.
History
Prevailing consensus is that Galileo Galilei made the first attempts at developing a theory of beams, but recent studies argue that Leonardo da Vinci was the first to make the crucial observations. Da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made.
The Bernoulli beam is named after Jacob Bernoulli, who made the significant discoveries. Leonhard Euler and Daniel Bernoulli were the first to put together a useful theory circa 1750.
Static beam equation
The Euler–Bernoulli equation describes the relationship between the beam's deflection and the applied load:
d²/dx² ( E I d²w/dx² ) = q
The curve w(x) describes the deflection of the beam in the z direction at some position x (recall that the beam is modeled as a one-dimensional object). q is a distributed load, in other words a force per unit length (analogous to pressure being a force per area); it may be a function of x, w, or other variables. E is the elastic modulus and I is the second moment
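As a concrete instance of this equation, the classic closed-form deflection of a uniformly loaded cantilever can be evaluated directly; a quick Python sketch (all numerical values are illustrative, not from the text):
# cantilever of length L under uniform load q: w(x) = q x^2 (6 L^2 - 4 L x + x^2) / (24 E I)
L, E, I, q = 2.0, 210e9, 8.33e-6, 1000.0  # m, Pa, m^4, N/m (illustrative)
def w(x):
    return q * x**2 * (6 * L**2 - 4 * L * x + x**2) / (24 * E * I)
print(w(L))  # tip deflection, equal to q L^4 / (8 E I)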
|
https://en.wikipedia.org/wiki/Spam%20and%20Open%20Relay%20Blocking%20System
|
SORBS ("Spam and Open Relay Blocking System") is a list of e-mail servers suspected of sending or relaying spam (a DNS Blackhole List). It has been augmented with complementary lists that include various other classes of hosts, allowing for customized email rejection by its users.
History
The SORBS DNSbl project was created in November 2001. It was maintained as a private list until 6 January 2002, when the DNSbl was officially launched to the public. The list initially consisted of 78,000 proxy relays and rapidly grew to over 3,000,000 alleged compromised spam relays.
In November 2009 SORBS was acquired by GFI Software, to enhance their mail filtering solutions.
In July 2011 SORBS was re-sold to Proofpoint, Inc.
DUHL
SORBS adds IP ranges that belong to dialup modem pools, dynamically allocated wireless, and DSL connections as well as DHCP LAN ranges by using reverse DNS PTR records, WHOIS records, and sometimes by submission from the ISPs themselves. This is called the DUHL or Dynamic User and Host List. SORBS does not automatically rescan DUHL listed hosts for updated rDNS so to remove an IP address from the DUHL the user or ISP has to request a delisting or rescan. If other blocks are scanned in the region of listings and the scan includes listed netspace, SORBS automatically removes the netspace marked as static.
Matthew Sullivan of SORBS proposed in an Internet Draft that generic reverse DNS addresses include purposing tokens such as static or dynamic, abbreviations thereof, and more. That naming scheme would have allowed end users to classify IP addresses without the need to rely on third party lists, such as the SORBS DUHL. The Internet Draft has since expired. Generally it is considered more appropriate for ISPs to simply block outgoing traffic to port 25 if they wish to prevent users from sending email directly, rather than specifying it in the reverse DNS record for the IP.
SORBS' dynamic IP list originally came from Dynablock but has been developed independen
|
https://en.wikipedia.org/wiki/Ice%20algae
|
Ice algae are any of the various types of algal communities found in annual and multi-year sea ice, as well as in terrestrial lake ice or glacier ice.
On sea ice in the polar oceans, ice algae communities play an important role in primary production. The timing of blooms of the algae is especially important for supporting higher trophic levels at times of the year when light is low and ice cover still exists. Sea ice algal communities are mostly concentrated in the bottom layer of the ice, but can also occur in brine channels within the ice, in melt ponds, and on the surface.
Because terrestrial ice algae occur in freshwater systems, the species composition differs greatly from that of sea ice algae. In particular, terrestrial glacier ice algae communities are significant in that they change the color of glaciers and ice sheets, impacting the reflectivity of the ice itself.
Sea ice algae
Adapting to the sea ice environment
Microbial life in sea ice is extremely diverse, and includes abundant algae, bacteria and protozoa. Algae in particular dominate the sympagic environment, with estimates of more than 1000 unicellular eukaryotes found to associate with sea ice in the Arctic. Species composition and diversity vary based on location, ice type, and irradiance. In general, pennate diatoms such as Nitzschia frigida (in the Arctic) and Fragilariopsis cylindrus (in the Antarctic) are abundant. Melosira arctica, which forms up to meter-long filaments attached to the bottom of the ice, is also widespread in the Arctic and is an important food source for marine species.
While sea ice algae communities are found throughout the column of sea ice, abundance and community composition depends on the time of year. There are many microhabitats available to algae on and within sea ice, and different algal groups have different preferences. For example, in late winter/early spring, motile diatoms like N. frigida have been found to dominate the uppermost layers of the ice, as far as briny
|
https://en.wikipedia.org/wiki/Robert%20Sapolsky
|
Robert Morris Sapolsky (born April 6, 1957) is an American neuroendocrinology researcher and author. He is a professor of biology, neurology, neurological sciences, and neurosurgery at Stanford University. In addition, he is a research associate at the National Museums of Kenya.
Early life and education
Sapolsky was born in Brooklyn, New York, to immigrants from the Soviet Union. His father, Thomas Sapolsky, was an architect who renovated the restaurants Lüchow's and Lundy's. Robert was raised an Orthodox Jew and spent his time reading about and imagining living with silverback gorillas. By age twelve, he was writing fan letters to primatologists. He attended John Dewey High School and by that time was reading textbooks on the subject and teaching himself Swahili.
Sapolsky describes himself as an atheist. He said in his acceptance speech for the Emperor Has No Clothes Award, "I was raised in an Orthodox household and I was raised devoutly religious up until around age thirteen or so. In my adolescent years one of the defining actions in my life was breaking away from all religious belief whatsoever."
In 1978, Sapolsky received his B.A., summa cum laude, in biological anthropology from Harvard University. He then went to Kenya to study the social behaviors of baboons in the wild. When the Uganda–Tanzania War broke out in the neighboring countries, Sapolsky decided to travel into Uganda to witness the war up close, later commenting, "I was twenty-one and wanted adventure. [...] I was behaving like a late-adolescent male primate." He went to Uganda's capital Kampala, and from there to the border with Zaire (now the Democratic Republic of the Congo), and then back to Kampala, witnessing some fighting, including the Ugandan capital's conquest by the Tanzanian army and its Ugandan rebel allies on April 10–11, 1979. Sapolsky then returned to New York and studied at Rockefeller University, where he received his Ph.D. in neuroendocrinology working in the lab of endocrinol
|
https://en.wikipedia.org/wiki/Antimicrobial%20peptides
|
Antimicrobial peptides (AMPs), also called host defence peptides (HDPs), are part of the innate immune response found among all classes of life. Fundamental differences exist between prokaryotic and eukaryotic cells that may represent targets for antimicrobial peptides. These peptides are potent, broad spectrum antimicrobials which demonstrate potential as novel therapeutic agents. Antimicrobial peptides have been demonstrated to kill Gram-negative and Gram-positive bacteria, enveloped viruses, fungi and even transformed or cancerous cells. Unlike the majority of conventional antibiotics, it appears that antimicrobial peptides frequently destabilize biological membranes, can form transmembrane channels, and may also have the ability to enhance immunity by functioning as immunomodulators.
Structure
Antimicrobial peptides are a unique and diverse group of molecules, which are divided into subgroups on the basis of their amino acid composition and structure. Antimicrobial peptides are generally between 12 and 50 amino acids. These peptides include two or more positively charged residues provided by arginine, lysine or, in acidic environments, histidine, and a large proportion (generally >50%) of hydrophobic residues. The secondary structures of these molecules follow 4 themes, including i) α-helical, ii) β-stranded due to the presence of 2 or more disulfide bonds, iii) β-hairpin or loop due to the presence of a single disulfide bond and/or cyclization of the peptide chain, and iv) extended. Many of these peptides are unstructured in free solution, and fold into their final configuration upon partitioning into biological membranes. The peptides contain hydrophilic amino acid residues aligned along one side and hydrophobic amino acid residues aligned along the opposite side of a helical molecule. This amphipathicity of the antimicrobial peptides allows them to partition into the membrane lipid bilayer. The ability to associate with membranes is a definitive feature of a
|
https://en.wikipedia.org/wiki/Biopesticide
|
A biopesticide is a biological substance or organism that damages, kills, or repels organisms seen as pests. Biological pest management intervention involves predatory, parasitic, or chemical relationships.
They are obtained from organisms including plants, bacteria and other microbes, fungi, nematodes, etc. They are components of integrated pest management (IPM) programmes, and have received much practical attention as substitutes to synthetic chemical plant protection products (PPPs).
Definitions
The U.S. Environmental Protection Agency states that biopesticides "are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals, and currently, there are 299 registered biopesticide active ingredients and 1401 active biopesticide product registrations." The EPA also states that biopesticides "include naturally occurring substances that control pests (biochemical pesticides), microorganisms that control pests (microbial pesticides), and pesticidal substances produced by plants containing added genetic material (plant-incorporated protectants) or PIPs".
The European Environmental Agency defines a biopesticide as “a pesticide made from biological sources, that is from toxins which occur naturally. - naturally occurring biological agents used to kill pests by causing specific biological effects rather than by inducing chemical poisoning.” Furthermore, the EEA defines a biopesticide as a pesticide in which “the active ingredient is a virus, fungus, or bacteria, or a natural product derived from a plant source. A biopesticide's mechanism of action is based on specific biological effects and not on chemical poisons.”
Types
Biopesticides usually have no known function in photosynthesis, growth or other basic aspects of plant physiology. Many chemical compounds produced by plants protect them from pests; they are called antifeedants. These materials are biodegradable and renewable, which can be economical for practi
|
https://en.wikipedia.org/wiki/Cyanolichen
|
Cyanolichens are lichens that apart from the basic fungal component ("mycobiont"), contain cyanobacteria, otherwise known as blue-green algae, as the photosynthesizing component ("photobiont"). Overall, about a third of lichen photobionts are cyanobacteria and the other two thirds are green algae.
Some lichens contain both green algae and cyanobacteria apart from the fungal component, in which case they are called "tripartite". Normally the photobiont occupies an extensive layer covering much of the thallus, but in tripartite lichens, the cyanobacterium component may be enclosed in pustule-like outgrowths of the main thallus called cephalodia, which can take many forms. Apart from gaining energy through photosynthesis, the cyanobacteria which live in cephalodia may perform nitrogen fixation on behalf of the lichen community. These cyanobacteria are generally more rich in nitrogen-fixing cells called heterocysts than those which live in the main photobiont layer of lichens.
External links
Lichens of North America, by I. Brodo, S. Sharnoff and S.D. Sharnoff
References
Lichenology
|
https://en.wikipedia.org/wiki/Sticking%20probability
|
The sticking probability is the probability that molecules are trapped on surfaces and adsorb chemically. From Langmuir's adsorption isotherm, molecules cannot adsorb on surfaces when the adsorption sites are already occupied by other molecules, so the sticking probability can be expressed as follows:
s = s0 (1 − θ)
where s0 is the initial sticking probability and θ is the surface coverage fraction ranging from 0 to 1.
Similarly, when molecules adsorb on surfaces dissociatively, the sticking probability is
s = s0 (1 − θ)²
The square is owing to the fact that dissociation of one molecule into two parts requires two adsorption sites. These equations are simple and can be easily understood but cannot explain experimental results.
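The two Langmuir-type expressions are trivial to evaluate; a short Python sketch (the value of s0 is illustrative):
import numpy as np
s0 = 1.0                               # initial sticking probability (illustrative)
theta = np.linspace(0.0, 1.0, 101)     # surface coverage fraction
s_molecular = s0 * (1 - theta)         # adsorption onto a single site
s_dissociative = s0 * (1 - theta)**2   # dissociative adsorption needs two sites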
In 1958, P. Kisliuk presented an equation for the sticking probability that can explain experimental results. In his theory, molecules are trapped in precursor states of physisorption before chemisorption. Then the molecules meet adsorption sites that molecules can adsorb to chemically, so the molecules behave as follows.
If these sites are not occupied, molecules do the following (with probability in parentheses):
adsorb on the surface chemically (Pa)
desorb from the surface (Pb)
move to the next precursor state (Pc)
and if these sites are occupied, they
desorb from the surface (Pb′)
move to the next precursor state (Pc′)
Note that an occupied site is defined as one where there is a chemically bonded adsorbate, so by definition the probability of chemisorption there is zero (Pa′ = 0). Then the sticking probability is, according to equation (6) of the reference,
s = s0 (1 + K θ/(1 − θ))^(−1)
where K is a constant determined by the probabilities above. When K = 1, this equation is identical in result to Langmuir's adsorption isotherm.
Notes
References
Irving Langmuir, "The constitution and fundamental properties of solids and liquids. Part I. Solids", J. Am. Chem. Soc. 38 (1916), 2221–2295.
Physical chemistry
Materials science
|
https://en.wikipedia.org/wiki/Lithium%20tantalate
|
Lithium tantalate is the inorganic compound with the formula LiTaO3. It is a white, diamagnetic, water-insoluble solid. The compound has the perovskite structure. It has optical, piezoelectric, and pyroelectric properties that make it valuable for nonlinear optics, passive infrared sensors such as motion detectors, terahertz generation and detection, surface acoustic wave applications, and cell phones. Considerable information is available from commercial sources about this material.
Applications and research
Lithium tantalate is a standard detector element in infrared spectrophotometers.
Pyroelectric fusion has been demonstrated using a lithium tantalate crystal producing a large enough charge to generate and accelerate a beam of deuterium nuclei into a deuteriated target resulting in the production of a small flux of helium-3 and neutrons through nuclear fusion without extreme heat or pressure.
The freezing of water into ice can depend on the charge applied to the surface of pyroelectric LiTaO3 crystals.
References
Further reading
Also see: Lithium tantalate (data page)
Lithium salts
Tantalates
Nonlinear optical materials
Piezoelectric materials
Crystals
|
https://en.wikipedia.org/wiki/El%20Farol%20Bar%20problem
|
The El Farol bar problem is a problem in game theory. Every Thursday night, a fixed population want to go have fun at the El Farol Bar, unless it's too crowded.
If less than 60% of the population go to the bar, they'll all have more fun than if they stayed home.
If more than 60% of the population go to the bar, they'll all have less fun than if they stayed home.
Everyone must decide at the same time whether to go or not, with no knowledge of others' choices.
Paradoxically, if everyone uses a deterministic pure strategy which is symmetric (same strategy for all players), it is guaranteed to fail no matter what it is. If the strategy suggests it will not be crowded, everyone will go, and thus it will be crowded; but if the strategy suggests it will be crowded, nobody will go, and thus it will not be crowded, but again no one will have fun. Better success is possible with a probabilistic mixed strategy. For the single-stage El Farol Bar problem, there exists a unique symmetric Nash equilibrium mixed strategy where all players choose to go to the bar with a certain probability, determined according to the number of players, the threshold for crowdedness, and the relative utility of going to a crowded or uncrowded bar compared to staying home. There are also multiple Nash equilibria in which one or more players use a pure strategy, but these equilibria are not symmetric. Several variants are considered in Game Theory Evolving by Herbert Gintis.
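As an illustration of the symmetric mixed equilibrium, the sketch below numerically solves for the probability p at which going and staying home have equal expected utility. The payoffs (1 for an uncrowded bar, −1 for a crowded one, 0 for staying home) and the population size are illustrative assumptions, not part of the original formulation.

```python
from math import comb

def expected_go_payoff(p, n=100, threshold=0.6, good=1.0, bad=-1.0):
    """Expected payoff of going when each of the other n-1 players
    goes independently with probability p."""
    total = 0.0
    for k in range(n):  # k = number of *other* players who go
        prob = comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
        total += prob * (good if k + 1 < threshold * n else bad)
    return total

def equilibrium_probability(stay=0.0, tol=1e-9):
    """Bisect for the p that makes going and staying equally attractive.
    The expected payoff of going decreases as p grows, so bisection works."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_go_payoff(mid) > stay:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"each player goes with probability ~{equilibrium_probability():.3f}")
```

For a large population the equilibrium probability approaches the crowding threshold, so expected attendance hovers just below 60%.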
In some variants of the problem, the players are allowed to communicate before deciding to go to the bar. However, they are not required to tell the truth.
Named after a bar in Santa Fe, New Mexico, the problem was created in 1994 by W. Brian Arthur. However, under another name, the problem was formulated and solved dynamically six years earlier by B. A. Huberman and T. Hogg.
Minority game
A variant is the Minority Game proposed by Yi-Cheng Zhang and Damien Challet from the University of Fribourg. An odd number of players each must choose one of two choices independently at each turn, and the players who end up on the minority side win.
|
https://en.wikipedia.org/wiki/Pentation
|
In mathematics, pentation (or hyper-5) is the next hyperoperation (infinite sequence of arithmetic operations) after tetration and before hexation. It is defined as iterated (repeated) tetration (assuming right-associativity), just as tetration is iterated right-associative exponentiation. It is a binary operation defined with two numbers a and b, where a is tetrated to itself b − 1 times. For instance, using hyperoperation notation for pentation and tetration, 2[5]3 means 2 tetrated to itself 2 times, or 2[4](2[4]2). This can then be reduced to 2[4](2[4]2) = 2[4]4 = 2^(2^(2^2)) = 2^16 = 65536.
Etymology
The word "pentation" was coined by Reuben Goodstein in 1947 from the roots penta- (five) and iteration. It is part of his general naming scheme for hyperoperations.
Notation
There is little consensus on the notation for pentation; as such, there are many different ways to write the operation. However, some are more used than others, and some have clear advantages or disadvantages compared to others.
Pentation can be written as a hyperoperation as a[5]b. In this format, a[3]b may be interpreted as the result of repeatedly applying the function x → a[2]x, for b repetitions, starting from the number 1. Analogously, a[4]b, tetration, represents the value obtained by repeatedly applying the function x → a[3]x, for b repetitions, starting from the number 1, and the pentation a[5]b represents the value obtained by repeatedly applying the function x → a[4]x, for b repetitions, starting from the number 1. This will be the notation used in the rest of the article.
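The iterated-application description above translates directly into a recursion; the following is a minimal Python sketch (the function name hyper is ours) covering hyperoperations from multiplication upward.

```python
def hyper(n, a, b):
    """a[n]b for n >= 2: for n > 2, apply x -> a[n-1]x to the number 1,
    b times, exactly as described above."""
    if n == 2:
        return a * b
    if b == 0:
        return 1  # zero applications leave the starting value 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

assert hyper(3, 2, 10) == 2**10   # exponentiation
assert hyper(4, 2, 2) == 4        # tetration: 2^2
assert hyper(5, 2, 3) == 65536    # pentation: 2[4](2[4]2) = 2[4]4
```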
In Knuth's up-arrow notation, a[5]b is represented as a↑↑↑b or a↑³b. In this notation, a↑b represents the exponentiation function a^b and a↑↑b represents tetration. The operation can be easily adapted for hexation by adding another arrow.
In Conway chained arrow notation, pentation is a[5]b = a → b → 3.
Another proposed notation is , though this is not extensible to higher hyperoperations.
Examples
The values of the pentation function may also be obtained from the values in the fourth row of the table of values of a variant of the Ackermann function: if is defined by the Ackermann recu
|
https://en.wikipedia.org/wiki/List%20of%20chaotic%20maps
|
In mathematics, a chaotic map is a map (namely, an evolution function) that exhibits some sort of chaotic behavior. Maps may be parameterized by a discrete-time or a continuous-time parameter. Discrete maps usually take the form of iterated functions. Chaotic maps often occur in the study of dynamical systems.
Chaotic maps often generate fractals. Although a fractal may be constructed by an iterative procedure, some fractals are studied in and of themselves, as sets rather than in terms of the map that generates them. This is often because there are several different iterative procedures to generate the same fractal.
List of chaotic maps
List of fractals
Cantor set
de Rham curve
Gravity set, or Mitchell-Green gravity set
Julia set - derived from complex quadratic map
Koch snowflake - special case of de Rham curve
Lyapunov fractal
Mandelbrot set - derived from complex quadratic map
Menger sponge
Newton fractal
Nova fractal - derived from Newton fractal
Quaternionic fractal - three dimensional complex quadratic map
Sierpinski carpet
Sierpinski triangle
References
Chaotic maps
|
https://en.wikipedia.org/wiki/Tent%20map
|
In mathematics, the tent map with parameter μ is the real-valued function fμ defined by

fμ(x) := μ min{x, 1 − x}

the name being due to the tent-like shape of the graph of fμ. For the values of the parameter μ within 0 and 2, fμ maps the unit interval [0, 1] into itself, thus defining a discrete-time dynamical system on it (equivalently, a recurrence relation). In particular, iterating a point x0 in [0, 1] gives rise to a sequence xn:

xn+1 = fμ(xn) = μ min{xn, 1 − xn}
where μ is a positive real constant. Choosing for instance the parameter μ = 2, the effect of the function fμ may be viewed as the result of the operation of folding the unit interval in two, then stretching the resulting interval [0, 1/2] to get again the interval [0, 1]. Iterating the procedure, any point x0 of the interval assumes new subsequent positions as described above, generating a sequence xn in [0, 1].
The case of the tent map is a non-linear transformation of both the bit shift map and the r = 4 case of the logistic map.
Behaviour
The tent map with parameter μ = 2 and the logistic map with parameter r = 4 are topologically conjugate, and thus the behaviours of the two maps are in this sense identical under iteration.
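This conjugacy can be checked numerically; the sketch below uses the conjugating map h(x) = sin²(πx/2), a standard choice that sends tent-map orbits to logistic-map orbits.

```python
import numpy as np

def tent(x, mu=2.0):
    return mu * np.minimum(x, 1.0 - x)

def logistic(y, r=4.0):
    return r * y * (1.0 - y)

def h(x):
    """Conjugating homeomorphism h(x) = sin^2(pi x / 2) on [0, 1]."""
    return np.sin(np.pi * x / 2.0) ** 2

x = np.linspace(0.0, 1.0, 1001)
# Topological conjugacy: h(tent(x)) == logistic(h(x)) on [0, 1].
assert np.allclose(h(tent(x)), logistic(h(x)))
```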
Depending on the value of μ, the tent map demonstrates a range of dynamical behaviour ranging from predictable to chaotic.
If μ is less than 1 the point x = 0 is an attractive fixed point of the system for all initial values of x i.e. the system will converge towards x = 0 from any initial value of x.
If μ is 1 all values of x less than or equal to 1/2 are fixed points of the system.
If μ is greater than 1 the system has two fixed points, one at 0, and the other at μ/(μ + 1). Both fixed points are unstable, i.e. a value of x close to either fixed point will move away from it, rather than towards it. For example, when μ is 1.5 there is a fixed point at x = 0.6 (since 1.5(1 − 0.6) = 0.6) but starting at x = 0.61 we get 0.61 → 0.585 → 0.6225 → 0.56625 → 0.650625 → …, moving away from the fixed point.
If μ is between 1 and the square root of 2 the system maps a set of intervals between μ − μ²/2 and μ/2 to the
|
https://en.wikipedia.org/wiki/Costas%20array
|
In mathematics, a Costas array can be regarded geometrically as a set of n points, each at the center of a square in an n×n square tiling such that each row or column contains only one point, and all of the n(n − 1)/2 displacement vectors between each pair of dots are distinct. This results in an ideal "thumbtack" auto-ambiguity function, making the arrays useful in applications such as sonar and radar. Costas arrays can be regarded as two-dimensional cousins of the one-dimensional Golomb ruler construction, and, as well as being of mathematical interest, have similar applications in experimental design and phased array radar engineering.
Costas arrays are named after John P. Costas, who first wrote about them in a 1965 technical report. Independently, Edgar Gilbert also wrote about them in the same year, publishing what is now known as the logarithmic Welch method of constructing Costas arrays.
The general enumeration of Costas arrays is an open problem in computer science; no algorithm for solving it in polynomial time is known.
Numerical representation
A Costas array may be represented numerically as an n×n array of numbers, where each entry is either 1, for a point, or 0, for the absence of a point. When interpreted as binary matrices, these arrays of numbers have the property that, since each row and column has the constraint that it only has one point on it, they are therefore also permutation matrices. Thus, the Costas arrays for any given n are a subset of the permutation matrices of order n.
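The defining property is easy to test directly from the one-dimensional representation described next; the sketch below (with a helper name of our choosing) checks that all pairwise displacement vectors are distinct.

```python
from itertools import combinations

def is_costas(perm):
    """perm[i] is the column of the single dot in row i (0-indexed).
    The Costas property holds iff all displacement vectors between
    pairs of dots are distinct."""
    points = list(enumerate(perm))
    vectors = [(j - i, q - p) for (i, p), (j, q) in combinations(points, 2)]
    return len(vectors) == len(set(vectors))

assert is_costas([2, 1, 3, 4])      # the order-4 example given below
assert not is_costas([1, 2, 3, 4])  # the identity repeats the vector (1, 1)
```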
Arrays are usually described as a series of indices specifying the column for any row. Since it is given that any column has only one point, it is possible to represent an array one-dimensionally. For instance, the following is a valid Costas array of order N = 4:

0 1 0 0
1 0 0 0
0 0 1 0
0 0 0 1

or simply

2 1 3 4

There are dots at coordinates: (1,2), (2,1), (3,3), (4,4)
Since the x-coordinate increases linearly, we can write this in shorthand as the set of al
|
https://en.wikipedia.org/wiki/Stability%20radius
|
In mathematics, the stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The picture of this intuitive notion is a nominal point p̂ sitting inside a shaded region P(s) of points that satisfy the stability conditions, within the space P of all possible values of the object p; the stability radius is the radius of the largest ball centered at p̂ that fits inside P(s).
Abstract definition
The formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful:

ρ̂(p̂) = max {ρ ≥ 0 : p ∈ P(s) for all p ∈ B(ρ, p̂)}

where B(ρ, p̂) denotes a closed ball of radius ρ in P centered at p̂.
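A minimal numerical sketch of this definition: here the object is a matrix, the assumed stability condition is that all eigenvalues lie in the open left half-plane, and bisection over random perturbation directions gives an estimate (an optimistic one, since sampling can miss the worst direction).

```python
import numpy as np

rng = np.random.default_rng(0)

def is_stable(A):
    """Assumed stability condition: eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def estimated_stability_radius(A, n_dirs=500, hi=10.0, tol=1e-4):
    """Bisect on rho: accept rho if every sampled perturbation of
    Frobenius norm rho keeps A + rho*D stable."""
    dirs = rng.standard_normal((n_dirs, *A.shape))
    dirs /= np.linalg.norm(dirs, axis=(1, 2), keepdims=True)
    lo = 0.0
    while hi - lo > tol:
        rho = (lo + hi) / 2
        if all(is_stable(A + rho * D) for D in dirs):
            lo = rho
        else:
            hi = rho
    return (lo + hi) / 2

A = np.diag([-1.0, -2.0])
# Slightly above 1: the eigenvalue at -1 is closest to instability,
# and sampling misses the exact worst-case direction.
print(estimated_stability_radius(A))
```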
History
The concept appears to have been invented in the early 1960s. In the 1980s it became popular in control theory and optimization. It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest.
Relation to Wald's maximin model
It was shown that the stability radius model is an instance of Wald's maximin model. That is,
where
The large penalty () is a device to force the player not to perturb the nominal value beyond the stability radius of the system. It is an indication that the stability model is a model of local stability/robustness, rather than a global one.
Info-gap decision theory
Info-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. But it has been shown that its robustness model, namely
is actually a stability radius model characterized by a simple stability requirement of the form where denotes the decision under consideration, denotes the parameter of interest, denotes the estimate of the true value of and denotes a ball of radius centered at .
Since stability radius models are designed to deal with smal
|
https://en.wikipedia.org/wiki/Triangle%20group
|
In mathematics, a triangle group is a group that can be realized geometrically by sequences of reflections across the sides of a triangle. The triangle can be an ordinary Euclidean triangle, a triangle on the sphere, or a hyperbolic triangle. Each triangle group is the symmetry group of a tiling of the Euclidean plane, the sphere, or the hyperbolic plane by congruent triangles called Möbius triangles, each one a fundamental domain for the action.
Definition
Let l, m, n be integers greater than or equal to 2. A triangle group Δ(l,m,n) is a group of motions of the Euclidean plane, the two-dimensional sphere, the real projective plane, or the hyperbolic plane generated by the reflections in the sides of a triangle with angles π/l, π/m and π/n (measured in radians). The product of the reflections in two adjacent sides is a rotation by the angle which is twice the angle between those sides, 2π/l, 2π/m and 2π/n. Therefore, if the generating reflections are labeled a, b, c and the angles between them in the cyclic order are as given above, then the following relations hold:

a² = b² = c² = 1, (ab)^l = (bc)^m = (ca)^n = 1
It is a theorem that all other relations between a, b, c are consequences of these relations and that Δ(l,m,n) is a discrete group of motions of the corresponding space. Thus a triangle group is a reflection group that admits a group presentation

Δ(l,m,n) = ⟨ a, b, c | a² = b² = c² = (ab)^l = (bc)^m = (ca)^n = 1 ⟩
An abstract group with this presentation is a Coxeter group with three generators.
Classification
Given any natural numbers l, m, n > 1 exactly one of the classical two-dimensional geometries (Euclidean, spherical, or hyperbolic) admits a triangle with the angles (π/l, π/m, π/n), and the space is tiled by reflections of the triangle. The sum of the angles of the triangle determines the type of the geometry by the Gauss–Bonnet theorem: it is Euclidean if the angle sum is exactly π, spherical if it exceeds π and hyperbolic if it is strictly smaller than π. Moreover, any two triangles with the given angles are congruent. Each triangle group determines a
|
https://en.wikipedia.org/wiki/SUNMOS
|
SUNMOS (Sandia/UNM Operating System) is an operating system jointly developed by Sandia National Laboratories and the Computer Science Department at the University of New Mexico. The goal of the project, started in 1991, was to develop a highly portable, yet efficient, operating system for massively parallel distributed-memory systems.
SUNMOS uses a single-tasking kernel and does not provide demand paging. It takes control of all nodes in the distributed system. Once an application is loaded and running, it can manage all the available memory on a node and use the full resources provided by the hardware. Applications are started and controlled from a process called yod that runs on the host node. Yod runs on a Sun frontend for the nCUBE 2, and on a service node on the Intel Paragon.
SUNMOS was developed as a reaction to the heavyweight version of OSF/1 that ran as a single-system image on the Paragon and consumed 8–12 MB of the 16 MB available on each node, leaving little memory available for the compute applications. In comparison, SUNMOS used 250 KB of memory per node. Additionally, the overhead of OSF/1 limited the network bandwidth to 35 MB/s, while SUNMOS was able to use 170 MB/s of the peak 200 MB/s available.
The ideas in SUNMOS inspired PUMA, a multitasking variant that only ran on the i860 Paragon. Among the extensions in PUMA was the Portals API, a scalable, high performance message passing API. Intel ported PUMA and Portals to the Pentium Pro based ASCI Red system and named it Cougar. Cray ported Cougar to the Opteron based Cray XT3 and renamed it Catamount. A version of Catamount was released to the public named OpenCatamount.
In 2009, the Catamount lightweight kernel was selected for an R&D 100 Award.
See also
Compute Node Linux
CNK operating system
References
External links
SUNMOS FTP site
A humorous field guide to differences between SUNMOS and OSF
OpenCatamount.
Supercomputer operating systems
Sandia National Laboratories
|
https://en.wikipedia.org/wiki/SGI%20Crimson
|
The IRIS Crimson (code-named Diehard2) is a Silicon Graphics (SGI) computer released in 1992. It is the world's first 64-bit workstation.
Crimson is a member of Silicon Graphics's SGI IRIS 4D series of deskside systems; it is also known as the 4D/510 workstation. It is similar to other SGI IRIS 4D deskside workstations, and can use a wide range of graphics options (up to RealityEngine). It is also available as a file server with no graphics.
This machine makes a brief appearance in the movie Jurassic Park (1993) where Lex uses the machine to navigate the IRIX filesystem in 3D using the application fsn to restore power to the compound. The next year, Silicon Graphics released a rebadged, limited edition Crimson R4400/VGXT called the Jurassic Classic, with a special logo and SGI co-founder James H. Clark's signature on the drive door.
Features
One MIPS 100 MHz R4000 or 150 MHz R4400 processor
Choice of seven high performance 3D graphics subsystems
Up to 256 MB memory and internal disk capacity up to 7.2 GB, expandable to greater than 72 GB using additional enclosures
I/O subsystem includes four VMEbus expansion slots, Ethernet and two SCSI channels with disk striping support
Crimson memory is unique to this model.
References
External links
IRIS Crimson
Crimson
64-bit computers
|
https://en.wikipedia.org/wiki/Popular%20mathematics
|
Popular mathematics is mathematical presentation aimed at a general audience. Sometimes this is in the form of books which require no mathematical background and in other cases it is in the form of expository articles written by professional mathematicians to reach out to others working in different areas.
Notable works of popular mathematics
Some of the most prolific popularisers of mathematics include Keith Devlin, Rintu Nath, Martin Gardner, and Ian Stewart. Titles by these authors can be found on their respective pages.
On zero
On infinity
Rucker, Rudy (1982), Infinity and the Mind: The Science and Philosophy of the Infinite; Princeton, N.J.: Princeton University Press. .
On constants
On complex numbers
On the Riemann hypothesis
On recently solved problems
On classification of finite simple groups
On higher dimensions
Rucker, Rudy (1984), The Fourth Dimension: Toward a Geometry of Higher Reality; Houghton Mifflin Harcourt.
On introduction to mathematics for the general reader
Biographies
Magazines and journals
Popular science magazines such as New Scientist and Scientific American sometimes carry articles on mathematics.
Plus Magazine is a free online magazine run under the Millennium Mathematics Project at the University of Cambridge.
The journals listed below can be found in many university libraries.
American Mathematical Monthly is designed to be accessible to a wide audience.
The Mathematical Gazette contains letters, book reviews and expositions of attractive areas of mathematics.
Mathematics Magazine offers lively, readable, and appealing exposition on a wide range of mathematical topics.
The Mathematical Intelligencer is a mathematical journal that aims at a conversational and scholarly tone.
Notices of the AMS - Each issue contains one or two expository articles that describe current developments in mathematical research, written by professional mathematicians. The Notices also carries articles on the history of mathemati
|
https://en.wikipedia.org/wiki/156%20%28number%29
|
156 (one hundred [and] fifty-six) is the natural number following 155 and preceding 157.
In mathematics
156 is an abundant number, a pronic number, a dodecagonal number, and a refactorable number.
156 is the number of graphs on 6 unlabeled nodes.
156 is a repdigit in base 5 (1111), and also in bases 25, 38, 51, 77, and 155.
156 degrees is the internal angle of a pentadecagon.
In the military
Convoy HX-156 was the 156th of the numbered series of World War II HX convoys of merchant ships from Halifax, Nova Scotia, to Liverpool
The Fieseler Fi 156 Storch was a small German liaison aircraft during World War II
Several United States Navy ships numbered 156 also served during the World Wars, among them a T2 tanker and a cargo ship in World War II and a fast civilian yacht taken into naval service in World War I.
In music
156, a song by the Danish rock band Mew appearing in both their 2000 album Half the World Is Watching Me and their 2003 album Frengers.
NM 156, a 1984 song by the heavy metal band Queensrÿche from the album The Warning
156, a song by the Polish Black Metal band Blaze of Perdition from the 2010 album Towards the Blaze of Perdition
In transportation
The Alfa Romeo 156 car produced from 1997 to 2006.
The Ferrari 156 was a racecar made by Ferrari from 1961 to 1963.
The Ferrari 156/85 was a Formula One car in the 1985 Formula One season.
The Class 156 "Super Sprinter" DMU train.
The Midland Railway 156 Class, a 2-4-0 tender engine built in the United Kingdom between 1866 and 1874.
London Buses route 156.
Martin 156, known as the Russian clipper, was a large flying boat aircraft intended for transoceanic service
|
https://en.wikipedia.org/wiki/Heun%20function
|
In mathematics, the local Heun function is the solution of Heun's differential equation that is holomorphic and 1 at the singular point z = 0. The local Heun function is called a Heun function, denoted Hf, if it is also regular at z = 1, and is called a Heun polynomial, denoted Hp, if it is regular at all three finite singular points z = 0, 1, a.
Heun's equation
Heun's equation is a second-order linear ordinary differential equation (ODE) of the form

d²w/dz² + (γ/z + δ/(z − 1) + ε/(z − a)) dw/dz + (αβz − q)/(z(z − 1)(z − a)) w = 0

The condition ε = α + β − γ − δ + 1 is taken so that the characteristic exponents for the regular singularity at infinity are α and β (see below).
The complex number q is called the accessory parameter. Heun's equation has four regular singular points: 0, 1, a and ∞ with exponents (0, 1 − γ), (0, 1 − δ), (0, 1 − ϵ), and (α, β). Every second-order linear ODE on the extended complex plane with at most four regular singular points, such as the Lamé equation or the hypergeometric differential equation, can be transformed into this equation by a change of variable.
Coalescence of various regular singularities of the Heun equation into irregular singularities give rise to several confluent forms of the equation, as shown in the table below.
{| class="wikitable"
|+Forms of the Heun Equation
|-
! Form !! Singularities !! Equation
|-
| General
| 0, 1, a, ∞
|
|-
| Confluent
| 0, 1, ∞ (irregular, rank 1)
|
|-
| Doubly Confluent
| 0 (irregular, rank 1), ∞ (irregular, rank 1)
|
|-
| Biconfluent
| 0, ∞ (irregular, rank 2)
|
|-
| Triconfluent
| ∞ (irregular, rank 3)
|
|}
q-analog
The q-analog of Heun's equation has been discovered by and studied by .
Symmetries
Heun's equation has a group of symmetries of order 192, isomorphic to the Coxeter group of the Coxeter diagram D4, analogous to the 24 symmetries of the hypergeometric differential equations obtained by Kummer.
The symmetries fixing the local Heun function form a group of order 24 isomorphic to the symmetric group on 4 points, so there are 192/24 = 8 = 2 × 4 essentially diffe
|
https://en.wikipedia.org/wiki/Oleoresin
|
Oleoresins are semi-solid extracts composed of resin and essential or fatty oil, obtained by evaporation of the solvents used for their production. The oleoresin of conifers is known as crude turpentine or gum turpentine, which consists of oil of turpentine and rosin.
Properties
In contrast to essential oils obtained by steam distillation, oleoresins abound in heavier, less volatile and lipophilic compounds, such as resins, waxes, fats and fatty oils. Gummo-oleoresins (oleo-gum resins, gum resins) occur mostly as crude balsams and contain also water-soluble gums. Processing of oleoresins is conducted on a large scale, especially in China (400,000 tons per year in the 1990s), but the technology is too labor-intensive to be viable in countries with high labor costs, such as the US.
Oleoresins are prepared from spices, such as basil, capsicum (paprika), cardamom, celery seed, cinnamon bark, clove bud, fenugreek, fir balsam, ginger, jambu, labdanum, mace, marjoram, nutmeg, parsley, pepper (black/white), pimenta (allspice), rosemary, sage, savory (summer/winter), thyme, turmeric, vanilla, and West Indian bay leaves. The solvents used are nonaqueous and may be polar (alcohols) or nonpolar (hydrocarbons, carbon dioxide).
Oleoresins are similar to perfumery concretes, obtained especially from flowers, and to perfumery resinoids, which are prepared also from animal secretions.
Use
Most oleoresins are used as flavors and perfumes; some are used medicinally (e.g., oleoresin of dry Cannabis infructescence). Oleoresin capsicum is commonly used as a basis for tear gases. There are also known uses in the manufacture of soaps and cosmetics, as well as coloring agents for foods.
References
Flavors
Resins
|
https://en.wikipedia.org/wiki/Luis%20Caffarelli
|
Luis Ángel Caffarelli (; born December 8, 1948) is an Argentine–American mathematician. He studies partial differential equations and their applications.
Career
Caffarelli was born and grew up in Buenos Aires. He obtained his Master of Science (1968) and Ph.D. (1972) at the University of Buenos Aires. His Ph.D. advisor was Calixto Calderón. He currently holds the Sid Richardson Chair at the University of Texas at Austin. He also has been a professor at the University of Minnesota, the University of Chicago, and the Courant Institute of Mathematical Sciences at New York University. From 1986 to 1996 he was a professor at the Institute for Advanced Study in Princeton.
Research
Caffarelli received recognition with "The regularity of free boundaries in higher dimensions" published in 1977 in Acta Mathematica. He is considered an expert in free boundary problems and nonlinear partial differential equations. He proved several regularity results for fully nonlinear elliptic equations including the Monge–Ampère equation, and also contributed to homogenization. He is also interested in integro-differential equations.
One of his most cited results regards the partial regularity of suitable weak solutions of the Navier–Stokes equations, obtained in 1982 in collaboration with Louis Nirenberg and Robert V. Kohn.
Awards and recognition
In 1991 he was elected to the U.S. National Academy of Sciences. He was awarded honorary doctorates by the École Normale Supérieure, Paris, the University of Notre Dame, the Universidad Autónoma de Madrid, and the Universidad de La Plata, Argentina. He received the Bôcher Memorial Prize in 1984. He is listed as an ISI highly cited researcher.
In 2003 Konex Foundation from Argentina granted him the Diamond Konex Award, one of the most prestigious awards in Argentina, as the most important Scientist of his country in the last decade. In 2005, he received the prestigious Rolf Schock Prize of the Royal Swedish Academy of Sciences "for his
|
https://en.wikipedia.org/wiki/Lebesgue%20constant
|
In mathematics, the Lebesgue constants (depending on a set of nodes and of its size) give an idea of how good the interpolant of a function (at the given nodes) is in comparison with the best polynomial approximation of the function (the degree of the polynomials is fixed). The Lebesgue constant for polynomials of degree at most n and for the set of nodes T is generally denoted by Λn(T). These constants are named after Henri Lebesgue.
Definition
We fix the interpolation nodes x0, …, xn and an interval [a, b] containing all the interpolation nodes. The process of interpolation maps the function f to a polynomial p. This defines a mapping X from the space C([a, b]) of all continuous functions on [a, b] to itself. The map X is linear and it is a projection on the subspace Πn of polynomials of degree n or less.
The Lebesgue constant Λn(T) is defined as the operator norm of X. This definition requires us to specify a norm on C([a, b]). The uniform norm is usually the most convenient.
Properties
The Lebesgue constant bounds the interpolation error: let p* denote the best approximation of f among the polynomials of degree n or less. In other words, p* minimizes ‖p − f‖ among all p in Πn. Then

‖f − X(f)‖ ≤ (Λn(T) + 1) ‖f − p*‖

We will here prove this statement with the maximum norm. By the triangle inequality,

‖f − X(f)‖ ≤ ‖f − p*‖ + ‖p* − X(f)‖

But X is a projection on Πn, so

p* − X(f) = X(p*) − X(f) = X(p* − f).

This finishes the proof since ‖X(p* − f)‖ ≤ Λn(T) ‖p* − f‖. Note that this relation comes also as a special case of Lebesgue's lemma.
In other words, the interpolation polynomial is at most a factor Λn(T) + 1 worse than the best possible approximation. This suggests that we look for a set of interpolation nodes with a small Lebesgue constant.
The Lebesgue constant can be expressed in terms of the Lagrange basis polynomials:

Λn(T) = max_{x ∈ [a, b]} Σ_{j=0}^{n} |l_j(x)|

In fact, we have the Lebesgue function

λn(x) = Σ_{j=0}^{n} |l_j(x)|

and the Lebesgue constant (or Lebesgue number) for the grid is its maximum value

Λn(T) = max_{x ∈ [a, b]} λn(x)
Nevertheless, it is not easy to find an explicit expression for Λn(T).
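It is easy, though, to estimate Λn(T) numerically by evaluating the Lebesgue function on a dense grid; a minimal Python sketch:

```python
import numpy as np

def lebesgue_constant(nodes, grid_size=10_000):
    """Estimate max over [-1, 1] of the Lebesgue function
    lambda_n(x) = sum_j |l_j(x)| for the given nodes."""
    x = np.linspace(-1.0, 1.0, grid_size)
    lam = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        lj = np.ones_like(x)
        for k, xk in enumerate(nodes):
            if k != j:
                lj *= (x - xk) / (xj - xk)
        lam += np.abs(lj)
    return lam.max()

n = 10
equidistant = np.linspace(-1.0, 1.0, n + 1)
chebyshev = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
print(lebesgue_constant(equidistant))  # roughly 30: grows exponentially in n
print(lebesgue_constant(chebyshev))    # roughly 2.5: grows only logarithmically
```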
Minimal Lebesgue constants
In the case of equidistant nodes, the Lebesgue constant grows exponentially. More precisely, we hav
|
https://en.wikipedia.org/wiki/Surrey%20Satellite%20Technology
|
Surrey Satellite Technology Ltd, or SSTL, is a company involved in the manufacture and operation of small satellites. A spin-off company of the University of Surrey, it is presently wholly owned by Airbus Defence and Space.
The company began out of research efforts centred upon amateur radio satellites, known by the UoSAT (University of Surrey Satellite) name or by an OSCAR (Orbital Satellite Carrying Amateur Radio) designation. SSTL was founded in 1985, following successful trials on the use of commercial off-the-shelf (COTS) components on satellites, culminating in the UoSat-1 test satellite. It funds research projects with the university's Surrey Space Centre, which does research into satellite and space topics.
In April 2008, the University of Surrey agreed to sell its majority share in the company to European multinational conglomerate EADS Astrium. In August 2008, SSTL opened a US subsidiary, which included both offices and a production site in Denver, Colorado; in 2017, the company decided to discontinue manufacturing activity in the US, winding up this subsidiary.
SSTL was awarded the Queen's Award for Technological Achievement in 1998, and the Queen's Awards for Enterprise in 2005. In 2006 SSTL won the Times Higher Education award for outstanding contribution to innovation and technology. In 2009, SSTL ranked 89 out of the 997 companies that took part in the Sunday Times Top 100 companies to work for.
In 2020, SSTL started the creation of a telecommunications spacecraft called Lunar Pathfinder for lunar missions. It will be launched in 2025 and used for data transmission to Earth.
History
Background and early years
During the early decades of the Cold War era, access to space was effectively the privilege of a handful of superpowers; by the 1970s, only the most affluent of countries could afford to engage in space programmes due to the extreme complexity and expense involved. Despite the exorbitant costs to produce and launch, early satellites could only
|
https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20regular%20languages
|
In the theory of formal languages, the pumping lemma for regular languages is a lemma that describes an essential property of all regular languages. Informally, it says that all sufficiently long strings in a regular language may be pumped—that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language.
Specifically, the pumping lemma says that for any regular language L there exists a constant p such that any string w in L with length at least p can be split into three substrings, w = xyz, with y being non-empty, such that the strings xyⁿz constructed by repeating y zero or more times are still in L. This process of repetition is known as "pumping". Moreover, the pumping lemma guarantees that the length of xy will be at most p, imposing a limit on the ways in which w may be split.
Languages with a finite number of strings vacuously satisfy the pumping lemma by having p equal to the maximum string length in L plus one. By doing so, zero strings in L have length greater than or equal to p.
The pumping lemma is useful for disproving the regularity of a specific language in question. It was first proven by Michael Rabin and Dana Scott in 1959, and rediscovered shortly after by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages.
Formal statement
Let L be a regular language. Then there exists an integer p ≥ 1 depending only on L such that every string w in L of length at least p (p is called the "pumping length") can be written as w = xyz (i.e., w can be divided into three substrings), satisfying the following conditions:

(1) |y| ≥ 1
(2) |xy| ≤ p
(3) xyⁿz ∈ L for all n ≥ 0

y is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in L). (1) means the loop y to be pumped must be of length at least one, that is, not an empty string; (2) means the loop must occur within the first p characters. |x| must be smaller than p (conclusion of (1) and (2)), but apa
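The decomposition comes from the pigeonhole argument in the standard proof: running a DFA with p states over the first p symbols of w must revisit some state, and the symbols consumed around that cycle form y. A minimal Python sketch of that construction (the DFA and helper names here are our own illustration):

```python
def pumping_decomposition(delta, start, w):
    """Return x, y, z with w = xyz, |xy| <= p and |y| >= 1, by finding
    the first repeated state among the states visited while reading w."""
    state, seen = start, {start: 0}
    for i, symbol in enumerate(w):
        state = delta[state, symbol]
        if state in seen:          # pigeonhole: a state repeated
            j = seen[state]
            return w[:j], w[j:i + 1], w[i + 1:]
        seen[state] = i + 1
    raise ValueError("w is shorter than the pumping length")

# DFA over {'a'} accepting words whose length is divisible by 3 (p = 3).
delta = {(s, 'a'): (s + 1) % 3 for s in range(3)}
x, y, z = pumping_decomposition(delta, 0, 'aaaaaa')
assert (x, y, z) == ('', 'aaa', 'aaa')
# Pumping y any number of times stays in the language:
assert all(len(x + y * n + z) % 3 == 0 for n in range(5))
```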
|
https://en.wikipedia.org/wiki/Rabinovich%E2%80%93Fabrikant%20equations
|
The Rabinovich–Fabrikant equations are a set of three coupled ordinary differential equations exhibiting chaotic behaviour for certain values of the parameters. They are named after Mikhail Rabinovich and Anatoly Fabrikant, who described them in 1979.
System description
The equations are:

dx/dt = y(z − 1 + x²) + γx
dy/dt = x(3z + 1 − x²) + γy
dz/dt = −2z(α + xy)
where α, γ are constants that control the evolution of the system. For some values of α and γ, the system is chaotic, but for others it tends to a stable periodic orbit.
Danca and Chen note that the Rabinovich–Fabrikant system is difficult to analyse (due to the presence of quadratic and cubic terms) and that different attractors can be obtained for the same parameters by using different step sizes in the integration, see on the right an example of a solution obtained by two different solvers for the same parameter values and initial conditions. Also, recently, a hidden attractor was discovered in the Rabinovich–Fabrikant system.
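The sensitivity described above is easy to reproduce; the sketch below integrates the system with scipy at two different tolerance settings (standing in for different step sizes) and prints visibly different end states. Parameter values and initial conditions are those of the chaotic case discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rabinovich_fabrikant(t, u, alpha, gamma):
    x, y, z = u
    return [y * (z - 1 + x**2) + gamma * x,
            x * (3 * z + 1 - x**2) + gamma * y,
            -2 * z * (alpha + x * y)]

u0, t_span = [-1.0, 0.0, 0.5], (0.0, 100.0)
args = (1.1, 0.87)  # alpha = 1.1, gamma = 0.87

loose = solve_ivp(rabinovich_fabrikant, t_span, u0, args=args, rtol=1e-6)
tight = solve_ivp(rabinovich_fabrikant, t_span, u0, args=args,
                  rtol=1e-12, atol=1e-12)
print(loose.y[:, -1])  # the two end states differ markedly, reflecting
print(tight.y[:, -1])  # sensitive dependence on the integration accuracy
```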
Equilibrium points
The Rabinovich–Fabrikant system has five hyperbolic equilibrium points, one at the origin and four dependent on the system parameters α and γ:
where
These equilibrium points only exist for certain values of α and γ > 0.
γ = 0.87, α = 1.1
An example of chaotic behaviour is obtained for γ = 0.87 and α = 1.1 with initial conditions of (−1, 0, 0.5), see trajectory on the right. The correlation dimension was found to be 2.19 ± 0.01. The Lyapunov exponents, λ are approximately 0.1981, 0, −0.6581 and the Kaplan–Yorke dimension, DKY ≈ 2.3010
γ = 0.1
Danca and Romera showed that for γ = 0.1, the system is chaotic for α = 0.98, but progresses on a stable limit cycle for α = 0.14.
See also
List of chaotic maps
References
External links
Weisstein, Eric W. "Rabinovich–Fabrikant Equation." From MathWorld—A Wolfram Web Resource.
Chaotics Models a more appropriate approach to the chaotic graph of the system "Rabinovich–Fabrikant Equation"
Chaotic maps
Equations
|
https://en.wikipedia.org/wiki/Java%20Card
|
Java Card is a software technology that allows Java-based applications (applets) to be run securely on smart cards and, more generally, on similar secure small-memory-footprint devices, which are called "secure elements" (SE). Today, a secure element is not limited to smart card and other removable cryptographic token form factors; embedded SEs soldered onto a device board and new security designs embedded into general-purpose chips are also widely used. Java Card addresses this hardware fragmentation and these specificities while retaining the code portability brought forward by Java.
Java Card is the tiniest of Java platforms targeted for embedded devices. Java Card gives the user the ability to program the devices and make them application specific. It is widely used in different markets: wireless telecommunications within SIM cards and embedded SIM, payment within banking cards and NFC mobile payment, and identity documents such as identity cards, healthcare cards, and passports. Several IoT products, such as gateways, also use Java Card based products to secure communications with a cloud service, for instance.
The first Java Card was introduced in 1996 by Schlumberger's card division which later merged with Gemplus to form Gemalto. Java Card products are based on the specifications by Sun Microsystems (later a subsidiary of Oracle Corporation). Many Java card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).
The main design goals of the Java Card technology are portability, security and backward compatibility.
Portability
Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine), and a well-defined runtime library, which largely abstrac
|
https://en.wikipedia.org/wiki/Q-theta%20function
|
In mathematics, the q-theta function (or modified Jacobi theta function) is a type of q-series which is used to define elliptic hypergeometric series.
It is given by

θ(z; q) = Π_{n=0}^{∞} (1 − zqⁿ)(1 − qⁿ⁺¹/z)

where one takes 0 ≤ |q| < 1. It obeys the identities

θ(z; q) = θ(q/z; q) = −z θ(1/z; q)

It may also be expressed as:

θ(z; q) = (z; q)∞ (q/z; q)∞

where (a; q)∞ is the q-Pochhammer symbol.
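A truncated form of the defining product converges quickly for |q| < 1 and lets one verify the identities numerically; a minimal Python sketch:

```python
import numpy as np

def qtheta(z, q, terms=200):
    """Truncated product theta(z; q) = prod_{n>=0} (1 - z q^n)(1 - q^(n+1)/z)."""
    n = np.arange(terms)
    return np.prod((1 - z * q**n) * (1 - q**(n + 1) / z))

z, q = 0.3 + 0.2j, 0.5
# The identities quoted above:
assert np.isclose(qtheta(z, q), qtheta(q / z, q))
assert np.isclose(qtheta(z, q), -z * qtheta(1 / z, q))
```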
See also
elliptic hypergeometric series
Jacobi theta function
Ramanujan theta function
References
Q-analogs
Theta functions
|
https://en.wikipedia.org/wiki/Mixminion
|
Mixminion is the standard implementation of the Type III anonymous remailer protocol. Mixminion can send and receive anonymous e-mail.
Mixminion uses a mix network architecture to provide strong anonymity, and prevent eavesdroppers and other attackers from linking senders and recipients. Volunteers run servers (called "mixes") that receive messages, decrypt them, re-order them, and re-transmit them toward their eventual destination. Every e-mail passes through several mixes so that no single mix can link message senders with recipients.
To send an anonymous message, mixminion breaks it into uniform-sized chunks (also called "packets"), pads the packets to a uniform size, and chooses a path through the mix network for each packet. The software encrypts every packet with the public keys for each server in its path, one by one. When it is time to transmit a packet, mixminion sends it to the first mix in the path. The first mix decrypts the packet, learns which mix will receive the packet, and relays it. Eventually, the packet arrives at a final (or "exit") mix, which sends it to the chosen recipient. Because no mix sees any more of the path besides the immediately adjacent mixes, they cannot link senders to recipients.
Mixminion supports Single-Use Reply Blocks (or SURBs) to allow anonymous recipients. A SURB encodes a half-path to a recipient, so that each mix in the sequence can unwrap a single layer of the path, and encrypt the message for the recipient. When the message reaches the recipient, the recipient can decode the message and learn which SURB was used to send it; the sender does not know which recipient has received the anonymous message.
The most current version of Mixminion Message Sender is 1.2.7 and was released on 11 February 2009.
On 2 September 2011, it was announced that the source code had been uploaded to GitHub.
See also
Anonymity
Anonymous P2P
Anonymous remailer
Cypherpunk anonymous remailer (Type I)
Mixmaster anonymous remail
|
https://en.wikipedia.org/wiki/Elliptic%20gamma%20function
|
In mathematics, the elliptic gamma function is a generalization of the q-gamma function, which is itself the q-analog of the ordinary gamma function. It is closely related to a function studied by , and can be expressed in terms of the triple gamma function. It is given by

Γ(z; p, q) = Π_{m,n=0}^{∞} (1 − p^{m+1} q^{n+1}/z) / (1 − p^m q^n z)

It obeys several identities:

Γ(pz; p, q) = θ(z; q) Γ(z; p, q)

and

Γ(qz; p, q) = θ(z; p) Γ(z; p, q)

where θ is the q-theta function.
When p = 0, it essentially reduces to the infinite q-Pochhammer symbol:

Γ(z; 0, q) = 1/(z; q)∞
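The shift identity above can be checked numerically from truncated products (reusing the q-theta truncation from the previous article); a minimal sketch:

```python
import numpy as np

def qtheta(z, q, terms=60):
    n = np.arange(terms)
    return np.prod((1 - z * q**n) * (1 - q**(n + 1) / z))

def elliptic_gamma(z, p, q, terms=60):
    """Truncated double product for Gamma(z; p, q)."""
    m, n = np.meshgrid(np.arange(terms), np.arange(terms))
    return np.prod((1 - p**(m + 1) * q**(n + 1) / z) / (1 - p**m * q**n * z))

z, p, q = 0.3 + 0.1j, 0.4, 0.5
# Gamma(pz; p, q) = theta(z; q) * Gamma(z; p, q)
assert np.isclose(elliptic_gamma(p * z, p, q),
                  qtheta(z, q) * elliptic_gamma(z, p, q))
```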
Multiplication Formula
Define
Then the following formula holds with ().
References
Gamma and related functions
Q-analogs
|
https://en.wikipedia.org/wiki/Constitutional%20growth%20delay
|
Constitutional delay of growth and puberty (CDGP) is a term describing a temporary delay in the skeletal growth and thus height of a child with no physical abnormalities causing the delay. Short stature may be the result of a growth pattern inherited from a parent (familial) or occur for no apparent reason (idiopathic). Typically at some point during childhood, growth slows down, eventually resuming at a normal rate. CDGP is the most common cause of short stature and delayed puberty.
Synonyms
Constitutional Delay of Growth and Adolescence (CDGA)
Constitutional Growth Delay (CGD)
See also
Idiopathic short stature
Failure to thrive
References
Developmental biology
Pediatrics
Sexuality and age
Human height
|
https://en.wikipedia.org/wiki/Bessel%20filter
|
In electronics and signal processing, a Bessel filter is a type of analog linear filter with a maximally flat group delay (i.e., maximally linear phase response), which preserves the wave shape of filtered signals in the passband. Bessel filters are often used in audio crossover systems.
The filter's name is a reference to German mathematician Friedrich Bessel (1784–1846), who developed the mathematical theory on which the filter is based. The filters are also called Bessel–Thomson filters in recognition of W. E. Thomson, who worked out how to apply Bessel functions to filter design in 1949.
The Bessel filter is very similar to the Gaussian filter, and tends towards the same shape as filter order increases. While the time-domain step response of the Gaussian filter has zero overshoot, the Bessel filter has a small amount of overshoot, but still much less than other common frequency-domain filters, such as Butterworth filters. It has been noted that the impulse response of Bessel–Thomson filters tends towards a Gaussian as the order of the filter is increased.
Compared to finite-order approximations of the Gaussian filter, the Bessel filter has better shaping factor, flatter phase delay, and flatter group delay than a Gaussian of the same order, although the Gaussian has lower time delay and zero overshoot.
The transfer function
A Bessel low-pass filter is characterized by its transfer function:

H(s) = θn(0) / θn(s/ω0)

where θn(s) is a reverse Bessel polynomial from which the filter gets its name and ω0 is a frequency chosen to give the desired cut-off frequency. The filter has a low-frequency group delay of 1/ω0. Since θn(0) is indeterminate by the definition of reverse Bessel polynomials, but is a removable singularity, it is defined that θn(0) = lim_{x→0} θn(x).
Bessel polynomials
The transfer function of the Bessel filter is a rational function whose denominator is a reverse Bessel polynomial, such as the following third-order example:

θ3(s) = s³ + 6s² + 15s + 15

The reverse Bessel polynomials are given by:

θn(s) = Σ_{k=0}^{n} ak s^k

where

ak = (2n − k)! / (2^{n−k} k! (n − k)!)
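A quick check of this coefficient formula in Python (helper name ours); in practice a package routine such as scipy.signal.bessel would be used to design the filter itself.

```python
from math import factorial

def reverse_bessel_coeffs(n):
    """Coefficients a_k of theta_n(s) = sum_k a_k s^k,
    with a_k = (2n - k)! / (2^(n - k) k! (n - k)!)."""
    return [factorial(2 * n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

# Third-order case quoted above: theta_3(s) = 15 + 15 s + 6 s^2 + s^3
assert reverse_bessel_coeffs(3) == [15, 15, 6, 1]
```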
Setting the cutoff attenuation
There is no sta
|
https://en.wikipedia.org/wiki/Framing%20%28construction%29
|
Framing, in construction, is the fitting together of pieces to give a structure support and shape. Framing materials are usually wood, engineered wood, or structural steel. The alternative to framed construction is generally called mass wall construction, where horizontal layers of stacked materials such as log building, masonry, rammed earth, adobe, etc. are used without framing.
Building framing is divided into two broad categories, heavy-frame construction (heavy framing) if the vertical supports are few and heavy such as in timber framing, pole building framing, or steel framing; or light-frame construction (light-framing) if the supports are more numerous and smaller, such as balloon, platform, or light-steel framing. Light-frame construction using standardized dimensional lumber has become the dominant construction method in North America and Australia due to the economy of the method; use of minimal structural material allows builders to enclose a large area at minimal cost while achieving a wide variety of architectural styles.
Modern light-frame structures usually gain strength from rigid panels (plywood and other plywood-like composites such as oriented strand board (OSB) used to form all or part of wall sections), but until recently carpenters employed various forms of diagonal bracing to stabilize walls. Diagonal bracing remains a vital interior part of many roof systems, and in-wall wind braces are required by building codes in many municipalities or by individual state laws in the United States. Special framed shear walls are becoming more common to help buildings meet the requirements of earthquake engineering and wind engineering.
History
Historically, people fitted naturally shaped wooden poles together as framework and then began using joints to connect the timbers, a method today called traditional timber framing or log framing. In the United States, timber framing was superseded by balloon framing beginning in the 1830s. Balloon framing makes
|
https://en.wikipedia.org/wiki/I-beam
|
An I-beam is any of various structural members with an I- or H-shaped cross-section. Technical terms for similar items include H-beam (for universal column, UC), w-beam (for "wide flange"), universal beam (UB), rolled steel joist (RSJ), or double-T (especially in Polish, Bulgarian, Spanish, Italian and German). I-beams are typically made of structural steel and serve a wide variety of construction uses.
The horizontal elements of the I are called flanges, and the vertical element is known as the "web".
The web resists shear forces, while the flanges resist most of the bending moment experienced by the beam. The Euler–Bernoulli beam equation shows that the I-shaped section is a very efficient form for carrying both bending and shear loads in the plane of the web. On the other hand, the cross-section has a reduced capacity in the transverse direction, and is also inefficient in carrying torsion, for which hollow structural sections are often preferred.
History
The method of producing an I-beam, as rolled from a single piece of wrought iron, was patented by Alphonse Halbou of the company Forges de la Providence in 1849.
Bethlehem Steel was a leading supplier of rolled structural steel of various cross-sections in American bridge and skyscraper work of the mid-twentieth century. Today, rolled cross-sections have been partially displaced in such work by fabricated cross-sections.
Overview
There are two standard I-beam forms:
Rolled I-beam, formed by hot rolling, cold rolling or extrusion (depending on material).
Plate girder, formed by welding (or occasionally bolting or riveting) plates.
I-beams are commonly made of structural steel but may also be formed from aluminium or other materials. A common type of I-beam is the rolled steel joist (RSJ)—sometimes incorrectly rendered as reinforced steel joist. British and European standards also specify Universal Beams (UBs) and Universal Columns (UCs). These sections have parallel flanges (shown as "W-Section" in the acco
|
https://en.wikipedia.org/wiki/Human%20Genetic%20Diversity%3A%20Lewontin%27s%20Fallacy
|
"Human Genetic Diversity: Lewontin's Fallacy" is a 2003 paper by A. W. F. Edwards. He criticises an argument first made in Richard Lewontin's 1972 article "The Apportionment of Human Diversity", that the practice of dividing humanity into races is taxonomically invalid because any given individual will often have more in common genetically with members of other population groups than with members of their own. Edwards argued that this does not refute the biological reality of race since genetic analysis can usually make correct inferences about the perceived race of a person from whom a sample is taken, and that the rate of success increases when more genetic loci are examined.
Edwards' paper was reprinted, commented upon by experts such as Noah Rosenberg, and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a 2018 anthology. Edwards' critique is discussed in a number of academic and popular science books, with varying degrees of support.
Some scholars, including Winther and Jonathan Marks, dispute the premise of "Lewontin's fallacy", arguing that Edwards' critique does not actually contradict Lewontin's argument. A 2007 paper in Genetics by David J. Witherspoon et al. concluded that the two arguments are in fact compatible, and that Lewontin's observation about the distribution of genetic differences across ancestral population groups applies "even when the most distinct populations are considered and hundreds of loci are used".
Lewontin's argument
In the 1972 study "The Apportionment of Human Diversity", Richard Lewontin performed a fixation index (FST) statistical analysis using 17 markers, including blood group proteins, from individuals across classically defined "races" (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines). He found that the majority of the total genetic variation between humans (i.e., of the 0.1% of DNA that varies between individuals), 85.4%, is
|
https://en.wikipedia.org/wiki/Win%E2%80%93win%20game
|
In game theory, a win–win game (often called a win–win scenario) is a special case of a non-zero-sum game that produces a mutually beneficial outcome for two or more parties. If a win–win scenario is not achieved, the scenario becomes a lose–lose scenario by default, since all parties lose if the venture fails. It is also called a positive-sum game and is the opposite of a zero-sum game.
Although Mary Parker Follett did not coin the term, the process of integration she described in her book Creative Experience (Longmans, Green & Co., 1924) forms the basis of what we now refer to as "win–win" conflict resolution.
See also
Abundance mentality
Game
Cooperative game
Group-dynamic game
Zero-sum game
No-win situation
References
Game theory game classes
Personal development
Negotiation
Dispute resolution
Metaphors referring to war and violence
|
https://en.wikipedia.org/wiki/Mitotoxin
|
A mitotoxin is a cytotoxic molecule targeted to specific cells by a mitogen; mitotoxins are generally found in snake venom. They mediate cell death by interfering with protein or DNA synthesis. Some mechanisms by which mitotoxins can interfere with DNA or protein synthesis include the inactivation of ribosomes or the inhibition of complexes in the mitochondrial electron transport chain. These toxins have a very high affinity and level of specificity for the receptors that they bind to. Mitotoxins bind to receptors on cell surfaces and are then internalized into cells via receptor-mediated endocytosis. Once in the endosome, the receptor releases its ligand and a mitotoxin can mediate cell death.
There are different classes of mitotoxins, each acting on a different type of cell or system. The mitotoxin classes that have been identified thus far include: interleukin-based, transferrin based, epidermal growth factor-based, nerve growth factor-based, insulin-like growth factor-I-based, and fibroblast growth factor-based mitotoxins. Because of the high affinity and specificity of mitotoxin binding, they present the possibility of creating precise therapeutic agents. A major one of these possibilities is the potential usage of growth factor-based mitotoxins as anti-neoplastic agents that can modulate the growth of melanomas.
References
Molecular biology
|
https://en.wikipedia.org/wiki/Second-order%20cybernetics
|
Second-order cybernetics, also known as the cybernetics of cybernetics, is the recursive application of cybernetics to itself and the reflexive practice of cybernetics according to such a critique. It is cybernetics where "the role of the observer is appreciated and acknowledged rather than disguised, as had become traditional in western science". Second-order cybernetics was developed between the late 1960s and mid 1970s by Heinz von Foerster and others, with key inspiration coming from Margaret Mead. Foerster referred to it as "the control of control and the communication of communication" and differentiated first order cybernetics as "the cybernetics of observed systems" and second-order cybernetics as "the cybernetics of observing systems".
The concept of second-order cybernetics is closely allied to radical constructivism, which was developed around the same time by Ernst von Glasersfeld. While it is sometimes considered a break from the earlier concerns of cybernetics, there is much continuity with previous work and it can be thought of as a distinct tradition within cybernetics, with origins in issues evident during the Macy conferences in which cybernetics was initially developed. Its concerns include autonomy, epistemology, ethics, language, reflexivity, self-consistency, self-referentiality, and self-organizing capabilities of complex systems. It has been characterised as cybernetics where "circularity is taken seriously".
Overview
Terminology
Second-order cybernetics can be abbreviated as C2 or SOC, and is sometimes referred to as the cybernetics of cybernetics, or, more rarely, the new cybernetics, or second cybernetics.
These terms are often used interchangeably, but can also stress different aspects:
Most specifically, and especially where phrased as the cybernetics of cybernetics, second-order cybernetics is the recursive application of cybernetics to itself. This is closely associated with Mead's 1967 address to the American Society for Cyberne
|
https://en.wikipedia.org/wiki/Manifold
|
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an -dimensional manifold, or -manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of -dimensional Euclidean space.
One-dimensional manifolds include lines and circles, but not lemniscates. Two-dimensional manifolds are also called surfaces. Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane.
The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described in terms of well-understood topological properties of simpler spaces. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. The concept has applications in computer-graphics given the need to associate pictures with coordinates (e.g. CT scans).
Manifolds can be equipped with additional structure. One important class of manifolds are differentiable manifolds; their differentiable structure allows calculus to be done. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity.
The study of manifolds requires working knowledge of calculus and topology.
Motivating examples
Circle
After a line, a circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive (indicated by the yellow arc in Figure 1). Any point of this arc can be uniquely described by its x-coordinate. So, projection onto the first coordinate is a continuous and inv
|
https://en.wikipedia.org/wiki/Beta%20oxidation
|
In biochemistry and metabolism, beta oxidation (also β-oxidation) is the catabolic process by which fatty acid molecules are broken down in the cytosol in prokaryotes and in the mitochondria in eukaryotes to generate acetyl-CoA, which enters the citric acid cycle, and NADH and FADH2, which are co-enzymes used in the electron transport chain. It is named as such because the beta carbon of the fatty acid undergoes oxidation to a carbonyl group. Beta-oxidation is primarily facilitated by the mitochondrial trifunctional protein, an enzyme complex associated with the inner mitochondrial membrane, although very long chain fatty acids are oxidized in peroxisomes.
The overall reaction for one cycle of beta oxidation is:
Cn-acyl-CoA + FAD + NAD⁺ + H₂O + CoA → Cn−2-acyl-CoA + FADH₂ + NADH + H⁺ + acetyl-CoA
Activation and membrane transport
Free fatty acids cannot penetrate any biological membrane due to their negative charge. Free fatty acids must cross the cell membrane through specific transport proteins, such as the SLC27 family fatty acid transport protein. Once in the cytosol, the following processes bring fatty acids into the mitochondrial matrix so that beta-oxidation can take place.
Long-chain-fatty-acid—CoA ligase catalyzes the reaction of a fatty acid with ATP to give a fatty acyl adenylate plus inorganic pyrophosphate, which then reacts with free coenzyme A to give a fatty acyl-CoA ester and AMP.
If the fatty acyl-CoA has a long chain, then the carnitine shuttle must be utilized:
Acyl-CoA is transferred to the hydroxyl group of carnitine by carnitine palmitoyltransferase I, located on the cytosolic faces of the outer and inner mitochondrial membranes.
Acyl-carnitine is shuttled inside by a carnitine-acylcarnitine translocase, as a carnitine is shuttled outside.
Acyl-carnitine is converted back to acyl-CoA by carnitine palmitoyltransferase II, located on the interior face of the inner mitochondrial membrane. The liberated carnitine is shuttled back to the cytosol, as
|
https://en.wikipedia.org/wiki/Television%20encryption
|
Television encryption, often referred to as scrambling, is encryption used to control access to pay television services, usually cable, satellite, or Internet Protocol television (IPTV) services.
History
Pay television exists to generate revenue from subscribers, and sometimes those subscribers do not pay. The prevention of piracy on cable and satellite networks has been one of the main factors in the development of Pay TV encryption systems.
The early cable-based Pay TV networks used no security. This led to problems with people connecting to the network without paying. Consequently, some methods were developed to frustrate these self-connectors. The early Pay TV systems for cable television were based on a number of simple measures. The most common of these was a channel-based filter that would effectively stop the channel being received by those who had not subscribed. These filters would be added or removed according to the subscription. As the number of television channels on these cable networks grew, the filter-based approach became increasingly impractical.
Other techniques, such as adding an interfering signal to the video or audio, began to be used as the simple filter solutions were easily bypassed. As the technology evolved, addressable set-top boxes became common, and more complex scrambling techniques, such as digital encryption of the audio, or video cut and rotate (where a line of video is cut at a particular point and the two parts are then reordered around this point), were applied to signals.
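The cut-and-rotate idea is simple enough to sketch in Python (a toy model treating one scan line as a list of samples; in a real system the cut point would vary per line under control of the conditional-access system):

def cut_and_rotate(line, cut):
    # Scramble one line: cut at 'cut' and swap the two parts.
    return line[cut:] + line[:cut]

def descramble(line, cut):
    # Invert the rotation by cutting at the complementary point.
    return line[len(line) - cut:] + line[:len(line) - cut]

line = list(range(10))               # toy scan line of 10 samples
scrambled = cut_and_rotate(line, 4)  # [4, 5, ..., 9, 0, 1, 2, 3]
assert descramble(scrambled, 4) == line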
Encryption was used to protect satellite-distributed feeds for cable television networks. Some of the systems used for cable feed distribution were expensive. As the DTH market grew, less secure systems began to be used. Many of these systems (such as Oak Orion) were variants of cable television scrambling systems that affected the synchronisation part of the video, inverted the video signal, or added an interfering frequency to the video. All of these analogue
|
https://en.wikipedia.org/wiki/Daniel%20Pedoe
|
Dan Pedoe (29 October 1910, London – 27 October 1998, St Paul, Minnesota, USA) was an English-born mathematician and geometer with a career spanning more than sixty years. In the course of his life he wrote approximately fifty research and expository papers in geometry. He is also the author of various core books on mathematics and geometry, some of which have remained in print for decades and been translated into several languages. These books include the three-volume Methods of Algebraic Geometry (which he wrote in collaboration with W. V. D. Hodge), The Gentle Art of Mathematics, Circles: A Mathematical View, Geometry and the Visual Arts and most recently Japanese Temple Geometry Problems: San Gaku (with Hidetoshi Fukagawa).
Early life
Daniel Pedoe was born in London in 1910, the youngest of thirteen children of Szmul Abramski, a Jewish immigrant from Poland who found himself in London in the 1890s: he had boarded a cattleboat not knowing whether it was bound for New York or London, so his final destination was a matter of blind chance. Pedoe's mother, Ryfka Raszka Pedowicz, was the only child of Wolf Pedowicz, a corn merchant, and his wife, Sarah Haimnovna Pecheska, from Łomża, then in Congress Poland (the part of Poland then under Russian control). The family name requires some explanation. The father, Abramski, was one of the Kohanim, a priestly group, and once in Britain, he changed his surname to Cohen. At first, all thirteen children took the surname Cohen, but later, to avoid any potential antisemitism, some of the Cohen children changed their surname to Pedoe, a contraction of their mother's maiden name; this happened while Daniel was at school, aged 12.
"Danny" was the youngest child in a family of thirteen children and his childhood was spent in relative poverty in the East End of London, despite their father being a skilled cabinetmaker. He attended the Central Foundation Boys' School where he was first influenced in his love of geometry by the headmaster N
|
https://en.wikipedia.org/wiki/Hydra%20%28chess%29
|
Hydra was a chess machine, designed by a team including Dr. Christian "Chrilly" Donninger, Dr. Ulf Lorenz, GM Christopher Lutz and Muhammad Nasir Ali. From 2006, the development team consisted only of Donninger and Lutz. Hydra was under the patronage of the PAL Group and Sheikh Tahnoon Bin Zayed Al Nahyan of Abu Dhabi. The goal of the Hydra Project was to dominate the computer chess world, and finally have an accepted victory over humans.
Hydra represented a potentially significant leap in the strength of computer chess. Design team member Lorenz estimated its FIDE-equivalent playing strength to be over Elo 3000, which is in line with its results against Michael Adams and Shredder 8, the former micro-computer chess champion.
Hydra began competing in 2002 and played its last game in June 2006. In June 2009, Christopher Lutz stated that "unfortunately the Hydra project is discontinued." The sponsors decided to end the project.
Architecture
The Hydra team originally planned to have Hydra appear in four versions: Orthus, Chimera, Scylla and then the final Hydra version – the strongest of them all. The original version of Hydra evolved from an earlier design called Brutus and works in a similar fashion to Deep Blue, utilising large numbers of purpose-designed chips (in this case implemented as a field-programmable gate array or FPGA). In Hydra, there are multiple computers, each with its own FPGA acting as a chess coprocessor. These co-processors enabled Hydra to search enormous numbers of positions per second, making each processor more than ten times faster than an unaided computer.
Hydra ran on a cluster of 32 Intel Xeon nodes with Xilinx FPGA accelerator cards and a total of 64 gigabytes of RAM. It evaluated about 150,000,000 chess positions per second, roughly the same as the 1997 Deep Blue which defeated Garry Kasparov, but with several times more overall computing power. Whilst FPGAs generally have a lower performance level than ASIC chips, modern-day FPGAs run
|
https://en.wikipedia.org/wiki/Batcher%20odd%E2%80%93even%20mergesort
|
Batcher's odd–even mergesort is a generic construction devised by Ken Batcher for sorting networks of size O(n (log n)²) and depth O((log n)²), where n is the number of items to be sorted. Although it is not asymptotically optimal, Knuth concluded in 1998, with respect to the AKS network, that "Batcher's method is much better, unless n exceeds the total memory capacity of all computers on earth!"
It was popularized by the second GPU Gems book as an easy way of doing reasonably efficient sorts on graphics-processing hardware.
Pseudocode
Various recursive and iterative schemes are possible to calculate the indices of the elements to be compared and sorted. This is one iterative technique to generate the indices for sorting n elements:
# note: the input sequence is indexed from 0 to (n-1)
for p = 1, 2, 4, 8, ... # as long as p < n
for k = p, p/2, p/4, p/8, ... # as long as k >= 1
for j = mod(k,p) to (n-1-k) with a step size of 2k
for i = 0 to min(k-1, n-j-k-1) with a step size of 1
if floor((i+j) / (p*2)) == floor((i+j+k) / (p*2))
compare and sort elements (i+j) and (i+j+k)
Non-recursive calculation of the partner node index is also possible.
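A direct transcription of the index scheme above into runnable Python (a sketch; the function name is chosen here, and while classic Batcher networks assume n is a power of two, the bounds checks in the loops also keep other lengths in range):

def batcher_odd_even_mergesort(a):
    n = len(a)
    p = 1
    while p < n:
        k = p
        while k >= 1:
            j = k % p
            while j <= n - 1 - k:
                # i = 0 to min(k-1, n-j-k-1) with a step size of 1
                for i in range(min(k, n - j - k)):
                    # compare only within the same block of size 2p
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        if a[i + j] > a[i + j + k]:
                            a[i + j], a[i + j + k] = a[i + j + k], a[i + j]
                j += 2 * k
            k //= 2
        p *= 2
    return a

print(batcher_odd_even_mergesort([5, 3, 8, 1, 9, 2, 7, 4]))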
See also
Bitonic sorter
Pairwise sorting network
References
External links
Odd–even mergesort at hs-flensburg.de
Odd-even mergesort network generator – an interactive generator for Batcher's odd–even merge-based sorting networks.
Sorting algorithms
|
https://en.wikipedia.org/wiki/Correlation%20dimension
|
In chaos theory, the correlation dimension (denoted by ν) is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension.
For example, if we have a set of random points on the real number line between 0 and 1, the correlation dimension will be ν = 1, while if they are distributed on, say, a triangle embedded in three-dimensional space (or m-dimensional space), the correlation dimension will be ν = 2. This is what we would intuitively expect from a measure of dimension. The real utility of the correlation dimension is in determining the (possibly fractional) dimensions of fractal objects. There are other methods of measuring dimension (e.g. the Hausdorff dimension, the box-counting dimension, and the information dimension), but the correlation dimension has the advantages of being straightforward and quick to calculate, of being less noisy when only a small number of points is available, and of often agreeing with other calculations of dimension.
For any set of N points in an m-dimensional space,

x(i) = [x₁(i), x₂(i), …, xₘ(i)],  i = 1, …, N,

the correlation integral C(ε) is calculated by:

C(ε) = lim (N → ∞) g / N²

where g is the total number of pairs of points which have a distance between them that is less than distance ε (a graphical representation of such close pairs is the recurrence plot). As the number of points tends to infinity, and the distance between them tends to zero, the correlation integral, for small values of ε, will take the form:

C(ε) ~ ε^ν
If the number of points is sufficiently large, and evenly distributed, a log-log graph of the correlation integral versus ε will yield an estimate of ν. This idea can be qualitatively understood by realizing that for higher-dimensional objects, there will be more ways for points to be close to each other, and so the number of pairs close to each other will rise more rapidly for higher dimensions.
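A minimal Python sketch of this log–log estimate (brute-force pairwise distances; the function name, point counts, and ε range are illustrative choices, not from the article):

import numpy as np

def correlation_dimension(points, eps_values):
    # Grassberger–Procaccia estimate: slope of log C(eps) vs log eps.
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)            # distinct pairs only
    pair_dists = dists[iu]
    # correlation sum C(eps): fraction of pairs closer than eps
    c = np.array([(pair_dists < eps).mean() for eps in eps_values])
    slope, _ = np.polyfit(np.log(eps_values), np.log(c), 1)
    return slope

rng = np.random.default_rng(0)
pts = rng.random((2000, 1))                 # uniform points on [0, 1]
eps = np.logspace(-2.5, -0.5, 10)
print(correlation_dimension(pts, eps))      # should be close to nu = 1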
Grassberger and Procaccia introduced the technique in 1983; the article gives the results of such estimates for a nu
|
https://en.wikipedia.org/wiki/Process%20%28anatomy%29
|
In anatomy, a process is a projection or outgrowth of tissue from a larger body. For instance, in a vertebra, a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes), or to fit with another vertebra to form a synovial joint (as in the case of the articular processes). The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels. Depending on the tissue, processes may also be called by other terms, such as apophysis, tubercle, or protuberance.
Examples
Examples of processes include:
The many processes of the human skull:
The mastoid and styloid processes of the temporal bone
The zygomatic process of the temporal bone
The zygomatic process of the frontal bone
The orbital, temporal, lateral, frontal, and maxillary processes of the zygomatic bone
The anterior, middle, and posterior clinoid processes and the petrosal process of the sphenoid bone
The uncinate process of the ethmoid bone
The jugular process of the occipital bone
The alveolar, frontal, zygomatic, and palatine processes of the maxilla
The ethmoidal and maxillary processes of the inferior nasal concha
The pyramidal, orbital, and sphenoidal processes of the palatine bone
The coronoid and condyloid processes of the mandible
The xiphoid process at the end of the sternum
The acromion and coracoid processes of the scapula
The coronoid process of the ulna
The radial and ulnar styloid processes
The uncinate processes of ribs found in birds and reptiles
The uncinate process of the pancreas
The spinous, articular, transverse, accessory, uncinate, and mammillary processes of the vertebrae
The trochlear process of the heel
The appendix, which is sometimes called the "vermiform process", notably in Gray's Anatomy
The olecranon process of the ulna
See also
Eminence
Tubercle
Appendage
Pedicle of vertebral arch
Notes
References
Dorland's Medical Dictionary
Anatomy