https://en.wikipedia.org/wiki/Factorial%20moment%20generating%20function
|
In probability theory and statistics, the factorial moment generating function (FMGF) of the probability distribution of a real-valued random variable X is defined as
$M_X(t) = \operatorname{E}\bigl[t^X\bigr]$
for all complex numbers t for which this expected value exists. This is the case at least for all t on the unit circle $|t| = 1$; see characteristic function. If X is a discrete random variable taking values only in the set {0, 1, ...} of non-negative integers, then $M_X$ is also called the probability-generating function (PGF) of X and is well-defined at least for all t on the closed unit disk $|t| \le 1$.
The factorial moment generating function generates the factorial moments of the probability distribution.
Provided $M_X$ exists in a neighbourhood of t = 1, the nth factorial moment is given by
$\operatorname{E}[(X)_n] = M_X^{(n)}(1) = \left.\frac{\mathrm{d}^n}{\mathrm{d}t^n} M_X(t)\right|_{t=1},$
where the Pochhammer symbol $(x)_n$ is the falling factorial
$(x)_n = x(x-1)(x-2)\cdots(x-n+1).$
(Many mathematicians, especially in the field of special functions, use the same notation to represent the rising factorial.)
Examples
Poisson distribution
Suppose X has a Poisson distribution with expected value λ; then its factorial moment generating function is
$M_X(t) = \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} e^{-\lambda}\, t^k = e^{\lambda(t-1)}$
(use the definition of the exponential function), and thus we have
$\operatorname{E}[(X)_n] = \lambda^n.$
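A quick numerical sanity check of the identity above; a minimal sketch using numpy (sample size, seed and parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 2.5, 3
x = rng.poisson(lam, size=1_000_000)

# Falling factorial (x)_n = x(x-1)...(x-n+1), computed elementwise.
falling = np.ones_like(x, dtype=float)
for k in range(n):
    falling *= x - k

print(falling.mean())   # sample estimate of E[(X)_n]
print(lam ** n)         # theoretical value lambda^n = 15.625
```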
See also
Moment (mathematics)
Moment-generating function
Cumulant-generating function
References
Factorial and binomial topics
Moment (mathematics)
Generating functions
|
https://en.wikipedia.org/wiki/Abrikosov%20vortex
|
In superconductivity, a fluxon (also called an Abrikosov vortex or quantum vortex) is a vortex of supercurrent in a type-II superconductor, used by Alexei Abrikosov to explain magnetic behavior of type-II superconductors. Abrikosov vortices occur generically in the Ginzburg–Landau theory of superconductivity.
Overview
The solution combines the fluxon solution by Fritz London with the concept of the core of a quantum vortex due to Lars Onsager.
In the quantum vortex, supercurrent circulates around the normal (i.e. non-superconducting) core of the vortex. The core has a size $\xi$ — the superconducting coherence length (a parameter of the Ginzburg–Landau theory). The supercurrents decay over a distance of about $\lambda$ (the London penetration depth) from the core. Note that in type-II superconductors $\lambda/\xi > 1/\sqrt{2}$. The circulating supercurrents induce magnetic fields with a total flux equal to a single flux quantum $\Phi_0$. Therefore, an Abrikosov vortex is often called a fluxon.
The magnetic field distribution of a single vortex far from its core can be described by the same equation as in London's fluxoid:
$B(r) = \frac{\Phi_0}{2\pi\lambda^2} K_0\!\left(\frac{r}{\lambda}\right),$
where $K_0$ is the zeroth-order modified Bessel function of the second kind. Note that, according to the above formula, at $r \to 0$ the magnetic field $B(r) \propto \ln(\lambda/r)$, i.e. it diverges logarithmically. In reality, for $r \lesssim \xi$ the field is simply given by
$B(0) \approx \frac{\Phi_0}{2\pi\lambda^2} \ln \kappa,$
where κ = λ/ξ is known as the Ginzburg–Landau parameter, which must satisfy $\kappa > 1/\sqrt{2}$ in type-II superconductors.
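A short numerical sketch of the field profile above using scipy's modified Bessel function; the values of λ, ξ and Φ0 here are illustrative assumptions, not material parameters from the text:

```python
import numpy as np
from scipy.special import k0

PHI0 = 2.067833848e-15     # magnetic flux quantum, Wb
lam = 100e-9               # assumed London penetration depth, m
xi = 10e-9                 # assumed coherence length, m (kappa = 10)

r = np.linspace(xi, 10 * lam, 500)
B = PHI0 / (2 * np.pi * lam**2) * k0(r / lam)   # London field profile B(r)

# Near the core the formula saturates at roughly (PHI0 / 2 pi lam^2) ln(kappa).
B_core = PHI0 / (2 * np.pi * lam**2) * np.log(lam / xi)
print(f"B(r=xi) ~ {B[0]:.4e} T, core estimate ~ {B_core:.4e} T")
```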
Abrikosov vortices can be trapped in a type-II superconductor by chance, on defects, etc. Even if a type-II superconductor initially contains no vortices, when one applies a magnetic field larger than the lower critical field $H_{c1}$ (but smaller than the upper critical field $H_{c2}$), the field penetrates the superconductor in the form of Abrikosov vortices. Each vortex obeys London's magnetic flux quantization and carries one quantum of magnetic flux $\Phi_0$. Abrikosov vortices form a lattice, usually triangular, with the average vortex density (flux density) approximately equal to the externally applied magnetic field. As wit
|
https://en.wikipedia.org/wiki/Sacral%20architecture
|
Sacral architecture (also known as sacred architecture or religious architecture) is a religious architectural practice concerned with the design and construction of places of worship or sacred or intentional space, such as churches, mosques, stupas, synagogues, and temples. Many cultures devoted considerable resources to their sacred architecture and places of worship. Religious and sacred spaces are amongst the most impressive and permanent monolithic buildings created by humanity. Conversely, sacred architecture as a locale for meta-intimacy may also be non-monolithic, ephemeral and intensely private, personal and non-public.
Sacred, religious and holy structures often evolved over centuries and were the largest buildings in the world, prior to the modern skyscraper. While the various styles employed in sacred architecture sometimes reflected trends in other structures, these styles also remained unique from the contemporary architecture used in other structures. With the rise of Christianity and Islam, religious buildings increasingly became centres of worship, prayer and meditation.
The Western scholarly discipline of the history of architecture itself closely follows the history of religious architecture from ancient times until the Baroque period, at least. Sacred geometry, iconography, and the use of sophisticated semiotics such as signs, symbols and religious motifs are endemic to sacred architecture.
Spiritual aspects of religious architecture
Sacred or religious architecture is sometimes called sacred space.
Architect Norman L. Koonce has suggested that the goal of sacred architecture is to make "transparent the boundary between matter and mind, flesh and the spirit." In discussing sacred architecture, Protestant minister Robert Schuller suggested that "to be psychologically healthy, human beings need to experience their natural setting—the setting we were designed for, which is the garden." Meanwhile, Richard Kieckhefer suggests that entering into
|
https://en.wikipedia.org/wiki/Taplow%20Court
|
Taplow Court is a Victorian house in the village of Taplow in Buckinghamshire, England. Its origins are an Elizabethan manor house, remodelled in the early 17th century. In the 18th century the court was owned by the Earls of Orkney. In the 1850s, the court was sold to Charles Pascoe Grenfell, whose descendants retained ownership until after the Second World War. The court then served as a corporate headquarters for British Telecommunications Research (BTR), an independent research company set up in 1946. BTR was subsequently acquired by Plessey Electronics. In 1988 it was bought by the Buddhist foundation Soka Gakkai International and serves as their UK headquarters.
The court is a Grade II listed building, and its present appearance is due to a major rebuilding undertaken by William Burn for Charles Grenfell in 1855–1860. In the early 20th century, the court was home to William Grenfell and his wife Ettie. She was a noted Edwardian hostess, and Taplow Court became a gathering place for The Souls, a group of aristocratic intellectuals.
History
Pevsner and Williamson record the court's "complicated" history. Its origins are an Elizabethan manor, which was reconstructed after a fire in 1616 by Sir Henry Guildford. In the 18th and early 19th centuries the court was owned by the Earls of Orkney, who also owned the adjacent Cliveden.
From 1852, Taplow Court became the home of the Grenfell family, purchased by Charles Pascoe Grenfell in August of that year. It was inherited in 1867 by his grandson William Grenfell, 1st Baron Desborough, and a prominent social role was played by his wife Ettie. Ettie was described by her nephew David Cecil as "the most brilliant hostess in an age of brilliant hostesses", and hosted an aristocratic group known as "the Souls" at the house. Visitors included Henry Irving, Vita Sackville-West, Edward VII then Prince of Wales, Winston Churchill, H. G. Wells, Patrick Shaw Stewart, Edith Wharton and Oscar Wilde.
During the Great Wa
|
https://en.wikipedia.org/wiki/Body%20plan
|
A body plan, or ground plan, is a set of morphological features common to many members of a phylum of animals. The vertebrates share one body plan, while invertebrates have many.
This term, usually applied to animals, envisages a "blueprint" encompassing aspects such as symmetry, layers, segmentation, nerve, limb, and gut disposition. Evolutionary developmental biology seeks to explain the origins of diverse body plans.
Body plans have historically been considered to have evolved in a flash in the Ediacaran biota, filling the Cambrian explosion with the results, but a more nuanced understanding of animal evolution suggests gradual development of body plans throughout the early Palaeozoic. Recent studies in animals and plants have started to investigate whether evolutionary constraints on body plan structures can explain the presence of developmental constraints during embryogenesis, such as the phenomenon referred to as the phylotypic stage.
History
Among the pioneering zoologists, Linnaeus identified two body plans outside the vertebrates; Cuvier identified three; and Haeckel had four, as well as the Protista with eight more, for a total of twelve. For comparison, the number of phyla recognised by modern zoologists has risen to 36.
Linnaeus, 1735
In his 1735 book Systema Naturæ, Swedish botanist Linnaeus grouped the animals into quadrupeds, birds, "amphibians" (including tortoises, lizards and snakes), fish, "insects" (Insecta, in which he included arachnids, crustaceans and centipedes) and "worms" (Vermes). Linnaeus's Vermes included effectively all other groups of animals, not only tapeworms, earthworms and leeches but molluscs, sea urchins and starfish, jellyfish, squid and cuttlefish.
Cuvier, 1817
In his 1817 work, Le Règne Animal, French zoologist Georges Cuvier combined evidence from comparative anatomy and palaeontology to divide the animal kingdom into four body plans. Taking the central nervous system as the main organ system which controlled all the othe
|
https://en.wikipedia.org/wiki/Viral%20life%20cycle
|
Viruses are only able to replicate themselves by commandeering the reproductive apparatus of cells and making them reproduce the virus's genetic structure and particles instead. How viruses do this depends mainly on the type of nucleic acid they contain, DNA or RNA, which is either one or the other but never both. Viruses cannot function or reproduce outside a cell, and are totally dependent on a host cell to survive. Most viruses are species specific, and related viruses typically only infect a narrow range of plants, animals, bacteria, or fungi.
Life cycle process
Viral entry
For the virus to reproduce and thereby establish infection, it must enter cells of the host organism and use those cells' materials. To enter the cells, proteins on the surface of the virus interact with proteins of the cell. Attachment, or adsorption, occurs between the viral particle and the host cell membrane. A hole forms in the cell membrane, then the virus particle or its genetic contents are released into the host cell, where replication of the viral genome may commence.
Viral replication
Next, a virus must take control of the host cell's replication mechanisms. It is at this stage that a distinction between the susceptibility and permissibility of a host cell is made. Permissibility determines the outcome of the infection. After control is established and the environment is set for the virus to begin making copies of itself, replication occurs quickly, by the millions.
Viral shedding
After a virus has made many copies of itself, the progeny may begin to leave the cell by several methods. This is called shedding and is the final stage in the viral life cycle.
Viral latency
Some viruses can "hide" within a cell, which may mean that they evade the host cell defenses or immune system and may increase the long-term "success" of the virus. This hiding is deemed latency. During this time, the virus does not produce any progeny, it remains inactive until external stimuli—such as light or stress
|
https://en.wikipedia.org/wiki/British%20Atomic%20Scientists%20Association
|
The British Atomic Scientists Association (ASA or BASA) was founded by Joseph Rotblat in 1946.
It was a politically neutral group, composed of eminent physicists and other scientists and was concerned with matters of British public policy regarding applications and dangers of nuclear physics (including nuclear weapons and nuclear power).
In so doing it also sought to inform fellow scientists and the public of the essential facts, usually via published papers and other documents.
Members
The vice-president (VP) was the executive head, while the president (P) was an honorary position.
Kathleen Lonsdale (VP, P 1967)
Harrie Massey
Nevill Mott
Joseph Rotblat (VP 1946)
Basil Schonland
See also
Atomic Energy Research Establishment
Nuclear physics
Pugwash group
Science policy
Franco-British Nuclear Forum
External links
Founding, activities and fall of BASA
1946 establishments in the United Kingdom
1959 disestablishments in the United Kingdom
Defunct organisations based in the United Kingdom
Scientific organisations based in the United Kingdom
Nuclear technology in the United Kingdom
Nuclear organizations
Organizations disestablished in 1959
Scientific organizations established in 1946
|
https://en.wikipedia.org/wiki/The%20Ground%20of%20Arts
|
Robert Recorde's Arithmetic: or, The Ground of Arts was one of the first printed English textbooks on arithmetic and the most popular of its time. The Ground of Arts appeared in London in 1543, and it was reprinted in around 45 further editions up to 1700. Editors and contributors of new sections included John Dee, John Mellis, Robert Hartwell, Thomas Willsford, and finally Edward Hatton.
The text is in the format of a dialogue between master and student to facilitate learning arithmetic without a teacher.
References
Further reading
John Denniss & Fenny Smith, "Robert Recorde and his remarkable Arithmetic", pages 25 to 38 in Gareth Roberts & Fenny Smith (editors) (2012) Robert Recorde: The Life and Times of a Tudor Mathematician, Cardiff: University of Wales Press
Mathematics books
British non-fiction literature
1540s books
|
https://en.wikipedia.org/wiki/P%C3%B3lya%20enumeration%20theorem
|
The Pólya enumeration theorem, also known as the Redfield–Pólya theorem and Pólya counting, is a theorem in combinatorics that both follows from and ultimately generalizes Burnside's lemma on the number of orbits of a group action on a set. The theorem was first published by J. Howard Redfield in 1927. In 1937 it was independently rediscovered by George Pólya, who then greatly popularized the result by applying it to many counting problems, in particular to the enumeration of chemical compounds.
The Pólya enumeration theorem has been incorporated into symbolic combinatorics and the theory of combinatorial species.
Simplified, unweighted version
Let X be a finite set and let G be a group of permutations of X (or a finite symmetry group that acts on X). The set X may represent a finite set of beads, and G may be a chosen group of permutations of the beads. For example, if X is a necklace of n beads in a circle, then rotational symmetry is relevant so G is the cyclic group Cn, while if X is a bracelet of n beads in a circle, rotations and reflections are relevant so G is the dihedral group Dn of order 2n. Suppose further that Y is a finite set of colors — the colors of the beads — so that $Y^X$ is the set of colored arrangements of beads (more formally: $Y^X$ is the set of functions $f \colon X \to Y$). Then the group G acts on $Y^X$. The Pólya enumeration theorem counts the number of orbits under G of colored arrangements of beads by the following formula:
$\left|Y^X / G\right| = \frac{1}{|G|} \sum_{g \in G} m^{c(g)},$
where $m = |Y|$ is the number of colors and c(g) is the number of cycles of the group element g when considered as a permutation of X.
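As an illustration, a small Python sketch applies the unweighted formula to necklaces under the cyclic group Cn, where the rotation by k steps has gcd(n, k) cycles, and checks the count by brute force (the choices n = 6 and m = 3 are arbitrary):

```python
from itertools import product
from math import gcd

def necklaces(n: int, m: int) -> int:
    """Number of m-colorings of an n-bead circular necklace up to rotation."""
    # Polya/Burnside: average m^(number of cycles) over all n rotations.
    return sum(m ** gcd(n, k) for k in range(n)) // n

def brute_force(n: int, m: int) -> int:
    """Count orbits directly via canonical representatives under rotation."""
    seen = set()
    for coloring in product(range(m), repeat=n):
        # Canonical form: lexicographically smallest rotation.
        seen.add(min(coloring[i:] + coloring[:i] for i in range(n)))
    return len(seen)

print(necklaces(6, 3), brute_force(6, 3))  # both print 130
```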
Full, weighted version
In the more general and more important version of the theorem, the colors are also weighted in one or more ways, and there could be an infinite number of colors provided that the set of colors has a generating function with finite coefficients. In the univariate case, suppose that
$f(t) = f_0 + f_1 t + f_2 t^2 + \cdots$
is the generating function of the set of colors, so that there are $f_w$ colors of weight w for each integer w ≥ 0.
|
https://en.wikipedia.org/wiki/Coconut%20milk%20powder
|
Coconut milk powder is a fine, white powder used in Southeast Asian and other cuisines. Coconut milk powder is manufactured through the spray drying process of raw unsweetened coconut cream and is reconstituted with water for use in recipes that call for coconut milk. Many commercially available coconut milk powders list milk or casein among their ingredients.
See also
Powdered milk
Notes
Convenience foods
Food ingredients
Milk substitutes
Southeast Asian cuisine
Instant foods and drinks
Powdered drink mixes
|
https://en.wikipedia.org/wiki/Card%20reader
|
A card reader is a data input device that reads data from a card-shaped storage medium. The first were punched card readers, which read the paper or cardboard punched cards that were used during the first several decades of the computer industry to store information and programs for computer systems. Modern card readers are electronic devices that can read plastic cards embedded with either a barcode, magnetic strip, computer chip or another storage medium.
A memory card reader is a device used for communication with a smart card or a memory card.
A magnetic card reader is a device used to read magnetic stripe cards, such as credit cards.
A business card reader is a device used to scan and electronically save printed business cards.
Smart card readers
A smart card reader is an electronic device that reads smart cards and can be found in the following forms:
Keyboards with a built-in card reader
External devices and internal drive bay card reader devices for personal computers (PC)
Laptop models containing a built-in smart card reader and/or using flash upgradeable firmware.
External devices that can read a Personal identification number (PIN) or other information may also be connected to a keyboard (usually called "card readers with PIN pad"). This model works by supplying the integrated circuit on the smart card with electricity and communicating via protocols, thereby enabling the user to read and write to a fixed address on the card.
If the card does not use any standard transmission protocol, but uses a custom/proprietary protocol, it has the communication protocol designation T=14.
The latest PC/SC CCID specifications define a new smart card framework. This framework works with USB devices with the specific device class 0x0B. Readers with this class do not need device drivers when used with PC/SC-compliant operating systems, because the operating system supplies the driver by default.
PKCS#11 is an API designed to be platform-independent, defining a
|
https://en.wikipedia.org/wiki/Scanned%20synthesis
|
Scanned synthesis represents a powerful and efficient technique for animating wave tables and controlling them in real time. Developed by Bill Verplank, Rob Shaw, and Max Mathews between 1998 and 1999 at Interval Research, Inc., it is based on the psychoacoustics of how we hear and appreciate timbres and on our motor control (haptic) abilities to manipulate timbres during live performance.
Scanned synthesis involves a slow dynamic system whose frequencies of vibration are below about 15 Hz. The ear cannot hear the low frequencies of the dynamic system. So, to make audible frequencies, the "shape" of the dynamic system, along a closed path, is scanned periodically. The "shape" is converted to a sound wave whose pitch is determined by the speed of the scanning function. Pitch control is completely separate from the dynamic system control. Thus timbre and pitch are independent. This system can be looked upon as a dynamic wave table. The model can be compared to a slowly vibrating string, or a two-dimensional surface obeying the wave equation.
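A minimal numpy sketch of the idea, assuming a damped circular mass-spring "string" stepped at a sub-audio rate while an audio-rate scanner reads out its shape; all constants are illustrative choices, and this is not one of the implementations listed below:

```python
import numpy as np

SR = 44100                  # audio sample rate, Hz
N = 128                     # number of masses along the closed "string"

def step_string(shape, vel, k=0.05, damp=0.999):
    """One slow update of the dynamic system: a damped discrete wave equation."""
    force = k * (np.roll(shape, 1) + np.roll(shape, -1) - 2 * shape)
    vel = (vel + force) * damp
    return shape + vel, vel

def render(seconds=1.0, pitch=110.0, updates_per_sec=30):
    shape = np.sin(2 * np.pi * np.arange(N) / N)   # initial "pluck" shape
    vel = np.zeros(N)
    out = np.empty(int(seconds * SR))
    phase = 0.0
    for i in range(len(out)):
        if i % (SR // updates_per_sec) == 0:       # sub-audio-rate dynamics
            shape, vel = step_string(shape, vel)
        # Scan the current shape along the closed path; the scan rate sets pitch.
        out[i] = np.interp(phase * N, np.arange(N + 1), np.append(shape, shape[0]))
        phase = (phase + pitch / SR) % 1.0
    return out

audio = render()   # timbre evolves slowly while the pitch stays at 110 Hz
```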
The following implementations of scanned synthesis are freely available:
Csound features the scanu and scans opcodes developed by Paris Smaragdis. This was the first publicly available implementation of scanned synthesis.
Pure Data features the 'pdp_scan~' and 'pdp_scanxy~' objects of the PDP extension.
Common Lisp Music in circular-scanned.clm
Scanned Synth VST from Humanoid Sound Systems was the first VST implementation of scanned synthesis, first released in March 2006 and still being actively developed. It is available from the Humanoid Sound Systems web site.
ScanSynthGL is another VST implementation of scanned synthesis, by mdsp of Smartelectronix, also announced in March 2006 on the KVR Audio forum. There is an unreleased beta version, some audio samples and a screenshot, but no public version has been released yet.
References
Digital audio
Sound synthesis types
|
https://en.wikipedia.org/wiki/DDObjects
|
DDObjects is a remoting framework for Borland Delphi and C++ Builder. A main goal while developing DDObjects was to keep the code one has to implement in order to utilize DDObjects not only as simple as possible but also very close to Delphi's usual style of event-driven programming.
DDObjects supports remote method calls, server callbacks, asynchronous calls, asynchronous callbacks, stateful and stateless objects, and other features. DDObjects doesn't mimic other implementations such as DCOM or CORBA, which are generalized to a least common denominator, but makes use of Delphi's rich type system, including objects, exceptions, records, sets and enumerations.
DDObjects uses plain XML and HTTP as its protocol, and contains a broker component, a source-code generator, as well as some new visual controls. DDObjects supports Delphi 5 to 7, 2005-XE2 (currently 32-bit only) as well as C++ Builder 6, 2006 and 2009.
External links
Inter-process communication
|
https://en.wikipedia.org/wiki/Motty
|
Motty (11 July – 21 July 1978) was the only proven hybrid between an Asian and an African elephant. The male calf was born in Chester Zoo, to Asian mother Sheba and African father Jumbolino. He was named after George Mottershead, who founded the Chester Zoo in 1931.
Appearance
Motty's head and ears were morphologically like Loxodonta (African), while the toenail numbers, with five on the front feet and four on the hind, were those of Elephas (Asian). The trunk had a single trunk finger as seen in Elephas, but the trunk length was more similar to Loxodonta. His vertebral column showed a Loxodonta profile above the shoulders, transitioning to the convex hump profile of Elephas below the shoulders.
Cause of death
Due to being born six weeks early, Motty was considered underweight. Despite intensive human care, Motty died of an umbilical infection 10 days after his birth, on 21 July. The necropsy revealed death to be due to necrotizing enterocolitis and E. coli septicaemia, present in both his colon and the umbilical cord.
Preservation
His body was preserved by a private company, and is a mounted specimen at the Natural History Museum in London.
Other hybrids
The straight-tusked elephant, an extinct elephant whose closest extant relative is the African forest elephant, interbred with the Asian elephant, as recovered DNA has shown.
Although the Asian elephant Elephas maximus and the African elephant Loxodonta africana belong to different genera, they share the same number of chromosomes, thus making hybridisation possible.
See also
List of individual elephants
References
External links
Koehl D, Elephant Encyclopedia: Motty, the Hybrid Elephant
1978 animal births
1978 animal deaths
Individual elephants
Mammal hybrids
Intergeneric hybrids
|
https://en.wikipedia.org/wiki/Push-IMAP
|
Push-IMAP, which is otherwise known as P-IMAP or Push extensions for Internet Message Access Protocol, is an email protocol designed as a faster way to synchronise a mobile device like a PDA or smartphone to an email server.
It was developed by Oracle and other partners, and based on IMAP with additional enhancements for optimization in a mobile setting. It was submitted as input to the Lemonade Profile IETF Working Group - but was not included in the resulting RFC 4550.
The protocol
The protocol was designed to provide a secure way to automatically communicate new messages between a server and a mobile device like a PDA or smartphone. It should reduce the time and effort needed to synchronize messages between the two by using an open connection that is kept alive by some kind of heartbeat. To reduce the necessary bandwidth, it uses compression and command macros.
Additionally, P-IMAP features a mechanism for sending email that is derived from (but not identical to) SMTP, and so a rich email service is provided using a single connection.
P-IMAP should not be viewed as an alternative to the IMAP IDLE command (RFC 2177). In fact, IDLE is one of the required mechanisms for a P-IMAP server to notify the client (optional notifications are SMS or WAP Push).
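For reference, the IDLE mechanism that P-IMAP builds on is a simple tagged command exchange; the transcript below follows the example flow defined in RFC 2177 (the tag a001 and the message count are illustrative):

```
C: a001 IDLE
S: + idling
   ...time passes; the server pushes untagged updates as mail arrives...
S: * 43 EXISTS
C: DONE
S: a001 OK IDLE terminated
```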
Other mobile technologies
Although they are both based on IMAP, the Yahoo! Mail and iCloud push email services for iPhone do not use a standard form of P-IMAP. Yahoo! Mail uses a special UDP message to trigger an email synchronization, while Apple's iCloud push email uses a variant of XMPP.
See also
IMAP
Push email
Lemonade Profile
SyncML
References
External links
Internet mail protocols
IMAP
|
https://en.wikipedia.org/wiki/BioGRID
|
The Biological General Repository for Interaction Datasets (BioGRID) is a curated biological database of protein-protein interactions, genetic interactions, chemical interactions, and post-translational modifications created in 2003 (originally referred to as simply the General Repository for Interaction Datasets (GRID)) by Mike Tyers, Bobby-Joe Breitkreutz, and Chris Stark at the Lunenfeld-Tanenbaum Research Institute at Mount Sinai Hospital. It strives to provide a comprehensive curated resource for all major model organism species while attempting to remove redundancy to create a single mapping of data. Users of the BioGRID can search for their protein, chemical or publication of interest and retrieve annotation, as well as curated data as reported by the primary literature and compiled by in-house large-scale curation efforts. The BioGRID is hosted in Toronto, Ontario, Canada and Dallas, Texas, United States and is partnered with the Saccharomyces Genome Database, FlyBase, WormBase, PomBase, and the Alliance of Genome Resources. The BioGRID is funded by the NIH and CIHR. BioGRID is an observer member of the International Molecular Exchange Consortium (IMEx).
History
The BioGRID was originally published and released as simply the General Repository for Interaction Datasets but was later renamed to the BioGRID in order to more concisely describe the project, and help distinguish it from several GRID Computing projects with a similar name. Originally separated into organism specific databases, the newest version now provides a unified front end allowing for searches across several organisms simultaneously. The BioGRID was developed initially as a project at the Lunenfeld-Tanenbaum Research Institute at Mount Sinai Hospital but has since expanded to include teams at the Institut de Recherche en Immunologie et en Cancérologie at the Université de Montréal and the Lewis-Sigler Institute for Integrative Genomics at Princeton University. The BioGRID's original focus wa
|
https://en.wikipedia.org/wiki/Zfone
|
Zfone is software for secure voice communication over the Internet (VoIP), using the ZRTP protocol. It was created by Phil Zimmermann, the creator of the PGP encryption software. Zfone works on top of existing SIP and RTP programs and should work with any SIP- and RTP-compliant VoIP program.
Zfone turns many existing VoIP clients into secure phones. It runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, and intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. A variety of different software VoIP clients can be used to make a VoIP call. The Zfone software detects when the call starts, and initiates a cryptographic key agreement between the two parties, and then proceeds to encrypt and decrypt the voice packets on the fly. It has its own separate GUI, telling the user if the call is secure. Zfone describes itself to end-users as a "bump on the wire" between the VoIP client and the Internet, which acts upon the protocol stack.
Zfone's libZRTP SDK libraries are released under either the Affero General Public License (AGPL) or a commercial license. Note that only the libZRTP SDK libraries are provided under the AGPL. The parts of Zfone that are not part of the libZRTP SDK libraries are not licensed under the AGPL or any other open source license. Although the source code of those components is published for peer review, they remain proprietary. The Zfone proprietary license also contains a time bomb provision.
It appears that Zfone development has stagnated, however, as the most recent version was released on 22 Mar 2009. In addition, since 29 Jan 2011, it has not been possible to download Zfone from the developer's website since the download server has gone offline.
Platforms and specification
Availability – Mac OS X, Linux, and Windows as compiled programs as well as an SDK.
Encryption standards – Based on ZRTP, which uses 128- or 256-bit AES together with a 3072-bit key ex
|
https://en.wikipedia.org/wiki/Principal%20type
|
In type theory, a type system is said to have the principal type property if, given a term and an environment, there exists a principal type for this term in this environment, i.e. a type such that all other types for this term in this environment are an instance of the principal type.
The principal type property is a desirable one for a type system, as it provides a way to type expressions in a given environment with a type which encompasses all of the expressions' possible types, instead of having several incomparable possible types. Type inference for systems with the principal type property will usually attempt to infer the principal type.
For instance, the ML system has the principal type property and principal types for an expression can be computed by Robinson's unification algorithm, which is used by the Hindley–Milner type inference algorithm. However, many extensions to the type system of ML, such as polymorphic recursion, can make the inference of the principal type undecidable. Other extensions, such as Haskell's generalized algebraic data types, destroy the principal type property of the language, requiring the use of type annotations or the compiler to "guess" the intended type from among several options.
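A toy sketch of Robinson-style unification over type terms in Python, in the spirit of the inference described above; the tuple encoding of types is an assumption made for illustration, and a real Hindley–Milner checker adds generalization and instantiation on top:

```python
# Types: a string is a type variable; a tuple ("->", a, b) is a function type;
# one-element tuples like ("int",) are type constants.
def resolve(ty, subst):
    while isinstance(ty, str) and ty in subst:
        ty = subst[ty]
    return ty

def occurs(var, ty, subst):
    ty = resolve(ty, subst)
    if ty == var:
        return True
    return isinstance(ty, tuple) and any(occurs(var, a, subst) for a in ty[1:])

def unify(a, b, subst):
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                      # a is a free type variable
        if occurs(a, b, subst):
            raise TypeError("occurs check failed")
        return {**subst, a: b}
    if isinstance(b, str):
        return unify(b, a, subst)
    if a[0] == b[0] and len(a) == len(b):       # same constructor: unify args
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"cannot unify {a} with {b}")

# Most general unifier of  t1 -> t1  and  int -> t2  is  {t1: int, t2: int},
# mirroring how the principal type of the identity instantiates at int.
print(unify(("->", "t1", "t1"), ("->", ("int",), "t2"), {}))
```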
The principal typing property requires that, given a term, there exists a typing (i.e. a pair with a context and a type) which is an instance of all possible typings of the term. The principal typing property can be confused with the principal type property but is distinct: the principal type property relies on the context as an input to determine the type, whereas the principal typing property outputs the context as a result.
References
Type theory
Type inference
|
https://en.wikipedia.org/wiki/Memory%20and%20aging
|
Age-related memory loss, sometimes described as "normal aging" (also spelled "ageing" in British English), is qualitatively different from memory loss associated with types of dementia such as Alzheimer's disease, and is believed to have a different brain mechanism.
Mild cognitive impairment
Mild cognitive impairment (MCI) is a condition in which people face memory problems more often than that of the average person their age. These symptoms, however, do not prevent them from carrying out normal activities and are not as severe as the symptoms for Alzheimer's disease (AD). Symptoms often include misplacing items, forgetting events or appointments, and having trouble finding words.
According to recent research, MCI is seen as the transitional state between cognitive changes of normal aging and Alzheimer's disease. Several studies have indicated that individuals with MCI are at an increased risk for developing AD, ranging from one percent to twenty-five percent per year; in one study twenty-four percent of MCI patients progressed to AD in two years and twenty percent more over three years, whereas another study indicated that the progression of MCI subjects was fifty-five percent in four and a half years. Some patients with MCI, however, never progress to AD.
Studies have also indicated patterns that are found in both MCI and AD. Much like patients with Alzheimer's disease, those with mild cognitive impairment have difficulty accurately defining words and using them appropriately in sentences when asked. While MCI patients had a lower performance in this task than the control group, AD patients performed worse overall. The abilities of MCI patients stood out, however, due to the ability to provide examples to make up for their difficulties. AD patients failed to use any compensatory strategies and therefore exhibited the difference in use of episodic memory and executive functioning.
Normal aging
Normal aging is associated with a decline in various memory abili
|
https://en.wikipedia.org/wiki/Connectedness%20locus
|
In one-dimensional complex dynamics, the connectedness locus of a parameterized family of one-variable holomorphic functions is a subset of the parameter space which consists of those parameters for which the corresponding Julia set is connected.
Examples
Without doubt, the most famous connectedness locus is the Mandelbrot set, which arises from the family of complex quadratic polynomials
$f_c(z) = z^2 + c.$
The connectedness loci of the higher-degree unicritical families
$f_c(z) = z^d + c$
(where $d \ge 2$) are often called 'Multibrot sets'.
For these families, the bifurcation locus is the boundary of the connectedness locus. This is no longer true in settings such as the full parameter space of cubic polynomials, where there is more than one free critical point. For these families, even maps with disconnected Julia sets may display nontrivial dynamics; hence the connectedness locus is generally of less interest there.
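For the unicritical families above, the Julia set of $f_c$ is connected exactly when the orbit of the critical point 0 stays bounded, which yields a direct approximate membership test; a sketch with an arbitrary iteration budget and grid:

```python
import numpy as np

def in_connectedness_locus(c: complex, d: int = 2, max_iter: int = 200) -> bool:
    """Approximate test: keep c if the critical orbit of z -> z**d + c stays bounded."""
    z = 0.0 + 0.0j                 # the unique finite critical point
    for _ in range(max_iter):
        z = z ** d + c
        if abs(z) > 2.0:           # escape radius 2 suffices for d = 2, |c| <= 2
            return False
    return True

# Coarse ASCII picture of the Mandelbrot set (d = 2).
for y in np.linspace(1.2, -1.2, 24):
    print("".join("#" if in_connectedness_locus(complex(x, y)) else "."
                  for x in np.linspace(-2.1, 0.9, 64)))
```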
External links
Complex analysis
Fractals
|
https://en.wikipedia.org/wiki/Joan%20Roughgarden
|
Joan Roughgarden (born 13 March 1946) is an American ecologist and evolutionary biologist. She has engaged in theory and observation of coevolution and competition in Anolis lizards of the Caribbean, and recruitment limitation in the rocky intertidal zones of California and Oregon. She has more recently become known for her rejection of sexual selection, her theistic evolutionism, and her work on holobiont evolution.
Personal life and education
Roughgarden was born in Paterson, New Jersey, United States. She received a Bachelor of Science in biology (with Distinction and Phi Beta Kappa) and a Bachelor of Arts in Philosophy with highest honors from University of Rochester in 1968 and later a Ph.D. in biology from Harvard University in 1971. In 1998, Roughgarden came out as transgender and changed her name to Joan, making a coming out post on her website on her 52nd birthday.
Career
Roughgarden worked as an instructor and Assistant Professor of Biology at the University of Massachusetts Boston from 1970 to 1972. In 1972 she joined the faculty of the Department of Biology at Stanford University. After becoming full professor she retired in 2011, and became Emeritus Professor. She founded and directed the Earth Systems Program at Stanford and has received awards for service to undergraduate education. In 2012 she moved to Hawaii, where she became an adjunct professor at the Hawaiʻi Institute of Marine Biology. In her academic career, Roughgarden advised 20 Ph.D. students and 15 postdoctoral fellows.
Roughgarden has authored books and over 180 scientific articles. In addition to a textbook on ecological and evolutionary theory in 1979, Roughgarden has carried out ecological field studies with Caribbean lizards and with barnacles and their larvae along the California coast. In 2015, she wrote Ram-2050, a science-fiction retelling of the Ramayana.
Research
Caribbean Anoles & Interspecific Competition
Roughgarden's early work in the 1970s and 80s h
|
https://en.wikipedia.org/wiki/Home%20Work%20Convention
|
Home Work Convention, created in 1996, is an International Labour Organization (ILO) Convention, which came into force in 2000. It offers protection to workers who are employed in their own homes.
Overview
It was established in 1996, with the preamble stating:
The Convention provides protection for home workers, giving them equal rights with regard to workplace health and safety, social security rights, access to training, remuneration, minimum age of employment, maternity protection, and other rights.
Objectives of the Home Work Convention
The term home work means remote work done by a person in a place other than the workplace of the employer. The term employer describes a person, who, either directly or through an intermediary, provides home work in pursuance of his or her business.
Each member of the Convention aims at continuously improving the situation of homeworkers. The intention of the convention is to strengthen the principle of equal treatment, in particular to guarantee the establishment of the rights of homeworkers.
In addition, the convention has the specific purpose of protecting against discrimination in the following areas of employment: occupational safety, remuneration, social security protection, access to training, minimum age for taking up employment and maternity benefits.
Safety and health at work
National laws and regulations on safety and health at work also apply to home work. When working at home, certain conditions must be adapted so that a safe and healthy working environment is ensured.
Ratifications
The convention has been ratified by 13 countries as of 2022:
References
- ILO Convention C177
External links
Text.
Ratifications.
International Labour Organization conventions
Telecommuting
Treaties concluded in 1996
Treaties entered into force in 2000
Treaties of Albania
Treaties of Argentina
Treaties of Belgium
Treaties of Bosnia and Herzegovina
Treaties of Bulgaria
Treaties of Finland
Treaties of Ireland
Treaties of the
|
https://en.wikipedia.org/wiki/HP%20Xpander
|
The HP Xpander (F1903A), also known as "Endeavour", was to be Hewlett-Packard's newest graphing calculator in 2002, but the project was cancelled in November 2001, months before it was scheduled to go into production. It had both a keyboard and a pen-based interface, measured 162.6 mm by 88.9 mm by 22.9 mm, with a large grayscale screen, and ran on two rechargeable AA batteries. It had a semi-translucent green cover on a gray case and an expansion slot.
The underlying operating system was Windows CE 3.0. It had 8 MB RAM, 16 MB ROM, a geometry application, a 240×320 display, a Hitachi SH3 processor, and e-lessons. One of the obvious omissions in the Xpander was the lack of a computer algebra system (CAS).
Math Xpander
After discontinuing the Xpander, HP decided to release the Xpander software, named the Math Xpander, as a free-of-charge application that ran on Windows CE-based Pocket PC devices. It was hosted by Saltire Software, who had been involved in its design.
See also
List of Hewlett-Packard products: Pocket calculators
HP calculators
Casio ClassPad 300 — a similar device by Casio
TI PLT SHH1
HP Jornada X25
References
External links
HP Xpander at hpmuseum.org
HP Xpander at hpcalc.org
HP Xpander at rskey.org
HP Xpander at evilmadscientist.com
Math Xpander user's guide
Graphing calculators
Windows CE devices
xpander
|
https://en.wikipedia.org/wiki/Chronology%20of%20computation%20of%20%CF%80
|
The table below is a brief chronology of computed numerical values of, or bounds on, the mathematical constant pi (π). For more detailed explanations for some of these calculations, see Approximations of π.
The last 100 decimal digits of the latest 2022 world record computation are:
4658718895 1242883556 4671544483 9873493812 1206904813 2656719174 5255431487 2142102057 7077336434 3095295560
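Many hand-computation records in the chronology relied on Machin-like arctangent formulas; as a small illustration, Machin's 1706 identity π/4 = 4 arctan(1/5) - arctan(1/239) can be evaluated with plain integer arithmetic (the digit count and the ten guard digits are arbitrary choices):

```python
def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**(digits + 10), via the alternating Taylor series."""
    scale = 10 ** (digits + 10)          # guard digits absorb truncation error
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        n += 2
        sign = -sign
        term = scale // x ** n           # (1/x)^n, scaled
        total += sign * term // n
    return total

def machin_pi(digits: int) -> str:
    pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return str(pi // 10 ** 10)           # drop the guard digits

print(machin_pi(50))  # pi to 50 decimal places, as one integer (point omitted)
```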
Before 1400
1400–1949
1949–2009
2009–present
See also
History of pi
Approximations of π
References
External links
Borwein, Jonathan, "The Life of Pi"
Kanada Laboratory home page
Stu's Pi page
Takahashi's page
Pi
History of mathematics
Pi
Pi algorithms
|
https://en.wikipedia.org/wiki/Sun%20SPOT
|
Sun SPOT (Sun Small Programmable Object Technology) was a sensor node for a wireless sensor network developed by Sun Microsystems announced in 2007. The device used the IEEE 802.15.4 standard for its networking, and unlike other available sensor nodes, used the Squawk Java virtual machine.
After the acquisition of Sun Microsystems by Oracle Corporation, the SunSPOT platform was supported but its forum was shut down in 2012. A mirror of the old site is maintained for posterity.
Hardware
The completely assembled device fit in the palm of a hand.
Its first processor board included an ARM architecture 32 bit CPU with ARM920T core running at 180 MHz. It had 512 KB RAM and 4 MB flash memory. A 2.4 GHz IEEE 802.15.4 radio had an integrated antenna and a USB interface was included.
A sensor board included a three-axis accelerometer (with 2G and 6G range settings), temperature sensor, light sensor, 8 tri-color LEDs, analog and digital inputs, two momentary switches, and 4 high current output pins.
The unit used a 3.7V rechargeable 750 mAh lithium-ion battery, had a 30 uA deep sleep mode, and battery management provided by software.
Software
The device's use of Java device drivers is unusual, since Java is generally hardware-independent. The Sun SPOT uses the small Java ME Squawk virtual machine, which runs directly on the processor without an operating system. Both the Squawk VM and the Sun SPOT code are open source.
Standard Java development environments such as NetBeans can be used to create SunSPOT applications.
The management and deployment of applications are handled by Ant scripts, which can be called from a development environment, the command line, or the tool provided with the SPOT SDK, "solarium".
The nodes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. Protocols such as Zigbee can be built on 802.15.4.
Sun Labs reported implementations of RSA and elliptic curve cryptography (ECC) optimized for small embedded devices.
Availabi
|
https://en.wikipedia.org/wiki/McCumber%20cube
|
In 1991, John McCumber created a model framework for establishing and evaluating information security (information assurance) programs, now known as The McCumber Cube.
This security model is depicted as a three-dimensional Rubik's Cube-like grid.
The concept of this model is that, in developing information assurance systems, organizations must consider the interconnectedness of all the different factors that impact them. To devise a robust information assurance program, one must consider not only the security goals of the program (see below), but also how these goals relate specifically to the various states in which information can reside in a system and the full range of available security safeguards that must be considered in the design. The McCumber model helps one to remember to consider all important design aspects without becoming too focused on any one in particular (i.e., relying exclusively on technical controls at the expense of requisite policies and end-user training).
Dimensions and attributes
Desired goals
Confidentiality: assurance that sensitive information is not intentionally or accidentally disclosed to unauthorized individuals.
Integrity: assurance that information is not intentionally or accidentally modified in such a way as to call into question its reliability.
Availability: ensuring that authorized individuals have both timely and reliable access to data and other resources when needed.
Information states
Storage: Data at rest (DAR) in an information system, such as that stored in memory or on a magnetic tape or disk.
Transmission: transferring data between information systems - also known as data in transit (DIT).
Processing: performing operations on data in order to achieve the desired objective.
Safeguards
Policy and practices: administrative controls, such as management directives, that provide a foundation for how information assurance is to be implemented within an organization. (examples: acceptable use policies or inci
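The three-dimensional grid is easy to make concrete: a small sketch enumerating all 27 (goal, information state, safeguard) cells of the cube. Note that the safeguards list in this excerpt is cut off, so the remaining two entries, human factors and technology, are filled in here from the standard form of the model:

```python
from itertools import product

goals = ["confidentiality", "integrity", "availability"]
states = ["storage", "transmission", "processing"]
safeguards = ["policy and practices", "human factors", "technology"]

# Each cell of the McCumber cube is one (goal, state, safeguard) combination
# that a complete information assurance program should address.
for cell in product(goals, states, safeguards):
    print(" / ".join(cell))

print(len(goals) * len(states) * len(safeguards), "cells in total")  # 27
```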
|
https://en.wikipedia.org/wiki/Extreme%20Networks
|
Extreme Networks is an American networking company based in Morrisville, North Carolina. Extreme Networks designs, develops, and manufactures wired and wireless network infrastructure equipment and develops the software for network management, policy, analytics, security and access controls.
History
Extreme Networks was established by co-founders Gordon Stitt, Herb Schneider, and Stephen Haddock in 1996 in California, United States, with its first offices located in Cupertino, which later moved to Santa Clara, and later to San Jose. Early investors included Norwest Venture Partners, AVI Capital Management, Trinity Ventures, and Kleiner Perkins Caufield & Byers. Gordon Stitt was a co-founder and served as chief executive officer until August 2006, when he retired and became chairman of the board of directors.
The initial public offering in April 1999 was listed on the NASDAQ stock exchange as ticker "EXTR."
In April 2013, Charles W. Berger (from ParAccel as it was acquired by Actian) replaced Oscar Rodriguez as CEO.
In November 2014, Extreme Networks was named the first Official Wi-Fi solutions provider of the NFL.
On April 19, 2015, Charles W. Berger resigned as CEO, and was replaced by Board Chairman Ed Meyercord.
In September 2020, analyst firm Omdia named Extreme Networks the fastest-growing vendor in cloud-managed networking.
In November 2021, Extreme Networks was named a Leader in the 2021 Gartner Magic Quadrant for Wired and Wireless LAN Access Infrastructure for the fourth consecutive year by Gartner analysts.
Acquisitions
In October 1996, Extreme Networks acquired Mammoth Technology.
Extreme Networks acquired Optranet in February 2001 and Webstacks in March 2001. Extreme had invested in both companies, which were purchased for about $73 million and $74 million respectively.
On September 12, 2013, Extreme Networks announced it would acquire Enterasys Networks for about $180 million.
On October 31, 2016, Extreme Networks announced that it completed th
|
https://en.wikipedia.org/wiki/Cross-fostering
|
Cross-fostering is a technique used in animal husbandry, animal science, genetic and nature versus nurture studies, and conservation, whereby offspring are removed from their biological parents at birth and raised by surrogates, typically of a different species, hence 'cross.' This can also occasionally occur in nature.
Animal husbandry
Cross-fostering young animals is usually done to equalize litter size. Individual animals born in large litters are faced with much more competition for resources, such as breast milk, food and space, than individuals born in smaller litters. Herd managers will typically move some individuals from a large litter to a smaller litter where they will be raised by a non-biological parent. This is typically done in pig farming because litters with up to 15 piglets are common. A sow with a large litter may have difficulty producing enough milk for all piglets, or the sow may not have enough functional teats to feed all piglets simultaneously. When this occurs, smaller or weaker piglets are at risk of starving to death. Herd managers will often transfer some piglets from a large litter to another lactating sow which either has a smaller litter or has had her own biological piglets recently weaned. Herd managers will typically try to equalize litters by number and also weight of individuals. When done successfully, cross-fostering reduces piglet mortality.
In research
Cross-fostering can be used to study the impact of postnatal environment on genetic-linked diseases as well as on behavioural pattern. In behavioral studies, if cross-fostered offspring show a behavioral trait similar to their biological parents and dissimilar from their foster parents, a behavior can be shown to have a genetic basis. Similarly if the offspring develops traits dissimilar to their biological parents and similar to their foster parents environmental factors are shown to be dominant. In many cases there is a blend of the two, which shows both genes and environme
|
https://en.wikipedia.org/wiki/Optical%20lens%20design
|
Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it.
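Since the process rests on modeling how light propagates through the system, a minimal illustration is a paraxial (first-order) ray trace with 2x2 transfer matrices acting on a (height, angle) ray vector; this is a sketch under thin-lens, small-angle assumptions with arbitrary example values, not a stand-in for a full ray-tracing engine:

```python
import numpy as np

def propagate(d: float) -> np.ndarray:
    """Free-space translation by distance d (paraxial)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f: float) -> np.ndarray:
    """Thin lens of focal length f (paraxial)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Ray vector: [height y, angle u]. Trace a parallel ray (u = 0) at height 5 mm
# through a 100 mm thin lens plus 100 mm of air: it should cross the axis.
ray = np.array([5.0, 0.0])
ray = propagate(100.0) @ thin_lens(100.0) @ ray
print(ray)   # height ~ 0 at the focal plane, angle -0.05 rad
```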
Design requirements
Performance requirements can include:
Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific.
Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements.
Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding.
Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties.
Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently
|
https://en.wikipedia.org/wiki/FKM
|
FKM is a family of fluorocarbon-based fluoroelastomer materials defined by ASTM International standard D1418, and ISO standard 1629. It is commonly called fluorine rubber or fluoro-rubber. FKM is an abbreviation of Fluorine Kautschuk Material. All FKMs contain vinylidene fluoride as a monomer. Originally developed by DuPont (under the brand name Viton, now owned by Chemours), FKMs are today also produced by many companies, including: Daikin (Dai-El), 3M (Dyneon), Solvay S.A. (Tecnoflon), HaloPolymer (Elaftor), Gujarat Fluorochemicals (Fluonox), and several Chinese manufacturers. Fluoroelastomers are more expensive than neoprene or nitrile rubber elastomers. They provide additional heat and chemical resistance. FKMs can be divided into different classes on the basis of either their chemical composition, their fluorine content, or their cross-linking mechanism.
Types
On the basis of their chemical composition FKMs can be divided into the following types:
Type-1 FKMs are composed of vinylidene fluoride (VDF) and hexafluoropropylene (HFP). Copolymers are the standard type of FKMs showing a good overall performance. Their fluorine content is approximately 66 weight percent.
Type-2 FKMs are composed of VDF, HFP, and tetrafluoroethylene (TFE). Terpolymers have a higher fluorine content compared to copolymers (typically between 68 and 69 weight percent fluorine), which results in better chemical and heat resistance. Compression set and low temperature flexibility may be affected negatively.
Type-3 FKMs are composed of VDF, TFE, and perfluoromethylvinylether (PMVE). The addition of PMVE provides better low temperature flexibility compared to copolymers and terpolymers. Typically, the fluorine content of type-3 FKMs ranges from 62 to 68 weight percent.
Type-4 FKMs are composed of propylene, TFE, and VDF. While base resistance is increased in type-4 FKMs, their swelling properties, especially in hydrocarbons, are worsened. Typically, they have a fluorine content of about
|
https://en.wikipedia.org/wiki/Lavasoft
|
Adaware, formerly known as Lavasoft, is a software development company that produces spyware and malware detection software, including Adaware. It operates as a subsidiary of Avanquest, a division of Claranova.
The company offers products Adaware Antivirus, Adaware Protect, Adaware Safe Browser, Adaware Privacy, Adaware AdBlock, Adaware PC Cleaner and Adaware Driver Manager.
Adaware's headquarters are in Montreal, Canada, having previously been located in Gothenburg, Sweden since 2002. Nicolas Stark and Ann-Christine Åkerlund established the company in Germany in 1999 with its flagship Adaware antivirus product. In 2011, Adaware was acquired by the Solaria Fund, a private equity fund front for entrepreneurs Daniel Assouline and Michael Dadoun, who have been accused of selling software that is available for free, including Adaware antivirus prior to acquiring the company itself.
Adaware antivirus
An anti-spyware and anti-virus software program, Adaware Antivirus, according to its developer, detects and removes malware, spyware and adware, computer viruses, dialers, Trojans, bots, rootkits, data miners, parasites, browser hijackers and tracking components. Adaware Web Companion, a component of Adaware antivirus, is frequently packaged alongside potentially unwanted programs. Adaware accomplishes this by striking deals with malware operators and site owners to distribute its software in exchange for money. Adaware Web Companion is known to collect user data and send it back to remote servers.
History
Adaware antivirus was originally developed, as Ad-Aware, in 1999 to highlight web beacons inside of Internet Explorer. On many websites, users would see a tiny pixelated square next to each web beacon, warning the user that the computer's IP address and other non-essential information was being tracked by this website. Over time, Ad-Aware added the ability to block those beacons, or ads.
In the 2008 Edition, Lavasoft bundled Ad-Aware Pro and Plus for
|
https://en.wikipedia.org/wiki/Adinkra%20symbols
|
Adinkra are symbols from Ghana that represent concepts or aphorisms. Adinkra are used extensively in fabrics, logos and pottery. They are incorporated into walls and other architectural features. Adinkra symbols appear on some traditional Akan goldweights. The symbols are also carved on stools for domestic and ritual use. Tourism has led to new departures in the use of the symbols in items such as T-shirts and jewellery.
The symbols have a decorative function but also represent objects that encapsulate evocative messages conveying traditional wisdom, aspects of life, or the environment. There are many symbols with distinct meanings, often linked with proverbs. In the words of Kwame Anthony Appiah, they were one of the means for "supporting the transmission of a complex and nuanced body of practice and belief".
History
Adinkra symbols were originally created by the Bono people of Gyaman. The Gyaman king, Nana Kwadwo Agyemang Adinkra, originally created or designed these symbols, naming them after himself. The Adinkra symbols were largely used on pottery, stools etc. by the people of Bono. Adinkra cloth was worn by the king of Gyaman, and its usage spread from Bono Gyaman to Asante and other Akan kingdoms following Gyaman's defeat. It is said that the guild designers who designed this cloth for the kings were forced to teach the Asantes the craft. Gyaman king Nana Kwadwo Agyemang Adinkra's first son, Apau, who was said to be well versed in the Adinkra craft, was forced to teach more about Adinkra cloths. Oral accounts have attested that Adinkra Apau taught the process to a man named Kwaku Dwaku in a town near Kumasi. Over time, all Akan people, including the Fante, Akuapem and Akyem, made Adinkra symbols a major part of their culture, as they all originated from the ancient Bono Kingdom.
The oldest surviving adinkra cloth was made in 1817. The cloth features 15 stamped symbols, including nsroma (stars), dono ntoasuo (double Dono drums), and diamonds. The p
|
https://en.wikipedia.org/wiki/Mountain%20pass%20theorem
|
The mountain pass theorem is an existence theorem from the calculus of variations, originally due to Antonio Ambrosetti and Paul Rabinowitz. Given certain conditions on a function, the theorem demonstrates the existence of a saddle point. The theorem is unusual in that there are many other theorems regarding the existence of extrema, but few regarding saddle points.
Statement
The assumptions of the theorem are:
$I \in C^1(H, \mathbb{R})$ is a functional from a Hilbert space H to the reals,
$I'$ is Lipschitz continuous on bounded subsets of H,
$I$ satisfies the Palais–Smale compactness condition,
$I[0] = 0$,
there exist positive constants r and a such that $I[u] \geq a$ if $\|u\| = r$, and
there exists $v \in H$ with $\|v\| > r$ such that $I[v] \leq 0$.
If we define:
$\Gamma = \{\, g \in C([0,1]; H) \mid g(0) = 0,\ g(1) = v \,\}$
and:
$c = \inf_{g \in \Gamma} \max_{0 \leq t \leq 1} I[g(t)],$
then the conclusion of the theorem is that c is a critical value of I.
Visualization
The intuition behind the theorem is in the name "mountain pass." Consider I as describing elevation. Then we know two low spots in the landscape: the origin, because I[0] = 0, and a far-off spot v where I[v] ≤ 0. In between the two lies a range of mountains (at ‖u‖ = r) where the elevation is high (higher than a > 0). In order to travel along a path g from the origin to v, we must pass over the mountains—that is, we must go up and then down. Since I is somewhat smooth, there must be a critical point somewhere in between. (Think along the lines of the mean-value theorem.) The mountain pass lies along the path that passes at the lowest elevation through the mountains. Note that this mountain pass is almost always a saddle point.
For a proof, see section 8.5 of Evans.
Weaker formulation
Let X be a Banach space. The assumptions of the theorem are:
I ∈ C(X; ℝ) and has a Gateaux derivative I′ : X → X* which is continuous when X and X* are endowed with the strong topology and the weak* topology respectively.
There exists r > 0 such that one can find a certain x′ with ‖x′‖ > r and max(I(0), I(x′)) < inf_{‖x‖ = r} I(x) =: m(r).
I satisfies the weak Palais–Smale condition on { x ∈ X : m(r) ≤ I(x) }.
In this case there is a critical point x̄ ∈ X of I satisfying m(r) ≤ I(x̄). Moreover, if we define
Γ = { c ∈ C([0, 1]; X) : c(0) = 0, c(1) = x′ },
then
I(x̄) ≤ inf_{c ∈ Γ} max_{0 ≤ t ≤ 1} I(c(t)).
For a proof, see section 5.5 of Aubin and Ekeland.
Refere
|
https://en.wikipedia.org/wiki/French%20video%20game%20policy
|
French video game policy refers to the strategy and set of measures laid out by France since 2002 to maintain and develop a local video game development industry in order to preserve European market diversity.
History
Proposals for government support
The French game developer trade group, known as Association des Producteurs d'Oeuvres Multimedia (APOM, now "Syndicat National du Jeu Video"), was founded in 2001 by Eden Studios' Stéphane Baudet, Kalisto's Nicolas Gaume, former cabinet member and author Alain Le Diberder, financier and former journalist Romain Poirot-Lellig and Darkworks' Antoine Villette. APOM was established for game developers only, since game publishers were already grouped under the umbrella of the Syndicat des Editeurs de Logiciels de Loisirs (SELL).
In November 2002, the Prime Minister Jean-Pierre Raffarin visited Darkworks, and formally asked game developers to submit him a set of proposals, promising to meet again in Spring 2003 to give his feedback.
Confronted with the bankruptcies or difficulties of many studios, such as Cryo, Kalisto and Arxel Tribe, APOM had to propose short-term solutions as well as long-term, growth-oriented measures to the French government. Video game professionals responded in March 2003 with a set of proposals, including several options for setting up a long-term financing system to develop quality video games for the European and international market.
Era of government support
On April 19, 2003, the Prime Minister announced the creation of the Ecole Nationale du Jeu Video et des Medias Interactifs, a national school dedicated to the education of game development executives and project managers. He also announced the creation of a 4 million euro prototyping fund for games managed by the Centre National de la Cinematographie, the "Fonds d'Aide pour l'Edition Multimédia" ("FAEM"), and that he would order a report to be drafted in order to determine and to answer the needs of the game development industry with regards to
|
https://en.wikipedia.org/wiki/SWAR
|
SIMD within a register (SWAR), also known by the name "packed SIMD" is a technique for performing parallel operations on data contained in a processor register. SIMD stands for single instruction, multiple data. Flynn's 1972 taxonomy categorises SWAR as "pipelined processing".
Many modern general-purpose computer processors have some provisions for SIMD, in the form of a group of registers and instructions to make use of them. SWAR refers to the use of those registers and instructions, as opposed to using specialized processing engines designed to be better at SIMD operations. It also refers to the use of SIMD with general-purpose registers and instructions that were not meant to do it at the time, by way of various novel software tricks.
SWAR architectures
A SWAR architecture is one that includes instructions explicitly intended to perform parallel operations across data that is stored in the independent subwords or fields of a register. A SWAR-capable architecture is one that includes a set of instructions that is sufficient to allow data stored in these fields to be treated independently even though the architecture does not include instructions that are explicitly intended for that purpose.
An early example of a SWAR architecture was the Intel Pentium with MMX, which implemented the MMX extension set. The original Intel Pentium, by contrast, did not include such instructions, but could still act as a SWAR-capable architecture through careful hand-coding or compiler techniques.
Early SWAR architectures include DEC Alpha MVI, Hewlett-Packard's PA-RISC MAX, Silicon Graphics Incorporated's MIPS MDMX, and Sun's SPARC V9 VIS. Like MMX, many of the SWAR instruction sets are intended for faster video coding.
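The software-trick flavour of SWAR can be made concrete with a classic masking idiom. The following is a small illustrative C sketch (not taken from any particular instruction set): it adds four packed 8-bit lanes held in one ordinary 32-bit register while preventing carries from crossing lane boundaries.

#include <stdint.h>
#include <stdio.h>

/* SWAR addition of four packed 8-bit lanes in a plain 32-bit register.
 * Add the low 7 bits of each lane first, then patch bit 7 of each lane
 * back in with XOR, so carries never cross a lane boundary. */
static uint32_t swar_add_u8x4(uint32_t x, uint32_t y)
{
    uint32_t low = (x & 0x7F7F7F7Fu) + (y & 0x7F7F7F7Fu);
    return low ^ ((x ^ y) & 0x80808080u);
}

int main(void)
{
    uint32_t a = 0x01FF7F10u;  /* lanes: 0x01 0xFF 0x7F 0x10 */
    uint32_t b = 0x01017F01u;  /* lanes: 0x01 0x01 0x7F 0x01 */
    printf("%08X\n", swar_add_u8x4(a, b));  /* 0200FE11: each lane wraps mod 256 */
    return 0;
}

Each lane behaves as an independent 8-bit adder, which is the same effect a dedicated SWAR instruction such as MMX's paddb provides in hardware.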
History of the SWAR programming model
Wesley A. Clark introduced partitioned subword data operations in the 1950s. This can be seen as a very early predecessor to SWAR. Leslie Lamport presented SWAR techniques in his paper titled "Multiple byte processing with full-word instruct
|
https://en.wikipedia.org/wiki/Any-source%20multicast
|
Any-source multicast (ASM) is the older and more usual form of multicast where multiple senders can be on the same group/channel, as opposed to source-specific multicast where a single particular source is specified.
Any-source multicast allows any host to send datagrams to a multicast group address, and every host that has joined that group receives them. This method of multicasting allows hosts to transmit to and receive from groups without any restriction on the location of end-user computers, since any receiving host in the group can itself become a transmission source. Bandwidth usage is nominal, which allows applications such as video conferencing to be used extensively. However, this type of multicast is vulnerable in that it allows unauthorized traffic and denial-of-service attacks.
Commonly, any-source multicast is used in IGMP version 2; however, it can also be used in PIM-SM, MSDP, and MBGP. ASM utilizes IPv4 in association with the previously stated protocols; in addition, MLDv1 protocol is used for IPv6 addresses.
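In practice, a receiver joins an ASM group through the standard sockets API, which triggers an IGMP membership report on IPv4 networks. A minimal POSIX sketch follows (the group address and port are arbitrary examples; error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);              /* example port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    /* Join the group: datagrams sent to this address by ANY source
     * are now delivered to the socket -- the defining ASM property. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");  /* example group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[1500];
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    printf("received %zd bytes from the group\n", n);
    close(fd);
    return 0;
}

Because membership is keyed on the group address alone, the kernel cannot filter by sender, which is the root of the unauthorized-traffic weakness noted above.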
Benefits
Scalability for large tasks
The reduction of group management
Ability to use existing technologies
See also
IP multicast
Internet Group Management Protocol
RTCP
Xcast
References
Internet architecture
Internet broadcasting
Internet Protocol
Routing
|
https://en.wikipedia.org/wiki/Archicad
|
ArchiCAD is an architectural BIM CAD software for Mac and Windows developed by the Hungarian company Graphisoft. ArchiCAD offers computer aided solutions for common aspects of aesthetics and engineering during the design process of the built environment—buildings, interiors, urban areas, etc.
History
Development of Archicad started in 1982 for the Apple Lisa, the predecessor of the original Apple Macintosh. Following its launch in 1987, with Graphisoft's "Virtual Building" concept, Archicad became regarded by some as the first implementation of BIM. However, ArchiCAD founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". ArchiCAD has been recognized as the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers and considered "revolutionary" for the ability to store large amounts of information within the 3D model.
Product overview
Archicad is a complete design suite with 2D and 3D drafting, visualization and other building information modeling functions for architects, designers and planners. A wide range of software applications are integrated in Archicad to cover most of the design needs of an architectural office:
2D modeling CAD software — drawing tools for creating accurate and detailed technical drawings
3D Modeling software — a 3D CAD interface specially developed for architects capable of creating various kind of building forms
Architectural rendering and Visualization software — a high performance rendering tool to produce photo-realistic pictures or videos
Desktop publishing software — with similar features to mainstream DTP software to compose printed materials using technical drawings, pixel-based images and texts
Document management tool — a central data
|
https://en.wikipedia.org/wiki/Ecological%20stoichiometry
|
Ecological stoichiometry (more broadly referred to as biological stoichiometry) considers how the balance of energy and elements influences living systems. Similar to chemical stoichiometry, ecological stoichiometry is founded on constraints of mass balance as they apply to organisms and their interactions in ecosystems. Specifically, it asks how the balance of energy and elements affects, and is affected by, organisms and their interactions. Concepts of ecological stoichiometry have a long history in ecology, with early references to the constraints of mass balance made by Liebig, Lotka, and Redfield. These earlier concepts have been extended to explicitly link the elemental physiology of organisms to their food web interactions and ecosystem function.
Most work in ecological stoichiometry focuses on the interface between an organism and its resources. This interface, whether it is between plants and their nutrient resources or large herbivores and grasses, is often characterized by dramatic differences in the elemental composition of each part. The difference, or mismatch, between the elemental demands of organisms and the elemental composition of resources leads to an elemental imbalance. Consider termites, which have a tissue carbon:nitrogen ratio (C:N) of about 5 yet consume wood with a C:N ratio of 300–1000. Ecological stoichiometry primarily asks:
why do elemental imbalances arise in nature?
how are consumer physiology and life-history affected by elemental imbalances? and
what are the subsequent effects on ecosystem processes?
Elemental imbalances arise for a number of physiological and evolutionary reasons related to the differences in the biological make up of organisms, such as differences in types and amounts of macromolecules, organelles, and tissues. Organisms differ in the flexibility of their biological make up and therefore in the degree to which organisms can maintain a constant chemical composition in the face of variations in their
|
https://en.wikipedia.org/wiki/Lemote
|
Jiangsu Lemote Tech Co., Ltd or Lemote () is a computer company established as a joint venture between the Jiangsu Menglan Group and the Chinese Institute of Computing Technology, involved in computer hardware and software products, services, and projects.
History
In June 2006, shortly after the Institute of Computing Technology of the Chinese Academy of Sciences developed the Loongson 2E, it needed a company to build end products, so the Jiangsu Menglan Group began a joint venture with the Institute. The venture was named Jiangsu Lemote Tech Co., Ltd.
A computer was announced by Fuxin Zhang, an ICT researcher and member of Lemote's staff, who said the purpose of this project was to "provide everyone with a personal computer". The device is intended for low-income groups and rural students.
Hardware
Lemote builds small form factor computers including network computers and netbooks with Loongson Processors.
Netbook computers
The Yeeloong netbook computer is intended to be built on free software from the BIOS upwards, and for this reason it was used and recommended by Richard Stallman, founder of the Free Software Foundation, as of September 2008 and again as of 23 January 2010.
The specifications are:
Loongson 3A laptop
Loongson insiders revealed that a new laptop model based on the quad-core Loongson 3A had been developed and was expected to launch in August 2011. With a similar design to the MacBook Pro from Apple Inc., it would carry a Linux operating system by default.
In September 2011, Lemote announced the Yeeloong-8133 13.3" laptop featuring 900 MHz, quad-core Loongson-3A/2GQ CPU.
Desktop computers
Lynloong, all-in-one desktop computer, combined computer and monitor, without keyboard.
Myloong, desktop diskless network computer (NC), without monitor or keyboard.
Fuloong, see below.
Products in development
Hiloong, SOHO and family storage center.
Fuloong 2 series of small desktop computers
The Fuloong 2 series is a desktop com
|
https://en.wikipedia.org/wiki/Friability
|
In materials science, friability, the condition of being friable, describes the tendency of a solid substance to break into smaller pieces under duress or contact, especially by rubbing. The opposite of friable is indurate.
Substances that are designated hazardous, such as asbestos or crystalline silica, are often said to be friable if small particles are easily dislodged and become airborne, and hence respirable (able to enter human lungs), thereby posing a health hazard.
Tougher substances, such as concrete, may also be mechanically ground down and reduced to finely divided mineral dust. However, such substances are not generally considered friable because of the degree of difficulty involved in breaking the substance's chemical bonds through mechanical means. Some substances, such as polyurethane foams, show an increase in friability with exposure to ultraviolet radiation, as in sunlight.
Friable is sometimes used metaphorically to describe "brittle" personalities who can be "rubbed" by seemingly-minor stimuli to produce extreme emotional responses.
General
A friable substance is any substance that can be reduced to fibers or finer particles by the action of a small amount of pressure or friction, such as rubbing or inadvertently brushing up against the substance. The term could also apply to any material that exhibits these properties, such as:
Ionically bound substances that are less than 1 kg/L in density
Clay tablets
Crackers
Mineral fibers
Polyurethane (foam)
Aerogel
Geological
Friable and indurated are terms used commonly in soft-rock geology, especially with sandstones, mudstones, and shales to describe how well the component rock fragments are held together.
Examples:
Clumps of dried clay
Chalk
Perlite
Medical
The term friable is also used to describe tumors in medicine. This is an important determination because tumors that are easily torn apart have a higher risk of malignancy and metastasis.
Examples:
Some forms of cancer, such
|
https://en.wikipedia.org/wiki/Receiver%20%28information%20theory%29
|
The receiver in information theory is the receiving end of a communication channel. It receives and decodes the messages/information that the sender first encoded. Sometimes the receiver is modeled so as to include the decoder. Real-world receivers like radio receivers or telephones cannot be expected to receive as much information as predicted by the noisy channel coding theorem.
References
Information theory
|
https://en.wikipedia.org/wiki/Conformational%20change
|
In biochemistry, a conformational change is a change in the shape of a macromolecule, often induced by environmental factors.
A macromolecule is usually flexible and dynamic. Its shape can change in response to changes in its environment or other factors; each possible shape is called a conformation, and a transition between them is called a conformational change. Factors that may induce such changes include temperature, pH, voltage, light in chromophores, concentration of ions, phosphorylation, or the binding of a ligand. Transitions between these states occur on a variety of length scales (tenths of Å to nm) and time scales (ns to s),
and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis.
Laboratory analysis
Many biophysical techniques such as crystallography, NMR, electron paramagnetic resonance (EPR) using spin label techniques, circular dichroism (CD), hydrogen exchange, and FRET can be used to study macromolecular conformational change. Dual-polarization interferometry is a benchtop technique capable of providing information about conformational changes in biomolecules.
A specific nonlinear optical technique called second-harmonic generation (SHG) has been recently applied to the study of conformational change in proteins. In this method, a second-harmonic-active probe is placed at a site that undergoes motion in the protein by mutagenesis or non-site-specific attachment, and the protein is adsorbed or specifically immobilized to a surface. A change in protein conformation produces a change in the net orientation of the dye relative to the surface plane and therefore the intensity of the second harmonic beam. In a protein sample with a well-defined orientation, the tilt angle of the probe can be quantitatively determined, in real space and real time. Second-harmonic-active unnatural amino acids can also be used as probes.
Another method applies electro-switchable biosurfaces where proteins are place
|
https://en.wikipedia.org/wiki/Dynamical%20genetics
|
Dynamical genetics concerns the study and the interpretation of those phenomena in which physiological enzymatic protein complexes alter the DNA, in a more or less sophisticated way.
The study of such mechanisms is important firstly since they promote useful functions, as for example the immune system recombination (on individual scale) and the crossing-over (on evolutionary scale); secondly since they may sometimes become harmful because of some malfunctioning, causing for example neurodegenerative disorders.
Typical examples of dynamical genetics subjects are:
dynamic mutations, a term introduced by Robert I. Richards and Grant R. Sutherland to indicate mutations caused by other mutations; this phenomenon often involves the variable number tandem repeats, closely related to many neurodegenerative diseases, such as the trinucleotide repeat disorders (interpreted by Anita Harding).
dynamic genome, a term introduced by Nina Fedoroff and David Botstein to indicate the transposition discovered by Barbara McClintock.
immune V(D)J recombination (discovered by Susumu Tonegawa) and isotype class switching, terms introduced to indicate two kinds of immune system recombinations, which are the main cause of the enormous variety of antibodies.
horizontal DNA transfer (discovered by Frederick Griffith) that indicates the DNA transfer between two organisms.
crossing-over (discovered by Thomas Hunt Morgan) mediated by formation and unwinding (by means of peculiar enzymatic complexes such as helicase) of uncommon four-helix DNA structures known as G-quadruplexes (discovered by Martin Gellert, Marie N. Lipsett, and David R. Davies).
References
Genetics
|
https://en.wikipedia.org/wiki/Turgor%20pressure
|
Turgor pressure is the force within the cell that pushes the plasma membrane against the cell wall.
It is also called hydrostatic pressure, and is defined as the pressure in a fluid measured at a certain point within itself when at equilibrium. Generally, turgor pressure is caused by the osmotic flow of water and occurs in plants, fungi, and bacteria. The phenomenon is also observed in protists that have cell walls. This system is not seen in animal cells, as the absence of a cell wall would cause the cell to lyse when under too much pressure. The pressure exerted by the osmotic flow of water is called turgidity. It is caused by the osmotic flow of water through a selectively permeable membrane. Movement of water through a semipermeable membrane from a volume with a low solute concentration to one with a higher solute concentration is called osmotic flow. In plants, this entails the water moving from the low concentration solute outside the cell into the cell's vacuole.
Etymology
1610s, from Latin turgidus "swollen, inflated, distended," from turgere "to swell," of unknown origin. Figurative use in reference to prose is from 1725. Related: Turgidly; turgidness.
Mechanism
Osmosis is the process in which water flows from a volume with a low solute concentration (osmolarity), to an adjacent region with a higher solute concentration until equilibrium between the two areas is reached. It is usually accompanied by a favorable increase in the entropy of the solvent. All cells are surrounded by a lipid bi-layer cell membrane which permits the flow of water into and out of the cell while limiting the flow of solutes. When the cell is in a hypertonic solution, water flows out of the cell, which decreases the cell's volume. When in a hypotonic solution, water flows across the membrane into the cell and increases the cell's volume, while in an isotonic solution, water flows in and out of the cell at an equal rate.
Turgidity is the point at which the cell's membrane pushes against the cell
|
https://en.wikipedia.org/wiki/William%20Shanks
|
William Shanks (25 January 1812 – June 1882) was an English amateur mathematician. He is famous for his calculation of π (pi) to 707 places in 1873, which was correct up to the first 527 places. The error was discovered in 1944 by D. F. Ferguson (using a mechanical desk calculator). Nevertheless, Shanks's approximation was the longest expansion of π until the advent of the digital electronic computer in the 1940s.
Biography
Shanks was born in 1812 in Corsenside. He may have been a student of William Rutherford as a young boy in the 1820s, and he dedicated a book on π published in 1853 to Rutherford. After his marriage in 1846, Shanks earned his living by owning a boarding school at Houghton-le-Spring, which left him enough time to spend on his hobby of calculating mathematical constants.
In addition to calculating , Shanks also calculated e and the Euler–Mascheroni constant γ to many decimal places. He published a table of primes (and the periods of their reciprocals) up to 110,000 and found the natural logarithms of 2, 3, 5 and 10 to 137 places. During his calculations, which took many tedious days of work, Shanks was said to have calculated new digits all morning and would then spend all afternoon checking his morning's work.
Shanks died in Houghton-le-Spring, County Durham, England in June 1882, aged 70, and was buried at the local Hillside Cemetery on 17 June 1882.
Calculations of pi
To calculate π, Shanks used Machin's formula:
π/4 = 4·arctan(1/5) − arctan(1/239)
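Machin-like formulas converge quickly because the series terms for arctan(1/5) and arctan(1/239) shrink rapidly. As a quick sanity check of the identity itself (not of Shanks's hand method), a few lines of C reproduce π to full double precision:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239). */
    double machin = 4.0 * (4.0 * atan(1.0 / 5.0) - atan(1.0 / 239.0));
    printf("Machin:   %.15f\n", machin);     /* 3.141592653589793 */
    printf("acos(-1): %.15f\n", acos(-1.0)); /* agrees to all printed digits */
    return 0;
}

Shanks, of course, evaluated the two arctangent series by hand, carrying each term to hundreds of decimal places.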
Shanks calculated π to 530 decimal places in January 1853, of which the first 527 were correct (the last few likely being incorrect because of round-off errors). He subsequently expanded his calculation to 607 decimal places in April 1853, but an error introduced at the start of the new calculation, right at the 530th decimal place where his previous calculation ended, rendered the rest of his calculation erroneous. Given the nature of Machin's formula, the error propagated back to the 528th decimal place, leaving only the first 527
|
https://en.wikipedia.org/wiki/Overlap%20extension%20polymerase%20chain%20reaction
|
The overlap extension polymerase chain reaction (or OE-PCR) is a variant of PCR. It is also referred to as Splicing by overlap extension / Splicing by overhang extension (SOE) PCR. It is used to assemble multiple smaller double-stranded DNA fragments into a larger DNA sequence. OE-PCR is widely used to insert mutations at specific points in a sequence or to assemble custom DNA sequences from smaller DNA fragments into a larger polynucleotide.
Splicing of DNA molecules
As in most PCR reactions, two primers—one for each end—are used per sequence. To splice two DNA molecules, special primers are used at the ends that are to be joined. For each molecule, the primer at the end to be joined is constructed such that it has a 5' overhang complementary to the end of the other molecule. Following annealing when replication occurs, the DNA is extended by a new sequence that is complementary to the molecule it is to be joined to. Once both DNA molecules are extended in such a manner, they are mixed and a PCR is carried out with only the primers for the far ends. The overlapping complementary sequences introduced will serve as primers and the two sequences will be fused. This method has an advantage over other gene splicing techniques in not requiring restriction sites.
To get higher yields, some primers are used in excess as in asymmetric PCR.
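The design of such a junction primer can be sketched in code. The following toy C program uses made-up 10-base sequences (real designs use roughly 20 bases per half and check melting temperatures); it builds the chimeric reverse primer for the upstream fragment as the reverse complement of the junction region:

#include <stdio.h>
#include <string.h>

/* Reverse complement of an ACGT string (illustration only). */
static void revcomp(const char *seq, char *out)
{
    size_t n = strlen(seq);
    for (size_t i = 0; i < n; i++) {
        char c = seq[n - 1 - i];
        out[i] = (c == 'A') ? 'T' : (c == 'T') ? 'A' : (c == 'C') ? 'G' : 'C';
    }
    out[n] = '\0';
}

int main(void)
{
    /* Hypothetical junction: the last bases of fragment A followed by
     * the first bases of fragment B. */
    const char *a_end   = "GGTACCTTGA";
    const char *b_start = "ATGGCTAGCA";

    char junction[32];
    snprintf(junction, sizeof junction, "%s%s", a_end, b_start);

    /* The chimeric reverse primer for fragment A is the reverse
     * complement of the junction: its 3' half anneals to the end of A,
     * while its 5' tail adds the B-complementary overhang used in the
     * fusion PCR. */
    char primer[32];
    revcomp(junction, primer);
    printf("reverse primer for A (5'->3'): %s\n", primer); /* TGCTAGCCATTCAAGGTACC */
    return 0;
}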
Introduction of mutations
To insert a mutation into a DNA sequence, a specific primer is designed. The primer may contain a single substitution or contain a new sequence at its 5' end. If a deletion is required, a sequence that is 5' of the deletion is added, because the 3' end of the primer must have complementarity to the template strand so that the primer can sufficiently anneal to the template DNA.
Following annealing of the primer to the template, DNA replication proceeds to the end of the template. The duplex is denatured and the second primer anneals to the newly formed DNA strand, containing sequence from the first primer. Repli
|
https://en.wikipedia.org/wiki/YouOS
|
YouOS was a web desktop and web integrated development environment, developed by Webshaka until June 2008.
From 2006 to 2008 YouOS replicated the desktop environment of a modern operating system on a webpage, using JavaScript to communicate with the remote server. This allowed users to save their current desktop state to return to later, much like the hibernation feature in many true operating systems, and for multiple users to collaborate using a single environment. YouOS featured built-in sharing of music, documents and other files. The software was in alpha stage, and was referred to as a "web operating system" by WebShaka.
An application programming interface and an IDE (integrated development environment) were in development.
Over 700 applications were created using this API.
In 2006, YouOS was listed on the 7th position of PC World's list of "The 20 Most Innovative Products of the Year".
YouOS was shut down on July 30, 2008 because the developers had not actively developed it since November 2006. They have since moved on to other projects.
The youos.com domain name was acquired by a German startup company, Dynacrowd, in May 2015. The project name YouOS now represents a mobile platform for hyperlocal interaction used to operate the German refugee assistance system AngelaApp.
Parent Company
Webshaka was a messaging company most notable for making YouOS. It was founded by Samuel Hsiung, Jeff Mullen, Srini Panguluri and Joseph Wong.
References
External links
YouOS web site
Web desktops
|
https://en.wikipedia.org/wiki/Linear%20canonical%20transformation
|
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space.
The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group.
The basic properties of the transformations mentioned above, such as scaling, shift, and coordinate multiplication, are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates.
Definition
The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix (a, b; c, d) with ad − bc = 1, the corresponding integral transform from a function f(t) to F(u) is defined as
F(u) = √(1/(ib)) · e^{iπ(d/b)u²} ∫ e^{−i2π(ut/b)} e^{iπ(a/b)t²} f(t) dt   for b ≠ 0,
F(u) = √d · e^{iπcd·u²} f(d·u)   for b = 0.
Special cases
Many classical transforms are special cases of the linear canonical transform:
Scaling
Scaling, f(t) ↦ √σ·f(σt), corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks), and is represented by the diagonal matrix (1/σ, 0; 0, σ).
Fourier transform
The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix (0, 1; −1, 0).
Fractional Fourier transform
The fractional Fourier transform
|
https://en.wikipedia.org/wiki/DNA%20origami
|
DNA origami is the folding of DNA at the nanoscale to create arbitrary two- and three-dimensional shapes. The specificity of the interactions between complementary base pairs makes DNA a useful construction material, through design of its base sequences. DNA is a well-understood material that is suitable for creating scaffolds that hold other molecules in place or for creating structures all on its own.
DNA origami was the cover story of Nature on March 16, 2006. Since then, DNA origami has progressed past an art form and has found a number of applications from drug delivery systems to uses as circuitry in plasmonic devices; however, most commercial applications remain in a concept or testing phase.
Overview
The idea of using DNA as a construction material was first introduced in the early 1980s by Nadrian Seeman. The current method of DNA origami was developed by Paul Rothemund at the California Institute of Technology. The process involves the folding of a long single strand of viral DNA (typically the 7,249 bp genomic DNA of M13 bacteriophage) aided by multiple smaller "staple" strands. These shorter strands bind the longer in various places, resulting in the formation of a pre-defined two- or three-dimensional shape. Examples include a smiley face and a coarse map of China and the Americas, along with many three-dimensional structures such as cubes.
To produce a desired shape, images are drawn with a raster fill of a single long DNA molecule. This design is then fed into a computer program that calculates the placement of individual staple strands. Each staple binds to a specific region of the DNA template, and thus due to Watson-Crick base pairing, the necessary sequences of all staple strands are known and displayed. The DNA is mixed, then heated and cooled. As the DNA cools, the various staples pull the long strand into the desired shape. Designs are directly observable via several methods, including electron microscopy, atomic force microscopy, or
|
https://en.wikipedia.org/wiki/Composite%20application
|
In computing, a composite application is a software application built by combining multiple existing functions into a new application. The technical concept can be compared to mashups. However, composite applications use business sources (e.g., existing modules or even Web services) of information, while mashups usually rely on web-based, and often free, sources.
It is wrong to assume that composite applications are by definition part of a service-oriented architecture (SOA). Composite applications can be built using any technology or architecture.
A composite application consists of functionality drawn from several different sources. The components may be individual selected functions from within other applications, or entire systems whose outputs have been packaged as business functions, modules, or web services.
Composite applications often incorporate orchestration of "local" application logic to control how the composed functions interact with each other to produce the new, derived functionality. For composite applications that are based on SOA, WS-CAF is a Web services standard for composite applications.
See also
Web 2.0
Composite Application Service Assembly (CASA)
Enterprise service bus (ESB)
Service-oriented architecture (SOA)
Service component architecture (SCA)
Mashup (web application hybrid)
External links
Composite application guidance from patterns & practices
NetBeans SOA Composite Application Project Home
camelse
Running Apache Camel in OpenESB
eclipse sirius - free, GPL-licensed Eclipse tool for building your own complex modeling tools
eclipse SCA Tools - free composite tool under a GNU license
obeodesigner - free GPL designer made with eclipse sirius
References
Web services
Service-oriented (business computing)
|
https://en.wikipedia.org/wiki/Approximations%20of%20%CF%80
|
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 15th century (through the efforts of Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.
The record of manual approximation of π is held by William Shanks, who calculated 527 digits correctly in 1853. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of π). On 8 June 2022, the current record was established by Emma Haruka Iwao with Alexander Yee's y-cruncher with 100 trillion (10¹⁴) digits.
Early history
The best known approximations to π dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
Some Egyptologists have claimed that the ancient Egyptians used an approximation of π as 22/7 = 3.142857… (about 0.04% too high) from as early as the Old Kingdom. This claim has been met with skepticism.
Babylonian mathematics usually approximated to 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible). The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better appr
|
https://en.wikipedia.org/wiki/AT%26T%20CallVantage
|
AT&T CallVantage was a voice over Internet Protocol telephone service first offered in 2004 by AT&T Corp., upon the heels of its announcement that it would stop seeking traditional local and long-distance landline customers.
Renaming
After SBC Communications purchased AT&T Corp. in 2005 and renamed itself AT&T Inc., CallVantage was offered as an option with AT&T Yahoo! DSL service, formerly known as SBC Yahoo! DSL.
Competition
AT&T CallVantage competed with other VoIP providers, such as Vonage. When AT&T U-verse Voice was unveiled January 28, 2008, AT&T continued to market CallVantage to customers without U-verse, particularly customers outside AT&T's local phone service territory. However, AT&T suspended new business later in 2008 "to evaluate CallVantage service."
In a letter dated April 17, 2009, AT&T notified all existing CallVantage subscribers that the service would be discontinued and no longer available later in 2009, which occurred October 20, 2009.
References
External links
VoIP Service & Solutions
AT&T subsidiaries
VoIP companies
|
https://en.wikipedia.org/wiki/Runoff%20%28hydrology%29
|
Runoff is the flow of water across the earth, and is a major component in the hydrological cycle. Runoff that flows over land before reaching a watercourse is referred to as surface runoff or overland flow. Once in a watercourse, runoff is referred to as streamflow, channel runoff, or river runoff.
Urban runoff is surface runoff created by urbanization.
Background
Surface runoff
Urban runoff
Channel runoff
Model
Curve number
References
Hydrology
|
https://en.wikipedia.org/wiki/Alternant%20matrix
|
In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.
Generally, if f₁, f₂, …, fₙ are functions from a set X to a field F, and α₁, α₂, …, α_m ∈ X, then the alternant matrix has size m × n and is defined by M_{ij} = f_j(α_i),
or, more compactly, M = [f_j(α_i)]. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which f_i(α) = α^(i−1), and Moore matrices, for which f_i(α) = α^(q^(i−1)).
Properties
The alternant can be used to check the linear independence of the functions f₁, …, fₙ in function space. For example, let f₁(x) = sin(x), f₂(x) = cos(x) and choose α₁ = 0, α₂ = π/2. Then the alternant is the matrix M = [[sin 0, cos 0], [sin(π/2), cos(π/2)]] = [[0, 1], [1, 0]] and the alternant determinant is −1 ≠ 0. Therefore M is invertible and the vectors {sin x, cos x} form a basis for their spanning set: in particular, sin and cos are linearly independent.
Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let f₁(x) = sin(x), f₂(x) = cos(x) and choose α₁ = 0, α₂ = π. Then the alternant is [[0, 1], [0, −1]] and the alternant determinant is 0, but we have already seen that sin and cos are linearly independent.
Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which A/(x + 1) + B/(x + 2) = (x + 3)/((x + 1)(x + 2)). Choosing f₁(x) = 1/(x + 1), f₂(x) = 1/(x + 2), f₃(x) = (x + 3)/((x + 1)(x + 2)) and sample points (α₁, α₂, α₃) = (1, 2, 3), we obtain a singular alternant M. Therefore the vector (A, B, −1) is in the nullspace of M: that is, A·f₁ + B·f₂ − f₃ = 0. Moving f₃ to the other side of the equation gives the partial fraction decomposition A = 2, B = −1.
If n = m and α_i = α_j for any i ≠ j, then the alternant determinant is zero (as a row is repeated).
If n = m and the functions f₁, …, fₙ are all polynomials, then (α_j − α_i) divides the alternant determinant for all 1 ≤ i < j ≤ m. In particular, if V is a Vandermonde matrix, then the Vandermonde determinant ∏_{i<j} (α_j − α_i) divides such polynomial alternant determinants. The ratio det M / det V is therefore a polynomial in α₁, …, α_m called the bialternant. The Schur polynomial s_{(λ₁, …, λₙ)} is classically defined as the bialternant of the polynomials f_j(x) = x^(λ_j + n − j).
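The first example above is easy to check numerically; a minimal C sketch (function list and sample points as in the text):

#include <math.h>
#include <stdio.h>

/* Build the 2x2 alternant M[i][j] = f_j(alpha_i) for f1 = sin, f2 = cos
 * at the sample points alpha = {0, pi/2}, then evaluate its determinant. */
int main(void)
{
    const double pi = acos(-1.0);
    double alpha[2] = { 0.0, pi / 2.0 };
    double M[2][2];
    for (int i = 0; i < 2; i++) {
        M[i][0] = sin(alpha[i]);
        M[i][1] = cos(alpha[i]);
    }
    double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
    printf("det = %f\n", det); /* -1: nonzero, so sin and cos are independent */
    return 0;
}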
Applications
Alternant matrices are used in c
|
https://en.wikipedia.org/wiki/Capsa%20%28software%29
|
Capsa is the name for a family of packet analyzers developed by Colasoft for network administrators to monitor, troubleshoot and analyze wired & wireless networks. The company provides a free edition for individuals, but paid licenses are available for businesses and enterprises. The software includes Ethernet packet analysis, diagnostics and a security monitoring system.
References
External links
Colasoft Official website
Colasoft Official Blog
Colasoft Capsa FAQ
- lists Colasoft Capsa Free edition
Network analyzers
|
https://en.wikipedia.org/wiki/Pintos
|
Pintos is computer software, a simple instructional operating system framework for the x86 instruction set architecture. It supports kernel threads, loading and running user programs, and a file system, but it implements all of these in a very simple way.
Pintos is currently used by multiple institutions, including UT Austin, UC Berkeley and Imperial College London, as an academic aid in Operating Systems class curriculums.
History
It was created at Stanford University by Ben Pfaff in 2004. It originated as a replacement for Not Another Completely Heuristic Operating System (Nachos), a similar system originally developed at UC Berkeley by Thomas E. Anderson, and was designed along similar lines.
Comparison to Nachos
Like Nachos, Pintos is intended to introduce undergraduates to concepts in operating system design and implementation by requiring them to implement significant portions of a real operating system, including thread and memory management and file system access. Pintos also teaches students valuable debugging skills.
Unlike Nachos, Pintos can run on actual x86 hardware, though it is often run atop an x86 emulator, such as Bochs or QEMU. Nachos, by contrast, runs as a user process on a host operating system, and targets the MIPS architecture (Nachos code must run atop a MIPS simulator). Pintos and its accompanying assignments are also written in the programming language C instead of C++ (used for original Nachos) or Java (used for Nachos 5.0j).
References
External links
Free software operating systems
X86 operating systems
Educational operating systems
Software using the BSD license
2004 software
|
https://en.wikipedia.org/wiki/ZRTP
|
ZRTP (composed of Z and Real-time Transport Protocol) is a cryptographic key-agreement protocol to negotiate the keys for encryption between two end points in a Voice over IP (VoIP) phone telephony call based on the Real-time Transport Protocol. It uses Diffie–Hellman key exchange and the Secure Real-time Transport Protocol (SRTP) for encryption. ZRTP was developed by Phil Zimmermann, with help from Bryce Wilcox-O'Hearn, Colin Plumb, Jon Callas and Alan Johnston and was submitted to the Internet Engineering Task Force (IETF) by Zimmermann, Callas and Johnston on March 5, 2006 and published on April 11, 2011 as .
Overview
ZRTP ("Z" is a reference to its inventor, Zimmermann; "RTP" stands for Real-time Transport Protocol) is described in the Internet Draft as a "key agreement protocol which performs Diffie–Hellman key exchange during call setup in-band in the Real-time Transport Protocol (RTP) media stream which has been established using some other signaling protocol such as Session Initiation Protocol (SIP). This generates a shared secret which is then used to generate keys and salt for a Secure RTP (SRTP) session." One of ZRTP's features is that it does not rely on SIP signaling for the key management, or on any servers at all. It supports opportunistic encryption by auto-sensing if the other VoIP client supports ZRTP.
This protocol does not require prior shared secrets or rely on a Public key infrastructure (PKI) or on certification authorities, in fact ephemeral Diffie–Hellman keys are generated on each session establishment: this allows the complexity of creating and maintaining a trusted third-party to be bypassed.
These keys contribute to the generation of the session secret, from which the session key and parameters for SRTP sessions are derived, along with previously shared secrets (if any): this gives protection against man-in-the-middle (MiTM) attacks, so long as the attacker was not present in the first session between the two endpoints.
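The underlying ephemeral Diffie–Hellman exchange can be illustrated with a toy computation. This C sketch uses a deliberately tiny prime and fixed "secrets" for readability; actual ZRTP uses large finite-field or elliptic-curve groups, fresh random secrets, and hash commitments:

#include <stdint.h>
#include <stdio.h>

/* Toy Diffie-Hellman over a tiny prime field -- illustration only. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t p = 2147483647; /* toy prime (2^31 - 1) */
    const uint64_t g = 7;          /* generator */
    uint64_t a = 123456;           /* Alice's ephemeral secret */
    uint64_t b = 654321;           /* Bob's ephemeral secret */

    uint64_t A = modpow(g, a, p);  /* Alice's public value */
    uint64_t B = modpow(g, b, p);  /* Bob's public value */

    /* Both sides derive the same shared secret, from which SRTP
     * keys and salts would then be derived. */
    printf("Alice: %llu\n", (unsigned long long)modpow(B, a, p));
    printf("Bob:   %llu\n", (unsigned long long)modpow(A, b, p));
    return 0;
}

In ZRTP itself the two public values are carried in the DHPart1 and DHPart2 messages, and the resulting shared secret is fed into a key derivation step together with any cached shared secrets.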
ZRTP can be u
|
https://en.wikipedia.org/wiki/Sum-product%20number
|
A sum-product number in a given number base is a natural number that is equal to the product of the sum of its digits and the product of its digits.
There are a finite number of sum-product numbers in any given base b. In base 10, there are exactly four sum-product numbers: 0, 1, 135, and 144.
Definition
Let n be a natural number. We define the sum-product function for base b > 1, F_b : ℕ → ℕ, to be the following:
F_b(n) = (Σ_{i=1}^{k} d_i) · (Π_{i=1}^{k} d_i),
where k is the number of digits in the number in base b, and
d_i = (n mod b^i − n mod b^(i−1)) / b^(i−1)
is the value of each digit of the number. A natural number n is a sum-product number if it is a fixed point for F_b, which occurs if F_b(n) = n. The natural numbers 0 and 1 are trivial sum-product numbers for all b, and all other sum-product numbers are nontrivial sum-product numbers.
For example, the number 144 in base 10 is a sum-product number, because 1 + 4 + 4 = 9, 1 × 4 × 4 = 16, and 9 × 16 = 144.
A natural number n is a sociable sum-product number if it is a periodic point for F_b, where F_b^k(n) = n for a positive integer k, and forms a cycle of period k. A sum-product number is a sociable sum-product number with k = 1, and an amicable sum-product number is a sociable sum-product number with k = 2.
All natural numbers n are preperiodic points for F_b, regardless of the base. This is because for any given digit count k, the minimum possible value of n is b^(k−1) and the maximum possible value of n is b^k − 1. The maximum possible digit sum is therefore k(b − 1) and the maximum possible digit product is (b − 1)^k. Thus, the function value is F_b(n) ≤ k(b − 1)^(k+1). This suggests that k(b − 1)^(k+1) ≥ n ≥ b^(k−1), or, dividing both sides by (b − 1)^(k−1), k(b − 1)² ≥ (b/(b − 1))^(k−1). Since b/(b − 1) > 1, this means that there will be a maximum value k where k(b − 1)² ≥ (b/(b − 1))^(k−1), because of the exponential nature of (b/(b − 1))^(k−1) and the linearity of k. Beyond this value k, F_b(n) < n always. Thus, there are a finite number of sum-product numbers, and any natural number is guaranteed to reach a periodic point or a fixed point less than b^k, making it a preperiodic point.
The number of iterations i needed for F_b^i(n) to reach a fixed point is the function's persistence of n, and is undefined if it never reaches a fixed point.
Any integer shown to be a sum-product number in a given base must, by definition, also be a Harshad number in that base.
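A brute-force search makes the finiteness concrete. The following C sketch searches base 10 up to 10^5 (a generous but unproven bound; completeness of the list rests on the argument above) and finds exactly the four base-10 sum-product numbers:

#include <stdint.h>
#include <stdio.h>

/* Sum-product function F_b(n): digit sum times digit product in base b.
 * By convention 0 is treated as a fixed point. */
static uint64_t sum_product(uint64_t n, uint64_t b)
{
    uint64_t sum = 0, prod = 1;
    if (n == 0)
        return 0;
    while (n > 0) {
        uint64_t d = n % b;
        sum += d;   /* any 0 digit forces prod, and hence F_b, to 0 */
        prod *= d;
        n /= b;
    }
    return sum * prod;
}

int main(void)
{
    for (uint64_t n = 0; n < 100000; n++)
        if (sum_product(n, 10) == n)
            printf("%llu\n", (unsigned long long)n); /* 0, 1, 135, 144 */
    return 0;
}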
Sum-product numbers and cycles of Fb for specif
|
https://en.wikipedia.org/wiki/EKA1
|
EKA1 (EPOC Kernel Architecture 1) is the first-generation kernel for the operating system Symbian OS. EKA1 originated in the earlier operating system EPOC. It offers preemptive computer multitasking and memory protection, but no real-time computing guarantees, and a single-threaded device driver model. It has largely been superseded by EKA2.
Much of EKA1 was developed by a single software engineer, Colly Myers, when he was working for Psion Software in the early 1990s. Myers went on to act as CEO for Symbian Ltd., when it was formed to license this kernel and associated operating system to mobile phone makers. He is now CEO of Issuebits Ltd.
See also
Psion (company)
Operating system kernels
Symbian OS
Microkernels
Computer-related introductions in 1989
|
https://en.wikipedia.org/wiki/EKA2
|
EKA2 (EPOC Kernel Architecture 2) is the second-generation Symbian platform real-time operating system kernel, which originated in the earlier operating system EPOC.
EKA2 began with a proprietary software license. In October 2009, it was released as free and open-source software under an Eclipse Public License. In April 2011, it was reverted to a proprietary license.
Like its predecessor, EKA1, it has preemptive multithreading and full memory protection. The main differences are:
Real-time guarantees: each application programming interface (API) call is fast, but more importantly, time-bound
Multiple threads inside the kernel, and outside
Pluggable memory models, allowing better support for later generations of ARM instruction set architecture.
A nanokernel which provides the most basic OS facilities upon which other personality layers can be built
The user interface of EKA2 is almost fully compatible with EKA1. EKA1 was not used after Symbian OS version 8.1, and was superseded in 2005.
The main advantage of EKA2 was its ability to run full telephone signalling protocol stacks. Previously, on Symbian phones, these had to run on a separate central processing unit (CPU). Such signalling stacks are very complex and rewriting them to work natively on Symbian OS is typically not an option. EKA2 thus allows personality layers to emulate the basic primitives of other operating systems, thus allowing existing signalling stacks to run largely unchanged.
Real-time guarantees are a prerequisite of signalling stacks, and also help with multimedia tasks. However, as with any RTOS, a full analysis of all threads is needed before any real-time guarantees can be offered to anything except the highest-priority thread; because higher priority threads may prevent lower-priority threads from running. Any multimedia task is likely to involve graphics, storage and/or networking activity, all of which are more likely to disrupt the stream than the kernel is.
Inside the kernel,
|
https://en.wikipedia.org/wiki/DES%20Challenges
|
The DES Challenges were a series of brute force attack contests created by RSA Security to highlight the lack of security provided by the Data Encryption Standard.
The Contests
The first challenge began in 1997 and was solved in 96 days by the DESCHALL Project.
DES Challenge II-1 was solved by distributed.net in 39 days in early 1998. The plaintext message being solved for was "The secret message is: Many hands make light work."
DES Challenge II-2 was solved in just 56 hours in July 1998, by the Electronic Frontier Foundation (EFF), with their purpose-built Deep Crack machine. EFF won $10,000 for their success, although their machine cost $250,000 to build. The contest demonstrated how quickly a rich corporation or government agency, having built a similar machine, could decrypt ciphertext encrypted with DES. The text was revealed to be "The secret message is: It's time for those 128-, 192-, and 256-bit keys."
DES Challenge III was a joint effort between distributed.net and Deep Crack. The key was found in just 22 hours 15 minutes in January 1999, and the plaintext was "See you in Rome (second AES Conference, March 22-23, 1999)".
Reaction
After the DES had been shown to be breakable, FBI director Louis Freeh told Congress, "That is not going to make a difference in a kidnapping case. It is not going to make a difference in a national security case. We don't have the technology or the brute force capability to get to this information."
It was not until special purpose hardware brought the time down below 24 hours that both industry and federal authorities had to admit that the DES was no longer viable. Although the National Institute of Standards and Technology started work on what became the Advanced Encryption Standard in 1997, they continued to endorse the DES as late as October 1999, with FIPS 46-3. However, Triple DES was preferred.
See also
RSA Factoring Challenge
RSA Secret-Key Challenge
References
Cryptography contests
Data Encryption Standard
Recurr
|
https://en.wikipedia.org/wiki/Agda%20%28programming%20language%29
|
Agda is a dependently typed functional programming language originally developed by Ulf Norell at Chalmers University of Technology with implementation described in his PhD thesis. The original Agda system was developed at Chalmers by Catarina Coquand in 1999. The current version, originally known as Agda 2, is a full rewrite, which should be considered a new language that shares a name and tradition.
Agda is also a proof assistant based on the propositions-as-types paradigm, but unlike Coq, has no separate tactics language, and proofs are written in a functional programming style. The language has ordinary programming constructs such as data types, pattern matching, records, let expressions and modules, and a Haskell-like syntax. The system has Emacs, Atom, and VS Code interfaces but can also be run in batch mode from the command line.
Agda is based on Zhaohui Luo's unified theory of dependent types (UTT), a type theory similar to Martin-Löf type theory.
Agda is named after the Swedish song "Hönan Agda", written by Cornelis Vreeswijk, which is about a hen named Agda. This alludes to the name of the theorem prover Coq, which was named after Thierry Coquand, Catarina Coquand's husband.
Features
Inductive types
The main way of defining data types in Agda is via inductive data types which are similar to algebraic data types in non-dependently typed programming languages.
Here is a definition of Peano numbers in Agda:
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ
Basically, it means that there are two ways to construct a value of type ℕ, representing a natural number. To begin, zero is a natural number, and if n is a natural number, then suc n, standing for the successor of n, is a natural number too.
Here is a definition of the "less than or equal" relation between two natural numbers:
data _≤_ : ℕ → ℕ → Set where
  z≤n : {n : ℕ} → zero ≤ n
  s≤s : {n m : ℕ} → n ≤ m → suc n ≤ suc m
The first constructor, z≤n, corresponds to the axiom that zero is less than o
|
https://en.wikipedia.org/wiki/Gross%20%28unit%29
|
In English and related languages, several terms involving the words "great" or "gross" relate to numbers involving a multiple of exponents of twelve (dozen):
A gross refers to a group of 144 items (a dozen dozen or a square dozen, 12²).
A great gross refers to a group of 1,728 items (a dozen gross or a cubic dozen, 12³).
A small gross or a great hundred refers to a group of 120 items (ten dozen, 10×12).
The term can be abbreviated gr. or gro., and dates from the early 15th century. It derives from the Old French grosse douzaine, meaning "large dozen". The continued use of these terms in measurement and counting represents the duodecimal number system. This has led groups such as the Dozenal Society of America to advocate for wider use of "gross" and related terms instead of the decimal system.
See also
Long hundred
References
Integers
Units of amount
|
https://en.wikipedia.org/wiki/Co-stimulation
|
Co-stimulation is a secondary signal which immune cells rely on to activate an immune response in the presence of an antigen-presenting cell. In the case of T cells, two stimuli are required to fully activate their immune response. During the activation of lymphocytes, co-stimulation is often crucial to the development of an effective immune response. Co-stimulation is required in addition to the antigen-specific signal from their antigen receptors.
T cell co-stimulation
T cells require two signals to become fully activated. A first signal, which is antigen-specific, is provided through the T cell receptor (TCR) which interacts with peptide-MHC molecules on the membrane of an antigen presenting cell (APC). A second signal, the co-stimulatory signal, is antigen nonspecific and is provided by the interaction between co-stimulatory molecules expressed on the membrane of the APC and the T cell. This interaction promotes and enhances the TCR signaling, but can also be bi-directional. The co-stimulatory signal is necessary for T cell proliferation, differentiation and survival. Activation of T cells without co-stimulation may lead to the unresponsiveness of the T cell (also called anergy), apoptosis or the acquisition of the immune tolerance.
The counterpart of the co-stimulatory signal is a (co-)inhibitory signal, where inhibitory molecules interact with different signaling pathways in order to arrest T cell activation. The best-known inhibitory molecules are CTLA-4 and PD-1, both used in cancer immunotherapy.
In T cell biology there are several co-stimulatory molecules from different protein families. The most studied are those belonging to the immunoglobulin superfamily (IgSF) (such as CD28, B7, ICOS, CD226 or CRTAM) and the TNF receptor superfamily (TNFRSF) (such as 4-1BB, OX40, CD27, GITR, HVEM, CD40, BAFFR, BAFF and others). Additionally, some co-stimulatory molecules belong to the TIM family, CD2/SLAM family or BTN/BTN-like family.
The surface expression of different co-stimulator
|
https://en.wikipedia.org/wiki/1728%20%28number%29
|
1728 is the natural number following 1727 and preceding 1729. It is a dozen gross, or one great gross (or grand gross). It is also the number of cubic inches in a cubic foot.
In mathematics
1728 is the cube of 12, and therefore equal to the product of the six divisors of 12 (1, 2, 3, 4, 6, 12). It is also the product of the first four composite numbers (4, 6, 8, and 9), which makes it a compositorial. As a cubic perfect power, it is also a highly powerful number that has a record value (18) between the product of the exponents (3 and 6) in its prime factorization.
It is also a Jordan–Pólya number, as it is a product of factorials: 1728 = (2!)³ · (3!)³.
1728 has twenty-eight divisors, which is a perfect count (as with 12, which has six divisors). It also has an Euler totient of 576, or 24², which divides 1728 thrice over.
1728 is an abundant and semiperfect number, as it is smaller than the sum of its proper divisors yet equal to the sum of a subset of its proper divisors.
It is a practical number as each smaller number is the sum of distinct divisors of 1728, and an integer-perfect number where its divisors can be partitioned into two disjoint sets with equal sum.
1728 is 3-smooth, since its only distinct prime factors are 2 and 3. This also makes 1728 a regular number, a class of numbers most useful in the context of powers of 60, the smallest number with twelve divisors.
1728 is also an untouchable number since there is no number whose sum of proper divisors is 1728.
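Several of these divisor-based claims are quick to verify by brute force; a short C check (trial division plus a gcd-based totient, illustration only):

#include <stdio.h>

int main(void)
{
    const int n = 1728;
    int divisors = 0;
    long proper_sum = 0;
    for (int d = 1; d <= n; d++) {
        if (n % d == 0) {
            divisors++;
            if (d < n)
                proper_sum += d;
        }
    }

    /* Euler totient by counting integers coprime to n. */
    int phi = 0;
    for (int k = 1; k <= n; k++) {
        int a = k, b = n;
        while (b != 0) { int t = a % b; a = b; b = t; }  /* gcd(k, n) */
        if (a == 1)
            phi++;
    }

    printf("divisors: %d\n", divisors);              /* 28, a perfect number */
    printf("proper divisor sum: %ld\n", proper_sum); /* 3352 > 1728: abundant */
    printf("totient: %d\n", phi);                    /* 576 = 24^2; 576 * 3 = 1728 */
    return 0;
}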
Many relevant calculations involving 1728 are computed in the duodecimal number system, in which it is represented as "1000".
Modular j-invariant
1728 occurs in the algebraic formula for the j-invariant of an elliptic curve, as a function over a complex variable τ on the upper half-plane ℋ:
j(τ) = 1728 · g₂(τ)³ / (g₂(τ)³ − 27·g₃(τ)²)
Inputting a value of 2i for τ, where i is the imaginary number, yields another cubic integer: j(2i) = 66³ = 287496.
In moonshine theory, the first few terms in the Fourier q-expansion of the normalized j-invariant expand as
j(τ) = q⁻¹ + 744 + 196884q + 21493760q² + ⋯
The Griess algebra (which contains
|
https://en.wikipedia.org/wiki/Apple%20II%20peripheral%20cards
|
The Apple II line of computers supported a number of Apple II peripheral cards. In an era before plug and play USB or Bluetooth connections, these were expansion cards that plugged into slots on the motherboard. They added to and extended the functionality of the base motherboard when paired with specialized software that enabled the computer to read the input/output of the devices on the other side of the cable (the peripheral) or to take advantage of chips on the board - as was the case with memory expansion cards.
All Apple II models except the Apple IIc had at least seven 50-pin expansion slots, labeled Slots 1 through 7. These slots could hold printed circuit board cards with double-sided edge connectors, 25 "fingers" on each side, with 100 mil (0.1 inch) spacing between centers. Slot 3 in an Apple IIe that has an 80-column card fitted (which is usually the case) and Slots 1 through 6 in a normally configured Apple IIgs are "virtually" filled with on-board devices, which means that the physical slots cannot be used at all, or only with certain specific cards, unless the conflicting "virtual" device is disabled.
In addition to the seven standard expansion slots, the following computers contained additional, largely special-purpose expansion slots:
Apple II and Apple II Plus: Slot 0 (50-pin, for the firmware card or the 16 kB Apple II Language Card)
Apple IIe: Auxiliary Slot (60-pin; primarily for 80-column display and memory expansion)
Apple IIgs: Memory Expansion Slot (40-pin)
Perhaps the most common cards found on early Apple II systems were the Disk II Controller Card, which allowed users of earlier Apple IIs to use the Apple Disk II, a 5¼ inch, 140 kB floppy disk drive; and the Apple 16K Language Card, which increased the base memory of late-model Apple II and standard Apple II Plus units from 48 kB to 64 kB. The Z-80 SoftCard, making the computer compatible with CP/M software, was also very popular.
Both Apple and dozens of third-party vendors created
|
https://en.wikipedia.org/wiki/Zest%20%28ingredient%29
|
Zest is a food ingredient that is prepared by scraping or cutting from the rind of unwaxed citrus fruits such as lemon, orange, citron, and lime. Zest is used to add flavor to foods.
In terms of fruit anatomy, the zest is obtained from the flavedo (exocarp), which is also referred to as zest. The flavedo and white pith (albedo) of a citrus fruit together make up its peel. The amounts of both flavedo and pith are variable among citrus fruits, and may be adjusted by the manner in which they are prepared. Citrus peel may be used fresh, dried, candied, or pickled in salt.
Preparation
For culinary use, a zester, grater, vegetable peeler, paring knife, or even a surform tool is used to scrape or cut zest from the fruit. Alternatively, the peel is sliced, then excess pith (if any) cut away.
The white portion of the peel under the zest (pith, albedo or mesocarp) may be unpleasantly bitter and is generally avoided by limiting the peeling depth. Some citrus fruits have so little white mesocarp that their peel can be used whole.
Variation between fruit
The zest and mesocarp vary with the genetics of the fruit. Fruits with peels that are almost all flavedo are generally mandarins; relatives of pomelos and citrons tend to have thicker mesocarp. The mesocarp of pomelo relatives (grapefruit, orange, etc.) is generally more bitter; the mesocarp of citron relatives (Mexican and Persian limes, alemows, etc.) is milder. The lemon is a hybrid of pomelo, citron, and mandarin. The mesocarp is also edible, and is used to make succade.
Uses
Zest is often used to add flavor to different pastries and sweets, such as pies (e.g., lemon meringue pie), cakes, cookies, biscuits, puddings, confectionery, candy and chocolate. Zest also is added to certain dishes (including ossobuco alla milanese), marmalades, sauces, sorbets and salads.
Zest is a key ingredient in a variety of sweet and sour condiments, including lemon pickle, lime chutney, and marmalade. Lemon liqueurs and liquors such as
|
https://en.wikipedia.org/wiki/Open%20Source%20Tripwire
|
Open Source Tripwire is a free software security and data integrity tool for monitoring and alerting on specific file change(s) on a range of systems. The project is based on code originally contributed by Tripwire, Inc. in 2000.
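The core idea, checking file hashes against a stored baseline, can be sketched in a few lines. The following is a minimal conceptual illustration in Python; it is not Open Source Tripwire's actual code, database format, or command-line interface:

```python
# Minimal file-integrity sketch: record SHA-256 hashes, then report changes.
import hashlib, json, os, sys
from pathlib import Path

def snapshot(paths):
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths if os.path.isfile(p)}

def report(baseline, current):
    for path in sorted(set(baseline) | set(current)):
        old, new = baseline.get(path), current.get(path)
        if old != new:
            status = "added" if old is None else "removed" if new is None else "MODIFIED"
            print(f"{status}: {path}")

if __name__ == "__main__":
    mode, paths = sys.argv[1], sys.argv[2:]
    db = Path("baseline.json")
    if mode == "init":          # record the trusted baseline
        db.write_text(json.dumps(snapshot(paths)))
    else:                       # "check": compare the current state to the baseline
        report(json.loads(db.read_text()), snapshot(paths))
```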
See also
AIDE
Host-based intrusion detection system comparison
OSSEC
Samhain
References
External links
Tripwire, Inc.
Free security software
Intrusion detection systems
Linux security software
|
https://en.wikipedia.org/wiki/Tripwire%20%28company%29
|
Tripwire, Inc. is a software company based in Portland, Oregon, that focuses on security and compliance automation. It is a subsidiary of technology company Fortra.
History
Tripwire's intrusion detection software was created in the 1990s by Purdue University graduate student Gene Kim and his professor Gene Spafford. In 1997, Gene Kim co-founded Tripwire, Inc. with rights to the Tripwire name and technology, and produced a commercial version, Tripwire for Servers.
In 2000, Tripwire released Open Source Tripwire.
In 2005, the firm released Tripwire Enterprise, a product for configuration control that detects, assesses, reports and remediates file and configuration changes. In January 2010, it announced the release of Tripwire Log Center, a log and security information and event management (SIEM) software that stores, correlates and reports log and security event data. The two products can be integrated to enable correlation of change and event data. On August 21, 2009, the firm acquired Activeworx technologies from CrossTec Corporation.
Revenues grew to $74 million in 2009. In October 2009, the company had 261 employees; that number grew to 336 by June 2010.
By May–June 2010, the company had over 5,500 customers and had announced that it had filed a registration statement with the Securities and Exchange Commission for a proposed initial public offering of its common stock. A year later, the company announced its sale to the private equity firm Thoma Bravo, ending its $86 million IPO plans. CEO Jim Johnson cited the firm's failure to reach the $100 million revenue milestone in 2010 as well as changing IPO market expectations as reasons for not going through with the IPO. The day following the acquisition, the company laid off about 50 of its 350 employees.
Tripwire acquired nCircle, which focused on asset discovery and vulnerability management, in 2013.
In December 2014, Belden announced plans to buy Tripwire for $710 million. The acquisition was completed
|
https://en.wikipedia.org/wiki/Stuart%20Schreiber
|
Stuart L. Schreiber (born 6 February 1956) is a scientist at Harvard University and co-founder of the Broad Institute. He has been active in chemical biology, especially the use of small molecules as probes of biology and medicine. Small molecules are the molecules of life most associated with dynamic information flow; these work in concert with the macromolecules (DNA, RNA, proteins) that are the basis for inherited information flow.
Education and training
Schreiber obtained a Bachelor of Science degree in chemistry from the University of Virginia in 1977, after which he entered Harvard University as a graduate student in chemistry. He joined the research group of Robert B. Woodward and after Woodward's death continued his studies under the supervision of Yoshito Kishi. In 1980, he joined the faculty of Yale University as an assistant professor in chemistry, and in 1988 he moved to Harvard University as the Morris Loeb Professor.
Work in 1980s and 1990s
Schreiber started his research work in organic synthesis, focusing on concepts such as the use of [2 + 2] photocycloadditions to establish stereochemistry in complex molecules, the fragmentation of hydroperoxides to produce macrolides, ancillary stereocontrol, group selectivity and two-directional synthesis. Notable accomplishments include the total syntheses of complex natural products such as talaromycin B, asteltoxin, avenaciolide, gloeosporone, hikizimicin, mycoticin A, epoxydictymene and the immunosuppressant FK-506.
Following his work on the FK506-binding protein FKBP12 in 1988, Schreiber reported that the small molecules FK506 and cyclosporin inhibit the activity of the phosphatase calcineurin by forming the ternary complexes FKBP12-FK506-calcineurin and cyclophilin-ciclosporin-calcineurin. This work, together with work by Gerald Crabtree at Stanford University concerning the NFAT proteins, led to the elucidation of the calcium-calcineurin-NFAT signaling pathway. The Ras-Raf-MAPK pathway was not elucidate
|
https://en.wikipedia.org/wiki/Martingale%20difference%20sequence
|
In probability theory, a martingale difference sequence (MDS) is related to the concept of the martingale. A stochastic series X is an MDS if its expectation with respect to the past is zero. Formally, consider an adapted sequence {X_t, F_t} on a probability space (Ω, F, P). Then X_t is an MDS if it satisfies the following two conditions:

E[|X_t|] < ∞, and

E[X_t | F_{t−1}] = 0, almost surely,

for all t. By construction, this implies that if Y_t is a martingale, then X_t = Y_t − Y_{t−1} will be an MDS—hence the name.
The MDS is an extremely useful construct in modern probability theory because it implies much milder restrictions on the memory of the sequence than independence, yet most limit theorems that hold for an independent sequence will also hold for an MDS.
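As a concrete illustration (an assumed toy construction, not drawn from the references below), the sequence X_t = Z_t · |Z_{t−1}| with Z_t i.i.d. standard normal is an MDS that is nevertheless not independent:

```python
# Simulate an MDS whose magnitudes are serially dependent.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal(100_001)
X = Z[1:] * np.abs(Z[:-1])   # E[X_t | past] = |Z_{t-1}| * E[Z_t] = 0

print("sample mean:           ", X.mean())                          # ~ 0
print("corr(X_t, X_{t-1}):    ", np.corrcoef(X[1:], X[:-1])[0, 1])  # ~ 0
print("corr(|X_t|, |X_{t-1}|):",
      np.corrcoef(np.abs(X[1:]), np.abs(X[:-1]))[0, 1])             # > 0
# The positive last correlation shows a dependence that independence would
# forbid, yet the zero-conditional-mean (MDS) property still holds.
```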
A special case of an MDS, denoted {X_t, F_t} for t ≥ 0, is known as the innovation sequence of S_n, where S_n and F_n correspond to a random walk and to the filtration of the random process. In probability theory, the innovation series is used to emphasize the generality of the Doob representation. In signal processing, the innovation series is used to introduce the Kalman filter. The main difference between the two uses of the innovation terminology lies in the applications: the latter aims to introduce the new information carried by each sample into the model by random sampling.
References
James Douglas Hamilton (1994), Time Series Analysis, Princeton University Press.
James Davidson (1994), Stochastic Limit Theory, Oxford University Press.
Martingale theory
random walk
filtration
Doob decomposition theorem
signal processing
Kalman filter
Innovation (signal processing)
|
https://en.wikipedia.org/wiki/Tyrannosaurus%20in%20popular%20culture
|
Tyrannosaurus rex is unique among dinosaurs in its place in modern culture; paleontologist Robert Bakker has called it "the most popular dinosaur among people of all ages, all cultures, and all nationalities". Paleontologists Mark Norell and Lowell Dingus have likewise called it "the most famous dinosaur of all times" (Lowell Dingus and Mark Norell, Barnum Brown: The Man Who Discovered Tyrannosaurus rex, Los Angeles: University of California Press, 2010, p. 94). Paleoartist Gregory S. Paul has called it "the theropod. [...] This is the public's favorite dinosaur [...] Even the formations it is found in have fantastic names like Hell Creek and Lance." Other paleontologists agree and note that whenever a museum erects a new skeleton or brings in an animatronic model, visitor numbers go up. "Jurassic Park and King Kong would not have been the same without it." In the public mind, T. rex sets the standard of what a dinosaur should be. Science writer Riley Black similarly states, "In all of prehistory, there is no animal that commands our attention quite like Tyrannosaurus rex, the tyrant lizard king. Since the time this dinosaur was officially named in 1905, the enormous carnivore has stood as the ultimate dinosaur."
Tyrannosaurus was first discovered by paleontologist Barnum Brown in the badlands of Hell Creek, Montana, in 1902 and has since been frequently represented in film and on television, in literature, on the Internet and in many kinds of games. Brown himself, despite having discovered many other prehistoric animals for the American Museum of Natural History before and after, always referred to Tyrannosaurus rex as "my favorite child". In Brown's own words, Tyrannosaurus rex was indeed "king of the period and monarch of its race... He is now the dominant figure in the Cretaceous Hall to awe and inspire young boys when they grow up."
General impact
On finding Tyrannosaurus, Barnum Brown wrote to Henry Fairfield Osborn, his employer and the Pr
|
https://en.wikipedia.org/wiki/Apex%20%28radio%20band%29
|
Apex radio stations (also known as skyscraper and pinnacle) was the name commonly given to a short-lived group of United States broadcasting stations, which were used to evaluate transmitting on frequencies that were much higher than the ones used by standard amplitude modulation (AM) and shortwave stations. Their name came from the tall height of their transmitter antennas, which were needed because coverage was primarily limited to local line-of-sight distances. These stations were assigned to what at the time were described as "ultra-high shortwave" frequencies, between roughly 25 and 44 MHz. They employed amplitude modulation (AM) transmissions, although in most cases using a wider bandwidth than standard broadcast band AM stations, in order to provide high fidelity sound with less static and distortion.
In 1937 the Federal Communications Commission (FCC) formally allocated an Apex station band, consisting of 75 transmitting frequencies running from 41.02 to 43.98 MHz. These stations were never given permission to operate commercially, although they were allowed to retransmit programming from standard AM stations. Most operated under experimental licenses; however, this band was the first to include a formal "non-commercial educational" station classification.
The FCC eventually concluded that frequency modulation (FM) transmissions were superior, and the Apex band was eliminated effective January 1, 1941, in order to make way for the creation of the original FM band, assigned to 42 to 50 MHz.
Initial development
During the 1920s and 1930s, radio engineers and government regulators investigated the characteristics of transmitting frequencies higher than those currently in use. In the United States, by 1930 the original AM broadcasting band consisted of 96 frequencies from 550 to 1500 kHz, with a 10 kHz spacing between adjacent assignments. On this band, a station's coverage during the daytime consisted exclusively of its groundwave signal, which for the most
|
https://en.wikipedia.org/wiki/Mass%20transfer%20coefficient
|
In engineering, the mass transfer coefficient is a diffusion rate constant that relates the mass transfer rate, mass transfer area, and concentration change as driving force:

ṅ_A = k_c · A · ΔC_A

Where:
k_c is the mass transfer coefficient [mol/(s·m²)/(mol/m³)], or m/s
ṅ_A is the mass transfer rate [mol/s]
A is the effective mass transfer area [m²]
ΔC_A is the driving force concentration difference [mol/m³].
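A worked example of the relation above (all numbers are assumed for illustration):

```python
# Solve n = k_c * A * dC for the mass transfer coefficient k_c.
n_dot = 0.50   # mass transfer rate, mol/s
A = 2.0        # effective mass transfer area, m^2
dC = 10.0      # driving-force concentration difference, mol/m^3

k_c = n_dot / (A * dC)
print(f"k_c = {k_c} m/s")   # 0.025 m/s
```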
This can be used to quantify the mass transfer between phases, and between immiscible and partially miscible fluid mixtures (or between a fluid and a porous solid). Quantifying mass transfer allows for the design and manufacture of separation process equipment that can meet specified requirements and for estimating what will happen in real-life situations (e.g., a chemical spill).
Mass transfer coefficients can be estimated from many different theoretical equations, correlations, and analogies that are functions of material properties, intensive properties and flow regime (laminar or turbulent flow). Selection of the most applicable model is dependent on the materials and the system, or environment, being studied.
Mass transfer coefficient units
(mol/s)/(m2·mol/m3) = m/s
Note, the units will vary based upon which units the driving force is expressed in. The driving force shown here as ΔC_A is expressed in units of moles per unit of volume, but in some cases the driving force is represented by other measures of concentration with different units. For example, the driving force may be a difference in partial pressures when dealing with mass transfer in a gas phase, and thus use units of pressure.
See also
Mass transfer
Separation process
Sieving coefficient
References
Transport phenomena
|
https://en.wikipedia.org/wiki/LabWindows/CVI
|
LabWindows/CVI (CVI is short for C for Virtual Instrumentation) is an ANSI C programming environment for test and measurement developed by National Instruments. The program was originally released as LabWindows for DOS in 1987, but was soon revised and renamed for the Microsoft Windows platform. The current version of LabWindows/CVI (commonly referred to as CVI) is 2020.
LabWindows/CVI uses the same libraries and data-acquisition modules as the better-known National Instruments product LabVIEW and is thus highly compatible with it.
LabVIEW is targeted more at domain experts and scientists, while CVI is aimed more at software engineers who are more comfortable with text-based languages such as C.
Release history
Starting with LabWindows/CVI 8.0, major versions are released around the first week of August, to coincide with the annual National Instruments conference NI Week, and followed by a bug-fix release the following February.
In 2009, National Instruments started to name the releases after the year in which they are released. The bugfix is called a Service Pack (for instance, the 2009 Service Pack 1 release was published in February 2010).
See also
National Instruments
References
Integrated development environments
Domain-specific programming languages
C (programming language) compilers
Data analysis software
Numerical software
Cross-platform software
|
https://en.wikipedia.org/wiki/Grand%20Theft%20Auto%20clone
|
A Grand Theft Auto clone (often shortened to GTA clone) is a subgenre of open world action-adventure video games, characterized by their likeness to the Grand Theft Auto series in either gameplay, or overall design. In these types of open world games, players may find and use a variety of vehicles and weapons while roaming freely in an open world setting. The objective of Grand Theft Auto clones is to complete a sequence of core missions involving driving and shooting, but often side-missions and minigames are added to improve replay value. The storylines of games in this subgenre typically have strong themes of crime, violence and other controversial elements such as drugs and sexually explicit content.
The subgenre has its origins in open world action adventure games popularized in Europe (and particularly the United Kingdom) throughout the 1980s and 1990s. The release of Grand Theft Auto (1997) marked a major commercial success for open-ended game design in North America, and featured a more marketable crime theme. But it was the popularity of its 3D sequel Grand Theft Auto III in 2001 that led to the widespread propagation of a more specific set of gameplay conventions consistent with a subgenre. The subgenre now includes many games from different developers all over the world where the player can control wide ranges of vehicles and weapons. The subgenre has evolved with greater levels of environmental detail and more realistic behaviors.
As usage of the term "clone" often has a negative connotation and can be seen as controversial, reviewers have come up with other names for the subgenre. Similar terminology for other genres, such as "Donkey Kong-type" and "Doom clone", has given way to more neutral language. Names such as "sandbox game," however, are applied to a wider range of games that do not share key features of the Grand Theft Auto series.
Definition
A Grand Theft Auto clone is a video game that falls within the genre populari
|
https://en.wikipedia.org/wiki/War%20in%20Middle%20Earth
|
War in Middle Earth is a real-time strategy game released for the ZX Spectrum, MSX, Commodore 64, Amstrad CPC, MS-DOS, Amiga, Apple IIGS, and Atari ST in 1988 by Virgin Mastertronic on the Melbourne House label.
The game combines both a large-scale army unit level and a small-scale character level. All of the action happens simultaneously in the game world, and places can be viewed either from the map or at ground level. Individual characters can also be seen in larger battles (in which they either survive or die). If a battle involves fewer than approximately 100 units, it can be watched at ground level; otherwise it is displayed only numerically. At ground level, characters can acquire objects and talk with non-player characters (such as Radagast or Tom Bombadil).
Reception
The game was reviewed in 1989 in Dragon #147 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 3 out of 5 stars. Computer Gaming World gave the game a mixed review, noting that, although it faithfully recreates the events of the books, genuine strategy is lacking and the game plays very similarly on subsequent playthroughs. Compute!'s review was more positive, only criticizing an anticlimactic ending to "an otherwise impressive game" that was "faithful to the Middle Earth story line".
The Spanish magazine Microhobby rated the game with the following scores: Originality: 80% Graphics: 70% Motion: - Sound: 50% Difficulty: 100% Addiction: 80%
Reviews
Computer and Video Games (Mar, 1989)
ACE (Advanced Computer Entertainment) (May, 1989)
Commodore User (Apr, 1989)
Your Sinclair (Apr, 1989)
Info (Nov, 1989)
Crash! (Mar, 1989)
Zzap! (Apr, 1989)
Power Play (Mar, 1989)
The Games Machine (Apr, 1989)
Amstrad Action (Mar, 1989)
ASM (Aktueller Software Markt) (Feb, 1989)
Jeux & Stratégie #58
References
External links
War in Middle-earth at World of Spectrum
1988 video games
Amstrad CPC games
Amiga games
Apple IIGS games
Atari ST games
Commodore 64 games
MSX games
|
https://en.wikipedia.org/wiki/Of%20Man%20and%20Manta
|
Of Man and Manta is a trilogy of science fiction novels written by Piers Anthony. It consists of the three books: Omnivore (1968), Orn (1970), and OX (1975).
Omnivore has as its frame the investigation of the deaths of eighteen travelers from Earth to the distant planet Nacre. Nacre is seen through the eyes of three surviving scientist-explorers: Cal, Veg, and Aquilon.
The planet Nacre's dominant species are fungi, including the intelligent mantas. The mantas are soft-bodied creatures capable of high speeds and flight, superficially resembling manta rays. They are carnivores who farm the one extant herbivore species by protecting them from the voracious omnivore species. The planet is notable for its thick atmosphere, which allows flight to be performed with less energy, and permits the existence of air-borne phytoplankton. The herbivores eat the plankton, and the omnivores eat anything they can. The human characters' diets play an important role in their interaction with the native species. Aquilon eats a normal human diet—she is an omnivore. Veg is a vegetarian. Cal is forced to drink blood to survive, due to a medical condition.
Orn involves travel by the scientists and mantas into a parallel dimension they dub Paleo, resembling the distant past of Earth, where they encounter dinosaur species and an intelligent flightless bird called Orn. Orn has the ability of genetic memory, able to remember anything that happened to an ancestor prior to the time of their reproduction. Much of the plot conflict stems from the love triangle between the protagonists and the mysterious motives of a cybernetically-augmented government agent sent along to monitor their progress.
OX involves the three scientists attempting to return to Earth from another dimension inhabited by hostile machines. Interlopers from other realities (using technology similar to that of the scientists' government) guide and hamper the explorers. A secondary story tells of a multidimensional cellular aut
|
https://en.wikipedia.org/wiki/Paradox%20of%20enrichment
|
The paradox of enrichment is a term from population ecology coined by Michael Rosenzweig in 1971. He described an effect in six predator–prey models where increasing the food available to the prey caused the predator's population to destabilize. A common example is that if the food supply of a prey such as a rabbit is overabundant, its population will grow unbounded and cause the predator population (such as a lynx) to grow unsustainably large. That may result in a crash in the population of the predators and possibly lead to local eradication or even species extinction.
The term 'paradox' has been used since then to describe this effect in slightly conflicting ways. The original sense was one of irony; by attempting to increase the carrying capacity in an ecosystem, one could fatally imbalance it. Since then, some authors have used the word to describe the difference between modelled and real predator–prey interactions.
Rosenzweig used ordinary differential equation models to describe changes in prey populations. Enrichment was taken to be increasing the prey carrying capacity and showing that the prey population destabilized, usually into a limit cycle.
The cycling behavior after destabilization was more thoroughly explored in a subsequent paper (May 1972) and discussion (Gilpin and Rosenzweig 1972).
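A minimal numerical sketch of this destabilization uses the Rosenzweig–MacArthur predator–prey model with a Holling type II functional response; the parameter values below are illustrative assumptions, not Rosenzweig's original ones:

```python
# Rosenzweig–MacArthur model: prey N with logistic growth, predator P with
# a Holling type II (saturating) functional response.
import numpy as np
from scipy.integrate import solve_ivp

r, a, B, e, m = 1.0, 1.0, 0.3, 0.5, 0.2  # growth, attack, half-saturation, efficiency, mortality

def rm_model(t, y, K):
    N, P = y
    predation = a * N * P / (N + B)
    return [r * N * (1 - N / K) - predation, e * predation - m * P]

for K in (0.5, 1.5):   # carrying capacity below vs. above the Hopf threshold (~0.7 here)
    sol = solve_ivp(rm_model, (0, 400), [0.2, 0.2], args=(K,),
                    dense_output=True, rtol=1e-8)
    prey = sol.sol(np.linspace(300, 400, 2000))[0]   # prey trajectory after transients
    print(f"K={K}: late-time prey range [{prey.min():.3f}, {prey.max():.3f}]")
# Enrichment (larger K) turns a narrow range (stable point) into wide swings (limit cycle).
```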
Support and possible solutions to the paradox
Many studies have been done on the paradox of enrichment since Rosenzweig. There is empirical support for the paradox of enrichment, mainly from small-scale laboratory experiments, but limited support from field observations. Exceptions, as summarised by Roy and Chattopadhyay, include:
Inedible prey: if there are multiple prey species and not all are edible, some may absorb nutrients and stabilise cyclicity.
Invulnerable prey: even with a single prey species, if there is a degree of temporal or spatial refuge (the prey can hide from the predator), destabilisation may not happen.
Unpalatable prey: if prey do n
|
https://en.wikipedia.org/wiki/Histone%20methylation
|
Histone methylation is a process by which methyl groups are transferred to amino acids of histone proteins that make up nucleosomes, which the DNA double helix wraps around to form chromosomes. Methylation of histones can either increase or decrease transcription of genes, depending on which amino acids in the histones are methylated, and how many methyl groups are attached. Methylation events that weaken chemical attractions between histone tails and DNA increase transcription because they enable the DNA to uncoil from nucleosomes so that transcription factor proteins and RNA polymerase can access the DNA. This process is critical for the regulation of gene expression that allows different cells to express different genes.
Function
Histone methylation, as a mechanism for modifying chromatin structure, is associated with stimulation of neural pathways known to be important for the formation of long-term memories and learning. Histone methylation is crucial for almost all phases of animal embryonic development.
Animal models have shown methylation and other epigenetic regulation mechanisms to be associated with conditions of aging, neurodegenerative diseases, and intellectual disability (Rubinstein–Taybi syndrome, X-linked intellectual disability). Misregulation of H3K4, H3K27, and H4K20 is associated with cancers. This modification alters the properties of the nucleosome and affects its interactions with other proteins, particularly in regards to gene transcription processes.
Histone methylation can be associated with either transcriptional repression or activation. For example, trimethylation of histone H3 at lysine 4 (H3K4me3) is an active mark for transcription and is upregulated in hippocampus one hour after contextual fear conditioning in rats. However, dimethylation of histone H3 at lysine 9 (H3K9me2), a signal for transcriptional silencing, is increased after exposure to either the fear conditioning or a novel environment alone.
Methylation of some lysine
|
https://en.wikipedia.org/wiki/Rental%20utilization
|
Utilization is the primary method by which tool rental companies measure asset performance. In its most basic form it measures the actual revenue earned by assets against the potential revenue they could have earned.
Calculations
Rental utilization is divided into a number of different calculations, and not all companies work precisely the same way. In general terms, however, there are two key calculations: the physical utilization of the asset, which is measured as the number of days actually rented against the number of days available for rental (this may also be measured in hours for certain types of equipment); and the financial utilization of the asset (referred to in North America as $ utilization), which is measured as the rental revenue achieved over a period of time against the potential revenue that could have been achieved at a target or standard, non-discounted rate. Physical utilization is also sometimes referred to as spot utilization when a rental company looks at its utilization of assets at a single moment in time (e.g. now, 9 am today, etc.).
Utilization calculations may be varied based on many different factors. For example:
A company with equipment which requires preventative maintenance every two weeks may decide that the number of available days in the month is decreased, as the equipment will be unavailable due to maintenance for 2 days out of each month.
Some rental businesses give "free days" on rental contract billing processes, for example on a national or public holiday, and therefore the equipment does not earn any money on those days, even though it is physically on rent.
Some companies charge minimum rates: for example, you may rent an excavator for one day but be charged a three-day minimum. The physical utilization will therefore be 100% on that day, but the financial utilization is actually 300%, as three days' revenue was earned for one day's work.
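The two measures, including the minimum-billing example above, can be illustrated with a short sketch (all rates and day counts are assumed):

```python
# Physical vs. financial utilization for a single asset.
def physical_utilization(days_rented, days_available):
    return days_rented / days_available

def financial_utilization(actual_revenue, potential_revenue):
    return actual_revenue / potential_revenue

# A 30-day month with 2 days lost to preventative maintenance, rented for 14 days.
print(f"physical:  {physical_utilization(14, 30 - 2):.0%}")          # 50%
# Standard rate $300/day; a 1-day rental billed at a three-day minimum.
print(f"financial: {financial_utilization(3 * 300, 1 * 300):.0%}")   # 300% on that day
```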
Rental Software is normally required to assist management teams
|
https://en.wikipedia.org/wiki/Dip-coating
|
Dip coating is an industrial coating process which is used, for example, to manufacture bulk products such as coated fabrics and condoms and specialised coatings for example in the biomedical field. Dip coating is also commonly used in academic research, where many chemical and nano material engineering research projects use the dip coating technique to create thin-film coatings.
The earliest dip-coated products may have been candles. For flexible laminar substrates such as fabrics, dip coating may be performed as a continuous roll-to-roll process. For coating a 3D object, it may simply be inserted and removed from the bath of coating. For condom-making, a former is dipped into the coating. For some products, such as early methods of making candles, the process is repeated many times, allowing a series of thin films to bulk up to a relatively thick final object.
The final product may incorporate the substrate and the coating, or the coating may be peeled off to form an object which consists solely of the dried or solidified coating, as in the case of a condom.
As a popular alternative to spin coating, dip-coating methods are frequently employed to produce thin films from sol-gel precursors for research purposes, where they are generally used for applying films onto flat or cylindrical substrates.
Process
The dip-coating process can be separated into five stages:
Immersion: The substrate is immersed in the solution of the coating material at a constant speed (preferably jitter-free).
Start-up: The substrate has remained inside the solution for a while and is starting to be pulled up.
Deposition: The thin layer deposits itself on the substrate while it is pulled up. The withdrawal is carried out at a constant speed to avoid any jitter. The speed determines the thickness of the coating (faster withdrawal gives a thicker coating).
Drainage: Excess liquid will drain from the surface.
Evaporation: The solvent evaporates from the liquid, forming the thin lay
|
https://en.wikipedia.org/wiki/Evil%20number
|
In number theory, an evil number is a non-negative integer that has an even number of 1s in its binary expansion. These numbers give the positions of the zero values in the Thue–Morse sequence, and for this reason they have also been called the Thue–Morse set. Non-negative integers that are not evil are called odious numbers.
Examples
The first evil numbers are:
0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39 ...
Equal sums
The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums.
As 19th-century mathematician Eugène Prouhet showed, the partition into evil and odious numbers of the numbers from to , for any , provides a solution to the Prouhet–Tarry–Escott problem of finding sets of numbers whose sums of powers are equal up to the th power.
In computer science
In computer science, an evil number is said to have even parity.
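In code, the parity test is a one-liner; the following sketch regenerates the sequence above:

```python
# An integer is evil when its binary expansion contains an even number of 1s.
def is_evil(n: int) -> bool:
    return bin(n).count("1") % 2 == 0

print([n for n in range(40) if is_evil(n)])
# [0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39]
```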
References
Integer sequences
|
https://en.wikipedia.org/wiki/House%20with%20two%20rooms
|
House with two rooms or Bing's house is a particular contractible, 2-dimensional simplicial complex that is not collapsible. The name was given by R. H. Bing.
The house is made of 2-dimensional panels, and has two rooms. The upper room may be entered from the bottom face, while the lower room may be entered from the upper face. There are two small panels attached to the tunnels between the rooms, which make this simplicial complex contractible.
See also
Dogbone space
Dunce hat
List of topologies
External links
Bing's house with two rooms at Info Shako
A 3D model of Bing's house - the model can be visualized by using anaglyph glasses
A printable 3D model of Bing's house at Thingiverse
References
Low-dimensional topology
|
https://en.wikipedia.org/wiki/Nuclear%20run-on
|
A nuclear run-on assay is conducted to identify the genes that are being transcribed at a certain time point. Approximately one million cell nuclei are isolated and incubated with labeled nucleotides, and genes in the process of being transcribed are detected by hybridization of extracted RNA to gene-specific probes on a blot. Garcia-Martinez et al. (2004) developed a protocol for the yeast S. cerevisiae (genomic run-on, GRO) that allows the calculation of transcription rates (TRs) for all yeast genes and the estimation of mRNA stabilities for all yeast mRNAs.
Alternative microarray methods have recently been developed, mainly PolII RIP-chip: RNA immunoprecipitation of RNA polymerase II with phosphorylated C-terminal domain directed antibodies and hybridization on a microarray slide or chip (the word chip in the name stems from "ChIP-chip" where a special Affymetrix GeneChip was required). A comparison of methods based on run-on and ChIP-chip has been made in yeast (Pelechano et al., 2009). A general correspondence of both methods has been detected but GRO is more sensitive and quantitative. It has to be considered that run-on only detects elongating RNA polymerases whereas ChIP-chip detects all present RNA polymerases, including backtracked ones.
Attachment of new RNA polymerase to genes is prevented by inclusion of sarkosyl. Therefore only genes that already have an RNA polymerase will produce labeled transcripts. RNA transcripts that were synthesized before the addition of the label will not be detected as they will lack the label. These run on transcripts can also be detected by purifying labeled transcripts by using antibodies that detect the label and hybridizing these isolated transcripts with gene expression arrays or by next generation sequencing (GRO-Seq).
Run-on assays have been largely supplanted by global run-on assays that use next-generation DNA sequencing as a readout platform. These assays are known as GRO-Seq and provide an incredibly detailed view
|
https://en.wikipedia.org/wiki/Homoiconicity
|
In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is a property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language, and thus the program's internal representation can be inferred just by reading the program itself. This property is often summarized by saying that the language treats code as data.
In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself. This makes metaprogramming easier than in a language without this property: reflection in the language (examining the program's entities at runtime) depends on a single, homogeneous structure, and it does not have to handle several different structures that would appear in a complex syntax. Homoiconic languages typically include full support of syntactic macros, allowing the programmer to express transformations of programs in a concise way.
A commonly cited example is Lisp, which was created to allow for easy list manipulations and where the structure is given by S-expressions that take the form of nested lists, and can be manipulated by other Lisp code. Other examples are the programming languages Clojure (a contemporary dialect of Lisp), Rebol (also its successor Red), Refal, Prolog, and possibly Julia (see the section “Implementation methods” for more details).
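The idea can be mimicked in any language with suitable literals. The sketch below uses nested Python lists to play the role of S-expressions (an illustration of the concept only; Python itself is not usually considered homoiconic): the "program" is an ordinary data structure that other code can inspect, transform, and evaluate.

```python
# Evaluate a Lisp-like expression represented as plain nested lists.
def evaluate(expr):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    if isinstance(expr, list):
        op, *args = expr
        return ops[op](*(evaluate(a) for a in args))
    return expr  # a literal number

program = ["+", 1, ["*", 2, 3]]   # (+ 1 (* 2 3)) as plain data

# Metaprogramming: rewrite every (* x y) node into (+ x y) by walking the data.
def swap_mul_for_add(expr):
    if isinstance(expr, list):
        op, *args = expr
        op = "+" if op == "*" else op
        return [op] + [swap_mul_for_add(a) for a in args]
    return expr

print(evaluate(program))                    # 7
print(evaluate(swap_mul_for_add(program)))  # 6
```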
History
The term first appeared in connection with the TRAC programming language, developed by Calvin Mooers:
The last sentence above is annotated with footnote 4, which gives credit for the origin of the term:
The researchers implicated in this quote might be neurophysiologist and cybernetician Warren Sturgis McCulloch (note the difference in the surname from the note) and philosopher, logician and mathematician Charles Sanders Peirce. Peirce indeed used the term "icon" in his semiotic theory. According to Peirce, there are th
|
https://en.wikipedia.org/wiki/Deltic%20Preservation%20Society
|
The Deltic Preservation Society is a railway preservation group based in England. The society is dedicated to the preservation and restoration of the remaining Class 55 "Deltic" diesel locomotives operated by British Rail from the 1960s to the 1980s.
Formation
The Deltic Preservation Society (DPS) was founded in 1977 following the entry into service of the Class 43 High Speed Train. A group of Class 55 enthusiasts made the decision to join together to ensure that a working locomotive was kept running, forming the DPS to raise funds to this end. By 1982, when the Class 55 was withdrawn, the Society numbered over 1,500 members, with the result that it was able to purchase two locomotives, D9009/55009 (Alycidon) and D9019/55019 (Royal Highland Fusilier), from British Rail. These two units were moved immediately from Doncaster Works and put into service on the North Yorkshire Moors Railway. A third locomotive, D9015/55015 (Tulyar) was added to the inventory in 1986 when it was purchased from a private owner. For the first few years, the DPS provided its locomotives to run on a number of private railways. However, following a change of policy by British Rail in 1991, a few years before its privatisation, it became possible for private operators to run trains on the mainline rail network. With this in mind, the DPS sent Alycidon for a major overhaul, completed in 1998, which allowed the locomotive to gain a certification for running on the public railway. Royal Highland Fusilier was given a less extensive overhaul, receiving its certification at the same time. Both locomotives re-entered passenger service in May 1999, operating railtour services for the Society. At the same time, both were also used on many occasions by Venice-Simplon Orient Express to haul the Northern Belle charter train. In 1997, Tulyar was withdrawn from its private railway services and sent for an overhaul along the same lines as Alycidon to restore it to mainline service.
Royal Scots Grey
D9000/55022 Roya
|
https://en.wikipedia.org/wiki/Viral%20transformation
|
Viral transformation is the change in growth, phenotype, or indefinite reproduction of cells caused by the introduction of inheritable material. Through this process, a virus causes harmful transformations of an in vivo cell or cell culture. The term can also be understood as DNA transfection using a viral vector.
Viral transformation can occur both naturally and medically. Natural transformations can include viral cancers, such as human papillomavirus (HPV) and T-cell Leukemia virus type I. Hepatitis B and C are also the result of natural viral transformation of the host cells. Viral transformation can also be induced for use in medical treatments.
Cells that have been virally transformed can be differentiated from untransformed cells through a variety of growth, surface, and intracellular observations. The growth of transformed cells can be affected by a loss of contact-dependent growth limitation, less-oriented growth, and high saturation density. Transformed cells can lose their tight junctions, increase their rate of nutrient transfer, and increase their protease secretion. Transformation can also affect the cytoskeleton and change the quantity of signal molecules.
Types
There are three types of viral infections that can be considered under the topic of viral transformation. These are cytocidal, persistent, and transforming infections. Cytocidal infections can cause fusion of adjacent cells, disruption of transport pathways including ions and other cell signals, disruption of DNA, RNA and protein synthesis, and nearly always leads to cell death. Persistent infections involve viral material that lays dormant within a cell until activated by some stimulus. This type of infection usually causes few obvious changes within the cell but can lead to long chronic diseases. Transforming infections are also referred to as malignant transformation. This infection causes a host cell to become malignant and can be either cytocidal (usually in the c
|
https://en.wikipedia.org/wiki/DSSP%20%28imaging%29
|
DSSP stands for digital shape sampling and processing. It is an alternative and often preferred way of describing "reverse engineering" software and hardware. The term originated in a 2005 Society of Manufacturing Engineers' "Blue Book" on the topic, which referenced numerous suppliers of both scanning hardware and processing software.
DSSP employs various 3D scanning methods, including laser scanners, to acquire thousands to millions of points on the surface of a form and then software from a variety of suppliers to convert the resulting "point cloud" into forms useful for inspection, computer-aided design, visualization and other applications. It may also employ volumetric methods of scanning, such as digital tomography.
Some common applications include CAI (computer-aided inspection), creation of 3D CAD models from scanned data, medical applications, 3D imaging for Web 2.0 applications, and the restoration of culturally significant artifacts; as well as conventional reverse engineering for creating replacement parts.
The term 'reverse engineering' itself has acquired some notoriety when the technology has been used to copy others' designs.
The term 'laser scanning' has also been used somewhat interchangeably for DSSP. However, there are two problems with the term as a broad description of the field. First, it is only one of many alternative scanning technologies. Second, it misses the essential role of processing software in converting point cloud data into useful forms.
In some ways, DSSP is a 3D analog to DSP (digital signal processing) in that the software attempts to extract a clear and accurate 3D image from point data that may include noise. The notion of 'shape sampling' embedded in the term also acknowledges that, as in many measurement processes, the accuracy of the 3D data will depend upon the number and accuracy of points sampled.
The speed and accuracy of both scanners to acquire data and software algorithms to extract useful data has dramatical
|
https://en.wikipedia.org/wiki/Vibration%20isolation
|
Vibration isolation is the prevention of transmission of vibration from one component of a system to other parts of the same system, as in buildings or mechanical systems. Vibration is undesirable in many domains, primarily engineered systems and habitable spaces, and methods have been developed to prevent the transfer of vibration to such systems. Vibrations propagate via mechanical waves and certain mechanical linkages conduct vibrations more efficiently than others. Passive vibration isolation makes use of materials and mechanical linkages that absorb and damp these mechanical waves. Active vibration isolation involves sensors and actuators that produce disruptive interference that cancels out incoming vibration.
Passive isolation
"Passive vibration isolation" refers to vibration isolation or mitigation of vibrations by passive techniques such as rubber pads or mechanical springs, as opposed to "active vibration isolation" or "electronic force cancellation" employing electric power, sensors, actuators, and control systems.
Passive vibration isolation is a vast subject, since there are many types of passive vibration isolators used for many different applications. A few of these applications are for industrial equipment such as pumps, motors, HVAC systems, or washing machines; isolation of civil engineering structures from earthquakes (base isolation), sensitive laboratory equipment, valuable statuary, and high-end audio.
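How well a passive isolator works is commonly summarized by its transmissibility, the ratio of transmitted to incoming vibration amplitude. The sketch below evaluates the textbook single-degree-of-freedom base-excitation model (the natural frequency and damping ratio are assumed values for illustration, not from any particular product):

```python
# Transmissibility of a spring-damper isolator under base excitation:
# isolation (T < 1) begins above sqrt(2) times the natural frequency.
import math

def transmissibility(f, fn, zeta):
    """|X_out / X_in| for a single-degree-of-freedom isolator."""
    r = f / fn
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r**2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

fn = 5.0   # isolator natural frequency, Hz (assumed)
for f in (2, 5, 7.1, 20, 50):
    print(f"{f:5.1f} Hz -> T = {transmissibility(f, fn, zeta=0.05):6.2f}")
# T > 1 near resonance (5 Hz); T << 1 well above sqrt(2)*fn, i.e. isolation.
```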
The sections below give a basic understanding of how passive isolation works, the more common types of passive isolators, and the main factors that influence the selection of passive isolators:
Common passive isolation systems
Pneumatic or air isolators
These are bladders or canisters of compressed air. A source of compressed air is required to maintain them. Air springs are rubber bladders which provide damping as well as isolation and are used in large trucks. Some pneumatic isolators can attain low resonant frequencies and are used for isolating large industr
|
https://en.wikipedia.org/wiki/Personal%20communicator
|
The term personal communicator has been used with several meanings. Around 1990 the next generation digital mobile phones were called digital personal communicators. Another definition, coined in 1991, is for a category of handheld devices that provide personal information manager functions and packet switched wireless data communications capabilities over wireless wide area networks such as cellular networks. These devices are now commonly referred to as smartphones.
See also
AT&T EO Personal Communicator, 1993
IBM Simon Personal Communicator, 1994
Nokia Communicator
Wireless PDA
Smartphone
External links
Concept genesis, Aug 1991
The Executive Computer; 'Mother of All Markets' or a 'Pipe Dream Driven by Greed'? NYT, July 1992
EO Inc. Describes 'Personal Communicator' Devices, New York Times,1992
Motorola expands family of personal communicator products, Mobile Phone News,1993
Bellsouth, IBM unveil personal communicator phone, Mobile Phone News,1993
The EO 440 And EO 880 Paradigms For Personal Communicators, Mobile Computing,1993
The Return of the PDA, Marketing Computers,1995
Personal digital assistants
Mobile computers
|
https://en.wikipedia.org/wiki/SLIP%20%28programming%20language%29
|
SLIP is a list processing computer programming language, invented by Joseph Weizenbaum in the 1960s. The name SLIP stands for Symmetric LIst Processor. It was first implemented as an extension to the Fortran programming language, and later embedded into MAD and ALGOL. The best known program written in the language is ELIZA, an early natural language processing computer program created by Weizenbaum at the MIT Artificial Intelligence Laboratory.
General overview
In a nutshell, SLIP consisted of a set of FORTRAN "accessor" functions which operated on circular doubly linked lists with fixed-size data fields. The "accessor" functions had direct and indirect addressing variants.
List representation
The list representation had four types of cell: a reader, a header, a sublist indicator, and a payload cell. The header included a reference count field for garbage collection purposes. The sublist indicator allowed it to be able to represent nested lists, such as (A, B, C, (1, 2, 3), D, E, F) where (1, 2, 3) is a sublist indicated by a cell in the '*' position in the list (A, B, C, *, D, E, F). The reader was essentially a state history stack—a good example of a memento pattern—where each cell pointed to the header of the list being read, the current position within the list being read, and the level or depth of the history stack.
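A rough modern sketch of this data structure, using Python rather than the original FORTRAN accessor functions, and omitting the reader and reference-count machinery:

```python
# Circular doubly linked list whose cells hold either a datum or a sublist
# reference, permitting nested lists such as (A, B, C, (1, 2, 3)).
class Cell:
    def __init__(self, datum=None, sublist=None):
        self.datum, self.sublist = datum, sublist
        self.prev = self.next = self

class SlipList:
    def __init__(self):
        self.header = Cell()              # header cell closes the ring

    def append(self, datum=None, sublist=None):
        cell = Cell(datum, sublist)
        last = self.header.prev
        last.next, cell.prev = cell, last
        cell.next, self.header.prev = self.header, cell
        return cell

    def __iter__(self):                   # a minimal "reader" walking the ring
        cell = self.header.next
        while cell is not self.header:
            yield cell
            cell = cell.next

inner = SlipList()
for x in (1, 2, 3):
    inner.append(x)
outer = SlipList()
for x in "ABC":
    outer.append(x)
outer.append(sublist=inner)               # the sublist-indicator cell

print([c.datum if c.sublist is None else [s.datum for s in c.sublist]
       for c in outer])                   # ['A', 'B', 'C', [1, 2, 3]]
```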
References
Symmetric List Processor, Joseph Weizenbaum, CACM 6:524-544(1963). Sammet 1969, p. 387.
Computer Power and Human Reason: From Judgment To Calculation, Joseph Weizenbaum, San Francisco: W. H. Freeman, 1976
Fortran libraries
Programming languages
|
https://en.wikipedia.org/wiki/EOS%20memory
|
EOS memory (for ECC on SIMMs) is an error-correcting memory system built into SIMMs, used to upgrade server-class computers without built-in ECC memory support. The EOS SIMM itself does the error checking, with reduced need for ECC memory modules and support. The technology was introduced by IBM in the mid-1990s.
References
External links
EOS definition at PCmag
Computer memory
|
https://en.wikipedia.org/wiki/Mathematical%20Olympiads%20for%20Elementary%20and%20Middle%20Schools
|
Mathematical Olympiads for Elementary and Middle Schools (MOEMS) is a worldwide math competition, organized by a not-for-profit foundation of the same name. It is held yearly from November through March, with one test administered each month. Tests are given at individual schools and results are sent to MOEMS for scoring. Schools, home schools and institutions may participate in the contest. Two dozen other nations also participate in the competition. There are two divisions, Elementary and Middle School. Elementary-level problems are for grades 4-6 and Middle School-level problems are for grades 7-8, though 4-6 graders may participate in Middle School problems. Hundreds of thousands of students participate annually in MOEMS events.
MOEMS plans soon to develop an online teacher training program.
History
First set up in 1977 by founder George Lenchner (1917–2006), MOEMS became a public competition in 1979. Lenchner, who died after decades in service to the math education community, wrote several books on elementary problem solving used by many MOEMS teachers and students. His obituary was featured in the Sunday New York Times on May 14, 2006.
The current MOEMS Director is Richard Kalman who also worked with the American Regions Mathematics League for many years.
References
External links
MOEMS Homepage
Forums for MOEMS students and teachers
Mathematics competitions
|
https://en.wikipedia.org/wiki/Minimum%20energy%20control
|
In control theory, the minimum energy control is the control that will bring a linear time invariant system to a desired state with a minimum expenditure of energy.
Let the linear time invariant (LTI) system be

ẋ(t) = A x(t) + B u(t)

with initial state x(t₀) = x₀. One seeks an input u(t) so that the system will be in the state x₁ at time t₁, and for any other input ū(t), which also drives the system from x₀ to x₁ at time t₁, the energy expenditure would be larger, i.e.,

∫_{t₀}^{t₁} ūᵀ(t) ū(t) dt ≥ ∫_{t₀}^{t₁} uᵀ(t) u(t) dt.

To choose this input, first compute the controllability Gramian

W_c(t₁) = ∫_{t₀}^{t₁} e^{A(t₁−τ)} B Bᵀ e^{Aᵀ(t₁−τ)} dτ.

Assuming W_c(t₁) is nonsingular (which is the case if and only if the system is controllable), the minimum energy control is then

u(t) = Bᵀ e^{Aᵀ(t₁−t)} W_c(t₁)⁻¹ [x₁ − e^{A(t₁−t₀)} x₀].

Substitution into the solution

x(t) = e^{A(t−t₀)} x₀ + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ

verifies the achievement of the state x₁ at time t₁.
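A hedged numerical sketch of the same construction for the discrete-time analogue x_{k+1} = A x_k + B u_k, where sums replace the integrals (the system matrices below are an assumed example, not from the article):

```python
# Steer x0 to a target x1 in n steps with least sum of ||u_k||^2.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed example: a double integrator
B = np.array([[0.0], [1.0]])
x0 = np.array([0.0, 0.0])
x1 = np.array([1.0, 0.0])
n = 4

# Controllability matrix C = [A^{n-1}B ... AB B]; Gramian W = C C^T.
C = np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ B for k in range(n)])
W = C @ C.T
assert np.linalg.matrix_rank(W) == A.shape[0], "system must be controllable"

# Minimum-energy input sequence (stacked): u = C^T W^{-1} (x1 - A^n x0).
u = C.T @ np.linalg.solve(W, x1 - np.linalg.matrix_power(A, n) @ x0)

# Verify by simulating the system forward.
x = x0.copy()
for k in range(n):
    x = A @ x + (B @ u[k:k + 1]).ravel()
print("final state:", x)          # ~ [1, 0]
print("energy:", float(u @ u))    # minimal among all inputs reaching x1
```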
See also
LTI system theory
Control engineering
State space (controls)
Variational Calculus
Control theory
|
https://en.wikipedia.org/wiki/Imposex
|
Imposex is a disorder in sea snails caused by the toxic effects of certain marine pollutants. These pollutants cause female sea snails (marine gastropod molluscs) to develop male sex organs such as a penis and a vas deferens.
Imposex inducing substances
It was believed that the only inducer of imposex was tributyltin (TBT), which can be active in extremely low concentrations, but recent studies reported other substances as inducers, such as triphenyltin and ethanol. Tributyltin is used as an anti-fouling agent for boats which affects females of the species Nucella lapillus (dog whelk), Voluta ebraea (the Hebrew volute), Olivancillaria vesica, Stramonita haemastoma (red-mouthed rock shell) and more than 200 other marine gastropods.
Abnormalities
In the dog whelk, the growth of a penis in imposex females gradually blocks the oviduct, although ovule production continues. An imposex female dog whelk passes through several stages of penis growth before it becomes unable to maintain a constant production of ovules. Later stages of imposex lead to sterility and the premature death of the females of reproductive age, which can adversely affect the entire population.
In 1993, scientists from the Plymouth Marine Laboratory found a thriving dog whelk population at Dumpton Gap, near Ramsgate in the UK, despite high levels of TBT in the water. In the Dumpton Gap population, only 25% of females showed any significant signs of imposex, while 10% of males were characterized by the absence of a penis or an undersized penis, with incomplete development of the vas deferens and prostate. After further experiments, scientists concluded that "Dumpton Syndrome" was a genetic selection caused by high TBT levels: TBT resistance was improved at the cost of lower reproductive fitness.
Biomonitoring
The imposex stages of female dog whelks and other molluscs (including Nucella lima) are used in the United Kingdom and worldwide to monitor levels of tributyltin. The RPSI (Relative Penis
|
https://en.wikipedia.org/wiki/Australian%20Wine%20Research%20Institute
|
The Australian Wine Research Institute (AWRI) is a research institute with a focus on Australian wine, based in Adelaide, South Australia.
Location
It is based at the Wine Innovation Cluster, situated in the Waite Research Precinct, in the Adelaide suburb of Urrbrae, South Australia.
History
The institute was established in 1955 at the Waite campus of the University of Adelaide. It is funded by grape growers and wineries. Its first scientific chief was John Fornachon. An early researcher was Bryce Rankine, who later taught at the Roseworthy College, an oenology institution. The primary aim of the institute in the 1950s was to create good Australian table wines as opposed to traditional fortified wines.
Research done by the institute has looked at "oxidation, hazes and deposits caused by trace amounts of iron and copper, and the need for better yeast strains, more effective use of sulphur dioxide, and pH control" as well as "research into new grape varieties".
See also
Australian and New Zealand Wine Industry Journal
Australian Society of Viticulture and Oenology
National Wine Centre of Australia
Australian Grape and Wine Authority
References
External links
1955 establishments in Australia
Wine industry organizations
Alcohol industry trade associations
University of Adelaide
Australian food and drink organizations
Food science institutes
Research and development in Australia
|
https://en.wikipedia.org/wiki/LeTourneau%20L-2350
|
The P&H L-2350 Wheel Loader (formerly the L-2350 loader) is a loader used for surface mining. It is manufactured by Komatsu Limited. It holds the Guinness World Record for Biggest Earth Mover. Designed to center-load haul trucks with capacities of up to , the L-2350 provides an operating payload of , a lift height, and an reach.
History
The L-2350 was originally manufactured by LeTourneau Inc., which was acquired by Marathon in 1972, Rowan Companies in 1986, and Joy Global in 2011.
Joy Global renamed the equipment as the P&H L-2350.
Specifications
Operational weight
Power
Engine 16 Cylinder 65 Litre Detroit Diesel 4-cycle Turbocharged Aftercooler Engine or 16 cylinder 60 Litre Cummins Diesel 4-cycle Turbocharged Aftercooler Engine of
Hydraulic lifting payload
Standard Bucket
Fuel Tank
Hydraulic Oil
Tyres 70/70-57 SRG DT (diameter 4 m, width 1.78 m) [d=13.12 ft & w=5.84 ft]
Cost $1.5M (2012)
See also
Diesel-electric transmission
References
Engineering vehicles
Wheeled vehicles
|
https://en.wikipedia.org/wiki/Maximum%20throughput%20scheduling
|
Maximum throughput scheduling is a procedure for scheduling data packets in a packet-switched best-effort network, typically a wireless network, with a view to maximizing the total throughput of the network, or the system spectral efficiency in a wireless network. This is achieved by giving scheduling priority to the least "expensive" data flows in terms of consumed network resources per transferred amount of information.
In advanced packet radio systems, for example the HSDPA 3.5G cellular system, channel-dependent scheduling is used instead of FIFO queuing to take advantage of favourable channel conditions and make the best use of the available radio resources. Maximum throughput scheduling may be tempting in this context, especially in simulations where the throughput of various schemes is compared. However, maximum throughput scheduling is normally not desirable, and channel-dependent scheduling should be used with care, as we will see below.
Cost function in wireless packet radio systems
Example 1: Link adaptation
In a wireless network with link adaptation, and without co-channel interference from nearby wireless networks, the bit rate depends heavily on the carrier to noise ratio (CNR), which depends on the attenuation on the link between the transmitter and receiver, i.e. the path loss. For maximum throughput scheduling, links that are affected by low attenuation should be considered as inexpensive, and should be given scheduling priority.
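A toy simulation of this rule (the rates and the fading model are assumed for illustration): each slot is granted to whichever flow can currently move bits most cheaply, i.e. at the highest instantaneous rate.

```python
# Max-throughput scheduling: always serve the user with the best channel.
import random

random.seed(1)
mean_rate = {"near": 10.0, "mid": 5.0, "far": 1.0}   # Mbit/s, set by path loss (assumed)

served = {u: 0.0 for u in mean_rate}
slots = 10_000
for _ in range(slots):
    # Instantaneous achievable rate fluctuates (here: exponential fading) about the mean.
    rate = {u: random.expovariate(1.0 / m) for u, m in mean_rate.items()}
    winner = max(rate, key=rate.get)                 # least "expensive" flow wins the slot
    served[winner] += rate[winner]

for u, total in served.items():
    print(f"{u}: {total / slots:.2f} Mbit/s average")
# The high-path-loss ("far") user is starved: the rule maximizes cell throughput
# at the cost of fairness, which is why it must be used with care.
```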
Example 2: Spread spectrum
In the uplink of a spread spectrum cellular system, the carrier-to-interference ratio (CIR) is held constant by the power control for all users. For a user that suffers from high path loss, the power control will cause high interference level to signals from other users. This will prevent other more efficient data flows, since there is a maximum allowed interference level in the cell, and reduce the throughput. Consequently, for maximum throughput scheduling, data flows that suffer from high path loss should
|
https://en.wikipedia.org/wiki/%E2%88%82
|
The character ∂ (Unicode: U+2202) is a stylized cursive d mainly used as a mathematical symbol, usually to denote a partial derivative such as ∂z/∂x (read as "the partial derivative of z with respect to x"). It is also used for the boundary of a set, the boundary operator in a chain complex, and the conjugate of the Dolbeault operator on smooth differential forms over a complex manifold. It should be distinguished from other similar-looking symbols such as the lowercase Greek letter delta (δ) or the lowercase Latin letter eth (ð).
History
The symbol was originally introduced in 1770 by Nicolas de Condorcet, who used it for a partial differential, and adopted for the partial derivative by Adrien-Marie Legendre in 1786.
It represents a specialized cursive type of the letter d, just as the integral sign originates as a specialized type of a long s (first used in print by Leibniz in 1686).
Use of the symbol was discontinued by Legendre, but it was taken up again by Carl Gustav Jacob Jacobi in 1841, whose usage became widely adopted.
Names and coding
The symbol is variously referred to as
"partial", "curly d", "funky d", "rounded d", "curved d", "dabba", "number 6 mirrored", or "Jacobi's delta", or as "del" (but this name is also used for the "nabla" symbol ∇).
It may also be pronounced simply "dee", "partial dee", "doh", or "die".
The Unicode character is accessed in HTML by the entities &part; or &#8706;, and the equivalent LaTeX symbol is accessed by \partial.
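For instance, a minimal LaTeX document using \partial (an illustrative snippet, not from the article):

```latex
\documentclass{article}
\begin{document}
The partial derivative of $z = f(x,y)$ with respect to $x$ is written
\[
  \frac{\partial z}{\partial x},
  \qquad\text{for example}\qquad
  \frac{\partial}{\partial x}\bigl(x^{2}y\bigr) = 2xy.
\]
\end{document}
```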
Uses
∂ is also used to denote the following:
The Jacobian ∂(u, v)/∂(x, y).
The boundary of a set in topology.
The boundary operator on a chain complex in homological algebra.
The boundary operator of a differential graded algebra.
The conjugate of the Dolbeault operator on complex differential forms.
The boundary ∂(S) of a set of vertices S in a graph is the set of edges leaving S, which defines a cut.
See also
d'Alembert operator
Differentiable programming
List of mathematical symbols
Notation for diff
|