source | text
---|---|
https://en.wikipedia.org/wiki/Substrate%20mapping
|
Substrate mapping (or wafer mapping) is a process in which the performance of semiconductor devices on a substrate is represented by a map showing the performance as a colour-coded grid. The map is a convenient representation of the variation in performance across the substrate, since the distribution of those variations may be a clue as to their cause.
The concept also includes the package of data generated by modern wafer testing equipment which can be transmitted to equipment used for subsequent 'back-end' manufacturing operations.
History
The initial process supported by substrate maps was inkless binning.
Each tested die is assigned a bin value, depending on the result of the test. For example, a pass die is assigned a bin value of 1 for a good bin, bin 10 for an open circuit, and bin 11 for a short circuit. In the very early days of wafer test, the dies were put in different bins or buckets, depending on the test results.
Physical binning may no longer be used, but the analogy is still good. The next step in the process was to mark the failing dies with ink, so that during assembly only uninked dies were used for die attachment and final assembly. The inking step may be skipped if the assembly equipment is able to access the information in the maps generated by the test equipment.
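As a rough sketch of the idea (the grid size, coordinates and values below are invented, not any vendor's map format), a bin map can be represented as a grid of per-die bin codes that downstream equipment reads instead of ink dots:

```python
# Toy inkless-binning wafer map: a grid of bin codes per die.
# Bin codes follow the example above: 1 = pass, 10 = open circuit, 11 = short circuit.
# The 4x4 grid and its values are invented for illustration only.
wafer_map = [
    [1,  1, 10,  1],
    [1,  1,  1, 11],
    [1, 10,  1,  1],
    [1,  1,  1,  1],
]

# Downstream 'back-end' equipment can use such a map to pick only passing dies,
# replacing the older step of physically inking the failures.
good_dies = [(x, y) for y, row in enumerate(wafer_map)
             for x, bin_code in enumerate(row) if bin_code == 1]
print(f"{len(good_dies)} of {sum(len(r) for r in wafer_map)} dies pass")
```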
A wafer map is a substrate map that applies to an entire wafer, while substrate maps in general also cover other carriers used in the semiconductor process, including frames, trays and strips.
E142
As with many steps in the semiconductor process, standards are available for this one. The most recent and most capable is the E142 standard, published by the SEMI organization, which was approved by ballot for release in 2005.
It supports many kinds of substrate maps, including the ones named above. While the older standards could only represent standard bin maps, carrying bin information, E142 also supports transfer maps, which can help in
|
https://en.wikipedia.org/wiki/Chemotaxis%20assay
|
Chemotaxis assays are experimental tools for evaluation of chemotactic ability of prokaryotic or eukaryotic cells.
A wide variety of techniques have been developed. Some techniques are qualitative - allowing an investigator to approximately determine a cell's chemotactic affinity for an analyte - while others are quantitative, allowing a precise measurement of this affinity.
Quality control
In general, the most important requirement is to calibrate the incubation time of the assay to both the model cell and the ligand to be evaluated. Too short an incubation time results in no cells in the sample, while too long a time perturbs the concentration gradients, and the assay then measures chemokinetic rather than chemotactic responses.
The most commonly used techniques are grouped into two main groups:
Agar-plate techniques
This type of evaluation uses semi-solid layers containing agar-agar or gelatine, prepared before the experiment. Small wells are cut into the layer and filled with cells and the test substance. Cells can migrate toward the chemical gradient within the semi-solid layer or underneath it. Some variations of the technique use wells and parallel channels that are connected by a cut at the start of the experiment (the PP technique). A radial arrangement of the PP technique (three or more channels) makes it possible to compare the chemotactic activity of different cell populations or to study preference between ligands.
Counting of cells: positive responder cells can be counted at the front of the migrating population, after staining or under native conditions, using a light microscope.
Two-chamber techniques
Boyden chamber
Chambers isolated by filters are proper tools for accurate determination of chemotactic behavior. The pioneer type of these chambers was constructed by Boyden. The motile cells are placed into the upper chamber, while fluid containing the test substance is filled into the lower one. The size of the motile cells to be investigated determines the pore size of the filter;
|
https://en.wikipedia.org/wiki/Strongly%20positive%20bilinear%20form
|
A bilinear form a(•,•) whose arguments are elements of a normed vector space V is a strongly positive bilinear form if and only if there exists a constant c > 0 such that
a(v, v) ≥ c||v||² for all v in V,
where ||·|| is the norm on V.
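A standard illustration (an example of ours, not taken from the article): on V = Rn with the Euclidean norm, the dot product a(u, v) = u·v is strongly positive with c = 1, since a(v, v) = v·v = ||v||² for every v in Rn.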
|
https://en.wikipedia.org/wiki/Torricellian%20chamber
|
In cave diving, a Torricellian chamber is a cave chamber with an airspace above the water at less than atmospheric pressure. This is formed when the water level drops and there is no way for more air to get into the chamber. In theory such chambers could pose a risk of decompression sickness to divers, similar to flying after diving. Also, in a Torricellian chamber the diver's depth gauge is unlikely to give an accurate reading of pressure as most depth gauges are not designed to show depths less than zero.
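As a rough numerical sketch of why gauges misbehave there (all figures below are invented for illustration; this is not dive-planning guidance): the absolute pressure on the diver is the reduced airspace pressure plus the hydrostatic pressure of the water above the diver, while a typical gauge displays pressure relative to a normal atmosphere.

```python
# Rough sketch: pressure in a Torricellian chamber (illustrative numbers only).
RHO_WATER = 1000.0   # kg/m^3, fresh water (assumption)
G = 9.81             # m/s^2
P_ATM = 101_325.0    # Pa, standard atmosphere

def absolute_pressure(chamber_air_pressure_pa, depth_below_airspace_m):
    """Absolute pressure at a given depth below the chamber's water surface."""
    return chamber_air_pressure_pa + RHO_WATER * G * depth_below_airspace_m

# Example: airspace at 0.7 atm, diver 2 m below that surface.
p = absolute_pressure(0.7 * P_ATM, 2.0)
# A depth gauge calibrated against normal surface pressure would indicate
# (p - P_ATM) / (rho * g), which can be negative near the airspace.
gauge_reading_m = (p - P_ATM) / (RHO_WATER * G)
print(round(p), "Pa absolute,", round(gauge_reading_m, 2), "m indicated")
```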
The chambers are named after Evangelista Torricelli, inventor of the barometer.
External links
http://www.speleogenesis.info/glossary/glossary_by_letter.php?Authors=t (scroll down to alphabetical order)
Portable Hyperbaric Chamber
Cave diving
Underwater diving physics
|
https://en.wikipedia.org/wiki/Bedford%20Level%20experiment
|
The Bedford Level experiment was a series of observations carried out along a length of the Old Bedford River on the Bedford Level of the Cambridgeshire Fens in the United Kingdom during the 19th and early 20th centuries to measure the curvature of the Earth.
Samuel Birley Rowbotham, who conducted the first observations starting in 1838, claimed that he had proven the Earth to be flat. However, in 1870, after adjusting Rowbotham's method to allow for the effects of atmospheric refraction, Alfred Russel Wallace found a curvature consistent with a spherical Earth.
The Bedford Level
At the point chosen for all the experiments, the river is a slow-flowing drainage canal running in an uninterrupted straight line for a stretch to the north-east of the village of Welney. This makes it an ideal location to directly measure the curvature of the Earth, as Rowbotham wrote in Zetetic Astronomy:
Experiments
The first experiment at this site was conducted by Rowbotham in the summer of 1838. He waded into the river and used a telescope held above the water to watch a boat, with a flag on its mast above the water, row slowly away from him. He reported that the vessel remained constantly in his view for the full distance to Welney Bridge, whereas, had the water surface been curved with the accepted circumference of a spherical Earth, the top of the mast should have dropped below his line of sight. He published this observation using the pseudonym Parallax in 1849 and subsequently expanded it into a book, Earth Not a Globe, published in 1865.
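The geometry behind the expected result can be sketched numerically (the distance below is a placeholder, not the figure elided from the account above, and the 7/6 refraction factor is only a common surveyor's rule of thumb):

```python
# Rough sketch: drop of the water surface below a near-surface sight line
# over a distance d, with and without a typical refraction correction.
EARTH_RADIUS_M = 6_371_000.0

def curvature_drop(distance_m, refraction_factor=1.0):
    """Approximate hidden height h ~ d^2 / (2 * k * R); k > 1 models refraction."""
    return distance_m ** 2 / (2 * refraction_factor * EARTH_RADIUS_M)

d = 9656.0  # about 6 statute miles, used here only as a placeholder distance
print(round(curvature_drop(d), 1), "m with no refraction")
print(round(curvature_drop(d, refraction_factor=7/6), 1), "m with k = 7/6 refraction")
```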
Rowbotham repeated his experiments several times over the years, but his claims received little attention until, in 1870, a supporter by the name of John Hampden offered a wager that he could show, by repeating Rowbotham's experiment, that the Earth was flat. The naturalist and qualified surveyor Alfred Russel Wallace accepted the wager. Wallace, by virtue of his surveyor's training and knowledge of physics, avoided the errors of the prece
|
https://en.wikipedia.org/wiki/Laghava
|
The laghava is the Devanagari abbreviation sign (॰), comparable to the full stop or ellipsis as used in the Latin alphabet. It is encoded in Unicode at U+0970.
It is used as an abbreviation sign in Hindi and other Devanagari-script-based languages; for example, "Dr." is written as "डॉ॰".
See also
。: CJK full stop
° : degree symbol
|
https://en.wikipedia.org/wiki/International%20Food%20Safety%20Network
|
The International Food Safety Network (iFSN) at Kansas State University works to improve the overall safety of the food supply by connecting those in the agriculture and food industry.
iFSN offers a resource of evidence-based information through its website, listserves, research projects, on-farm food safety programs, publications, educational initiatives, graduate courses and policy analysis.
iFSN used to operate under the direction of Doug Powell.
History
International Food Safety Network began as a communications experiment, collecting and rapidly redistributing information about food safety using the then just-burgeoning Internet.
It was created in January 1993, combining Powell's interests in science, media and the public, following an outbreak of E. coli O157:H7 associated with Jack in the Box restaurants in which over 600 people were sickened and four died from undercooked hamburgers.
International Food Safety Network's emphasis is on the integration of public perceptions of food safety risks into traditional food safety risk analysis, and engaging the public on the nature of food-related risks and benefits.
Powell and the International Food Safety Network are a primary source for food safety information during outbreaks and are often quoted in mainstream media reports.
The International Food Safety Network was replaced with the bites mailing list and website according to a notice on the former's home page.
|
https://en.wikipedia.org/wiki/Statistical%20semantics
|
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.
History
The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation. He argued that word sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that "a word is characterized by the company it keeps" was advocated by J.R. Firth. This assumption is known in linguistics as the distributional hypothesis. Emile Delavenay defined statistical semantics as the "statistical study of the meanings of words and their frequency and order of recurrence". "Furnas et al. 1983" is frequently cited as a foundational contribution to statistical semantics. An early success in the field was latent semantic analysis.
Applications
Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics, by applying statistical techniques to large corpora:
Measuring the similarity in word meanings
Measuring the similarity in word relations
Modeling similarity-based generalization
Discovering words with a given relation
Classifying relations between words
Extracting keywords from documents
Measuring the cohesiveness of text
Discovering the different senses of words
Distinguishing the different senses of words
Subcognitive aspects of words
Distinguishing praise from criticism
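As a minimal illustration of the distributional approach behind these applications (the toy corpus, window size and similarity measure are arbitrary choices for this sketch, not a method from any cited work):

```python
# Toy distributional-hypothesis sketch: compare words by the company they keep.
from collections import Counter
from math import sqrt

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

def context_vector(word, window=2):
    """Counter of words co-occurring with `word` within +/- `window` tokens."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    ctx[corpus[j]] += 1
    return ctx

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(context_vector("cat"), context_vector("dog")))   # relatively high
print(cosine(context_vector("cat"), context_vector("mat")))   # lower
```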
Related fields
Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of compu
|
https://en.wikipedia.org/wiki/Rangekeeper
|
Rangekeepers were electromechanical fire control computers used primarily during the early part of the 20th century. They were sophisticated analog computers whose development reached its zenith following World War II, specifically the Computer Mk 47 in the Mk 68 Gun Fire Control system. During World War II, rangekeepers directed gunfire on land, sea, and in the air. While rangekeepers were widely deployed, the most sophisticated rangekeepers were mounted on warships to direct the fire of long-range guns.
These warship-based computing devices needed to be sophisticated because the problem of calculating gun angles in a naval engagement is very complex. In a naval engagement, both the ship firing the gun and the target are moving with respect to each other. In addition, the ship firing its gun is not a stable platform because it rolls, pitches, and yaws due to wave action, changes of direction, and the firing of its own guns. The rangekeeper also performed the required ballistics calculations associated with firing a gun. This article focuses on US Navy shipboard rangekeepers, but the basic principles of operation are applicable to all rangekeepers regardless of where they were deployed.
Function
A rangekeeper is defined as an analog fire control system that performed three functions:
Target tracking
The rangekeeper continuously computed the current target bearing. This is a difficult task because both the target and the ship firing (generally referred to as "own ship") are moving. This requires knowing the target's range, course, and speed accurately. It also requires accurately knowing the own ship's course and speed.
Target position prediction
When a gun is fired, it takes time for the projectile to arrive at the target. The rangekeeper must predict where the target will be at the time of projectile arrival. This is the point at which the guns are aimed.
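A deliberately simplified sketch of this prediction step (flat geometry, constant target course and speed, a fixed time of flight, and no ballistic or own-ship corrections; these are assumptions of the example, not how any particular rangekeeper worked):

```python
# Toy target-position prediction: advance the target along its course and speed
# for the projectile's time of flight. Real rangekeepers solved this continuously
# and iteratively, together with ballistic corrections.
from math import sin, cos, radians, hypot, atan2, degrees

def predict(target_range_m, target_bearing_deg, target_course_deg,
            target_speed_mps, time_of_flight_s):
    # Current target position relative to own ship (x east, y north).
    x = target_range_m * sin(radians(target_bearing_deg))
    y = target_range_m * cos(radians(target_bearing_deg))
    # Dead-reckon the target ahead by the time of flight.
    x += target_speed_mps * sin(radians(target_course_deg)) * time_of_flight_s
    y += target_speed_mps * cos(radians(target_course_deg)) * time_of_flight_s
    return hypot(x, y), degrees(atan2(x, y)) % 360   # predicted range and bearing

rng, brg = predict(18_000, 45, 270, 15, 60)   # illustrative numbers only
print(round(rng), "m at bearing", round(brg, 1), "deg")
```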
Gunfire correction
Directing the fire of a long-range weapon to deliver a projectile to a specific location requ
|
https://en.wikipedia.org/wiki/Fabric%20Application%20Interface%20Standard
|
ANSI INCITS 432-2007: Information technology - Fabric Application Interface Standard or FAIS is an application programming interface framework for implementing storage applications in a storage area network. FAIS is defined by Technical Committee T11 of the International Committee for Information Technology Standards.
It provides a high-speed, highly reliable interface for implementing fabric-based services throughout heterogeneous data center environments. Furthermore, it describes extensions to the Fibre Channel specification, specifically regarding Fibre Channel over 4-pair twisted pair cabling as described in ISO/IEC 11801.
|
https://en.wikipedia.org/wiki/NPIV
|
NPIV or N_Port ID Virtualization is a Fibre Channel feature whereby multiple Fibre Channel node port (N_Port) IDs can share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in Storage Area Network (SAN) design, especially where virtual SANs are called for. This allows each virtual server to see its own storage and no other virtual server's storage. NPIV is defined by the Technical Committee T11 in the Fibre Channel - Link Services (FC-LS) specification.
N_Port initialization with and without NPIV
Normally N_Port initialization proceeds like this:
N_Port sends FLOGI to address 0xFFFFFE to obtain a valid address
N_Port sends PLOGI to address 0xFFFFFC to register this address with the name server
N_Port sends SCR to address 0xFFFFFD to register for state change notifications
However, with NPIV it may continue like this:
N_Port sends FDISC to address 0xFFFFFE to obtain an additional address
N_Port sends PLOGI to address 0xFFFFFC to register this additional address with the name server
N_Port sends SCR to address 0xFFFFFD to register for state change notifications.
... (repeat FDISC/PLOGI/SCR for next address)
FDISC is an abbreviation for Fabric Discovery, or "Discover Fabric Service Parameters", which is a misleading name in this context. It works just like FLOGI.
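The flow above can be sketched as follows (the Fabric class and function names are illustrative stand-ins, not a real HBA driver or switch API):

```python
# Illustrative-only sketch of the N_Port login flow with NPIV.
FLOGI_SERVER = 0xFFFFFE   # fabric login server
NAME_SERVER  = 0xFFFFFC   # directory/name server
FABRIC_CTRL  = 0xFFFFFD   # fabric controller (state change registration)

class Fabric:
    """Toy fabric that hands out sequential N_Port IDs for FLOGI/FDISC."""
    def __init__(self):
        self.next_id = 0x010001
    def send(self, command, to, payload=None):
        print(f"{command:>5} -> {to:#08x}" + (f" ({payload:#08x})" if payload else ""))
        if command in ("FLOGI", "FDISC"):
            assigned, self.next_id = self.next_id, self.next_id + 1
            return assigned

def bring_up_nport(fabric, extra_virtual_ports=0):
    addresses = []
    for i in range(1 + extra_virtual_ports):
        cmd = "FLOGI" if i == 0 else "FDISC"                # FDISC works just like FLOGI
        addr = fabric.send(cmd, to=FLOGI_SERVER)            # obtain an (additional) address
        fabric.send("PLOGI", to=NAME_SERVER, payload=addr)  # register with name server
        fabric.send("SCR", to=FABRIC_CTRL)                  # register for state changes
        addresses.append(addr)
    return addresses

print(bring_up_nport(Fabric(), extra_virtual_ports=2))
```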
An N_Port is used to connect equipment ports to the Fibre Channel fabric (the optical network). Source: https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-4545ENW.pdf
Notes
Sources
Fibre Channel - Link Services
NPIV Functionality Protocol
T11 latest draft standards page
Fibre Channel
|
https://en.wikipedia.org/wiki/Magnetocardiography
|
Magnetocardiography (MCG) is a technique to measure the magnetic fields produced by electrical currents in the heart using extremely sensitive devices such as the superconducting quantum interference device (SQUID). If the magnetic field is measured using a multichannel device, a map of the magnetic field is obtained over the chest; from such a map, using mathematical algorithms that take into account the conductivity structure of the torso, it is possible to locate the source of the activity. For example, sources of abnormal rhythms or arrhythmia may be located using MCG.
History
The first MCG measurements were made by Baule and McFee using two large coils placed over the chest, connected in opposition to cancel out the relatively large magnetic background. Heart signals were indeed seen, but were very noisy. The next development was by David Cohen, who used a magnetically shielded room to reduce the background, and a smaller coil with better electronics; the heart signals were now less noisy, allowing a magnetic map to be made, verifying the magnetic properties and source of the signal. However, the use of an inherently noisy coil detector discouraged widespread interest in the MCG. The turning point came with the development of the sensitive detector called the SQUID (superconducting quantum interference device) by James Zimmerman. The combination of this detector and Cohen's new shielded room at MIT allowed the MCG signal to be seen as clearly as the conventional electrocardiogram, and the publication of this result by Cohen et al. marked the real beginning of magnetocardiography (as well as biomagnetism generally).
Magnetocardiography is used in various laboratories and clinics around the world, both for research on the normal human heart, and for clinical diagnosis.
Clinical implementation
MCG technology has been implemented in hospitals in Germany. The MCG system, CS MAG II of Biomagnetik Park GmbH, was installed at Coburg Hospital in 2013. The CS-MAG III
|
https://en.wikipedia.org/wiki/Rainy%20Days%20and%20Mondays
|
"Rainy Days and Mondays" is a song by the Carpenters from their self-titled third album, with instrumental backing by the Wrecking Crew. It was written by Paul Williams (lyrics) and Roger Nichols (music), who had previously written “We’ve Only Just Begun,” another hit for the duo. The B-side on the single is "Saturday", a song written and sung by Richard Carpenter.
A demo for the song was initially sent to Richard Carpenter by Williams and Nichols. Upon hearing it, Richard felt that the song was perfect for him and Karen Carpenter to record. The song was recorded a few weeks before Karen’s 21st birthday. Richard wanted to keep the song’s arrangement sparse in order to showcase her vocal talent.
“Rainy Days and Mondays” peaked at number 2 on the Billboard Hot 100 chart, spending seven weeks in the Top 10, and was kept from number 1 by "It's Too Late"/"I Feel the Earth Move" by Carole King. The song was also the duo's fourth number 1 single on the Adult Contemporary singles chart. However, the song failed to chart in the United Kingdom until it went to number 63 in a reissue there in 1993. "Rainy Days and Mondays" was certified Gold by the RIAA for 500,000 copies sold in the United States.
Personnel
Karen Carpenter - lead and backing vocals
Richard Carpenter - backing vocals, piano, Wurlitzer electric piano, orchestration
Joe Osborn - bass guitar
Hal Blaine - drums
Tommy Morgan - harmonica
Bob Messenger - tenor saxophone
Chart performance
Weekly charts
Year-end charts
Compilations
Yesterday Once More
From the Top
Interpretations
Love Songs
The Essential Collection
Carpenters: Gold 35th Anniversary Edition
See also
List of number-one adult contemporary singles of 1971 (U.S.)
|
https://en.wikipedia.org/wiki/Simple%20Key-Management%20for%20Internet%20Protocol
|
Simple Key-Management for Internet Protocol or SKIP was a protocol developed circa 1995 by the IETF Security Working Group for the sharing of encryption keys. SKIP and Photuris were evaluated as key exchange mechanisms for IPsec before the adoption of IKE in 1998.
Simple Key Management for Internet Protocols (SKIP) is similar to SSL, except that it establishes a long-term key once and then requires no prior communication to establish or exchange keys on a session-by-session basis. Therefore, there is no connection setup overhead and new key values are not continually generated.
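A toy sketch of the general idea only: deriving fresh traffic keys from one long-term shared secret with no per-session exchange. This is not SKIP's actual key-derivation scheme or packet format; the KDF, counter and secret below are assumptions for illustration.

```python
# Toy sketch: derive per-packet/per-period keys from a single long-term
# shared secret, so no session handshake is needed. Real SKIP derives the
# long-term secret from Diffie-Hellman certificates and uses its own formats.
import hashlib, hmac

long_term_secret = b"pre-established shared secret"   # placeholder value

def traffic_key(counter: int) -> bytes:
    """Hypothetical KDF: HMAC the long-term secret with a changing counter."""
    return hmac.new(long_term_secret, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

# Both peers can compute the same key for counter n with no extra round trips.
print(traffic_key(1).hex()[:16], traffic_key(2).hex()[:16])
```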
|
https://en.wikipedia.org/wiki/American%20Elasmobranch%20Society
|
The American Elasmobranch Society (AES) is an international learned society devoted to the scientific study of chondrichthyans (sharks, skates, rays and chimaeras).
Founded in 1983, it is the world’s oldest and largest professional society devoted to the study. As of 2022, the society had around 500 members.
|
https://en.wikipedia.org/wiki/DSOS
|
DSOS (Deep Six Operating System) was a real-time operating system (sometimes termed an operating system kernel) developed by Texas Instruments' division Geophysical Services Incorporated (GSI) in the mid-1970s.
Background
The Geophysical Services division of Texas Instruments' main business was to search for petroleum (oil). They would collect data in likely spots around the world, process that data using high performance computers, and produce analyses that guided oil companies toward promising sites for drilling.
Much of the oil being sought was to be found beneath the ocean, hence GSI maintained a fleet of ships to collect seismic data from remote regions of the world. To do this properly, it was essential that the ships be navigated precisely. If evidence of oil is found, one cannot just mark an X on a tree: the oil is thousands of feet below the ocean and typically hundreds of miles from land. But this was a decade or more before GPS existed, so the processing load required to keep an accurate record of where a finding was located was considerable.
The GEONAV systems, which used DSOS (Frailey, 1975) as their operating system, performed the required navigation, and collected, processed, and stored the seismic data being received in real-time.
Naming
The name Deep Six Operating System was the brainchild of Phil Ward (subsequently a world-renowned GPS expert) who, at the time, was manager of the project and slightly skeptical of the computer science professor, Dennis Frailey, who insisted that an operating system was the solution to the problem at hand. In a sense the system lived up to its name, according to legend. Supposedly one of the ships hit an old World War II naval mine off the coast of Egypt and sank while being navigated by GEONAV and DSOS.
Why an operating system?
In the 1970s, most real-time applications did not use operating systems because the latter were perceived as adding too much overhead. Typical computers of the time had barely enough computing powe
|
https://en.wikipedia.org/wiki/Kendall%20tau%20distance
|
The Kendall tau rank distance is a metric (distance function) that counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more dissimilar the two lists are. Kendall tau distance is also called bubble-sort distance since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list. The Kendall tau distance was created by Maurice Kendall.
Definition
The Kendall tau ranking distance between two lists τ1 and τ2 is
K(τ1, τ2) = |{(i, j) : i < j, (τ1(i) < τ1(j) and τ2(i) > τ2(j)) or (τ1(i) > τ1(j) and τ2(i) < τ2(j))}|,
where τ1(i) and τ2(i) are the rankings of element i in τ1 and τ2 respectively.
K(τ1, τ2) will be equal to 0 if the two lists are identical and n(n − 1)/2 (where n is the list size) if one list is the reverse of the other.
Kendall tau distance may also be defined as
K(τ1, τ2) = Σ over {i, j} in P of K̄(i, j)(τ1, τ2),
where
P is the set of unordered pairs of distinct elements in τ1 and τ2,
K̄(i, j)(τ1, τ2) = 0 if i and j are in the same order in τ1 and τ2, and
K̄(i, j)(τ1, τ2) = 1 if i and j are in the opposite order in τ1 and τ2.
Kendall tau distance can also be defined as the total number of discordant pairs.
Kendall tau distance in Rankings: A permutation (or ranking) is an array of N integers where each of the integers between 0 and N-1 appears exactly once.
The Kendall tau distance between two rankings is the number of pairs that are in different order in the two rankings. For example, the Kendall tau distance between 0 3 1 6 2 5 4 and 1 0 3 6 4 2 5 is four because the pairs 0-1, 3-1, 2-4, 5-4 are in different order in the two rankings, but all other pairs are in the same order.
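The pairwise counting described above translates directly into a short program (an O(n²) sketch of ours; both rankings are assumed to contain the same elements):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Count pairs of elements that appear in opposite order in the two rankings."""
    pos_a = {v: i for i, v in enumerate(a)}
    pos_b = {v: i for i, v in enumerate(b)}
    discordant = 0
    for x, y in combinations(a, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
            discordant += 1
    return discordant

print(kendall_tau_distance([0, 3, 1, 6, 2, 5, 4], [1, 0, 3, 6, 4, 2, 5]))  # 4
```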
The normalized Kendall tau distance is K(τ1, τ2) / (n(n − 1)/2) and therefore lies in the interval [0,1].
If the Kendall tau distance is computed directly on the lists rather than on their rankings, the triangle inequality is not guaranteed. The triangle inequality can also fail when there are repetitions in the lists; in such cases the function is no longer a metric.
Generalised versions of Kendall tau distance have been proposed to give weights to
|
https://en.wikipedia.org/wiki/Nocturnal%20penile%20tumescence
|
Nocturnal penile tumescence (NPT) is a spontaneous erection of the penis during sleep or when waking up. Along with nocturnal clitoral tumescence, it is also known as sleep-related erection. (Colloquially, the term morning wood (or less commonly, morning glory) is also used, although this is more commonly used to refer specifically to an erection beginning during sleep and persisting into the period just after waking.) Men without physiological erectile dysfunction or severe depression experience nocturnal penile tumescence, usually three to five times during a period of sleep, typically during rapid eye movement sleep. Nocturnal penile tumescence is believed to contribute to penile health.
Mechanism
The cause of nocturnal penile tumescence is not known with certainty. In a wakeful state, in the presence of mechanical stimulation with or without an arousal, erection is initiated by the parasympathetic division of the autonomic nervous system with minimal input from the central nervous system. Parasympathetic branches extend from the sacral plexus of the spinal nerves into the arteries supplying the erectile tissue; upon stimulation, these nerve branches release acetylcholine, which in turn causes release of nitric oxide from endothelial cells in the trabecular arteries, that eventually causes tumescence. Bancroft (2005) hypothesizes that the noradrenergic neurons of the locus ceruleus in the brain are perpetually inhibitory to penile erection, and that the cessation of their discharge that occurs during rapid eye movement sleep may allow testosterone-related excitatory actions to manifest as nocturnal penile tumescence. Suh et al. (2003) recognizes that in particular the spinal regulation of the cervical cord is critical for nocturnal erectile activity.
The nerves that control one's ability to have a reflex erection are located in the sacral nerves (S2-S4) of the spinal cord. Evidence supporting the possibility that a full bladder can stimulate an erection has ex
|
https://en.wikipedia.org/wiki/Dynamic%20single-frequency%20networks
|
Dynamic Single Frequency Networks (DSFN) is a transmitter macrodiversity technique for OFDM based cellular networks.
DSFN is based on the idea of single frequency networks (SFN), which is a group of radio transmitters that send the same signal simultaneously over the same frequency. The term originates from the broadcasting world, where a broadcast network is a group of transmitters that send the same TV or radio program. Digital wireless communication systems based on the OFDM modulation scheme are well-suited to SFN operation, since OFDM in combination with some forward error correction scheme can eliminate intersymbol interference and fading caused by multipath propagation without the use of complex equalization.
The concept of DSFN implies that the SFN grouping is changed dynamically over time, from timeslot to timeslot. The aim is to achieve efficient spectrum utilization for downlink unicast or multicast communication services in centrally controlled cellular systems based on, for example, the OFDM modulation scheme. A centralized scheduling algorithm assigns each data packet to a certain timeslot, frequency channel and group of base station transmitters. DSFN can be considered as a combination of packet scheduling, macro-diversity and dynamic channel allocation (DCA). The scheduling algorithm can be further extended to dynamically assign other radio resource management parameters to each timeslot and transmitter, such as the modulation scheme and error correction scheme, with a view to optimizing efficiency.
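As a toy illustration of dynamic SFN grouping (the 6 dB margin, the measurements and the selection rule are invented for this sketch, not taken from any standard): for each timeslot, the scheduler could include every base station whose signal toward the scheduled terminal lies within a margin of the strongest one.

```python
# Toy dynamic-SFN grouping: per scheduled terminal, transmit the same signal
# from all base stations within `margin_db` of the strongest received one.
def sfn_group(rx_power_dbm_by_bs, margin_db=6.0):
    best = max(rx_power_dbm_by_bs.values())
    return sorted(bs for bs, p in rx_power_dbm_by_bs.items() if p >= best - margin_db)

# One entry per timeslot: measured powers from three base stations to the
# terminal scheduled in that slot (illustrative numbers only).
timeslots = [
    {"BS1": -70.0, "BS2": -74.0, "BS3": -90.0},
    {"BS1": -88.0, "BS2": -71.0, "BS3": -72.5},
]
for t, measurements in enumerate(timeslots):
    print(f"slot {t}: SFN group = {sfn_group(measurements)}")
```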
DSFN makes it possible to increase the received signal strength to a mobile terminal in between several base station transmitters in comparison to non-macrodiversity communication schemes. Thus, DSFN can improve the coverage area and lessen the outage probability. Alternatively, DSFN may allow the same outage probability with a less robust but more efficient modulation and error coding scheme, and thus improve the spectral efficiency in bit/s/Hz/base station trans
|
https://en.wikipedia.org/wiki/Greedy%20algorithm%20for%20Egyptian%20fractions
|
In mathematics, the greedy algorithm for Egyptian fractions is a greedy algorithm, first described by Fibonacci, for transforming rational numbers into Egyptian fractions. An Egyptian fraction is a representation of an irreducible fraction as a sum of distinct unit fractions, such as 5/6 = 1/2 + 1/3. As the name indicates, these representations have been used as long ago as ancient Egypt, but the first published systematic method for constructing such expansions was described in 1202 in the Liber Abaci of Leonardo of Pisa (Fibonacci). It is called a greedy algorithm because at each step the algorithm chooses greedily the largest possible unit fraction that can be used in any representation of the remaining fraction.
Fibonacci actually lists several different methods for constructing Egyptian fraction representations. He includes the greedy method as a last resort for situations when several simpler methods fail; see Egyptian fraction for a more detailed listing of these methods. As Salzer (1948) details, the greedy method, and extensions of it for the approximation of irrational numbers, have been rediscovered several times by modern mathematicians, earliest and most notably by Sylvester (1880). A closely related expansion method that produces closer approximations at each step by allowing some unit fractions in the sum to be negative dates back to .
The expansion produced by this method for a number is called the greedy Egyptian expansion, Sylvester expansion, or Fibonacci–Sylvester expansion of . However, the term Fibonacci expansion usually refers, not to this method, but to representation of integers as sums of Fibonacci numbers.
Algorithm and examples
Fibonacci's algorithm expands the fraction x/y to be represented by repeatedly performing the replacement
x/y → 1/⌈y/x⌉ + ((−y) mod x) / (y⌈y/x⌉)
(simplifying the second term in this replacement as necessary). For instance:
7/15 = 1/3 + 1/8 + 1/120;
in this expansion, the denominator 3 of the first unit fraction is the result of rounding 15/7 up to the next larger integer, and the remaining fraction 2/15 is the
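The rule above translates directly into code using exact rational arithmetic (a sketch of ours, reproducing the 7/15 example):

```python
from fractions import Fraction

def greedy_egyptian(x, y):
    """Fibonacci-Sylvester expansion: denominators of distinct unit fractions summing to x/y."""
    r = Fraction(x, y)
    denominators = []
    while r > 0:
        d = -(-r.denominator // r.numerator)   # ceil(y/x): largest unit fraction <= r
        denominators.append(d)
        r -= Fraction(1, d)
    return denominators

print(greedy_egyptian(7, 15))   # [3, 8, 120]  i.e.  7/15 = 1/3 + 1/8 + 1/120
```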
|
https://en.wikipedia.org/wiki/Continuous%20embedding
|
In mathematics, one normed vector space is said to be continuously embedded in another normed vector space if the inclusion function between them is continuous. In some sense, the two norms are "almost equivalent", even though they are not both defined on the same space. Several of the Sobolev embedding theorems are continuous embedding theorems.
Definition
Let X and Y be two normed vector spaces, with norms ||·||X and ||·||Y respectively, such that X ⊆ Y. If the inclusion map (identity function)
i : X → Y, i(x) = x,
is continuous, i.e. if there exists a constant C > 0 such that
||x||Y ≤ C ||x||X
for every x in X, then X is said to be continuously embedded in Y. Some authors use the hooked arrow "↪" to denote a continuous embedding, i.e. "X ↪ Y" means "X and Y are normed spaces with X continuously embedded in Y". This is a consistent use of notation from the point of view of the category of topological vector spaces, in which the morphisms ("arrows") are the continuous linear maps.
Examples
A finite-dimensional example of a continuous embedding is given by the natural embedding of the real line X = R into the plane Y = R2, where both spaces are given the Euclidean norm:
i : R → R2, i(x) = (x, 0).
In this case, ||x||X = ||x||Y for every real number x. Clearly, the optimal choice of constant C is C = 1.
An infinite-dimensional example of a continuous embedding is given by the Rellich–Kondrachov theorem: let Ω ⊆ Rn be an open, bounded, Lipschitz domain, and let 1 ≤ p < n. Set p∗ = np/(n − p).
Then the Sobolev space W1,p(Ω; R) is continuously embedded in the Lp space Lp∗(Ω; R). In fact, for 1 ≤ q < p∗, this embedding is compact. The optimal constant C will depend upon the geometry of the domain Ω.
Infinite-dimensional spaces also offer examples of discontinuous embeddings. For example, consider X = Y = C([0, 1]; R),
the space of continuous real-valued functions defined on the unit interval, but equip X with the L1 norm and Y with the supremum norm. For n ∈ N, let fn be the continuous, piecewise linear function given by
Then, for every n, ||fn||Y = ||fn|
|
https://en.wikipedia.org/wiki/Genghis%20Khan%20II%3A%20Clan%20of%20the%20Gray%20Wolf
|
Genghis Khan II: Clan of the Gray Wolf, originally released as , is a 1992 video game developed by Koei. It is part of Koei's Historical Simulation Series of games, and is the sequel to Genghis Khan, though this is the third game in the series. Genghis Khan II was developed and published for MSX2, Nintendo Entertainment System, DOS, X68000, PC-9801, PC-8801, Mega Drive/Genesis, Super NES, Sega CD, PC Engine, and later PlayStation. The Super NES version was also made available on the Wii Virtual Console in North America on June 8, 2009 and in Japan on May 11, 2010.
Gameplay
The player is given the option to conquer either the country of Mongolia as Temujin, the man who would one day become Genghis Khan himself, or as one of three other rivals in that region; or to take over the known world of the time as one of several rulers from throughout Europe, mainland Asia, and North Africa. Conquests are made through the balance of economy, population, buying and selling manufactured goods, family relations, promoting and demoting generals, developing military, all in a turn-based fashion. All of these actions can happen only within a given number of "turn points", so some actions are given priority while others are overlooked. The game also includes a turn-based battle sequence, allowing specific control to the player or delegated to a general.
Scenarios
Scenario 1: Conquest of Mongolia
In the first scenario, it is the year 1184 AD in Mongolia; the player has the option of controlling four different characters. The objective is to become ruler of the Mongolian steppes (basically control all of Mongolia). At the end of this scenario, if it is beaten by 1212 AD, the player can take on the rest of the world in the fourth scenario, World Conquest. The player gets to lead the fight with their chosen ruler, and their choice of eight generals, and an advisor.
Playable factions:
Mongols - Temujin
Jadarans - Jamukha
Keraites - Togril Khan
Naimans - Tayan Khan
Players are als
|
https://en.wikipedia.org/wiki/Protocol%20for%20Carrying%20Authentication%20for%20Network%20Access
|
PANA (Protocol for Carrying Authentication for Network Access) is an IP-based protocol that allows a device to authenticate itself with a network to be granted access. PANA will not define any new authentication protocol, key distribution, key agreement or key derivation protocols. For these purposes, the Extensible Authentication Protocol (EAP) will be used, and PANA will carry the EAP payload. PANA allows dynamic service provider selection, supports various authentication methods, is suitable for roaming users, and is independent from the link layer mechanisms.
PANA is an Internet Engineering Task Force (IETF) protocol and described in RFC 5191.
Architecture's elements
PaC (PANA Client)
The PaC is the client part of the protocol. This element is located in the node that wants to reach the access network.
PAA (PANA Authentication Agent)
This entity represents the server part of the PANA protocol. Its main task is the message exchange with the PaC for authenticating and authorizing it for network access. In addition, in some scenarios, the PAA entity has to do other message exchange with the AAA server in order to offer the PaC credentials to it. In this case, EAP is configured as pass-through and the AAA server is placed physically in a different place than the PAA.
AS (Authentication Server)
This element contains the information needed to check the PaC's credentials. To this end this node receives the PaC's credentials from the PAA, performs a credential check, and sends a packet with the result of the credential check. If the credential check was successful, that packet contains access parameters, such as allowed bandwidth or IP configuration. At this point, a session between PAA and PaC has been established. This session has a session lifetime. When the session expires, a re-authentication process is required for the PaC to regain network access.
EP (Enforcement Point) It works as a filter of the packets which source is an authenticated PaC. Basically, an E
|
https://en.wikipedia.org/wiki/Deoxyguanosine%20diphosphate
|
Deoxyguanosine diphosphate (dGDP) is a nucleoside diphosphate. It is related to the common nucleic acid guanosine triphosphate (GTP), with the -OH group on the 2' carbon on the nucleotide's pentose removed (hence the deoxy- part of the name), and with one fewer phosphoryl group than GTP.
See also
Cofactor
Guanosine
|
https://en.wikipedia.org/wiki/Deoxycytidine%20diphosphate
|
Deoxycytidine diphosphate is a nucleoside diphosphate. It is related to the common nucleic acid CTP, or cytidine triphosphate, with the -OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose removed (hence the deoxy- part of the name), and with one fewer phosphoryl group than CTP .
2'-deoxycytidine diphosphate is abbreviated as dCDP.
Synthesis of Cytidine Nucleotides
Deoxycytidine diphosphate is synthesized by the reduction of cytidine 5'-diphosphate (CDP), a reaction catalyzed by ribonucleoside-diphosphate reductase, the enzyme that forms deoxyribonucleotides from the corresponding ribonucleotides.
See also
DNA
Cofactor
Cytosine
|
https://en.wikipedia.org/wiki/Deoxyguanosine%20monophosphate
|
Deoxyguanosine monophosphate (dGMP), also known as deoxyguanylic acid or deoxyguanylate in its conjugate acid and conjugate base forms, respectively, is a derivative of the common nucleic acid guanosine triphosphate (GTP), in which the –OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose has been reduced to just a hydrogen atom (hence the "deoxy-" part of the name). It is used as a monomer in DNA.
See also
Cofactor
Guanosine
Nucleic acid
|
https://en.wikipedia.org/wiki/Deoxyadenosine%20monophosphate
|
Deoxyadenosine monophosphate (dAMP), also known as deoxyadenylic acid or deoxyadenylate in its conjugate acid and conjugate base forms, respectively, is a derivative of the common nucleic acid AMP, or adenosine monophosphate, in which the -OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose has been reduced to just a hydrogen atom (hence the "deoxy-" part of the name). Deoxyadenosine monophosphate is abbreviated dAMP. It is a monomer used in DNA.
See also
Nucleic acid
DNA metabolism
Cofactor
Guanosine
Cyclic AMP (cAMP)
ATP
Sources
Nucleotides
|
https://en.wikipedia.org/wiki/Deoxyadenosine%20diphosphate
|
Deoxyadenosine diphosphate is a nucleoside diphosphate. It is related to the common nucleic acid ATP, or adenosine triphosphate, with the -OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose removed (hence the deoxy- part of the name), and with one fewer phosphoryl group than ATP. This makes it also similar to adenosine diphosphate except with a hydroxyl group removed.
Deoxyadenosine diphosphate is abbreviated dADP.
See also
Cofactor
Guanosine
Cyclic adenosine monophosphate
|
https://en.wikipedia.org/wiki/Beta-M
|
The Beta-M is a radioisotope thermoelectric generator (RTG) that was used in Soviet-era lighthouses and beacons.
Design
The Beta-M contains a core made up of strontium-90, which has a half-life of 28.79 years. The service life of these generators is initially 10 years, and can be extended for another 5 to 10 years. The core is also known as radioisotope heat source 90 (RHS-90). In its initial state after manufacture, the generator is capable of generating 10 watts of electricity. The generator contains the strontium-90 radioisotope, with a heating power of 250W and 1,480 TBq of radioactivity – equivalent to some of Sr-90. Mass-scale production of RTGs in the Soviet Union was the responsibility of a plant called Baltiyets, in Narva, Estonia.
Safety incidents
Some Beta-M generators have been subject to incidents of vandalism when scavengers disassembled the units while searching for non-ferrous metals. In December 2001 a radiological accident occurred when three residents of Lia, Georgia found parts of an abandoned Beta-M in the forest while collecting firewood. The three suffered burns and symptoms of acute radiation syndrome as a result of their exposure to the strontium-90 contained in the Beta-M. The disposal team that removed the radiation sources consisted of 25 men who were restricted to 40 seconds' worth of exposure each while transferring the canisters to lead-lined drums.
|
https://en.wikipedia.org/wiki/Variable-length%20code
|
In coding theory, a variable-length code is a code which maps source symbols to a variable number of bits. The equivalent concept in computer science is bit string.
Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy an independent and identically-distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure.
Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
Codes and their extensions
The extension of a code is the mapping of finite length source sequences to finite length bit strings, that is obtained by concatenating for each symbol of the source sequence the corresponding codeword produced by the original code.
Using terms from formal language theory, the precise mathematical definition is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T, and the extension of C to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.
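As a small illustration (the alphabet and codewords are made up for this example), here is a code, its extension by concatenation, and symbol-by-symbol decoding; the example code happens to have the prefix property discussed below, which is what makes the decoding loop work:

```python
# Toy code over source alphabet {a, b, c, d}; codewords invented for the example.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(symbols):
    """Extension of the code: concatenate the codeword of each source symbol."""
    return "".join(code[s] for s in symbols)

def decode(bits):
    """The prefix property lets us read the bit string back symbol by symbol."""
    inverse, out, buf = {v: k for k, v in code.items()}, [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # no codeword is a prefix of another
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

msg = "abacad"
print(encode(msg))                   # 01001100111
print(decode(encode(msg)) == msg)    # True
```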
Classes of variable-length codes
Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular:
Non-singular codes
A code is non-singular if each source symbol is mapped to a different non-empty bit string, i.e. the map
|
https://en.wikipedia.org/wiki/Amanita%20cokeri
|
Amanita cokeri, commonly known as Coker's amanita and solitary lepidella, is a mushroom in the family Amanitaceae. The mushroom is poisonous. First described as Lepidella cokeri in 1928, it was transferred to the genus Amanita in 1940.
Taxonomy
Amanita cokeri was first described as Lepidella cokeri by mycologists E.-J.Gilbert and Robert Kühner in 1928. It was in 1940 when the species was transferred from genus Lepidella to Amanita by Gilbert. Presently, A. cokeri is placed under genus Amanita and section Roanokenses. The epithet cokeri is in honour of American mycologist and botanist William Chambers Coker.
Description
Its cap is white in colour, and across. It is oval to convex in shape. The surface is dry but sticky when wet. The cap surface is characterized by large pointed warts, white to brown in colour.
Gills are closely spaced and free from the stem. They are cream at first, but can turn white as the mushroom matures. Short-gills are frequent. Stem is white, measuring long and thick. It tapers slightly to the top, smooth to shaggy in texture. There is a ring, thick and often double-edged, the underside being tissuelike. The universal veil hangs from the top of the stipe. The basal bulb is considerably large in size, with concentric circles of down-turned scales. The volval remnants stick to it and cause irregular patches.
Spores are white, elliptical and amyloid. They measure 11–14 × 6–9 μm and are smooth. The flesh is white and shows no change when exposed. There is no distinctive odour, but some specimens may develop the smell of decaying protein.
Lookalikes
Amanita solitaria is a closely related species, though a completely different European taxon. The notable similarity is that both it and A. cokeri are double-ringed. A. timida, from tropical South Asia, resembles A. cokeri in its volval structure, its thick and prominent ring, and its large basal bulb.
Toxicity
In a study, the presence of non-protein amino acids 2-amino-3-cyclopropylbutanoic aci
|
https://en.wikipedia.org/wiki/Surface-area-to-volume%20ratio
|
The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples of such processes are those governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide, between air, blood and cells, water loss by animals, bacterial morphogenesis, organisms' thermoregulation, the design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and the rate of diffusion or heat conduction is explained from a flux and surface perspective, focusing on the surface of a body as the place where diffusion or heat conduction takes place: the larger the SA:V, the more surface area per unit volume through which material can diffuse, and therefore the faster the diffusion or heat conduction. A similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term sphere properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using
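For reference, the standard result that the truncated sentence introduces is
SA:V = 4πr² / ((4/3)πr³) = 3/r,
so the ratio decreases as the radius of the ball grows.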
|
https://en.wikipedia.org/wiki/Trailing%20zero
|
In mathematics, trailing zeros are a sequence of 0 in the decimal representation (or more generally, in any positional representation) of a number, after which no other digits follow.
Trailing zeros to the right of a decimal point, as in 12.340, don’t affect the value of a number and may be omitted if all that is of interest is its numerical value. This is true even if the zeros recur infinitely. For example, in pharmacy, trailing zeros are omitted from dose values to prevent misreading. However, trailing zeros may be useful for indicating the number of significant figures, for example in a measurement. In such a context, "simplifying" a number by removing trailing zeros would be incorrect.
The number of trailing zeros in a non-zero base-b integer n equals the exponent of the highest power of b that divides n. For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 103, but not by 104. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word.
Factorial
The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer n, is simply the multiplicity of the prime factor 5 in n!. This can be determined with this special case of de Polignac's formula:
f(n) = ⌊n/5⌋ + ⌊n/25⌋ + ... + ⌊n/5^k⌋,
where k must be chosen such that 5^(k+1) > n, more precisely
5^k ≤ n < 5^(k+1),
and ⌊a⌋ denotes the floor function applied to a. For n = 0, 1, 2, ... this is
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 6, ... .
For example, 5^3 > 32, and therefore 32! = 263130836933693530167218012160000000 ends in
⌊32/5⌋ + ⌊32/25⌋ = 6 + 1 = 7
zeros. If n < 5, the inequality is satisfied by k = 0; in that case the sum is empty, giving the answer 0.
The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each o
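The same count can be computed by summing the floor divisions directly, as in this short sketch:

```python
def trailing_zeros_factorial(n):
    """Number of trailing zeros of n!, i.e. the multiplicity of 5 in n!."""
    count = 0
    p = 5
    while p <= n:
        count += n // p   # add the multiples of 5, 25, 125, ...
        p *= 5
    return count

print(trailing_zeros_factorial(32))  # 7
```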
|
https://en.wikipedia.org/wiki/History%20of%20IBM
|
International Business Machines (IBM) is a multinational corporation specializing in computer technology and information technology consulting. Headquartered in Armonk, New York, United States, the company traces its roots to the amalgamation of various enterprises dedicated to automating routine business transactions, notably pioneering punched card-based data tabulating machines and time clocks. In 1911, these entities were unified under the umbrella of the Computing-Tabulating-Recording Company (CTR).
Thomas J. Watson (1874–1956) assumed the role of General Manager within the company in 1914 and ascended to the position of President in 1915. By 1924, the company rebranded as "International Business Machines." IBM diversified its offerings to include electric typewriters and other office equipment. Watson, a proficient salesman, aimed to cultivate a highly motivated, well-compensated sales force capable of devising solutions for clients unacquainted with the latest technological advancements.
In the 1940s and 1950s, IBM initiated its initial forays into computing, which constituted incremental improvements to the prevailing card-based system. A pivotal moment arrived in the 1960s with the introduction of the System/360 family of mainframe computers. IBM provided a comprehensive spectrum of hardware, software, and service agreements, fostering client loyalty and solidifying its moniker "Big Blue." The customized nature of end-user software, tailored by in-house programmers for a specific brand of computers, deterred brand switching due to its associated costs. Despite challenges posed by clone makers like Amdahl and legal confrontations, IBM leveraged its esteemed reputation, assuring clients with both hardware and system software solutions, earning acclaim as one of the esteemed American corporations during the 1970s and 1980s.
However, IBM encountered difficulties in the late 1980s and 1990s, marked by substantial losses surpassing $8 billion in 1993. The main
|
https://en.wikipedia.org/wiki/Band%20cell
|
A band cell (also called band neutrophil, band form or stab cell) is a cell undergoing granulopoiesis, derived from a metamyelocyte, and leading to a mature granulocyte.
It is characterized by having a curved but not lobular nucleus.
The term "band cell" implies a granulocytic lineage (e.g., neutrophils).
Clinical significance
Band neutrophils are an intermediary step prior to the complete maturation of segmented neutrophils. Polymorphonuclear neutrophils are initially released from the bone marrow as band cells; as these immature neutrophils become activated or are exposed to pathogens, their nuclei take on a segmented appearance. An increase in the number of these immature neutrophils in circulation can be indicative of an infection that they are being called on to fight, or of some other inflammatory process. An increase of band cells in the circulation is called bandemia and is referred to as a "left shift".
Blood reference ranges for neutrophilic band cells in adults are 3 to 5% of white blood cells, or up to 0.7 x109/L.
An excess may sometimes be referred to as bandemia.
See also
Pluripotential hemopoietic stem cell
Additional images
|
https://en.wikipedia.org/wiki/Metamyelocyte
|
A metamyelocyte is a cell undergoing granulopoiesis, derived from a myelocyte, and leading to a band cell.
It is characterized by the appearance of a bent nucleus, cytoplasmic granules, and the absence of visible nucleoli. (If the nucleus is not yet bent, then it is likely a myelocyte.)
Additional images
See also
Pluripotential hemopoietic stem cell
External links
- "Bone Marrow and Hemopoiesis: bone marrow smear, neutrophilic metamyelocyte and mature PMN"
Interactive diagram at lycos.es
Slide at marist.edu
hematologyatlas.com
Histology
Leukocytes
|
https://en.wikipedia.org/wiki/Granulopoiesis
|
Granulopoiesis (or granulocytopoiesis) is a part of haematopoiesis, that leads to the production of granulocytes. A granulocyte, also referred to as a polymorphonuclear leukocyte (PMN), is a type of white blood cell that has multi lobed nuclei, usually containing three lobes, and has a significant amount of cytoplasmic granules within the cell. Granulopoiesis takes place in the bone marrow. It leads to the production of three types of mature granulocytes: neutrophils (most abundant, making up to 60% of all white blood cells), eosinophils (up to 4%) and basophils (up to 1%).
Stages of granulocyte development
Granulopoiesis is often divided into two parts;
1) Granulocyte lineage determination and
2) Committed granulopoiesis.
Granulocyte lineage determination
Granulopoiesis, as well as the rest of haematopoiesis, begins from haematopoietic stem cells. These are multipotent cells that reside in the bone marrow niche, can give rise to all haematopoietic cells, and are capable of self-renewal. They give rise to either a common lymphoid progenitor (CLP, a progenitor for all lymphoid cells) or a common myeloid progenitor (CMP), an oligopotent progenitor cell that gives rise to the myeloid part of the haematopoietic tree. The first stage of the myeloid lineage is the granulocyte–monocyte progenitor (GMP), still an oligopotent progenitor, which then develops into unipotent cells that will later form a population of granulocytes, as well as a population of monocytes. The first unipotent cell in granulopoiesis is the myeloblast.
Committed granulopoiesis
Committed granulopoiesis consists of maturation stages of unipotent cells. The first cell that starts to resemble a granulocyte is a myeloblast. It is characterized by large oval nucleus that takes up most of the space in the cell and very little cytoplasm. The next developmental stage, a promyelocyte, still has a large oval nucleus, but there is more cytoplasm in the cell at this point, also
|
https://en.wikipedia.org/wiki/Lymphopoiesis
|
Lymphopoiesis (lĭm'fō-poi-ē'sĭs) (or lymphocytopoiesis) is the generation of lymphocytes, one of the five types of white blood cells (WBCs). It is more formally known as lymphoid hematopoiesis.
Disruption in lymphopoiesis can lead to a number of lymphoproliferative disorders, such as lymphomas and lymphoid leukemias.
Terminology
Lymphocytes are of the lymphoid (rather than the myeloid or erythroid) lineage of blood cells.
Nomenclature is not trivial in this case, because, although lymphocytes are found in the bloodstream and originate in the bone marrow, they principally belong to the separate lymphatic system which interacts with the blood circulation.
Lymphopoiesis is now usually used interchangeably with the term "lymphocytopoiesis" – the making of lymphocytes – but some sources distinguish between the two, stating that "lymphopoiesis" additionally refers to creating lymphatic tissue, while "lymphocytopoiesis" refers only to the creation of cells in that tissue. It is rare now for lymphopoiesis to refer to the creation of lymphatic tissues.
Myelopoiesis refers to "generation of cells of the myeloid lineage" and erythropoiesis refers to "generation of cells of the erythroid lineage", so parallel usage has evolved in which lymphopoiesis refers to "generation of cells of the lymphoid lineage".
Observations on research going back well over 100 years had elucidated the two classes of WBCs – myeloid and lymphoid – and advances in medicine and science have resulted from these studies. After investigating the origins of these two classes of cells, scientists isolated and defined two cell types with some strong stem cell properties – the common myeloid progenitor (CMP) and the common lymphoid progenitor (CLP) for mice. It was eventually found these progenitors were not unique, and that the myeloid and lymphoid classes were not disjoint, but rather two partially interwoven family trees.
Function
Mature lymphocytes are a critical part of the immune system that (wit
|
https://en.wikipedia.org/wiki/Bostongurka
|
Bostongurka (Swedish meaning "Boston cucumber") is a type of relish with pickled gherkins, red bell pepper and onion with spices such as mustard seeds. It is so popular in Sweden that it is considered by some to be a generic term.
Despite its name, Bostongurka has nothing to do with the city of Boston. Bostongurka was invented by the Swedish company Felix based on a recipe from Hungary. It was originally invented as a way to use up the end pieces left over when making pickled gherkins. Today, the name Bostongurka is a registered trademark of Orkla Group.
|
https://en.wikipedia.org/wiki/Obligate
|
As an adjective, obligate means "by necessity" (antonym facultative) and is used mainly in biology in phrases such as:
Obligate aerobe, an organism that cannot survive without oxygen
Obligate anaerobe, an organism that cannot survive in the presence of oxygen
Obligate air-breather, a term used in fish physiology to describe those that respire entirely from the atmosphere
Obligate biped, an animal that must walk on two legs
Obligate carnivore, an organism dependent for survival on a diet of animal flesh.
Obligate chimerism, a kind of organism with two distinct sets of DNA, always
Obligate hibernation, a state of inactivity by which some organisms survive periods of insufficiently available resources.
Obligate intracellular parasite, a parasitic microorganism that cannot reproduce without entering a suitable host cell
Obligate parasite, a parasite that cannot reproduce without exploiting a suitable host
Obligate photoperiodic plant, a plant that requires sufficiently long or short nights before it initiates flowering, germination or similarly functions
Obligate symbionts, organisms that can only live together in a symbiosis
See also
Opportunism (biological)
Biology terminology
|
https://en.wikipedia.org/wiki/Pneumocystis%20pneumonia
|
Pneumocystis pneumonia (PCP), also known as Pneumocystis jirovecii pneumonia (PJP), is a form of pneumonia that is caused by the yeast-like fungus Pneumocystis jirovecii.
Pneumocystis specimens are commonly found in the lungs of healthy people, although they are usually not a cause of disease. However, they are a source of opportunistic infection and can cause lung infections in people with a weak immune system or other predisposing health conditions. PCP is seen in people with HIV/AIDS (who account for 30-40% of PCP cases), those using medications that suppress the immune system, and people with cancer, autoimmune or inflammatory conditions, and chronic lung disease.
Signs and symptoms
Signs and symptoms may develop over several days or weeks and may include: shortness of breath and/or difficulty breathing (of gradual onset), fever, dry/non-productive cough, weight loss, night sweats, chills, and fatigue. Uncommonly, the infection may progress to involve other visceral organs (such as the liver, spleen, and kidney).
Cough - typically dry/non-productive because sputum becomes too viscous to be coughed up. The dry cough distinguishes PCP from typical pneumonia.
Complications
Pneumothorax is a well-known complication of PCP. Also, a condition similar to acute respiratory distress syndrome (ARDS) may occur in patients with severe Pneumocystis pneumonia, and such individuals may require intubation.
Pathophysiology
The risk of PCP increases when CD4-positive T-cell levels are less than 400 cells/μL. In these immunosuppressed individuals, the manifestations of the infection are highly variable. The disease attacks the interstitial, fibrous tissue of the lungs, with marked thickening of the alveolar septa and alveoli, leading to significant hypoxia, which can be fatal if not treated aggressively. In this situation, lactate dehydrogenase levels increase and gas exchange is compromised. Oxygen is less able to diffuse into the blood, leading to hypoxia, which along with hi
|
https://en.wikipedia.org/wiki/64b/66b%20encoding
|
In data networking and transmission, 64b/66b is a line code that transforms 64-bit data to 66-bit line code to provide enough state changes to allow reasonable clock recovery and alignment of the data stream at the receiver. It was defined by the IEEE 802.3 working group as part of the IEEE 802.3ae-2002 amendment which introduced 10 Gbit/s Ethernet. At the time 64b/66b was deployed, it allowed 10 Gb Ethernet to be transmitted with the same lasers used by SONET OC-192, rather than requiring the 12.5 Gbit/s lasers that were not expected to be available for several years.
The protocol overhead of a coding scheme is the ratio of the number of added coding bits to the number of raw payload bits. The overhead of 64b/66b encoding is 2 coding bits for every 64 payload bits, or 3.125%. This is a considerable improvement on the 25% overhead of the previously used 8b/10b encoding scheme, which added 2 coding bits to every 8 payload bits.
The overhead can be reduced further by doubling the payload size to produce the 128b/130b encoding used by PCIe 3.0.
Function
As its scheme name suggests, 64 payload bits are encoded as a 66-bit entity. The 66-bit entity is made by prefixing one of two possible 2-bit preambles to the 64 payload bits.
If the preamble is 01, the 64 payload bits are data.
If the preamble is 10, the 64 payload bits hold an 8-bit Type field and 56 bits of control information and/or data.
The preambles 00 and 11 are not used and indicate an error if seen.
The use of the 01 and 10 preambles guarantees a bit transition every 66 bits, which means that a continuous stream of 0s or 1s cannot be valid data. It also allows easier clock/timer synchronization, as a transition must be seen every 66 bits.
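As a rough illustration of the framing step only, the following Python sketch builds and checks the 2-bit sync header; it is not the IEEE 802.3 reference implementation, the function names are invented here, and storing the header in the top bits of an integer is just a representation convenience.

def encode_block(payload64: int, is_data: bool) -> int:
    # Prefix a 2-bit sync header to 64 payload bits: 01 = data, 10 = control/type.
    assert 0 <= payload64 < (1 << 64)
    header = 0b01 if is_data else 0b10
    return (header << 64) | payload64

def decode_block(block66: int):
    # Split a 66-bit block; headers 00 and 11 contain no transition and are invalid.
    header = block66 >> 64
    if header not in (0b01, 0b10):
        raise ValueError("invalid sync header")
    return header == 0b01, block66 & ((1 << 64) - 1)

Because one of the two header bits is always 0 and the other 1, every legal block carries at least one guaranteed transition, which is exactly the clock-recovery property described above.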
The 64-bit payload is then scrambled using a self-synchronous scrambler function. Scrambling is not intended to encrypt the data but to ensure that a relatively even distribution of 1s and 0s are found in the transmitted data. The s
|
https://en.wikipedia.org/wiki/Screenless%20video
|
Screenless video is any system for transmitting visual information from a video source without the use of a screen. Screenless computing systems can be divided into three groups: Visual Image, Retinal Direct, and Synaptic Interface.
Visual image
Visual Image screenless display includes any image that the eye can perceive. The most common example of Visual Image screenless display is a hologram. In these cases, light is reflected off some intermediate object (hologram, LCD panel, or cockpit window) before it reaches the retina. In the case of LCD panels the light is refracted from the back of the panel, but is nonetheless a reflected source. Google has proposed a similar system to replace the screens of tablet computers and smartphones.
Retinal display
Virtual retinal display systems are a class of screenless displays in which images are projected directly onto the retina. They are distinguished from visual image systems because light is not reflected from some intermediate object onto the retina, it is instead projected directly onto the retina. Retinal Direct systems, once marketed, hold out the promise of extreme privacy when computing work is done in public places because most snooping relies on viewing the same light as the person who is legitimately viewing the screen, and retinal direct systems send light only into the pupils of their intended viewer.
Synaptic interface
Synaptic Interface screenless video does not use light at all. Visual information completely bypasses the eye and is transmitted directly to the brain. While such systems have only been implemented in humans in rudimentary form - for example, displaying single Braille characters to blind people – success has been achieved in sampling usable video signals from the biological eyes of a living horseshoe crab through their optic nerves, and in sending video signals from electronic cameras into the creatures' brains using the same method.
See also
Volumetric display
Fog display
Augment
|
https://en.wikipedia.org/wiki/Rotation%20formalisms%20in%20three%20dimensions
|
In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational (or angular) kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
According to Euler's rotation theorem, the rotation of a rigid body (or three-dimensional coordinate system with a fixed origin) is described by a single rotation about some axis. Such a rotation may be uniquely described by a minimum of three real parameters. However, for various reasons, there are several ways to represent it. Many of these representations use more than the necessary minimum of three parameters, although each of them still has only three degrees of freedom.
An example where rotation representation is used is in computer vision, where an automated observer needs to track a target. Consider a rigid body, with three orthogonal unit vectors fixed to its body (representing the three axes of the object's local coordinate system). The basic problem is to specify the orientation of these three unit vectors, and hence the rigid body, with respect to the observer's coordinate system, regarded as a reference placement in space.
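To make the three-parameter point concrete, here is a short Python sketch (an illustration, not part of the article) of the axis-angle parameterization: three numbers (a unit axis and an angle, or equivalently the axis scaled by the angle) expanded into a full 3×3 rotation matrix via Rodrigues' formula.

import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: R = I + sin(t)*K + (1 - cos(t))*K^2,
    # where K is the skew-symmetric cross-product matrix of the unit axis.
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

Applying the resulting matrix to the three body-fixed unit vectors gives their orientation in the observer's frame.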
Rotations and motions
Rotation formalisms are focused on proper (orientation-preserving) motions of the Euclidean space with one fixed point, which is what a rotation refers to. Although physical motions with a fixed point are an important case (such as ones described in the center-of-mass frame, or motions of a joint), this approach also provides knowledge about all motions. Any proper motion of the Euclidean space decomposes into a rotation around the origin and a translation. Whichever the order of their composition will be,
|
https://en.wikipedia.org/wiki/B%E2%80%93Bbar%20oscillation
|
Neutral B meson oscillations (or B_s–B̄_s oscillations) are one of the manifestations of neutral particle oscillation, a fundamental prediction of the Standard Model of particle physics. It is the phenomenon of B mesons changing (or oscillating) between their matter and antimatter forms before their decay. The B_s meson can exist as either a bound state of a strange antiquark and a bottom quark, or of a strange quark and a bottom antiquark. The oscillations in the neutral B sector are analogous to the phenomena that produce long- and short-lived neutral kaons.
B_s–B̄_s mixing was observed by the CDF experiment at Fermilab in 2006 and by LHCb at CERN in 2011 and 2021.
Excess of matter over antimatter
The Standard Model predicts that regular matter mesons are slightly favored in these oscillations over their antimatter counterpart, making strange B mesons of special interest to particle physicists. The observation of B–B̄ mixing phenomena led physicists to propose the construction of B-factories in the early 1990s. They realized that a precise measurement of B–B̄ oscillation could pin down the unitarity triangle and perhaps explain the excess of matter over antimatter in the universe. To this end, construction began on two "B factories" in the late nineties, one at the Stanford Linear Accelerator Center (SLAC) in California and one at KEK in Japan.
These B factories, BaBar and Belle, were set at the Υ(4S) resonance, which is just above the threshold for decay into two B mesons.
On 14 May 2010, physicists at the Fermi National Accelerator Laboratory reported that the oscillations decayed into matter 1% more often than into antimatter, which may help explain the abundance of matter over antimatter in the observed Universe. However, more recent results at LHCb in 2011, 2012, and 2021 with larger data samples have demonstrated no significant deviation from the Standard Model prediction of very nearly zero asymmetry.
See also
Baryogenesis
CP Violation
Kaon
Neutral particle oscillation
Stra
|
https://en.wikipedia.org/wiki/Strange%20B%20meson
|
The B_s meson is a meson composed of a bottom antiquark and a strange quark. Its antiparticle is the B̄_s meson, composed of a bottom quark and a strange antiquark.
B–B oscillations
Strange B mesons are noted for their ability to oscillate between matter and antimatter via a box diagram, with the oscillation frequency measured by the CDF experiment at Fermilab.
That is, a meson composed of a bottom quark and a strange antiquark, the B̄_s meson, can spontaneously change into a bottom antiquark and strange quark pair, the B_s meson, and vice versa.
On 25 September 2006, Fermilab announced that it had claimed discovery of the previously only theorized B_s meson oscillation. According to Fermilab's press release:
Ronald Kotulak, writing for the Chicago Tribune, called the particle "bizarre" and stated that the meson "may open the door to a new era of physics" with its proven interactions with the "spooky realm of antimatter".
Better understanding of the meson is one of the main objectives of the LHCb experiment conducted at the Large Hadron Collider. On 24 April 2013, CERN physicists in the LHCb collaboration announced that they had observed CP violation in the decay of strange mesons for the first time. Scientists found the Bs meson decaying into two muons for the first time, with Large Hadron Collider experiments casting doubt on the scientific theory of supersymmetry.
CERN physicist Tara Shears described the CP violation observations as "verification of the validity of the Standard Model of physics".
Rare decays
The rare decays of the B_s meson are an important test of the Standard Model. The branching fraction of the strange B meson to a pair of muons is very precisely predicted, with a value of Br(B_s → μ⁺μ⁻)_SM = (3.66 ± 0.23) × 10⁻⁹. Any variation from this rate would indicate possible physics beyond the Standard Model, such as supersymmetry. The first definitive measurement was made from a combination of LHCb and CMS experiment data:
This result is compatible with the Standard Model a
|
https://en.wikipedia.org/wiki/Loco-Motion%20%28video%20game%29
|
Loco-Motion, known as Guttang Gottong in Japan, is an arcade puzzle game developed by Konami in 1982 and released by Sega in Japan. The North American rights were licensed to Centuri. In Loco-Motion, the player builds a path for their unstoppable locomotive by moving tracks which will allow it to pick up passengers.
The game was ported to Intellivision, the Tomy Tutor, and–under a different name–MSX. A clone programmed by Carol Shaw of Activision, Happy Trails, was published for Intellivision before the official version was released.
Gameplay
Loco-Motion is an updated version of a sliding block puzzle game in which the player can move tiles horizontally or vertically within a rectangular frame that contains one empty square. The tiles are sections of railroad track and the player must use them to construct a path for a locomotive that never stops moving. Laid out around the edges of the frame are several stations with passengers that must be picked up.
The player uses a joystick to slide a piece of the track into the vacant square and can use a button to accelerate the locomotive. However, it is always in motion and cannot be stopped. The player must avoid running into a dead-end barricade, a barrier at the playfield edge or the edge of the empty square; doing so costs one life. As the player moves the pieces of track around, the route the locomotive will take is highlighted in yellow up to any dead end.
A countdown timer occasionally appears on a station. If passengers are waiting there and the player picks them up before the countdown reaches zero, the remaining amount is added to the score as bonus points. If not, the passengers send a "Crazy Train" onto the tracks and the player must avoid crashing into it. If a countdown timer at an unoccupied station reaches zero, the station is destroyed and becomes a pair of barrier squares. Crazy Trains can be crashed into each other to destroy them but doing so creates a new pair of dead ends or destroys a station if the collision h
|
https://en.wikipedia.org/wiki/Extrusion%20detection
|
Extrusion detection or outbound intrusion detection is a branch of intrusion detection aimed at developing mechanisms to identify successful and unsuccessful attempts to use the resources of a computer system to compromise other systems. Extrusion detection techniques focus primarily on the analysis of system activity and outbound traffic in order to detect malicious users, malware or network traffic that may pose a threat to the security of neighboring systems.
While intrusion detection is mostly concerned with the identification of incoming attacks (intrusion attempts), extrusion detection systems try to prevent attacks from being launched in the first place. They implement monitoring controls at leaf nodes of the network—rather than concentrating them at choke points, e.g., routers—in order to distribute the inspection workload and to take advantage of the visibility a system has of its own state. The ultimate goal of extrusion detection is to identify attack attempts launched from an already compromised system in order to prevent them from reaching their target, thereby containing the impact of the threat.
External links
"Stopping Spam by Extrusion Detection"
"Outbound Intrusion Detection"
Data security
|
https://en.wikipedia.org/wiki/Solar%20gain
|
Solar gain (also known as solar heat gain or passive solar gain) is the increase in thermal energy of a space, object or structure as it absorbs incident solar radiation. The amount of solar gain a space experiences is a function of the total incident solar irradiance and of the ability of any intervening material to transmit or resist the radiation.
Objects struck by sunlight absorb its visible and short-wave infrared components, increase in temperature, and then re-radiate that heat at longer infrared wavelengths. Though transparent building materials such as glass allow visible light to pass through almost unimpeded, once that light is converted to long-wave infrared radiation by materials indoors, it is unable to escape back through the window since glass is opaque to those longer wavelengths. The trapped heat thus causes solar gain via a phenomenon known as the greenhouse effect. In buildings, excessive solar gain can lead to overheating within a space, but it can also be used as a passive heating strategy when heat is desired.
Window solar gain properties
Solar gain is most frequently addressed in the design and selection of windows and doors. Because of this, the most common metrics for quantifying solar gain are used as a standard way of reporting the thermal properties of window assemblies. In the United States, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the National Fenestration Rating Council (NFRC) maintain standards for the calculation and measurement of these values.
Shading coefficient
The shading coefficient (SC) is a measure of the radiative thermal performance of a glass unit (panel or window) in a building. It is defined as the ratio of solar radiation at a given wavelength and angle of incidence passing through a glass unit to the radiation that would pass through a reference window of frameless Clear Float Glass. Since the quantities compared are functions of both wavelength and angle of i
|
https://en.wikipedia.org/wiki/Left%20border%20of%20heart
|
The left border of heart (or obtuse margin) is formed by the rounded lateral wall of the left ventricle. It is called the 'obtuse' margin because of the obtuse angle (>90 degrees) created between the anterior part of the heart and the left side, which is formed by the rounded lateral wall of the left ventricle. Within this margin can be found the obtuse marginal artery, which is a branch of the left circumflex artery.
It extends from a point in the second left intercostal space, about 2.5 cm from the sternal margin, obliquely downward, with a convexity to the left, to the apex of the heart.
This is contrasted with the acute margin of the heart, which is at the border of the anterior and posterior surface, and in which the acute marginal branch of the right coronary artery is found. The angle formed here is <90 degrees, therefore an acute angle.
|
https://en.wikipedia.org/wiki/Coprinopsis%20atramentaria
|
Coprinopsis atramentaria, commonly known as the common ink cap, tippler's bane, or inky cap, is an edible (although poisonous when combined with alcohol) mushroom found in Europe and North America. Previously known as Coprinus atramentarius, it is the second best known ink cap and previous member of the genus Coprinus after C. comatus. It is a widespread and common fungus found throughout the northern hemisphere. Clumps of mushrooms arise after rain from spring to autumn, commonly in urban and disturbed habitats such as vacant lots and lawns, as well as grassy areas. The grey-brown cap is initially bell-shaped before opening, after which it flattens and disintegrates. The flesh is thin and the taste mild. It can be eaten, but due to the presence of coprine within the mushroom, it is poisonous when consumed with alcohol, as it heightens the body's sensitivity to ethanol in a similar manner to the anti-alcoholism drug disulfiram.
Taxonomy
The common ink cap was first described by French naturalist Pierre Bulliard in 1786 as Agaricus atramentarius before being placed in the large genus Coprinus in 1838 by Elias Magnus Fries. The specific epithet is derived from the Latin word atramentum "ink".
The genus was formerly considered to be a large one with well over 100 species. However, molecular analysis of DNA sequences showed that most species belonged in the family Psathyrellaceae, distinct from the type species that belonged to the Agaricaceae. It was given its current binomial name in 2001 as a result, as this and other species were moved to the new genus Coprinopsis.
The term "tippler's bane" is derived from its ability to create acute sensitivity to alcohol, similar to disulfiram (Antabuse). Other common names include common ink cap and inky cap. The black liquid that this mushroom releases after being picked was once used as ink.
Description
Measuring in diameter, the greyish or brownish-grey cap is initially bell-shaped, is furrowed, and later splits. The co
|
https://en.wikipedia.org/wiki/Jozaria
|
Jozaria is an extinct genus of stem perissodactyl from the Early to Middle Eocene of the Kuldana Formation of Kohat, Pakistan. It and other anthracobunids were formerly classified with proboscideans.
Only one specimen, belonging to the species Jozaria palustris, has been discovered so far. Geological evidence from the place of discovery indicates that the animal lived in a brackish marsh environment. It probably fed on soft aquatic vegetation.
|
https://en.wikipedia.org/wiki/Statistical%20potential
|
In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB).
The original method to obtain such potentials is the quasi-chemical approximation, due to Miyazawa and Jernigan. It was later followed by the potential of mean force (statistical PMF), developed by Sippl. Although the obtained scores are often considered as approximations of the free energy—thus referred to as pseudo-energies—this physical interpretation is incorrect. Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences.
Overview
Possible features to which a pseudo-energy can be assigned include:
interatomic distances,
torsion angles,
solvent exposure,
or hydrogen bond geometry.
The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the PDB).
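A minimal sketch of that scoring scheme is shown below; the 8 Å cutoff, the function name and the contents of energy_table are illustrative assumptions (the table is presumed symmetric in its two residue keys), not values from any published potential.

import numpy as np

def contact_energy(residues, coords, energy_table, cutoff=8.0):
    # Sum the pairwise energies of all residue pairs closer than 'cutoff' angstroms.
    coords = np.asarray(coords, dtype=float)
    total = 0.0
    for i in range(len(residues)):
        for j in range(i + 1, len(residues)):
            if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                total += energy_table[(residues[i], residues[j])]
    return total

Lower totals indicate structural models whose contact pattern better matches the statistics of the database.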
History
Initial development
Many textbooks present the statistical PMFs as proposed by Sippl as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice.
The Boltzmann distribution applied to a specific pair of amino acids is given by:
P(r) = (1/Z) exp(−F(r)/kT)
where r is the distance, k is the Boltzmann constant, T is the temperature and Z is the partition function, w
|
https://en.wikipedia.org/wiki/T.37
|
T.37 is an ITU standard which deals with sending fax messages using email. It is also referred to as "Internet fax" or "Store-forward-fax".
A fax machine supporting T.37 will send a fax to an email address by converting the document to a TIFF-F image, attaching it to an email (using the MIME format), and sending the document (using SMTP). The destination fax receives the email and prints the attached document.
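The store-and-forward idea can be sketched with Python's standard email and SMTP modules; the addresses, hostname and file name below are placeholders, and a real T.37 device adds further addressing conventions and headers beyond this minimal example.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "fax-gateway@example.com"
msg["To"] = "fax=15551234567@offramp.example.com"   # hypothetical offramp address
msg["Subject"] = "Fax transmission"
with open("document.tif", "rb") as f:
    # T.37 carries the page images as a TIFF-F attachment inside a MIME message.
    msg.add_attachment(f.read(), maintype="image", subtype="tiff",
                       filename="document.tif")
with smtplib.SMTP("smtp.example.com") as server:
    server.send_message(msg)   # hand the message to SMTP for delivery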
To interface with regular fax machines:
T.37 can be used in conjunction with fax gateways to communicate with regular fax machines. The fax gateway converts emails to regular faxes or regular faxes to emails.
A T.37-compliant fax machine includes legacy fax functionality to send to regular fax numbers and requires a phone line for this.
To find the destination fax's email address, the RFC 4143 standard (in development) allows a fax machine to use a destination fax number to look up an alternative email address.
See also
Unified messaging
T.38 (Fax over the Internet Protocol)
Internet fax
External links
Official ITU-T T.37 page
Cisco Fax over IP T.37 Store and Forward Fax
Internet Standards
VoIP protocols
ITU-T recommendations
ITU-T T Series Recommendations
|
https://en.wikipedia.org/wiki/WDMA%20%28computer%29
|
The Word DMA (WDMA) interface was the fastest method used to transfer data between the computer (through the Advanced Technology Attachment (ATA) controller) and an ATA device until Ultra Direct Memory Access (UDMA) was implemented. Single/Multiword DMA took over from Programmed input/output (PIO) as the choice of interface between ATA devices and the computer.
The WDMA interface is grouped into different modes.
In single transfer mode, only one word (16 bits) is transferred between the device and the computer before control returns to the CPU; the cycle then repeats, allowing the CPU to process data while data is being transferred. In multiword transfer mode (block mode), once a transfer has begun it continues until all words are transferred.
Two additional Advanced Timing modes have been defined in the CompactFlash specification 2.1. Those are Multiword DMA mode 3 and Multiword DMA mode 4. They are specific to CompactFlash. Multiword DMA is only permitted for CompactFlash devices configured in True IDE mode.
AT Attachment
|
https://en.wikipedia.org/wiki/Sociometer
|
Sociometer theory is a theory of self-esteem from an evolutionary psychological perspective which proposes that self-esteem is a gauge (or sociometer) of interpersonal relationships.
This theoretical perspective was first introduced by Mark Leary and colleagues in 1995 and later expanded on by Kirkpatrick and Ellis.
In Leary's research, the idea of self-esteem as a sociometer is discussed in depth. This theory was created as a response to psychological phenomena such as social emotions, inter- and intra-personal behaviors, self-serving biases, and reactions to rejection. Based on this theory, self-esteem is a measure of effectiveness in social relations and interactions that monitors acceptance and/or rejection from others. With this, an emphasis is placed on relational value, which is the degree to which a person regards his or her relationship with another, and how it affects day-to-day life. Various studies and research have confirmed that if a person is deemed to have relational value, they are more likely to have higher self-esteem.
The main concept of sociometer theory is that the self-esteem system acts as a gauge to measure the quality of an individual's current and forthcoming relationships. Furthermore, this measurement of self-esteem assesses these two types of relationships in terms of relational appreciation, that is, how other people might view and value the relationships they hold with the individual. If the relational appreciation of an individual is altered negatively, relational devaluation is experienced. Relational devaluation operates through the sense of belongingness: a negative alteration lets the sociometer gauge highlight these threats, producing emotional distress that prompts the individual to act to regain relational appreciation and restore balance in their self-esteem.
According to Leary, there are five main groups associated with relational value that are classified as those affording the greatest impact on an individual. They are: 1) macro-level, i.e., comm
|
https://en.wikipedia.org/wiki/Structural%20semantics
|
Structural semantics (also structuralist semantics) is a linguistic school and paradigm that emerged in Europe from the 1930s, inspired by the structuralist linguistic movement started by Ferdinand de Saussure's 1916 work "Cours de linguistique générale" (Course in General Linguistics).
Examples of approaches within structural semantics are lexical field theory (1931-1960s), relational semantics (from the 1960s, by John Lyons) and componential analysis (from the 1960s, by Eugenio Coseriu, Bernard Pottier and Algirdas Greimas). From the 1960s these approaches were incorporated into generative linguistics. Other prominent developers of structural semantics have been Louis Hjelmslev, Émile Benveniste, Klaus Heger, Kurt Baldinger and Horst Geckeler.
Logical positivism asserts that structural semantics is the study of relationships between the meanings of terms within a sentence, and how meaning can be composed from smaller elements. However, some critical theorists suggest that meaning is only divided into smaller structural units via its regulation in concrete social interactions; outside of these interactions, language may become meaningless.
Structural semantics is the branch that marked the modern linguistics movement started by Ferdinand de Saussure at the turn of the 20th century in his posthumously published "Cours de linguistique générale" (Course in General Linguistics). He posits that language is a system of inter-related units and structures and that every unit of language is related to the others within the same system. His position later became the breeding ground for other theories such as componential analysis and relational predicates. Structuralism is a very efficient aspect of semantics, as it explains the concordance in the meaning of certain words and utterances. The concept of sense relations as a means of semantic interpretation is an offshoot of this theory as well.
Structuralism has revolutionized semantics to its present state, and i
|
https://en.wikipedia.org/wiki/Multiple%20sub-Nyquist%20sampling%20encoding
|
MUSE (Multiple sub-Nyquist Sampling Encoding), commercially known as Hi-Vision (a contraction of HIgh-definition teleVISION), was a Japanese analog high-definition television system, with design efforts going back to 1979.
It used dot-interlacing and digital video compression to deliver 1125 line, 60 field-per-second (1125i60) signals to the home. The system was standardized as ITU-R recommendation BO.786 and specified by SMPTE 260M, using a colorimetry matrix specified by SMPTE 240M. As with other analog systems, not all lines carry visible information. On MUSE there are 1035 active interlaced lines, therefore this system is sometimes also mentioned as 1035i. It employed 2-dimensional filtering, dot-interlacing, motion-vector compensation and line-sequential color encoding with time compression to "fold" an original 20 MHz bandwidth source signal into just 8.1 MHz.
Japan began broadcasting wideband analog HDTV signals in December 1988, initially with an aspect ratio of 2:1. The Sony HDVS high-definition video system was used to create content for the MUSE system.
By the time of its commercial launch in 1991, digital HDTV was already under development in the United States. Hi-Vision was mainly broadcast by NHK through their BShi satellite TV channel.
On May 20, 1994, Panasonic released the first MUSE LaserDisc player. There were a number of players available from other brands like Pioneer and Sony, yet Hi-Vision LaserDiscs are extremely rare and expensive.
Hi-Vision continued broadcasting in analog until 2007.
History
MUSE was developed by NHK Science & Technology Research Laboratories in the 1980s as a compression system for Hi-Vision HDTV signals.
Japanese broadcast engineers immediately rejected conventional vestigial sideband broadcasting.
It was decided early on that MUSE would be a satellite broadcast format as Japan economically supports satellite broadcasting.
Modulation research
Japanese broadcast engineers had been studying the various HDTV broa
|
https://en.wikipedia.org/wiki/Pi%20Josephson%20junction
|
A Josephson junction is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.).
A π Josephson junction is a Josephson junction in which the Josephson phase φ equals π in the ground state, i.e. when no external current or magnetic field is applied.
Background
The supercurrent Is through a Josephson junction (JJ) is generally given by Is = Ic sin(φ),
where φ is the phase difference of the superconducting wave functions of the two
electrodes, i.e. the Josephson phase.
The critical current Ic is the maximum supercurrent that can exist through the Josephson junction.
In experiment, one usually causes some current through the Josephson junction and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase φ = arcsin(I/Ic), where I is the applied (super)current.
Since the phase is 2π-periodic, i.e. φ and φ + 2πn are physically equivalent, without loss of generality the discussion below refers to the interval 0 ≤ φ < 2π.
When no current (I = 0) flows through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero (φ = 0). The phase can also be φ = π, also resulting in no current through the junction. It turns out that the state with φ = π is unstable and corresponds to the Josephson energy maximum, while the state φ = 0 corresponds to the Josephson energy minimum and is a ground state.
In certain cases, one may obtain a Josephson junction where the critical current is negative (Ic < 0). In this case, the first Josephson relation becomes Is = −|Ic| sin(φ) = |Ic| sin(φ + π).
The ground state of such a Josephson junction is φ = π and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction, with φ = π in the ground state, is called a π Josephson junction.
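A quick numerical check of this sign argument, using the common convention U(φ) = (Φ0/2π)·Ic·(1 − cos φ) for the Josephson energy (an assumption of this sketch, written in units where Φ0/2π = 1):

import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 1001)
for Ic in (+1.0, -1.0):
    U = Ic * (1.0 - np.cos(phi))              # Josephson energy, Phi0/(2*pi) set to 1
    print(f"Ic = {Ic:+.0f}: energy minimum at phi = {phi[np.argmin(U)]:.2f} rad")
# prints a minimum at 0.00 rad for Ic > 0 and at 3.14 rad (i.e. pi) for Ic < 0

This reproduces the statement above: flipping the sign of the critical current moves the energy minimum from φ = 0 to φ = π.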
Josep
|
https://en.wikipedia.org/wiki/Automatically%20Tuned%20Linear%20Algebra%20Software
|
Automatically Tuned Linear Algebra Software (ATLAS) is a software library for linear algebra. It provides a mature open source implementation of BLAS APIs for C and Fortran77.
ATLAS is often recommended as a way to automatically generate an optimized BLAS library. While its performance often trails that of specialized libraries written for one specific hardware platform, it is often the first or even only optimized BLAS implementation available on new systems and is a large improvement over the generic BLAS available at Netlib. For this reason, ATLAS is sometimes used as a performance baseline for comparison with other products.
ATLAS runs on most Unix-like operating systems and on Microsoft Windows (using Cygwin). It is released under a BSD-style license without advertising clause, and many well-known mathematics applications including MATLAB, Mathematica, Scilab, SageMath, and some builds of GNU Octave may use it.
Functionality
ATLAS provides a full implementation of the BLAS APIs as well as some additional functions from LAPACK, a higher-level library built on top of BLAS. In BLAS, functionality is divided into three groups called levels 1, 2 and 3.
Level 1 contains vector operations of the form
y ← αx + y
as well as scalar dot products and vector norms, among other things.
Level 2 contains matrix-vector operations of the form
y ← αAx + βy
as well as solving Tx = y for x with T being triangular, among other things.
Level 3 contains matrix-matrix operations such as the widely used General Matrix Multiply (GEMM) operation
C ← αAB + βC
as well as solving B ← αT⁻¹B for triangular matrices T, among other things.
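For concreteness, a naive reference version of the Level 3 GEMM operation is sketched below in Python; it only illustrates what C ← αAB + βC computes, whereas ATLAS's tuned kernels implement the same operation with platform-specific blocking and vectorization.

import numpy as np

def gemm(alpha, A, B, beta, C):
    # Reference (unblocked) GEMM: returns alpha*A@B + beta*C without mutating C.
    A, B, C = (np.asarray(M, dtype=float) for M in (A, B, C))
    m, k = A.shape
    k2, n = B.shape
    assert k == k2 and C.shape == (m, n)
    out = beta * C
    for i in range(m):
        for j in range(n):
            out[i, j] += alpha * np.dot(A[i, :], B[:, j])
    return out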
Optimization approach
The optimization approach is called Automated Empirical Optimization of Software (AEOS), which identifies four fundamental approaches to computer assisted optimization of which ATLAS employs three:
Parameterization—searching over the parameter space of a function, used for blocking factor, cache edge, etc.
Multiple implementation—searching through various approaches to impleme
|
https://en.wikipedia.org/wiki/Pugmill
|
A pugmill, pug mill, or commonly just pug, is a machine in which clay or other materials are extruded in a plastic state or a similar machine for the trituration of ore. Industrial applications are found in pottery, bricks, cement and some parts of the concrete and asphalt mixing processes. A pugmill may be a fast continuous mixer. A continuous pugmill can achieve a thoroughly mixed, homogeneous mixture in a few seconds, and the right machines can be matched to the right application by taking into account the factors of agitation, drive assembly, inlet, discharge, cost and maintenance. Mixing materials at optimum moisture content requires the forced mixing action of the pugmill paddles, while soupy materials might be mixed in a drum mixer. A typical pugmill consists of a horizontal boxlike chamber with a top inlet and a bottom discharge at the other end, 2 shafts with opposing paddles, and a drive assembly. Some of the factors affecting mixing and residence time are the number and the size of the paddles, paddle swing arc, overlap of left and right swing arc, size of mixing chamber, length of pugmill floor, and material being mixed.
Common construction and industrial uses
Road Base - Dense well-graded aggregate, uniformly mixed, wetted, and densely compacted for building the foundation under a pavement.
Lime Addition to asphalt – Lime may be added to the cold feed of an asphalt plant to strengthen the binding properties of the asphalt.
Flyash Conditioning – Wetting fly ash in a pugmill to stabilize the ash so that it won’t create dust. Some flyashes have cementitious properties when wetted and can be used to stabilize other materials.
Waste stabilization – various waste streams are remediated with pugmills forcing the mixing of the wastes with remediation agents.
Roller-compacted concrete – (RCC) or rolled concrete is a special blend of concrete that has the same ingredients as conventional concrete but in different ratios. It has cement, water, and aggregates
|
https://en.wikipedia.org/wiki/JOSSO
|
Java Open Single Sign On (JOSSO) is an open source Identity and Access Management (IAM) platform for rapid and standards-based Cloud-scale Single Sign-On, web services security, authentication and provisioning.
See also
Shibboleth (Internet2)
CAS
Digital certificates
List of single sign-on implementations
External links
JOSSO Home Page
Federated identity
|
https://en.wikipedia.org/wiki/Institute%20of%20Food%20Technologists
|
The Institute of Food Technologists (IFT) is an international, non-profit scientific society of professionals engaged in food science, food technology, and related areas in academia, government and industry. It has more than 17,000 members from more than 95 countries.
History
Early history
As food technology grew from the individual family farm to the factory level, including the slaughterhouse for meat and poultry processing, the cannery for canned foods, and bakeries for bread, so did the need for personnel trained for the food industry. Literature such as Upton Sinclair's The Jungle in 1906, about slaughterhouse operations, would be a factor in the establishment of the U.S. Food and Drug Administration (FDA) later that year. The United States Department of Agriculture was also interested in food technology, and research was already being done at agricultural colleges in the United States, including the Massachusetts Institute of Technology (MIT), the University of Illinois at Urbana-Champaign, the University of Wisconsin–Madison, and the University of California, Berkeley. By 1935, two MIT professors, Samuel C. Prescott and Bernard E. Proctor, decided that it was time to hold an international conference regarding this. A detailed proposal was presented to MIT President Karl Taylor Compton in 1936, and $1500 of financial aid from MIT was provided for a meeting to be held from June 30 to July 2, 1937, with Compton asking how many people would be in attendance at this meeting. Prescott replied with "fifty or sixty people". 500 people actually attended the event.
This meeting proved so successful that in early 1938 it was decided that a second conference would be held in 1939. Initially led by George J. Hucker of the New York State Agricultural Experiment Station (part of Cornell University) in Geneva, New York, a small group meeting was held on August 5, 1938 on forming an organization, with an expanded group meeting in New York City on January 16, 1939 to further discuss th
|
https://en.wikipedia.org/wiki/Usenet%20quoting
|
When Usenet and e-mail users respond to a message, they often want to include some context for the discussion. This is often accomplished by quoting a portion of the original message using Usenet conventions. In essence the convention is to communicate in plain text format (not HTML) and quote with ">" at the beginning of each line, ">>" for a quote of quote, and so on. Most email clients can perform Usenet quoting automatically.
Examples
Usenet standard quoting refers to the practice of preceding the original message with the ">" (or right-angle bracket) character at the beginning of each line, and then inserting one's responses inline, using no special designator for the author's messages.
> hello, how are you?
I am fine
When a second response is made to the second message, the second message is again quoted with ">", perhaps causing parts of the original message to now be designated with ">>". Such nested quotations can technically be continued indefinitely, but quickly become cumbersome.
>> hello, how are you?
> I am fine
Good, I am also well.
Enhanced quoting (such as that facilitated by the Emacs supercite module) includes more context by using the initials or a short form of the author's name. The program has to be careful not to quote already quoted material:
first> hello, how are you?
I am fine.
first> hello, how are you?
second> I am fine.
Good, I am also fine.
It often makes sense, particularly in the simple quoting case, to insert a note telling who said what:
Last Saturday, when the sun was nice, Second Guy said:
> Last thursday, while eating popcorn, First Guy said:
>> hello, how are you?
> I am fine
Good, I am also fine.
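Many clients implement the basic convention with nothing more than a line-by-line prefix; a minimal Python sketch (the function name is arbitrary) looks like this:

def quote(text: str) -> str:
    # Prefix each line with ">"; already-quoted lines simply gain another ">".
    return "\n".join(">" + (line if line.startswith(">") else " " + line)
                     for line in text.splitlines())

print(quote("hello, how are you?"))              # -> "> hello, how are you?"
print(quote("> hello, how are you?\nI am fine")) # -> ">> hello, how are you?" / "> I am fine"

Applying the function repeatedly produces the nested ">>", ">>>" levels shown above.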
Canonical quoting
There is no standard declaring one way of quoting to be "right" and others to be "wrong", but some standards depend on conventions. The son-of-1036 draft recommends ">" as the quote-prefix; RFC 3676 depends on it and considers ">> " and "> > " to be semantically different. That is, ">> " has a quote-d
|
https://en.wikipedia.org/wiki/Kinetic%20Rule%20Language
|
Kinetic Rule Language (KRL) is a rule-based programming language for creating applications on the Live Web. KRL programs, or rulesets, comprise a number of rules that respond to particular events. KRL has been promoted as a language for building personal clouds.
KRL is part of an open-source project called KRE, for Kinetic Rules Engine, developed by Kynetx, Inc.
History
KRL was designed by Phil Windley at Kynetx, beginning in 2007. Development of the language has since expanded to include libraries and modules for a variety of web services, including Twitter, Facebook, and Twilio.
Philosophy and design
KRL is event-based with strict evaluation, single assignment, and dynamic typing. In event-driven programming, events (notifications that something happened) control the flow of execution. KRL supports a programming model based on three key ideas:
Entity orientation – The programming model of KRL has identity as a core feature. KRL programs execute on behalf of a particular entity. The idea of entity is built into the underlying semantics of the language. The entity orientation of KRL is supported by the underlying KRE (Kynetx Rules Engine) and so is usable by any program running in the engine—even one not written in KRL. The next two features illustrate why identity is crucial to the programming model.
Entity orientation requires that KRL execution environments support the notion of entity. Rulesets are installed for each entity.
Event binding – rules in KRL bind event patterns to actions. Event patterns are specified using event expressions. Events and actions are both extensible so that programmers are free to define events and actions that are relevant to their problem space.
Events are rarely addressed to a specific ruleset. Rather events are raised on behalf of a particular entity and thus any rule selected from the entity's installed rulesets runs on behalf of that same entity. This concept is called “salience.” An event is salient for a given entity if
|
https://en.wikipedia.org/wiki/General%20somatic%20afferent%20fiber
|
The general somatic afferent fibers (GSA, or somatic sensory fibers) arise from neurons in sensory ganglia and are found in all the spinal nerves, except occasionally the first cervical. They conduct impulses of pain, touch and temperature from the surface of the body through the dorsal roots to the spinal cord, and impulses of muscle sense, tendon sense and joint sense from the deeper structures.
See also
Afferent nerve
|
https://en.wikipedia.org/wiki/Random%20energy%20model
|
In the statistical physics of disordered systems, the random energy model is a toy model of a system with quenched disorder, such as a spin glass, having a first-order phase transition. It concerns the statistics of a collection of N spins (i.e. degrees of freedom that can take one of two possible values, ±1) so that the number of possible states for the system is 2^N. The energies of such states are independent and identically distributed Gaussian random variables with zero mean and a variance of NJ²/2. Many properties of this model can be computed exactly. Its simplicity makes this model suitable for pedagogical introduction of concepts like quenched disorder and replica symmetry.
Comparison with other disordered systems
The p-spin infinite-range model, in which all p-spin sets interact with a random, independent, identically distributed interaction constant, becomes the random energy model in a suitably defined p → ∞ limit.
More precisely, if the Hamiltonian of the model is defined by
H(σ) = −Σ_{i1 < i2 < ... < ip} J_{i1...ip} σ_{i1} σ_{i2} ⋯ σ_{ip},
where the sum runs over all distinct sets of p indices and, for each such set, J_{i1...ip} is an independent Gaussian variable of mean 0 and variance J²p!/(2N^(p−1)), the random energy model is recovered in the p → ∞ limit.
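The defining property, independent Gaussian state energies, makes the model easy to simulate directly; the following Python sketch (assuming the NJ²/2 variance convention used above) draws all 2^N energies for one disorder realization and evaluates the partition function for a small N.

import numpy as np

rng = np.random.default_rng(0)
N, J, beta = 12, 1.0, 1.0
# One disorder realization: 2^N independent Gaussian energy levels.
energies = rng.normal(0.0, np.sqrt(N * J**2 / 2.0), size=2**N)
Z = np.sum(np.exp(-beta * energies))
print("free energy per spin:", -np.log(Z) / (beta * N))

Averaging such runs over many disorder realizations approximates the quenched averages discussed below.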
Derivation of thermodynamical quantities
As its name suggests, in the REM each microscopic state has an independent distribution of energy. For a particular realization of the disorder, P(E) = δ(E − E(x)), where x refers to the individual spin configurations described by the state and E(x) is the energy associated with it. The final extensive variables like the free energy need to be averaged over all realizations of the disorder, just as in the case of the Edwards–Anderson model. Averaging over all possible realizations, we find that the probability that a given configuration of the disordered system has an energy equal to E is given by
[P(E)] = (1/√(NπJ²)) exp(−E²/(NJ²)),
where [⋯] denotes the average over all realizations of the disorder. Moreover, the joint probability distribution of the energy values of two different microscopic configurations of the s
|
https://en.wikipedia.org/wiki/Notch%20of%20cardiac%20apex
|
The anterior interventricular sulcus and posterior interventricular sulcus extend from the base of the ventricular portion to a notch, the notch of cardiac apex, (or incisura apicis cordis) on the acute margin of the heart just to the right of the apex.
|
https://en.wikipedia.org/wiki/Desktop%20virtualization
|
Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Desktop virtualization can be used in conjunction with application virtualization and user profile management systems, now termed user virtualization, to provide a comprehensive desktop environment management system. In this mode, all the components of the desktop are virtualized, which allows for a highly flexible and much more secure desktop delivery model. In addition, this approach supports a more complete desktop disaster recovery strategy as all components are essentially saved in the data center and backed up through traditional redundant maintenance systems. If a user's device or hardware is lost, the restore is straightforward and simple, because the components will be present at login from another device. In addition, because no data are saved to the user's device, if that device is lost, there is much less chance that any critical data can be retrieved and compromised.
System architectures
Desktop virtualization implementations are classified based on whether the virtual desktop runs remotely or locally, on whether access is required to be constant or is designed to be intermittent, and on whether or not the virtual desktop persists between sessions. Typically, software products that deliver desktop virtualization solutions can combine local and remote implementations into a single product to provide the most appropriate support specific to requirements. The degree of independent functionality of the client device is necessarily interdependent with the server location and access strategy. Virtualization is not strictly required for remote control to exist. Virtualization is employed to present independent instances to multiple users and requires a strategic segmentation of the host server and presentation at some layer of the host's architecture. The enabling
|
https://en.wikipedia.org/wiki/Nomen%20novum
|
In biological nomenclature, a nomen novum (Latin for "new name"), new replacement name (or replacement name, new substitute name, substitute name) is a scientific name that is created specifically to replace another scientific name, but only when this other name cannot be used for technical, nomenclatural reasons (for example because it is a homonym: it is spelled the same as an existing, older name). It does not apply when a name is changed for taxonomic reasons (representing a change in scientific insight). It is frequently abbreviated, e.g. nomen nov., nom. nov..
Zoology
In zoology establishing a new replacement name is a nomenclatural act and it must be expressly proposed to substitute a previously established and available name.
Often, the older name cannot be used because another animal was described earlier with exactly the same name. For example, Lindholm discovered in 1913 that the generic name Jelskia, established by Bourguignat in 1877 for a European freshwater snail, could not be used because another author, Taczanowski, had proposed the same name in 1871 for a spider. So Lindholm proposed the new replacement name Borysthenia. This is an objective synonym of Jelskia Bourguignat, 1877, because it has the same type species, and it is used today as Borysthenia.
New replacement names are also often necessary for names of species, and they have been proposed for more than 100 years. In 1859, Bourguignat saw that the name Bulimus cinereus Mortillet, 1851, for an Italian snail could not be used because Reeve had proposed exactly the same name in 1848 for a completely different Bolivian snail. Since it was understood even then that the older name always has priority, Bourguignat proposed the new replacement name Bulimus psarolenus, and also added a note explaining why this was necessary. The Italian snail is known to this day under the name Solatopupa psarolena (Bourguignat, 1859).
A new replacement name must obey certain rules; not all of these are well known.
|
https://en.wikipedia.org/wiki/Nearest%20neighbor%20search
|
Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set S of points in a space M and a query point q ∈ M, find the closest point in S to q. Donald Knuth in vol. 3 of The Art of Computer Programming (1973) called it the post-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is a k-NN search, where we need to find the k closest points.
Most commonly M is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more commonly, M is taken to be the d-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance or another distance metric. However, the dissimilarity function can be arbitrary. One example is asymmetric Bregman divergence, for which the triangle inequality does not hold.
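The problem statement translates directly into a brute-force baseline; the sketch below assumes the Euclidean metric and simply scans S, which is exactly the work that more sophisticated index structures are designed to avoid.

import math

def nearest_neighbor(S, q):
    # Linear scan: return the point of S closest to q under the Euclidean distance.
    return min(S, key=lambda p: math.dist(p, q))

print(nearest_neighbor([(0, 0), (3, 4), (1, 1)], (2, 2)))   # -> (1, 1)

Its cost is linear in |S| per query, which motivates the space-partitioning and approximate methods used in practice.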
Applications
The nearest neighbour search problem arises in numerous fields of application, including:
Pattern recognition – in particular for optical character recognition
Statistical classification – see k-nearest neighbor algorithm
Computer vision – for point cloud registration
Computational geometry – see Closest pair of points problem
Cryptanalysis – for lattice problem
Databases – e.g. content-based image retrieval
Coding theory – see maximum likelihood decoding
Semantic Search
Data compression – see MPEG-2 standard
Robotic sensing
Recommendation systems, e.g. see Collaborative filtering
Internet marketing – see contextual advertising and behavioral targeting
DNA sequencing
Spell checking – suggesting correct spelling
Plagiarism detec
|
https://en.wikipedia.org/wiki/Articella
|
The Articella is a collection of medical treatises bound together in one volume that was used mainly as a textbook and reference manual between the 13th and the 16th centuries. In medieval times, several versions of this anthology circulated in manuscript form among medical students. Between 1476 and 1534, printed editions of the Articella were also published in several European cities.
The collection grew around a synthetic exposition of classical Greek medicine written in Baghdad by the physician and polyglot Hunayn bin Ishaq, better known in the West as Ioannitius. His synthesis was in turn based on Galen's Ars Medica (Techne iatrike) and thus became known in Europe as Isagoge Ioannitii ad Tegni Galieni (Hunayn's Introduction to the Art of Galen).
In the mid-13th century, the emergence of formal medical education in several European universities fueled a demand for comprehensive textbooks. Instructors from the influential Scuola Medica Salernitana popularized the practice of binding other treatises together with their manuscript copies of the Isagoge. These included Hippocrates’ Prognostics as well as his Aphorisms, Theophilus Protospatharius's De Urinis and De Pulsibus and many other classic works.
See also
Medieval medicine of Western Europe
|
https://en.wikipedia.org/wiki/Audio%20bit%20depth
|
In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc which can support up to 24 bits per sample.
In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping, and oversampling can mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size.
Bit depth is useful for describing PCM digital signals. Non-PCM formats, such as those using lossy compression, do not have associated bit depths.
Binary representation
A PCM signal is a sequence of digital audio samples containing the data providing the necessary information to reconstruct the original analog signal. Each sample represents the amplitude of the signal at a specific point in time, and the samples are uniformly spaced in time. The amplitude is the only information explicitly stored in the sample, and it is typically stored as either an integer or a floating point number, encoded as a binary number with a fixed number of digits: the sample's bit depth, also referred to as word length or word size.
The resolution indicates the number of discrete values that can be represented over the range of analog values. The resolution of binary integers increases exponentially as the word length increases. Adding one bit doubles the resolution, adding two quadruples it, and so on. The number of possible values that an integer bit depth can represent can be calculated as 2^n, where n is the bit depth. Thus, a 16-bit system has a resolution of 65,536 (2^16) possible values.
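As a quick illustration of these relationships, the level count and the commonly quoted approximate SNR of an ideal quantizer driven by a full-scale sine wave (the 20·log10(2)·n + 1.76 dB rule of thumb, which ignores dithering and noise shaping) can be computed as follows; the helper below is a sketch, not part of any audio library:

```python
import math

def quantization_stats(bits: int):
    """Levels representable at a given bit depth, and the approximate SNR (dB)
    for a full-scale sine wave through an ideal quantizer (~6.02*n + 1.76 dB)."""
    levels = 2 ** bits
    snr_db = 20 * math.log10(2) * bits + 1.76
    return levels, snr_db

for n in (16, 24):
    levels, snr = quantization_stats(n)
    print(f"{n}-bit PCM: {levels:,} levels, ~{snr:.1f} dB SNR")
# 16-bit: 65,536 levels, ~98.1 dB; 24-bit: 16,777,216 levels, ~146.3 dB
```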
Integer PCM audio data is typically stored as signed numbers in two's complement format.
Today, most audio file form
|
https://en.wikipedia.org/wiki/Richtmyer%E2%80%93Meshkov%20instability
|
The Richtmyer–Meshkov instability (RMI) occurs when two fluids of different density are impulsively accelerated. Normally this is by the passage of a shock wave. The development of the instability begins with small amplitude perturbations which initially grow linearly with time. This is followed by a nonlinear regime with bubbles appearing in the case of a light fluid penetrating a heavy fluid, and with spikes appearing in the case of a heavy fluid penetrating a light fluid. A chaotic regime eventually is reached and the two fluids mix. This instability can be considered the impulsive-acceleration limit of the Rayleigh–Taylor instability.
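In the linear stage, the growth rate is often estimated with Richtmyer's impulsive model; as a sketch for a single-mode perturbation (the symbols below are introduced here for illustration and are not defined elsewhere in this article):

\[
\frac{da}{dt} \approx k\, a_0^{+}\, A^{+}\, \Delta u,
\qquad
A^{+} = \frac{\rho_2^{+} - \rho_1^{+}}{\rho_2^{+} + \rho_1^{+}},
\]

where k is the perturbation wavenumber, a_0^+ the post-shock amplitude, A^+ the post-shock Atwood number, and Δu the velocity jump imparted by the shock, so the amplitude initially grows linearly in time as described above.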
History
R. D. Richtmyer provided a theoretical prediction, and E. E. Meshkov (Евгений Евграфович Мешков) provided experimental verification. Material from the cores of stars, such as cobalt-56 from Supernova 1987A, was observed earlier than expected; this was evidence of mixing due to Richtmyer–Meshkov and Rayleigh–Taylor instabilities.
Examples
During the implosion of an inertial confinement fusion target, the hot shell material surrounding the cold D-T fuel layer is shock-accelerated. This instability is also seen in Magnetized target fusion. Mixing of the shell material and fuel is not desired and efforts are made to minimize any tiny imperfections or irregularities which will be magnified by RMI.
Supersonic combustion in a scramjet may benefit from RMI, as the fuel–oxidant interface is enhanced by the breakup of the fuel into finer droplets. Studies of deflagration-to-detonation transition (DDT) processes also show that RMI-induced flame acceleration can result in detonation.
See also
Rayleigh–Taylor instability
Mushroom cloud
Plateau–Rayleigh instability
Salt fingering
Kármán vortex street
Kelvin–Helmholtz instability
Hydrodynamics
|
https://en.wikipedia.org/wiki/Neural%20therapy
|
Neural therapy is a form of alternative medicine in which local anesthetic is injected into certain locations of the body in an attempt to treat chronic pain and illness.
The International Medical Association of Neural Therapy has about 400 members, some of whom have been practising in this field for over 30 years. Neural therapy is, in both theory and practice, a pseudoscience, and studies have found it to be of no benefit.
Description and history
Neural therapy has been described as a form of holistic medicine for treating illness and chronic pain. According to Quackwatch, neural therapy is "a bizarre approach claimed to treat pain and disease by injecting local anesthetics into nerves, scars, glands, trigger points, and other tissues".
The idea underlying the therapy is that "interference fields" (Störfelder) at certain sites of the body are responsible for a type of electric energy that causes illness. The fields can be disrupted by injection, allowing the body to heal.
The practice originated in 1925 when Ferdinand Huneke, a German surgeon, used a newly launched pain drug that contained procaine (a local anaesthetic) on his sister who had severe intractable migraines. Instead of using it intramuscularly as recommended he injected it intravenously and the migraine attack stopped immediately. He and his brother Walter subsequently used Novocaine in a similar way to treat a variety of ailments.
In 1940 Ferdinand Huneke injected the painful shoulder of a woman who also had an osteomyelitis in her leg which at that time (before antibiotics) threatened her with amputation. The shoulder pain improved somewhat but the leg wound became itchy. On injecting the leg wound the shoulder pain vanished immediately – a reaction he called the "secondary phenomenon" (Sekundenphänomen).
In segment therapy, a local anaesthetic is injected intradermally as a series of small skin wheals (quaddles) in the area of the dermatome (the Head zone) corresponding to the affected internal organ, or into vegetative ganglia.
|
https://en.wikipedia.org/wiki/Phosphatherium
|
Phosphatherium escuilliei is a basal proboscidean that lived in North Africa from the late Paleocene (Thanetian), some 56 million years ago, into the early stages of the Ypresian age. Research has therefore suggested that Phosphatherium persisted into the Eocene period.
Description
P. escuilliei possessed rather flat features, centered around a low skull and a long, straight dorsal profile. The skull itself was rather disproportionate, consisting of an elongated cranial region and a rather short rostrum. The sagittal crest, the ridge along the dorsomedian line of the skull, spans nearly half of the skull's length. The nasal cavity is high and wide, suggesting a large snout in life.
One notable aspect of Phosphatherium's body is its unusual musculoskeletal anatomy. The head combines the attributes of a snout with a mouth that has a rounded jawline, whereas similar mammals in its order retained a more snout-like nose, a trait associated with a semiaquatic lifestyle. Furthermore, sexual dimorphism can be recognized in the face of Phosphatherium from varying degrees of muscle attachment on its upper jaw.
Phosphatherium lacked a trunk. The tooth rows extend back to roughly 45% of its total skull length. The dental structure suggests that P. escuilliei was a heterodont, meaning it possessed more than one type of tooth morphology; fossil examinations show more than one type of molar. The varied dentition of heterodonts suggests that this animal, unlike later proboscideans, may have been omnivorous.
Some features of P. escuilliei's teeth and jaw structure also show noticeable intraspecific variation, which is related to sexual dimorphism. This suggests that physiological differences existed between males and females, which in turn points to behavioral differences.
|
https://en.wikipedia.org/wiki/Mil%C3%BC
|
Milü (; "close ratio"), also known as Zulü (Zu's ratio), is the name given to an approximation to (pi) found by Chinese mathematician and astronomer Zu Chongzhi in the 5th century. Using Liu Hui's algorithm (which is based on the areas of regular polygons approximating a circle), Zu famously computed to be between 3.1415926 and 3.1415927 and gave two rational approximations of , and , naming them respectively Yuelü (; "approximate ratio") and Milü.
is the best rational approximation of with a denominator of four digits or fewer, being accurate to six decimal places. It is within % of the value of , or in terms of common fractions overestimates by less than . The next rational number (ordered by size of denominator) that is a better rational approximation of is , though it is still only correct to six decimal places. To be accurate to seven decimal places, one needs to go as far as . For eight, is needed.
The accuracy of Milü to the true value of can be explained using the continued fraction expansion of , the first few terms of which are . A property of continued fractions is that truncating the expansion of a given number at any point will give the "best rational approximation" to the number. To obtain Milü, truncate the continued fraction expansion of immediately before the term 292; that is, is approximated by the finite continued fraction , which is equivalent to Milü. Since 292 is an unusually large term in a continued fraction expansion (corresponding to the next truncation introducing only a very small term, , to the overall fraction), this convergent will be especially close to the true value of :
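The convergents can be generated mechanically from the continued-fraction terms; a minimal sketch (the truncated term list is hard-coded for illustration):

```python
from fractions import Fraction

PI_CF = [3, 7, 15, 1, 292]   # leading terms of the continued fraction of pi

def convergents(cf):
    """Yield the convergents of a finite continued fraction using the standard
    recurrence h_n = a_n*h_(n-1) + h_(n-2), k_n = a_n*k_(n-1) + k_(n-2)."""
    h_prev, h = 1, cf[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in cf[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

print(list(convergents(PI_CF)))
# [Fraction(3, 1), Fraction(22, 7), Fraction(333, 106), Fraction(355, 113), Fraction(103993, 33102)]
```

Both 22/7 and 355/113 appear among the convergents, and the jump in denominator to 33102 reflects the unusually large term 292.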
An easy mnemonic helps memorize this useful fraction: write down each of the first three odd numbers twice, 1 1 3 3 5 5, then divide the decimal number represented by the last three digits (355) by the decimal number given by the first three digits (113). Alternatively, 1/π ≈ 113/355.
Zu's contemporary calendarist and mathematician He Chengtian invented a fraction interpolation method
|
https://en.wikipedia.org/wiki/Explant%20culture
|
In biology, explant culture is a technique used to organotypically culture cells from a piece or pieces of tissue or organ removed from a plant or animal. The term explant can be applied to samples obtained from any part of the organism. The extraction process is carried out under sterile conditions, and the culture can typically be used for two to three weeks.
The major advantage of explant culture is the maintenance of near in vivo environment in the laboratory for a short duration of time. This experimental setup allows investigators to perform experiments and easily visualize the impact of tests.
This ex vivo model requires a highly maintained environment in order to recreate original cellular conditions. The composition of extracellular matrix, for example, must be precisely similar to that of in vivo conditions in order to induce naturally observed behaviors of cells. The growth medium also must be considered, as different solutions may be needed for different experiments.
The tissue must be placed and harvested in an aseptic environment such as sterile laminar flow tissue culture hood. The samples are often minced, and the pieces are placed in a cell culture dish containing growth media. Over time, progenitor cells migrate out of the tissue onto the surface of the dish. These primary cells can then be further expanded and transferred into fresh dishes through micropropagation.
Explant culture can also refer to the culturing of the tissue pieces themselves, where cells are left in their surrounding extracellular matrix to more accurately mimic the in vivo environment e.g. cartilage explant culture, or blastocyst implant culture.
Application
Historically, explant culture has been used in several areas of biological research. Organogenesis and morphogenesis in fetus have been studied with explant cultures. Since the explant culture is grown in the lab, the area or cells of interest can be labeled with fluorescent markers. These transgenic labels can help researchers observe g
|
https://en.wikipedia.org/wiki/Manus%20%28anatomy%29
|
The manus (Latin for hand, plural manus) is the zoological term for the distal portion of the forelimb of an animal. In tetrapods, it is the part of the pentadactyl limb that includes the metacarpals and digits (phalanges). During evolution, it has taken many forms and served a variety of functions. It can be represented by the hand of primates, the lower front limb of hoofed animals or the forepaw and is represented in the wing of birds, bats and prehistoric flying reptiles (pterosaurs), the flipper of marine mammals and the 'paddle' of extinct marine reptiles, such as plesiosaurs and ichthyosaurs.
In cephalopods, the manus is the end, broader part of a tentacle, and its suckers are often larger and arranged differently from those on the other arms.
See also
Pes (anatomy) – the distal portion of the hind limb of tetrapod animals
|
https://en.wikipedia.org/wiki/Fitch%20notation
|
Fitch notation, also known as Fitch diagrams (named after Frederic Fitch), is a notational system for constructing formal proofs used in sentential logics and predicate logics. Fitch-style proofs arrange the sequence of sentences that make up the proof into rows. A unique feature of Fitch notation is that the degree of indentation of each row conveys which assumptions are active for that step.
Example
Each row in a Fitch-style proof is either:
an assumption or subproof assumption.
a sentence justified by the citation of (1) a rule of inference and (2) the prior line or lines of the proof that license that rule.
Introducing a new assumption increases the level of indentation, and begins a new vertical "scope" bar that continues to indent subsequent lines until the assumption is discharged. This mechanism immediately conveys which assumptions are active for any given line in the proof, without the assumptions needing to be rewritten on every line (as with sequent-style proofs).
The following example displays the main features of Fitch notation:
0 |__ [assumption, want P iff not not P]
1 | |__ P [assumption, want not not P]
2 | | |__ not P [assumption, for reductio]
3 | | | contradiction [contradiction introduction: 1, 2]
4 | | not not P [negation introduction: 2]
|
5 | |__ not not P [assumption, want P]
6 | | P [negation elimination: 5]
|
7 | P iff not not P [biconditional introduction: 1 - 4, 5 - 6]
0. The null assumption, i.e., we are proving a tautology
1. Our first subproof: we assume the l.h.s. to show the r.h.s. follows
2. A subsubproof: we are free to assume what we want. Here we aim for a reductio ad absurdum
3. We now have a contradiction
4. We are allowed to prefix the statement that "caused" the contradiction with a not
5. Our second subproof: we assume the r.h.s. to show the l.h.s. follows
6. We invoke the rule that allows a double negation to be eliminated: from not not P we may conclude P
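For comparison, the same tautology can be checked in a proof assistant. A sketch in Lean 4 (assuming classical logic for the right-to-left direction; this is an illustration, not part of Fitch's system):

```lean
-- P ↔ ¬¬P: the forward direction mirrors lines 1–4 of the Fitch proof,
-- the reverse direction mirrors lines 5–6 and uses classical
-- double-negation elimination (Classical.byContradiction).
theorem p_iff_not_not_p (P : Prop) : P ↔ ¬¬P := by
  constructor
  · intro hp hnp                          -- assume P, then ¬P for reductio
    exact hnp hp                          -- contradiction
  · intro hnnp                            -- assume ¬¬P
    exact Classical.byContradiction hnnp  -- discharge the double negation
```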
|
https://en.wikipedia.org/wiki/Von%20Zeipel%20theorem
|
In astrophysics, the von Zeipel theorem states that the radiative flux F in a uniformly rotating star is proportional to the local effective gravity g_eff. The theorem is named after Swedish astronomer Edvard Hugo von Zeipel.
The theorem is:
F = −(L(P) / (4π G M*(P))) g_eff,  with  M*(P) = M(P) (1 − Ω² / (2π G ρ_m)),
where the luminosity L and mass M are evaluated on a surface of constant pressure P, Ω is the angular velocity and ρ_m is the mean density. The effective temperature can then be found at a given colatitude θ from the local effective gravity:
T_eff(θ) ∝ g_eff(θ)^(1/4).
This relation ignores the effect of convection in the envelope, so it primarily applies to early-type stars.
According to the theory of rotating stars, if the rotational velocity of a star depends only on the radius, it cannot simultaneously be in thermal and hydrostatic equilibrium. This is called the von Zeipel paradox. The paradox is resolved, however, if the rotational velocity also depends on height, or there is a meridional circulation. A similar situation may arise in accretion disks.
|
https://en.wikipedia.org/wiki/SONOS
|
SONOS, short for "silicon–oxide–nitride–oxide–silicon", more precisely, "polycrystalline silicon"—"silicon dioxide"—"silicon nitride"—"silicon dioxide"—"silicon",
is a cross-sectional structure of a MOSFET (metal–oxide–semiconductor field-effect transistor), realized by P. C. Y. Chen of Fairchild Camera and Instrument in 1977. This structure is often used for non-volatile memories, such as EEPROM and flash memories. It is sometimes used for TFT LCD displays.
It is one of the CTF (charge trap flash) variants. It is distinguished from traditional non-volatile memory structures by the use of silicon nitride (Si3N4 or Si9N10) instead of a polysilicon-based floating gate (FG) as the charge storage material.
A further variant is "SHINOS" ("silicon"—"hi-k"—"nitride"—"oxide"—"silicon"), in which the top oxide layer is replaced with a high-κ material. Another advanced variant is "MONOS" ("metal–oxide–nitride–oxide–silicon").
Companies offering SONOS-based products include Cypress Semiconductor, Macronix, Toshiba, United Microelectronics Corporation and Floadia.
Description
A SONOS memory cell is formed from a standard polysilicon N-channel MOSFET transistor with the addition of a small sliver of silicon nitride inserted inside the transistor's gate oxide. The sliver of nitride is non-conductive but contains a large number of charge trapping sites able to hold an electrostatic charge. The nitride layer is electrically isolated from the surrounding transistor, although charges stored on the nitride directly affect the conductivity of the underlying transistor channel. The oxide/nitride sandwich typically consists of a 2 nm thick oxide lower layer, a 5 nm thick silicon nitride middle layer, and a 5–10 nm oxide upper layer.
When the polysilicon control gate is biased positively, electrons from the transistor source and drain regions tunnel through the oxide layer and get trapped in the silicon nitride. This results in an energy barrier between the drain and the source, raising the threshold voltage.
|
https://en.wikipedia.org/wiki/Entropy%20%28energy%20dispersal%29
|
In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'.
In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature.
Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology.
Comparisons with traditional approach
The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state.
The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract natur
|
https://en.wikipedia.org/wiki/Expression%20cloning
|
Expression cloning is a technique in DNA cloning that uses expression vectors to generate a library of clones, with each clone expressing one protein. This expression library is then screened for the property of interest and clones of interest are recovered for further analysis. An example would be using an expression library to isolate genes that could confer antibiotic resistance.
Expression vectors
Expression vectors are a specialized type of cloning vector in which the transcriptional and translational signals needed for the regulation of the gene of interest are included in the cloning vector. The transcriptional and translational signals may be synthetically created to make the expression of the gene of interest easier to regulate.
Purpose
Usually the ultimate aim of expression cloning is to produce large quantities of specific proteins. To this end, a bacterial expression clone may include a ribosome binding site (Shine-Dalgarno sequence) to enhance translation of the gene of interest's mRNA, a transcription termination sequence, or, in eukaryotes, specific sequences to promote the post-translational modification of the protein product.
See also
Molecular cell biology
Genetics
Gene expression
Transcription (genetics)
Translation
λ phage
pBR322
|
https://en.wikipedia.org/wiki/AASHTO%20Soil%20Classification%20System
|
The AASHTO Soil Classification System was developed by the American Association of State Highway and Transportation Officials, and is used as a guide for the classification of soils and soil-aggregate mixtures for highway construction purposes. The classification system was first developed by Hogentogler and Terzaghi in 1929, but has been revised several times since.
The plasticity index (PI) of the A-7-5 subgroup is less than or equal to the liquid limit (LL) minus 30, while the PI of the A-7-6 subgroup is greater than LL − 30.
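For illustration only (assuming a soil that has already met the other A-7 group criteria), the subgroup rule can be written as a small helper; the function name and example values are hypothetical:

```python
def a7_subgroup(liquid_limit: float, plasticity_index: float) -> str:
    """Split an A-7 soil into A-7-5 or A-7-6 using the PI versus (LL - 30) rule.
    Assumes the soil has already been classified into group A-7."""
    if plasticity_index <= liquid_limit - 30:
        return "A-7-5"
    return "A-7-6"

print(a7_subgroup(liquid_limit=55, plasticity_index=22))  # A-7-5  (22 <= 25)
print(a7_subgroup(liquid_limit=55, plasticity_index=30))  # A-7-6  (30 > 25)
```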
|
https://en.wikipedia.org/wiki/Hematogen
|
Hematogen (Greek: aimatogóno) is a nutrition bar which is notable in that one of its main ingredients is black food albumin, a technical term for cow's blood. Other ingredients may vary, but they usually contain sugar, condensed milk and vanillin.
It is often considered to be a medicinal product, and is used to treat or prevent low blood levels of iron and vitamin B12 (e.g., for anemia or during pregnancy).
See also
Sanguinaccio dolce, a sweet pudding made with pig’s blood
Protein bar
Blood as food
|
https://en.wikipedia.org/wiki/RFQ%20beam%20cooler
|
A radio-frequency quadrupole (RFQ) beam cooler is a device for particle beam cooling, especially suited for ion beams. It lowers the temperature of a particle beam by reducing its energy dispersion and emittance, effectively increasing its brightness (brilliance). The prevalent mechanism for cooling in this case is buffer-gas cooling, whereby the beam loses energy from collisions with a light, neutral and inert gas (typically helium). The cooling must take place within a confining field in order to counteract the thermal diffusion that results from the ion-atom collisions.
The quadrupole mass analyzer (a radio frequency quadrupole used as a mass filter) was invented by Wolfgang Paul in the late 1950s to early 60s at the University of Bonn, Germany. Paul shared the 1989 Nobel Prize in Physics for his work. Samples for mass analysis are ionized, for example by laser (matrix-assisted laser desorption/ionization) or discharge (electrospray or inductively coupled plasma) and the resulting beam is sent through the RFQ and "filtered" by scanning the operating parameters (chiefly the RF amplitude). This gives a mass spectrum, or fingerprint, of the sample. Residual gas analyzers use this principle as well.
Applications of ion cooling to nuclear physics
Despite its long history, high-sensitivity high-accuracy mass measurements of atomic nuclei continue to be very important areas of research for many branches of physics. Not only do these measurements provide a better understanding of nuclear structures and nuclear forces but they also offer insight into how matter behaves in some of Nature's harshest environments. At facilities such as ISOLDE at CERN and TRIUMF in Vancouver, for instance, measurement techniques are now being extended to short-lived radionuclei that only occur naturally in the interior of exploding stars. Their short half-lives and very low production rates at even the most powerful facilities require the very highest in sensitivity of such measurements.
|
https://en.wikipedia.org/wiki/Interferometric%20synthetic-aperture%20radar
|
Interferometric synthetic aperture radar, abbreviated InSAR (or deprecated IfSAR), is a radar technique used in geodesy and remote sensing. This geodetic method uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft. The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability.
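As a rough illustration of the phase-to-deformation relationship exploited in repeat-pass interferometry, one full 2π fringe of phase change corresponds to half a wavelength of line-of-sight motion (two-way path); the helper and the assumed C-band wavelength of about 5.6 cm below are illustrative only:

```python
import math

def los_displacement(delta_phase_rad: float, wavelength_m: float = 0.056) -> float:
    """Line-of-sight displacement implied by a repeat-pass interferometric phase
    change: displacement = delta_phase * wavelength / (4 * pi), so one 2*pi
    fringe corresponds to half a wavelength of ground motion."""
    return delta_phase_rad * wavelength_m / (4.0 * math.pi)

print(f"{los_displacement(2 * math.pi) * 100:.1f} cm per fringe")   # ~2.8 cm at ~5.6 cm wavelength
```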
Technique
Synthetic aperture radar
Synthetic aperture radar (SAR) is a form of radar in which sophisticated processing of radar data is used to produce a very narrow effective beam. It can be used to form images of relatively immobile targets; moving targets can be blurred or displaced in the formed images. SAR is a form of active remote sensing – the antenna transmits radiation that is reflected from the image area, as opposed to passive sensing, where the reflection is detected from ambient illumination. SAR image acquisition is therefore independent of natural illumination and images can be taken at night. Radar uses electromagnetic radiation at microwave frequencies; the atmospheric absorption at typical radar wavelengths is very low, meaning observations are not prevented by cloud cover.
Phase
SAR makes use of the amplitude and the absolute phase of the return signal data. In contrast, interferometry uses differential phase of the reflected radiation, either from multiple passes along the same trajectory and/or from multiple displaced phase centers (antennas) on a single pass. Since the outgoing wave is produced by the satellite, the phase is known, and can be compared to the phase of the return signal. The phase of the return wave depends on the distance to the ground, since the path l
|
https://en.wikipedia.org/wiki/Dielectric%20thermal%20analysis
|
Dielectric thermal analysis (DETA), or dielectric analysis (DEA), is a materials science technique similar to dynamic mechanical analysis except that an oscillating electrical field is used instead of a mechanical force. For investigation of the curing behavior of thermosetting resin systems, composite materials, adhesives and paints, Dielectric Analysis (DEA) can be used in accordance with ASTM E 2038 or E 2039. The great advantage of DEA is that it can be employed not only on a laboratory scale, but also in process.
Measuring principle
In a typical test, the sample is placed in contact with two electrodes (the dielectric sensor) and a sinusoidal voltage (the excitation) is applied to one electrode. The resulting sinusoidal current (the response) is measured at the second electrode. The response signal is attenuated in amplitude and shifted in phase in relation to the mobility of the ions and alignment of the dipoles. Dipoles in the material will attempt to align with the electric field and ions (present as impurities) will move toward the electrode of opposite polarity. The dielectric properties of permittivity ε' and loss factor ε" are then calculated from this measured amplitude and phase change.
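As a toy numerical sketch of that reduction (assuming an idealized parallel-plate sensor with known empty-cell capacitance C0; the function and its parameters are illustrative, not part of any standard):

```python
import cmath
import math

def complex_permittivity(i0_amps, v0_volts, phase_deg, freq_hz, c0_farads):
    """Recover eps' and eps'' from the measured current amplitude and phase by
    equating the admittance Y = (I0/V0)*exp(j*phase) with j*omega*C0*(eps' - j*eps'')."""
    omega = 2.0 * math.pi * freq_hz
    admittance = (i0_amps / v0_volts) * cmath.exp(1j * math.radians(phase_deg))
    eps_star = admittance / (1j * omega * c0_farads)   # eps' - j*eps''
    return eps_star.real, -eps_star.imag

eps_r, eps_i = complex_permittivity(1e-6, 10.0, 89.0, 1e3, 1e-12)
print(f"eps' = {eps_r:.1f}, eps'' = {eps_i:.2f}, tan(delta) = {eps_i / eps_r:.3f}")
```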
|
https://en.wikipedia.org/wiki/Anisocytosis
|
Anisocytosis is a medical term meaning that a patient's red blood cells are of unequal size. This is commonly found in anemia and other blood conditions. False diagnostic flagging may be triggered on a complete blood count by an elevated WBC count, agglutinated RBCs, RBC fragments, giant platelets or platelet clumps. In addition, it is a characteristic feature of bovine blood.
The red cell distribution width (RDW) is a measurement of anisocytosis. It is calculated as a coefficient of variation: the standard deviation of the distribution of RBC volumes divided by the mean corpuscular volume (MCV), usually expressed as a percentage.
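As a rough, non-clinical illustration of that definition (the single-cell volumes below are made-up values):

```python
import statistics

def rdw_cv(rbc_volumes_fl):
    """RDW as a coefficient of variation (%): standard deviation of red-cell
    volumes divided by their mean (the MCV), times 100."""
    mcv = statistics.mean(rbc_volumes_fl)
    return 100.0 * statistics.stdev(rbc_volumes_fl) / mcv

volumes = [80, 85, 90, 95, 100, 105]   # hypothetical volumes in femtolitres
print(f"RDW = {rdw_cv(volumes):.1f}%")
```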
Types
Anisocytosis is identified by RDW and is classified according to the RBC size measured by MCV. On this basis, it can be divided into:
Anisocytosis with microcytosis – Iron deficiency, sickle cell anemia
Anisocytosis with macrocytosis – Folate or vitamin B12 deficiency, autoimmune hemolytic anemia, cytotoxic chemotherapy, chronic liver disease, myelodysplastic syndrome
Increased RDW is seen in iron deficiency anemia and decreased or normal in thalassemia major (Cooley's anemia), thalassemia intermedia
Anisocytosis with normal RBC size – Early iron, vitamin B12 or folate deficiency, dimorphic anemia, sickle cell disease, chronic liver disease, myelodysplastic syndrome
Etymology
From Ancient Greek: an- without, or negative quality, iso- equal, cyt- cell, -osis condition.
See also
Anisopoikilocytosis
Poikilocytosis
Red blood cell distribution width
|
https://en.wikipedia.org/wiki/Indexed%20language
|
Indexed languages are a class of formal languages discovered by Alfred Aho; they are described by indexed grammars and can be recognized by nested stack automata.
Indexed languages are a proper subset of context-sensitive languages. They qualify as an abstract family of languages (furthermore a full AFL) and hence satisfy many closure properties. However, they are not closed under intersection or complement.
The class of indexed languages is of practical importance in natural language processing as a computationally affordable generalization of context-free languages, since indexed grammars can describe many of the nonlocal constraints occurring in natural languages.
Gerald Gazdar (1988) and Vijay-Shanker (1987) introduced a mildly context-sensitive grammar formalism now known as linear indexed grammars (LIGs). Linear indexed grammars impose additional restrictions relative to indexed grammars. LIGs are weakly equivalent to (generate the same language class as) tree adjoining grammars.
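As a concrete illustration of how the index stack captures such constraints, here is a sketch of an indexed grammar for the language of strings a^n b^n c^n with n ≥ 1 (the index stack is written in brackets; the particular nonterminal and index names are chosen for this example only):

```latex
\begin{align*}
S[\sigma]  &\to T[g\sigma] &
T[\sigma]  &\to T[f\sigma] \mid A[\sigma]\,B[\sigma]\,C[\sigma] \\
A[f\sigma] &\to a\,A[\sigma] & A[g] &\to a \\
B[f\sigma] &\to b\,B[\sigma] & B[g] &\to b \\
C[f\sigma] &\to c\,C[\sigma] & C[g] &\to c
\end{align*}
```

Each use of T[σ] → T[fσ] pushes one index symbol f; the copies of that stack handed to A, B and C are then popped independently, one terminal per pop, which forces the three blocks to have equal length.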
Examples
The following languages are indexed, but are not context-free:
{a^n b^n c^n : n ≥ 1} and {a^n b^m c^n d^m : m, n ≥ 0}
These two languages are also indexed, but are not even mildly context sensitive under Gazdar's characterization:
{a^(2^n) : n ≥ 0} and {w w w : w ∈ {a, b}*}
On the other hand, the following language is not indexed:
Properties
Hopcroft and Ullman tend to consider indexed languages as a "natural" class, since they are generated by several formalisms, such as:
Aho's indexed grammars
Aho's one-way nested stack automata
Fischer's macro grammars
Greibach's automata with stacks of stacks
Maibaum's algebraic characterization
Hayashi generalized the pumping lemma to indexed grammars.
Conversely, Gilman gives a "shrinking lemma" for indexed languages.
See also
Chomsky hierarchy
Indexed grammar
Mildly context-sensitive language
|
https://en.wikipedia.org/wiki/Genome-based%20peptide%20fingerprint%20scanning
|
Genome-based peptide fingerprint scanning (GFS) is a system in bioinformatics analysis that attempts to identify the genomic origin (that is, what species they come from) of sample proteins by scanning their peptide-mass fingerprint against the theoretical translation and proteolytic digest of an entire genome. This method is an improvement from previous methods because it compares the peptide fingerprints to an entire genome instead of comparing it to an already annotated genome. This improvement has the potential to improve genome annotation and identify proteins with incorrect or missing annotations.
History and background
GFS was designed by Michael C. Giddings (University of North Carolina, Chapel Hill) et al., and released in 2003. Giddings expanded the algorithms for GFS from earlier ideas. Two papers were published in 1993 explaining the techniques used to identify proteins in sequence databases. These methods determined the mass of peptides using mass spectrometry, and then used the mass to search protein databases to identify the proteins. In 1999 a more complex program called Mascot was released, integrating three types of protein/database searches: peptide molecular weights, tandem mass spectrometry from one or more peptides, and combined mass data with amino acid sequence. The drawback of this widely used program is that it is unable to detect alternative splice sites that are not currently annotated, and it is not usually able to find proteins that have not been annotated. Giddings built upon these sources to create GFS, which compares peptide mass data to entire genomes to identify proteins. Giddings' system is able to find new gene annotations that were previously missed, such as undocumented genes and undocumented alternative splice sites.
Research examples
In 2012 research was published where genes and proteins were found in a model organism that could not have been found without GFS because they had not been previously annotated. T
|
https://en.wikipedia.org/wiki/Rook%27s%20graph
|
In graph theory, a rook's graph is an undirected graph that represents all legal moves of the rook chess piece on a chessboard. Each vertex of a rook's graph represents a square on a chessboard, and there is an edge between any two squares sharing a row (rank) or column (file), the squares that a rook can move between. These graphs can be constructed for chessboards of any rectangular shape. Although rook's graphs have only minor significance in chess lore, they are more important in the abstract mathematics of graphs through their alternative constructions: rook's graphs are the Cartesian product of two complete graphs, and are the line graphs of complete bipartite graphs. The square rook's graphs constitute the two-dimensional Hamming graphs.
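As an illustration of the Cartesian-product construction, a short sketch using the NetworkX library (the helper name is illustrative; NetworkX's complete_graph and cartesian_product functions are assumed to be available in the installed version):

```python
import networkx as nx

def rook_graph(m: int, n: int) -> nx.Graph:
    """Rook's graph of an m x n board as the Cartesian product of complete graphs.
    Vertices are (row, column) pairs; edges join squares sharing a row or column."""
    return nx.cartesian_product(nx.complete_graph(m), nx.complete_graph(n))

G = rook_graph(8, 8)
print(G.number_of_nodes())     # 64 squares on a standard chessboard
print(G.degree[(0, 0)])        # (m - 1) + (n - 1) = 14 rook moves from a corner
```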
Rook's graphs are highly symmetric, having symmetries taking every vertex to every other vertex. In rook's graphs defined from square chessboards, more strongly, every two edges are symmetric, and every pair of vertices is symmetric to every other pair at the same distance in moves (making the graph distance-transitive). For rectangular chessboards whose width and height are relatively prime, the rook's graphs are circulant graphs. With one exception, the rook's graphs can be distinguished from all other graphs using only two properties: the numbers of triangles each edge belongs to, and the existence of a unique -cycle connecting each nonadjacent pair of vertices.
Rook's graphs are perfect graphs. In other words, every subset of chessboard squares can be colored so that no two squares in a row or column have the same color, using a number of colors equal to the maximum number of squares from the subset in any single row or column (the clique number of the induced subgraph). This class of induced subgraphs are a key component of a decomposition of perfect graphs used to prove the strong perfect graph theorem, which characterizes all perfect graphs. The independence number and domination number of a rook's graph both equal t
|
https://en.wikipedia.org/wiki/AirMagnet
|
AirMagnet was a Wi-Fi wireless network assurance company based in Sunnyvale, California. The firm was founded in 2001 by Dean T. Au, Chia-Chee Kuan and Miles Wu and shipped its first WLAN analyzer product in 2002. In August 2006, the company shipped the Vo-Fi Analyzer, the first voice-over-Wi-Fi analyzer that could be used on encrypted VoWLAN networks. It was backed by venture capital firms such as Intel Capital, Acer Technology Ventures and VenGlobal.
The company manufactured and sold a suite of wireless site survey tools, laptop analyzers, spectrum analyzers, handheld analyzers, network management and troubleshooting solutions (including wireless access point management via LWAPP), as well as wireless intrusion detection systems (WIDS) and wireless intrusion prevention systems (WIPS) products and VoWLAN instruments.
In August 2009, Fluke Networks acquired AirMagnet; Fluke Networks later became part of NetScout.
On September 14, 2018, NetScout divested its handheld network testing (HNT) tools business to the private equity firm StoneCalibre. The transaction included the AirMagnet Mobile solutions, while the AirMagnet Enterprise product line of WIPS monitoring solutions was retained by NetScout.
On August 14, 2019, StoneCalibre relaunched the acquired HNT products under a new company, NetAlly.
|
https://en.wikipedia.org/wiki/Stochastic%20optimization
|
Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.
Stochastic optimization methods generalize deterministic methods for deterministic problems.
Methods for stochastic functions
Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system, and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:
stochastic approximation (SA), by Robbins and Monro (1951)
stochastic gradient descent
finite-difference SA by Kiefer and Wolfowitz (1952)
simultaneous perturbation SA by Spall (1992)
scenario optimization
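The common thread in these methods—iterating on noisy measurements with diminishing step sizes—can be seen in a minimal Robbins–Monro-style sketch; the quadratic objective, noise level and step-size schedule below are arbitrary illustrative choices:

```python
import random

def noisy_gradient(x, theta=3.0, noise_sd=0.5):
    """Gradient of f(x) = (x - theta)^2, observed with additive Gaussian noise."""
    return 2.0 * (x - theta) + random.gauss(0.0, noise_sd)

def stochastic_approximation(x0=0.0, steps=5000, a=1.0):
    """Iterate x_{k+1} = x_k - a_k * g_k with a_k = a / (k + 1), so that
    sum(a_k) diverges while sum(a_k^2) converges (the classic SA conditions)."""
    x = x0
    for k in range(steps):
        x -= (a / (k + 1)) * noisy_gradient(x)
    return x

random.seed(0)
print(stochastic_approximation())   # converges to roughly theta = 3.0
```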
Randomized search methods
On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress. Such randomness can also make the method less sensitive to modeling errors. Another advantage is that the randomness introduced into the search process can be used to obtain interval estimates of the minimum of a function via extreme value statistics.
Further, the injected randomness may enable the method to escape a local optimum and eventually to approach a global optimum. Indeed, this randomization principle is k
|
https://en.wikipedia.org/wiki/Izzy%20%28mascot%29
|
Izzy was the official mascot of the 1996 Summer Olympics in Atlanta. It was initially named Whatizit ("What is it?") at its introduction at the close of the 1992 Summer Olympics in Barcelona. The animated character, with its ability to morph into different forms, was a departure from Olympic tradition in that it did not represent a nationally significant animal or human figure.
History
Conception and introduction at the 1992 Barcelona Olympics
In 1991, the Atlanta Committee for the Olympic Games (ACOG) began a search for a mascot with a competition of twenty design firms as well as suggestions from the general public. The selection, Whatizit, was designed by John Ryan, senior animation director of Atlanta-based design firm DESIGNefx.
Whatizit originally appeared as a blue, tear-shaped "blob" with rings around his eyes and tail. He wore high-top sneakers and had star-shaped pupils. His arms and legs were also short with a toothy grin showing both rows of teeth. He was later modified to have longer limbs to give a more athletic look.
In addition to renaming him "Izzy", several changes were made to the mascot's appearance including losing the bottom row of teeth, adding a nose, making the tongue visible, and making the limbs longer, skinnier, and more athletic.
Izzy's Quest for Olympic Gold
ACOG commissioned an animated television special entitled "Izzy's Quest for Olympic Gold" to promote Izzy and expand his backstory. Produced by Film Roman, the special debuted on TNT on August 12, 1995. The special was long considered lost media until a VHS recording was discovered and uploaded to YouTube in December 2020.
Plot
A torchbearer runs through Olympia all the way to Atlanta to light the Olympic cauldron. The flame is revealed to contain an alternate universe known as the Torch World. Torch World, described as being above the stadium/games, depicts rolling hills, mountains, columns, and other terrain that appears similar to depictions of Ancient Greece. The Torch World citizens can be
|
https://en.wikipedia.org/wiki/Equivalence%20group
|
An equivalence group is a set of unspecified cells that have the same developmental potential, or ability to adopt various fates. Current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another.
Equivalence groups assume various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types.
Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, thus they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians.
Vulva Precursor Cell Equivalence Group
Introduction
A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p–P8.p, which lie ventrally along the anterior-posterior axis. In this example a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulval fates 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which makes epidermal cells that fuse into a large syncytial epidermis.
The six VPCs form an equivalence group beca
|
https://en.wikipedia.org/wiki/Transcription%20bubble
|
A transcription bubble is a molecular structure formed during DNA transcription when a limited portion of the DNA double helix is unwound. The size of a transcription bubble ranges from 12 to 14 base pairs. A transcription bubble is formed when the RNA polymerase enzyme binds to a promoter and causes the two DNA strands to separate. It presents a region of unpaired DNA where a short stretch of nucleotides is exposed on each strand of the double helix.
RNA polymerase
The bacterial RNA polymerase, a leading enzyme involved in formation of a transcription bubble, uses DNA template to guide RNA synthesis. It is present in two main forms: as a core enzyme, when it is inactive, and as a holoenzyme, when it is activated. A sigma (σ) factor is a subunit that assists the process of transcription and it stabilizes the transcription bubble when it binds to unpaired bases. These two components, RNA polymerase and sigma factor, when paired together, build RNA polymerase holoenzyme which is then in its active form and ready to bind to a promoter and initiate DNA transcription. Once it binds to the DNA, RNA polymerase turns from a closed to an open complex, forming the transcription bubble. RNA polymerase synthesizes the new RNA in the 5' to 3' direction by adding complementary bases to the 3' end of a new strand. The holoenzyme composition dissociates after transcription initiation, where the σ factor disengages the complex and the RNA polymerase, in its core form, slides along the DNA molecule.
The transcription cycle of bacterial RNA polymerase
The RNA polymerase holoenzyme binds to a promoter of an exposed DNA strand and begins to synthesize the new strand of RNA. The double helix DNA is unwound and a short nucleotide sequence is accessible on each strand. The transcription bubble is a region of unpaired bases on one of the exposed DNA strands. The starting transcription point is determined by the place where the holoenzyme binds to a promoter. The DNA is unwound and single-st
|
https://en.wikipedia.org/wiki/Polarity%20in%20embryogenesis
|
In developmental biology, an embryo is divided into two hemispheres: the animal pole and the vegetal pole within a blastula. The animal pole consists of small cells that divide rapidly, in contrast with the vegetal pole below it. In some cases, the animal pole is thought to differentiate into the later embryo itself, forming the three primary germ layers and participating in gastrulation.
The vegetal pole contains large yolky cells that divide very slowly, in contrast with the animal pole above it. In some cases, the vegetal pole is thought to differentiate into the extraembryonic membranes that protect and nourish the developing embryo, such as the placenta in mammals and the chorion in birds.
In amphibians, the development of the animal-vegetal axis occurs prior to fertilization. Sperm entry can occur anywhere in the animal hemisphere. The point of sperm entry defines the dorso-ventral axis - cells opposite the region of sperm entry will eventually form the dorsal portion of the body.
In the frog Xenopus laevis, the animal pole is heavily pigmented while the vegetal pole remains unpigmented. A pigment pattern provides the oocyte with features of a radially symmetrical body with a distinct polarity. The animal hemisphere is dark brown, and the vegetal hemisphere is only weakly pigmented. The axis of symmetry passes through on one side the animal pole, and on the other side the vegetal pole. The two hemispheres are separated by an unpigmented equatorial belt. Polarity has a major influence on the emergence of the embryonic structures. In fact, the axis polarity serves as one coordinate of geometrical system in which early embryogenesis is organized.
Naming
The animal pole draws its name from its liveliness relative to the slowly developing vegetal pole, which in turn is named for its relative inactivity.
See also
Gastrulation
Embryogenesis
|
https://en.wikipedia.org/wiki/Lord%20Morton%27s%20mare
|
Lord Morton's mare was a chestnut Arabian mare at the centre of an early equid hybrid breeding experiment, and the case was once an often-cited example in the history of evolutionary theory.
In 1820, George Douglas, 16th Earl of Morton, F.R.S., reported to the President of the Royal Society that, being desirous of domesticating the quagga (a now extinct subspecies of the plains zebra), he had bred an Arabian chestnut mare with a quagga stallion; subsequently, the same mare was bred with a black stallion, and Lord Morton found that the offspring had strange stripes on the legs like those of the quagga. The Royal Society published Lord Morton's letter in its Philosophical Transactions in 1821. In the same issue, "Particulars of a Fact, nearly similar to that related by Lord Morton, communicated to the President, in a letter from Daniel Giles, Esq." reported that in a litter of a black and white sow, by a "boar of the wild breed, the chestnut colour of the boar strongly prevailed" in some of the piglets, and even in the two subsequent litters of that sow.
These circumstantial reports seemed to confirm the ancient idea of telegony in heritability: Charles Darwin cited the example in On the Origin of Species (1859) and The Variation of Animals and Plants Under Domestication (1868). The concept of telegony, that the seed of a male could continue to affect the offspring of a female, whether animal or human, had been inherited from Aristotle and remained a legitimate theory until experiments in the 1890s confirmed Mendelian inheritance. Biologists now explain the phenomenon of Lord Morton's mare as the result of dominant and recessive alleles. The mare and black stallion each carried genes for the striped markings on the foal, but the markings were hidden in the parent animals by dominant genes for normal color. Striped "primitive markings" are in fact commonly seen in domesticated horses, particularly those with a dun coat color.
See also
Zebroid
Notes
Applied genetics
History of evolutionary biology
Individual mares
Equid hybrids
Horse history and evol
|