https://en.wikipedia.org/wiki/Schmidt%20decomposition
|
In linear algebra, the Schmidt decomposition (named after its originator Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization, in state purification, and in plasticity.
Theorem
Let $H_1$ and $H_2$ be Hilbert spaces of dimensions $n$ and $m$ respectively. Assume $m \le n$. For any vector $w$ in the tensor product $H_1 \otimes H_2$, there exist orthonormal sets $\{u_1, \ldots, u_m\} \subset H_1$ and $\{v_1, \ldots, v_m\} \subset H_2$ such that $w = \sum_{i=1}^{m} \alpha_i\, u_i \otimes v_i$, where the scalars $\alpha_i$ are real, non-negative, and unique up to re-ordering.
Proof
The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases $\{e_1, \ldots, e_n\} \subset H_1$ and $\{f_1, \ldots, f_m\} \subset H_2$. We can identify an elementary tensor $e_i \otimes f_j$ with the matrix $e_i f_j^{\mathsf{T}}$, where $f_j^{\mathsf{T}}$ is the transpose of $f_j$. A general element of the tensor product

$$w = \sum_{1 \le i \le n,\; 1 \le j \le m} \beta_{ij}\, e_i \otimes f_j$$

can then be viewed as the n × m matrix

$$M_w = (\beta_{ij})_{ij}.$$

By the singular value decomposition, there exist an n × n unitary U, m × m unitary V, and a positive semidefinite diagonal m × m matrix Σ such that

$$M_w = U \begin{pmatrix} \Sigma \\ 0 \end{pmatrix} V^*.$$

Write $U = \begin{pmatrix} U_1 & U_2 \end{pmatrix}$, where $U_1$ is n × m, and we have

$$M_w = U_1 \Sigma V^*.$$

Let $u_1, \ldots, u_m$ be the m column vectors of $U_1$, $v_1, \ldots, v_m$ the column vectors of $\overline{V}$, and $\alpha_1, \ldots, \alpha_m$ the diagonal elements of Σ. The previous expression is then

$$M_w = \sum_{k=1}^{m} \alpha_k\, u_k v_k^{\mathsf{T}}.$$

Then

$$w = \sum_{k=1}^{m} \alpha_k\, u_k \otimes v_k,$$

which proves the claim.
Some observations
Some properties of the Schmidt decomposition are of physical interest.
Spectrum of reduced states
Consider a vector $w$ of the tensor product $H_1 \otimes H_2$ in the form of Schmidt decomposition

$$w = \sum_{i=1}^{m} \alpha_i\, u_i \otimes v_i.$$

Form the rank-1 matrix $\rho = w w^*$. Then the partial trace of $\rho$, with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are $|\alpha_i|^2$. In other words, the Schmidt decomposition shows that the reduced states of $\rho$ on either subsystem have the same spectrum.
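To make the SVD connection concrete, here is a minimal NumPy sketch; the dimensions and the random state are illustrative choices, not from the article. It reads the Schmidt coefficients off the singular values of the coefficient matrix and checks that both reduced density matrices share the spectrum $|\alpha_i|^2$.

```python
import numpy as np

n, m = 3, 2                     # dim(H_A) = n, dim(H_B) = m, with m <= n
w = np.random.randn(n, m) + 1j * np.random.randn(n, m)
w /= np.linalg.norm(w)          # normalize; w[i, j] = beta_ij in the basis e_i (x) f_j

# Schmidt coefficients are the singular values of the coefficient matrix.
alphas = np.linalg.svd(w, compute_uv=False)

# Reduced density matrices of rho = |w><w| via the partial trace.
rho_A = w @ w.conj().T          # trace over subsystem B (n x n)
rho_B = w.T @ w.conj()          # trace over subsystem A (m x m)

# Both spectra equal the squared Schmidt coefficients (up to padding zeros).
print(np.round(alphas**2, 6))
print(np.round(np.sort(np.linalg.eigvalsh(rho_A))[::-1], 6))
print(np.round(np.sort(np.linalg.eigvalsh(rho_B))[::-1], 6))
```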
Schmidt rank and entanglement
The strictly positive values $\alpha_i$ in the Schmidt decomposition of $w$ are its Schmidt coefficients, or Schmidt numbers. The total number of Schmidt coefficients of $w$, counted with multiplicity, is called its Schmidt rank.
If $w$ can be expressed as a product

$$w = u \otimes v,$$

then $w$ is called a separable state.
|
https://en.wikipedia.org/wiki/Outpost%20Firewall%20Pro
|
Outpost Firewall Pro is a discontinued personal firewall developed by Agnitum (founded in 1999 in St. Petersburg, Russia).
Overview
Outpost Firewall Pro monitors incoming and outgoing network traffic on Windows machines. Outpost also monitors application behavior in an attempt to stop malicious software from covertly infecting Windows systems. Agnitum called this technology "Component Control" and "Anti-Leak Control" (included in the HIPS-based "Host Protection" module). The product also includes a spyware scanner and monitor, along with a pop-up blocker and spyware filter for Internet Explorer and Mozilla Firefox. (Outpost's web surfing security tools had included black-lists for IPs and URLs, unwanted web page element filters and ad-blocking. The technology altogether is known as "Web control".)
Version 7.5 adds new techniques to help PC users block unknown new threats before their activation:
Removable media protection (so-called "USB Virus Protection", part of the Proactive Protection module) blocks unsigned programs set to run automatically when removable media are connected.
SmartDecision technology (so-called "Personal Virus Adviser", basis of the Proactive Protection module) facilitates the decision-making process.
Version 8 introduces further improvements as well as Windows 8 compatibility and a redesigned user interface; version 8 also extends x64 host-based intrusion-prevention system (HIPS) support.
Outpost Firewall Pro allows the user to specifically define how a PC application connects to the Internet. This is known as the "Rules Wizard" mode, or policy, and is the default behavior for the program. In this mode, Outpost Firewall Pro displays a prompt each time a new process attempts network access or when a process requests a connection that was not covered by its pre-validated rules. The idea is to let the user decide whether an application should be allowed a network connection to a specific address, port or protocol. Outpost Firewall includes
|
https://en.wikipedia.org/wiki/Metabolic%20control%20analysis
|
Metabolic control analysis (MCA) is a mathematical framework for describing
metabolic, signaling, and genetic pathways. MCA quantifies how variables, such as fluxes and species concentrations, depend on network parameters.
In particular, it is able to describe how network-dependent properties,
called control coefficients, depend on local properties called elasticities or elasticity coefficients.
MCA was originally developed to describe the control in metabolic pathways
but was subsequently extended to describe signaling and genetic networks. MCA has sometimes also been referred to as Metabolic Control Theory, but this terminology was rather strongly opposed by Henrik Kacser, one of the founders.
More recent work has shown that MCA can be mapped directly onto classical control theory; the two frameworks are, in that sense, equivalent.
Biochemical systems theory is a similar formalism, though with rather different objectives. Both are evolutions of an earlier theoretical analysis by Joseph Higgins.
Control coefficients
A control coefficient
measures the relative steady-state change in a system variable, e.g. pathway flux ($J$) or metabolite concentration ($S$), in response to a relative change in a parameter, e.g. enzyme activity or the steady-state rate ($v_i$) of step $i$. The two main control coefficients are the flux and concentration control coefficients. Flux control coefficients are defined by

$$C^J_{v_i} = \left( \frac{dJ}{dp} \frac{p}{J} \right) \bigg/ \left( \frac{\partial v_i}{\partial p} \frac{p}{v_i} \right) = \frac{d\ln J}{d\ln v_i}$$

and concentration control coefficients by

$$C^S_{v_i} = \left( \frac{dS}{dp} \frac{p}{S} \right) \bigg/ \left( \frac{\partial v_i}{\partial p} \frac{p}{v_i} \right) = \frac{d\ln S}{d\ln v_i}.$$
Summation theorems
The flux control summation theorem was discovered independently by the Kacser/Burns group and the Heinrich/Rapoport group in the early 1970s and late 1960s. The theorem states that the flux control coefficients sum to one, $\sum_i C^J_{v_i} = 1$, which implies that metabolic fluxes are systemic properties and that their control is shared by all reactions in the system. When a single reaction changes its control of the flux, this is compensated by changes in the control of the same flux by all other reactions.
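As a hedged illustration of these definitions and of the summation theorem, the following sketch estimates flux control coefficients by finite differences for a hypothetical two-step pathway; the rate laws and parameter values are invented for the example and are not from the article.

```python
# Toy pathway X0 -> S -> X1 with hypothetical rate laws
# v1 = e1*(X0 - S/q) and v2 = e2*S.
X0, q = 10.0, 2.0  # fixed boundary substrate and equilibrium constant

def steady_state_flux(e1, e2):
    # At steady state v1 = v2, so e1*(X0 - S/q) = e2*S.
    S = e1 * X0 / (e2 + e1 / q)
    return e2 * S  # pathway flux J

def flux_control_coefficient(e1, e2, step, h=1e-6):
    # C^J_i = (dJ/J)/(de_i/e_i), estimated by a small relative perturbation.
    J = steady_state_flux(e1, e2)
    if step == 1:
        J2 = steady_state_flux(e1 * (1 + h), e2)
    else:
        J2 = steady_state_flux(e1, e2 * (1 + h))
    return (J2 - J) / J / h

C1 = flux_control_coefficient(2.0, 1.0, step=1)
C2 = flux_control_coefficient(2.0, 1.0, step=2)
print(C1, C2, C1 + C2)  # summation theorem: C1 + C2 ≈ 1
```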
Elasticity coefficients
The elasticity coefficient measures the local response of an enzyme or other reaction rate to changes in its immediate environment, such as the concentrations of its substrates, products, or effectors.
|
https://en.wikipedia.org/wiki/Riesz%27s%20lemma
|
Riesz's lemma (after Frigyes Riesz) is a lemma in functional analysis. It specifies (often easy to check) conditions that guarantee that a subspace in a normed vector space is dense. The lemma may also be called the Riesz lemma or Riesz inequality. It can be seen as a substitute for orthogonality when the normed space is not an inner product space.
Statement
Riesz's lemma. Let $X$ be a normed space, let $Y \subseteq X$ be a closed proper vector subspace, and let $\alpha$ be a real number with $0 < \alpha < 1$. Then there exists a vector $u \in X$ with $\|u\| = 1$ such that $\|u - y\| \ge \alpha$ for every $y \in Y$. If $X$ is a reflexive Banach space, then this conclusion is also true when $\alpha = 1$.
Proof
The proof can be found in functional analysis texts such as Kreyszig. An online proof from Prof. Paul Garrett is available.
Metric reformulation
As usual, let $d(x, y) := \|x - y\|$ denote the canonical metric induced by the norm, call $S := \{x \in X : \|x\| = 1\}$ the set of all vectors that are a distance of $1$ from the origin, and denote the distance from a point $u$ to the set $Y \subseteq X$ by

$$d(u, Y) := \inf_{y \in Y} d(u, y) = \inf_{y \in Y} \|u - y\|.$$

The inequality $d(u, Y) \ge \alpha$ holds if and only if $\|u - y\| \ge \alpha$ for all $y \in Y$, and it formally expresses the notion that the distance between $u$ and $Y$ is at least $\alpha$.
Because every vector subspace (such as $Y$) contains the origin, substituting $y = 0$ in this infimum shows that $d(u, Y) \le \|u\|$ for every vector $u$. In particular, $d(u, Y) \le 1$ when $u$ is a unit vector.
Using this new notation, the conclusion of Riesz's lemma may be restated more succinctly as: $d(u, Y) \ge \alpha$ holds for some $u \in S$.
Using this new terminology, Riesz's lemma may also be restated in plain English as:
Given any closed proper vector subspace of a normed space $X$, for any desired minimum distance $\alpha$ less than $1$, there exists some vector in the unit sphere of $X$ that is at least this desired distance away from the subspace.
Minimum distances not satisfying the hypotheses
When $X = \{0\}$ is trivial, it has no proper vector subspace $Y$, and so Riesz's lemma holds vacuously for all real numbers $\alpha$. The remainder of this section will assume that $X \ne \{0\}$, which guarantees that a unit vector exists.
The inclusion of the hypothesis $0 < \alpha < 1$ can be explained by considering the three cases: $\alpha \le 0$, $0 < \alpha < 1$, and $\alpha \ge 1$.
The lemma holds when $\alpha \le 0$, since every unit vector $u \in S$ satisfies the conclusion $d(u, Y) \ge 0 \ge \alpha$. The hypothesis $\alpha > 0$ is included solely to exclude this trivial case and is sometimes omitted from the lemma's statement.
|
https://en.wikipedia.org/wiki/Stellar%20birthline
|
The stellar birthline is a predicted line on the Hertzsprung–Russell diagram that relates the effective temperature and luminosity of pre-main-sequence stars at the start of their contraction. Prior to this point, the objects are accreting protostars, and are so deeply embedded in the cloud of dust and gas from which they are forming that they radiate only in far infrared and millimeter wavelengths. Once stellar winds disperse this cloud, the star becomes visible as a pre-main-sequence object. The set of locations on the Hertzsprung–Russell diagram where these newly visible stars reside is called the birthline, and is found above the main sequence.
The location of the stellar birthline depends in detail on the accretion rate onto the star and the geometry of this accretion, i.e. whether or not it occurs through an accretion disk. This means that the birthline is not an infinitely thin curve, but has a finite thickness in the Hertzsprung–Russell diagram.
See also
Hayashi track
Henyey track
Pre-main-sequence star
Protostar
Stellar isochrone
T Tauri star
|
https://en.wikipedia.org/wiki/History%20of%20variational%20principles%20in%20physics
|
In physics, a variational principle is an alternative method for determining the state or dynamics of a physical system, by identifying it as an extremum (minimum, maximum or saddle point) of a function or functional. This article describes the historical development of such principles.
Antiquity
Variational principles are found among earlier ideas in surveying and optics. The rope stretchers of ancient Egypt stretched corded ropes between two points to measure the path which minimized the distance of separation, and Claudius Ptolemy, in his Geographia (Bk 1, Ch 2), emphasized that one must correct for "deviations from a straight course"; in ancient Greece Euclid states in his Catoptrica that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection; and Hero of Alexandria later showed that this path was the shortest length and least time.
17th-18th century
Optics
The earlier ideas of variational principles in optics were generalized to refraction by Pierre de Fermat, who, in the 17th century, refined the principle to "light travels between two given points along the path of shortest time"; now known as the principle of least time or Fermat's principle.
Principle of least action
Its generalization to mechanics, the principle of least action is commonly attributed to Pierre Louis Maupertuis, who wrote about it in 1744 and 1746, although the true priority is less clear. In application to physics, Maupertuis suggested that the quantity to be minimized was the product of the duration (time) of movement within a system by the "vis viva", twice what we now call the kinetic energy of the system.
Leonhard Euler gave a formulation of the least action principle in 1744. He writes
"Let the mass of the projectile be M, and let its squared velocity resulting from its height be while being moved over a distance ds. The body will have a momentum that, when multiplied by the distance ds, will give , the momentum of the body
|
https://en.wikipedia.org/wiki/Islamic%20flag
|
An Islamic flag is the flag either representing an Islamic Caliphate or religious order, state, civil society, military force or other entity associated with Islam. Islamic flags have a distinct history due to the Islamic prescription on aniconism, making particular colours, inscriptions or symbols such as crescent-and-star popular choices. Since the time of the Islamic prophet Muhammad, flags with certain colours were associated with Islam according to the traditions. Since then, historical Caliphates, modern nation states, certain denominations as well as religious movements have adopted flags to symbolize their Islamic identity.
History
Early Islam
Before the advent of Islam, banners had already been employed as signaling tools by the pre-Islamic Arab tribes and the Byzantines. The early Muslim army naturally deployed banners for the same purpose. Early Islamic flags, however, greatly simplified the design by using plain colors, due to the Islamic prescriptions on aniconism. According to the Islamic traditions, the Quraysh had a black banner and a white-and-black banner. It is further related that Muhammad had a banner in white nicknamed "the Young Eagle", and a flag in black, said to be made from his wife Aisha's head-cloth. This larger flag was known as the "Banner of the Eagle", as well as the "Black Banner". In the Islamic tradition, Muhammad used the white flag to represent both the leader of the Muslim army and the Muslim state. Other examples are the prominent Arab military commander 'Amr ibn al-'As using a red banner, and the Khawarij rebels using a red banner as well. Banners of the early Muslim armies in general, however, employed a variety of colors, both singly and in combination.
The Umayyad Caliphate, which ruled the largest geographical extent of the medieval Islamic Empire, adopted white flags. During the Abbasid Revolution, the Abbasids incorporated the Black Standard based on the early Islamic eschatological saying that "a people comi
|
https://en.wikipedia.org/wiki/Read%E2%80%93modify%E2%80%93write
|
In computer science, read–modify–write is a class of atomic operations (such as test-and-set, fetch-and-add, and compare-and-swap) that both read a memory location and write a new value into it simultaneously, either with a completely new value or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization.
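As an illustration of the read–modify–write contract, here is a sketch of fetch-and-add semantics in Python. Python exposes no hardware atomic instruction, so a lock stands in for the indivisibility that test-and-set, fetch-and-add, or compare-and-swap provide in hardware; the class name and thread counts are illustrative.

```python
import threading

class AtomicCounter:
    """Sketch of read-modify-write semantics. Real hardware provides
    fetch-and-add as a single indivisible instruction; here a lock
    emulates that atomicity guarantee."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, amount=1):
        # Read the old value and write the new value as one step that
        # other threads observe as indivisible; return the old value.
        with self._lock:
            old = self._value
            self._value = old + amount
            return old

counter = AtomicCounter()
threads = [threading.Thread(target=lambda: [counter.fetch_and_add() for _ in range(10_000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter._value)  # always 40000; an unsynchronized "value += 1" could lose updates
```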
Maurice Herlihy (1991) ranks atomic operations by their consensus numbers, as follows:
∞: memory-to-memory move and swap, augmented queue, compare-and-swap, fetch-and-cons, sticky byte, load-link/store-conditional (LL/SC)
2n − 2: n-register assignment
2: test-and-set, swap, fetch-and-add, queue, stack
1: atomic read and atomic write
It is impossible to implement an operation that requires a given consensus number with only operations with a lower consensus number, no matter how many of such operations one uses. Read–modify–write instructions often produce unexpected results when used on I/O devices, as a write operation may not affect the same internal register that would be accessed in a read operation.
This term is also associated with RAID levels that perform actual write operations as atomic read–modify–write sequences. Such RAID levels include RAID 4, RAID 5 and RAID 6.
See also
Linearizability
Read–erase–modify–write
|
https://en.wikipedia.org/wiki/Grimm%E2%80%93Sommerfeld%20rule
|
In chemistry, the Grimm–Sommerfeld rule predicts that binary compounds with covalent character that have an average of 4 valence electrons per atom will have structures where both atoms are tetrahedrally coordinated (e.g. have the wurtzite structure). Examples are silicon carbide, the III-V semiconductors indium phosphide and gallium arsenide, and the II-VI semiconductors cadmium sulfide and cadmium selenide.
Goryunova expanded the scope of the rule to include ternary compounds where the average number of valence electrons per atom is four. An example is the I-IV2-V3 compound CuGe2P3, which has a zincblende structure.
Compounds or phases that obey the Grimm–Sommerfeld rule are termed Grimm–Sommerfeld compounds or phases.
The rule has also been extended to predict bond lengths in Grimm–Sommerfeld compounds: when the sum of the atomic numbers is the same, the bond lengths are the same. An example is the series of bond lengths ranging from 244.7 pm to 246 pm for the Ge–Ge bond in elemental germanium, the Ga–As bond in gallium arsenide, the Zn–Se bond in zinc selenide and the Cu–Br bond in copper(I) bromide.
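A quick sketch makes the arithmetic behind the rule easy to check; the data below are ordinary atomic numbers and, for the four-electron average, main-group valence-electron counts.

```python
# Quick checks of the Grimm-Sommerfeld arithmetic.
Z = {"Cu": 29, "Zn": 30, "Ga": 31, "Ge": 32, "As": 33, "Se": 34, "Br": 35}
VALENCE = {"Zn": 2, "Ga": 3, "Ge": 4, "As": 5, "Se": 6}

# Average of 4 valence electrons per atom, e.g. in GaAs and ZnSe:
print((VALENCE["Ga"] + VALENCE["As"]) / 2)   # 4.0
print((VALENCE["Zn"] + VALENCE["Se"]) / 2)   # 4.0

# Equal atomic-number sums behind the 244.7-246 pm bond-length series:
for a, b in [("Ge", "Ge"), ("Ga", "As"), ("Zn", "Se"), ("Cu", "Br")]:
    print(f"{a}-{b}: sum of atomic numbers = {Z[a] + Z[b]}")  # 64 in each case
```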
|
https://en.wikipedia.org/wiki/Verilog-AMS
|
Verilog-AMS is a derivative of the Verilog hardware description language that includes Analog and Mixed-Signal extensions (AMS) in order to define the behavior of analog and mixed-signal systems. It extends the event-based simulator loops of Verilog/SystemVerilog/VHDL with a continuous-time simulator that solves the differential equations in the analog domain. The two domains are coupled: analog events can trigger digital actions and vice versa.
Overview
The Verilog-AMS standard was created with the intent of enabling designers of analog and mixed signal systems and integrated circuits to create and use modules that encapsulate high-level behavioral descriptions as well as structural descriptions of systems and components.
Verilog-AMS is an industry standard modeling language for mixed signal circuits. It provides both continuous-time and event-driven modeling semantics, and so is suitable for analog, digital, and mixed analog/digital circuits. It is particularly well suited for verification of very complex analog, mixed-signal and RF integrated circuits.
Verilog and Verilog/AMS are not procedural programming languages, but event-based hardware description languages (HDLs). As such, they provide sophisticated and powerful language features for definition and synchronization of parallel actions and events. On the other hand, many actions defined in HDL program statements can run in parallel (somewhat similar to threads and tasklets in procedural languages, but much more fine-grained). However, Verilog/AMS can be coupled with procedural languages like the ANSI C language using the Verilog Procedural Interface of the simulator, which eases testsuite implementation, and allows interaction with legacy code or testbench equipment.
The original intention of the Verilog-AMS committee was a single language for both analog and digital design, however due to delays in the merger process it remains at Accellera while Verilog evolved into SystemVerilog and went to the IEEE.
Code
|
https://en.wikipedia.org/wiki/Torque%20motor
|
A torque motor is a specialized form of DC electric motor which can operate indefinitely while stalled, without incurring damage. In this mode of operation, the motor will apply a steady torque to the load (hence the name). A torque motor that cannot perform a complete rotation is known as a limited angle torque motor. Brushless torque motors are available; elimination of commutators and brushes allows higher speed operation.
Construction
Torque motors normally use toroidal construction, allowing them to have a wider diameter, more torque, and better heat dissipation. They differ from other motors in their higher torque, thermal performance, and ability to operate while drawing high current in a stalled state.
Linear versions
An analogous device, moving linearly rather than rotating, is described as a force motor. These are widely used for refrigeration compressors and ultra-quiet air compressors, where the force motor produces simple harmonic motion in conjunction with a restoring spring.
Applications
Tape recorders
A common application of a torque motor would be the supply- and take-up reel motors in a tape drive. In this application, driven from a low voltage, the characteristics of these motors allow a relatively constant light tension to be applied to the tape whether or not the capstan is feeding tape past the tape heads. Driven from a higher voltage, (and so delivering a higher torque), the torque motors can also achieve fast-forward and rewind operation without requiring any additional mechanics such as gears or clutches.
Computer games
In the computer gaming world, torque motors are used in force feedback steering wheels.
Throttle control
Another common application is the control of the throttle of an internal combustion engine in conjunction with an electronic governor. In this usage, the motor works against a return spring to move the throttle in accordance with the output of the governor. The latter monitors engine speed by counting electrical p
|
https://en.wikipedia.org/wiki/WKMJ-TV
|
WKMJ-TV (channel 68) is a PBS member television station in Louisville, Kentucky, United States. It is the flagship station for KET2, the second television service of Kentucky Educational Television (KET), which is owned by the Kentucky Authority for Educational Television.
The station's master control and internal operations are located at KET's main studios at the O. Leonard Press Telecommunications Center in Lexington. WKMJ's transmitter, like those of several other Louisville stations including main KET transmitter WKPC-TV, is located at the Kentuckiana Tower Farm at Floyds Knobs, in Floyd County, Indiana. WKMJ and WKPC are the only KET-owned stations whose transmitters are outside Kentucky's borders.
History
As KET's original Louisville station
When Kentucky Educational Television began broadcasting in 1968, it was built to provide the widest statewide coverage with the fewest transmitters possible. Network officials expected that the transmitters in Elizabethtown (WKZT-TV, channel 23) and Owenton (WKON-TV, channel 54) would provide sufficient service in the Louisville area. Reception, however, was poorer than expected, prompting KET in March 1969 to announce plans to file for UHF channel 68 and strike a deal with NBC affiliate WAVE-TV for a new tower, which would also house a stronger WKPC-TV. The station, with the callsign WKMJ (the -TV suffix was added in 1983), began test broadcasts on August 17, 1970, and full service began two weeks later. Channel 68 originally went off the air when the rest of the KET network was airing the same programming as WKPC-TV. Duplication remained low, and at the end of 1982, an agreement was reached for WKPC-TV to be the primary PBS outlet in Louisville.
However, after this arrangement, duplication returned. In 1995, after WKPC-TV experienced a series of financial reversals caused by for-profit ventures intended to bolster station income, talks began about merging the two stations, with channel 15—with its str
|
https://en.wikipedia.org/wiki/Penetration%20depth
|
Penetration depth is a measure of how deep light or any electromagnetic radiation can penetrate into a material. It is defined as the depth at which the intensity of the radiation inside the material falls to 1/e (about 37%) of its original value at (or more properly, just beneath) the surface.
When electromagnetic radiation is incident on the surface of a material, it may be (partly) reflected from that surface and there will be a field containing energy transmitted into the material. This electromagnetic field interacts with the atoms and electrons inside the material. Depending on the nature of the material, the electromagnetic field might travel very far into the material, or may die out very quickly. For a given material, penetration depth will generally be a function of wavelength.
Beer–Lambert law
According to the Beer–Lambert law, the intensity of an electromagnetic wave inside a material falls off exponentially from the surface as

$$I(z) = I_0\, e^{-\alpha z}.$$

If $\delta_p$ denotes the penetration depth, we have

$$\delta_p = 1/\alpha.$$

Penetration depth is one term that describes the decay of electromagnetic waves inside a material. The above definition refers to the depth $\delta_p$ at which the intensity or power of the field decays to 1/e of its surface value. In many contexts one is concentrating on the field quantities themselves: the electric and magnetic fields in the case of electromagnetic waves. Since the power of a wave in a particular medium is proportional to the square of a field quantity, one may speak of a penetration depth $\delta_e$ at which the magnitude of the electric (or magnetic) field has decayed to 1/e of its surface value, at which point the power of the wave has thereby decreased to $1/e^2$, or about 13%, of its surface value:

$$\delta_e = 2\delta_p = 2/\alpha.$$
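A small numeric sketch of the two decay lengths just defined; the attenuation constant chosen here is illustrative, not a value from the article.

```python
import math

alpha = 2.0e6                # intensity attenuation constant, 1/m (illustrative)

delta_p = 1 / alpha          # depth where intensity I falls to I0/e
delta_e = 2 / alpha          # depth where the field E falls to E0/e

z = delta_e
intensity_fraction = math.exp(-alpha * z)
print(delta_p, delta_e, intensity_fraction)  # at z = delta_e, I/I0 = 1/e^2 ≈ 0.135
```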
Note that $\delta_e$ is identical to the skin depth, the latter term usually applying to metals in reference to the decay of electrical currents (which follow the decay in the electric or magnetic field due to a plane wave incident on a bulk conductor). The attenuation constant is also identical
|
https://en.wikipedia.org/wiki/Weissberger%27s%20model
|
Weissberger’s modified exponential decay model, or simply, Weissberger’s model, is a radio wave propagation model that estimates the path loss due to the presence of one or more trees in a point-to-point telecommunication link. This model belongs to the category Foliage or Vegetation models.
Applicable to/under conditions
This model is applicable to cases of line-of-sight propagation; an example is microwave transmission.
This model is only applicable when the link is obstructed by some foliage, i.e. between the transmitter and the receiver.
This model is ideal for application in the situation where the LOS path is blocked by dense, dry and leafy trees.
Coverage
Frequency: 230 MHz to 95 GHz
Depth of foliage: up to 400 m
History
Formulated in 1982, this model is a development of the ITU Model for Exponential Decay (MED).
Mathematical formulation
Weissberger’s model is formally expressed as

$$L = \begin{cases} 1.33\, f^{0.284}\, d^{0.588}, & 14\ \text{m} < d \le 400\ \text{m} \\ 0.45\, f^{0.284}\, d, & 0\ \text{m} \le d \le 14\ \text{m} \end{cases}$$
where,
L = The loss due to foliage. Unit: decibels (dB)
f = The transmission frequency. Unit: gigahertz (GHz)
d = The depth of foliage along the path. Unit: meters (m)
Points to note
The equation is scaled for frequencies specified in the GHz range.
Depth of foliage must be specified in meters (m).
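A minimal implementation of the two-branch formula given above, as a sketch; the function name and the example link parameters are our own choices.

```python
def weissberger_loss_db(f_ghz: float, d_m: float) -> float:
    """Foliage loss (dB) from Weissberger's modified exponential decay model
    as given above; f in GHz, foliage depth d in meters, 0 <= d <= 400."""
    if not 0 <= d_m <= 400:
        raise ValueError("model is defined only for foliage depths up to 400 m")
    if d_m <= 14:
        return 0.45 * f_ghz**0.284 * d_m
    return 1.33 * f_ghz**0.284 * d_m**0.588

# Illustrative use: a 2.4 GHz link through 30 m of dense foliage.
print(round(weissberger_loss_db(2.4, 30), 1), "dB")  # foliage loss only
```

Note that this returns only the vegetation loss; per the limitations below, the free-space loss must be added separately.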
Limitations
This model is significant for frequency range 230 MHz to 95 GHz only, as pointed out by Blaunstein.
This model does not define the operation if the depth of vegetation is more than 400 m.
This model predicts the loss due to foliage only. The total path loss must be calculated by also including the free-space loss.
See also
Fresnel zone
Radio propagation model
|
https://en.wikipedia.org/wiki/Picotee
|
Picotee describes flowers whose edge is a different colour than the flower's base colour. The word originates from the French picoté, meaning 'marked with points'.
Examples
|
https://en.wikipedia.org/wiki/Early%20ITU%20model
|
The ITU vegetation model is a radio propagation model that estimates the path loss due to the presence of one or more trees inside a point-to-point telecommunication link. At low frequencies, its predictions are congruent with those of Weissberger’s modified exponential decay model.
History
The CCIR, the predecessor of the ITU, adopted this model in late 1986.
Applicable to/under conditions
This model is applicable to situations where the telecommunication link is obstructed by trees along its path.
This model is suitable for point-to-point microwave links that have vegetation in their path.
A typical application of this model is predicting the path loss for microwave links.
Coverage
Frequency: Not specified
Depth of Foliage: Not specified
Mathematical formulation
The model is formulated as:

$$L = 0.2\, f^{0.3}\, d^{0.6}$$
Where
L = The path loss. Unit: decibel (dB)
f = The frequency of transmission. Unit: megahertz (MHz)
d = The depth of foliage along the link. Unit: meters (m)
Points to note
This equation is scaled for frequencies specified in megahertz (MHz).
The depth of foliage must be in the units of meters.
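As with Weissberger's model, the formula translates directly into code. This sketch assumes the $0.2 f^{0.3} d^{0.6}$ form given above; the function name and inputs are illustrative.

```python
def itu_vegetation_loss_db(f_mhz: float, d_m: float) -> float:
    """Path loss (dB) from the early ITU vegetation model as given above;
    f in MHz, foliage depth d in meters."""
    return 0.2 * f_mhz**0.3 * d_m**0.6

# Illustrative use: a 900 MHz link through 20 m of trees.
print(round(itu_vegetation_loss_db(900.0, 20.0), 1), "dB")
```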
Limitations
The results of this model become impractical at high frequencies.
See also
Radio propagation model
Further reading
Introduction to RF Propagation, John S. Seybold, 2005, John Wiley and Sons.
|
https://en.wikipedia.org/wiki/ANSI/ISA-95
|
ANSI/ISA-95, or ISA-95 as it is more commonly referred to, is an international standard from the International Society of Automation for developing an automated interface between enterprise and control systems. This standard has been developed for global manufacturers. It was developed to be applied in all industries and in all sorts of processes, such as batch, continuous, and repetitive processes.
The objectives of ISA-95 are to provide consistent terminology that is a foundation for supplier and manufacturer communications, to provide consistent information models, and to provide consistent operations models, which are a foundation for clarifying application functionality and how information is to be used.
There are 5 parts of the ISA-95 standard.
ANSI/ISA-95.00.01-2000, Enterprise-Control System Integration Part 1: Models and Terminology consists of standard terminology and object models, which can be used to decide which information should be exchanged.
The models help define boundaries between the enterprise systems and the control systems. They help address questions such as which tasks can be executed by which function and what information must be exchanged between applications.
ISA-95 Models
Context
Hierarchy Models
Scheduling and control (Purdue)
Equipment hierarchy
Functional Data Flow Model
Manufacturing Functions
Data Flows
Object Models
Objects
Object Relationships
Object Attributes
Operations Activity Models
Operations Elements: PO, MO, QO, IO
Operations Data Flow Model
Operations Functions
Operations Flows
ANSI/ISA-95.00.02-2001, Enterprise-Control System Integration Part 2: Object Model Attributes consists of attributes for every object that is defined in part 1. The objects and attributes of Part 2 can be used for the exchange of information between different systems, but these objects and attributes can also be used as the basis for relational databases.
ANSI/ISA-95.00.03-2005, Enterprise-Control System Integration, Part 3: Models
|
https://en.wikipedia.org/wiki/Wayne%20Wesolowski
|
Wayne Wesolowski is a builder of miniature models.
Wesolowski's models have been exhibited at the Chicago Museum of Science and Industry, the Springfield, Illinois, Lincoln Home Site, the West Chicago City Museum, RailAmerican, and the National Railroad Museum.
One of his more noted works is a model of Abraham Lincoln's funeral train. This model took 4½ years to build and is 15 feet (4½ meters) long. Wesolowski appeared on an episode of Tracks Ahead featuring this train and his model of Lincoln's home.
Wesolowski has written scores of articles and four books on model building. He has been featured in videos shown on PBS television. Good Morning America selected and showed part of one tape as an example of video education.
Wesolowski holds a Ph.D. in chemistry from the University of Arizona and teaches chemistry there.
Publications
Notes
External links
Description of Lincoln funeral train project
Gallery of Wesolowski models
Introductory statement at University of Arizona
|
https://en.wikipedia.org/wiki/Flags%20of%20Asia
|
This is a gallery of international and national flags used in Asia.
Supranational and international flags
An incomplete list of flags representing intra-Asian international and supranational organisations, which omits intercontinental organisations such as the United Nations:
Flags of Asian sovereign states
Disputed or partially recognised states
Flags of Asian dependencies
Flags of Asian sub-divisions
China
Georgia
Iraq
Japan
Korea
Philippines
Russia
Uzbekistan
Flags of Asian cities
Flags of cities with over 1 million inhabitants.
Historical flags
Notes
See also
Flags of Africa
Flags of Europe
Flags of Oceania
Flags of North America
Flags of South America
Lists of flags of Asian countries
List of Afghan flags
List of Armenian flags
List of Azerbaijani flags
List of Bangladeshi flags
List of Bhutanese flags
List of Bruneian flags
List of Cambodian flags
List of Chinese flags
List of Cypriot flags
List of East Timorese flags
List of Egyptian flags
List of flags of Georgia (country)
List of Indian flags
List of flags of Indonesia
List of Iranian flags
List of flags of Iraq
List of flags of Israel
List of Japanese flags
List of Kazakh flags
List of North Korean flags
List of South Korean flags
List of Kuwaiti flags
List of Kyrgyz flags
List of flags of Laos
List of Malaysian flags
List of flags of the Maldives
List of Mongolian flags
List of Burmese flags
List of flags of Nepal
List of Omani flags
List of Pakistani flags
List of Palestinian flags
List of flags of the Philippines
List of Qatari flags
List of Russian flags
List of Saudi Arabian flags
List of Singaporean flags
List of Sri Lankan flags
List of Taiwanese flags
List of Thai flags
List of Turkish flags
List of Turkmen flags
List of Uzbek flags
List of flags of Vietnam
|
https://en.wikipedia.org/wiki/Biophysical%20Journal
|
The Biophysical Journal is a biweekly peer-reviewed scientific journal published by Cell Press on behalf of the Biophysical Society. The journal was established in 1960 and covers all aspects of biophysics.
The journal occasionally publishes special issues devoted to specific topics. In addition, a supplemental "abstracts issue" is published, containing abstracts of presentations at the Biophysical Society Annual Meeting. The editor-in-chief is Vasanthi Jayaraman.
History
The following persons are or have been editor-in-chief:
|
https://en.wikipedia.org/wiki/Tissue%20inhibitor%20of%20metalloproteinase
|
Tissue inhibitors of metalloproteinases (TIMPs) are specific endogenous protease inhibitors to the matrix metalloproteinases. There are four TIMPs; TIMP1, TIMP2, TIMP3 and TIMP4. TIMP3 has been observed progressively downregulated in Human papillomavirus-positive neoplastic keratinocytes derived from uterine cervical preneoplastic lesions at different levels of malignancy. For this reason, TIMP3 is likely to be associated with tumorigenesis and may be a potential prognostic marker for uterine cervical preneoplastic lesions progression.
Overall, all MMPs are inhibited by TIMPs once they are activated but the gelatinases (MMP-2 and MMP-9) can form complexes with TIMPs when the enzymes are in the latent form.
The complex of latent MMP-2 (pro-MMP-2) with TIMP-2 serves to facilitate the activation of pro-MMP-2 at the cell surface by MT1-MMP (MMP-14), a membrane-anchored MMP.
The role of the pro-MMP-9/TIMP-1 complex is still unknown.
|
https://en.wikipedia.org/wiki/Saprophagy
|
Saprophages are organisms that obtain nutrients by consuming decomposing dead plant or animal biomass. They are distinguished from detritivores in that saprophages are sessile consumers while detritivores are mobile. Typical saprophagic animals include sedentary polychaetes such as amphitrites (Amphitritinae, worms of the family Terebellidae) and other terebellids.
The eating of wood, whether live or dead, is known as xylophagy. The activity of animals feeding only on dead wood is called sapro-xylophagy and those animals, sapro-xylophagous.
Ecology
In food webs, saprophages generally play the role of decomposers. There are two main branches of saprophages, distinguished by nutrient source: necrophages, which consume dead animal biomass, and thanatophages, which consume dead plant biomass.
See also
Detritivore
Decomposer
Saprotrophic nutrition
Consumer-resource systems
|
https://en.wikipedia.org/wiki/Indiglo
|
Indiglo is a product feature on watches marketed by Timex, incorporating an electroluminescent panel as a backlight for even illumination of the watch dial.
The brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo, as the original watches featuring the technology emitted a green-blue light.
History
The Indiglo name was originally developed by Austin Innovations Inc.
Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including men's and women's watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995.
From 2006 to 2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, microprocessor-controlled movement. To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework, and used hands treated with Super-LumiNova luminescent pigment for low-light legibility rather than Indiglo technology.
When the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities as the TX brand at a much lower price point, incorporating Indiglo technology rather than the Super-LumiNova pigments.
Design
Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB, use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display.
|
https://en.wikipedia.org/wiki/Wisconsin%20Integrally%20Synchronized%20Computer
|
The Wisconsin Integrally Synchronized Computer (WISC) was an early digital computer designed and built at the University of Wisconsin–Madison. Operational in 1954, it was the first digital computer in the state.
Pioneering computer designer Gene Amdahl drafted the WISC's design as his PhD thesis. The computer was built over the period 1951-1954. It had 1,024 50-bit words (equivalent to about 6 KB) of drum memory, with an operation time of 1/15 second and throughput of 60 operations per second, which was achieved by an early form of instruction pipeline. It was capable of both fixed and floating point operation.
Part of it was at the Computer History Museum until about 2020, when it was moved to an unknown location.
|
https://en.wikipedia.org/wiki/TJ-2
|
TJ-2 (Type Justifying Program) was published by Peter Samson in May 1963 and is thought to be the first page layout program. Although it lacks page numbering, page headers and footers, TJ-2 is the first word processor to provide a number of essential typographic alignment and automatic typesetting features:
Columnation, indentation, margins, justification, and centering
Word wrap, page breaks and automatic hyphenation
Tab stop simulation
Developed from two earlier Samson programs, Justify and TJ-1, TJ-2 was written for the PDP-1 that was donated to the Massachusetts Institute of Technology in 1961 by Digital Equipment Corporation.
Taking English text as input, TJ-2 aligns left and right margins, justifying the output using white space and word hyphenation. Text is marked up with single lowercase characters combined with the PDP-1's overline character, carriage returns, and internal concise codes. The computer's six toggle switches control the input and output devices, enable and disable hyphenation, and stop the session. Words can be hyphenated with a light pen on the computer's CRT display and from the session's dictionary in memory. On-screen hyphenation has SAVE and FORGET commands, and OOPS, the undo.
Comments in the code were quoted thirty years later: "The ways of God are just and can be justified to man" and "Girls who wear pants should be sure that the end justifies the jeans."
TJ-2 was succeeded by TYPSET and RUNOFF, a pair of complementary programs written in 1964 for the CTSS operating system. TYPSET and RUNOFF soon evolved into runoff for Multics, which was in turn ported to Unix in the 1970s as roff.
A similar program for the ITS PDP-6 and later the PDP-10 was TJ6.
See also
Colossal Typewriter
Desktop publishing
Expensive Typewriter
Peter Samson
Text editor
Text Editor and Corrector (TECO)
TYPSET and RUNOFF
Notes
|
https://en.wikipedia.org/wiki/Aquiline%20nose
|
An aquiline nose (also called a Roman nose) is a human nose with a prominent bridge, giving it the appearance of being curved or slightly bent. The word aquiline comes from the Latin word aquilinus ("eagle-like"), an allusion to the curved beak of an eagle. While some have ascribed the aquiline nose to specific ethnic, racial, or geographic groups, and in some cases associated it with other supposed non-physical characteristics (i.e. intelligence, status, personality, etc., see below), no scientific studies or evidence support any such linkage. As with many phenotypical expressions (e.g. 'widow's peak', eye color, earwax type) it is found in many geographically diverse populations.
Distribution
Some writers in the field of racial typology have attributed aquiline noses as a characteristic of different peoples or races; e.g.: according to anthropologist Jan Czekanowski, it is most frequently found amongst members of the Arabid race and Armenoid race. It is also often seen in the Mediterranean race and Dinarid race, where it is known as the "Roman nose" when found amongst Italians, the Southern French, Portuguese and Spanish. Racial theorist William Z. Ripley argued that it is characteristic of peoples of Teutonic descent.
In racialist discourse
In racialist discourse, especially that of post-Enlightenment Western scientists and writers, a Roman nose has been characterized as a marker of beauty and nobility, but the notion itself is found early on in Plutarch, in his description of Mark Antony. The supposed science of physiognomy, popular during the Victorian era, made the "prominent" nose a marker of Aryanness: "the shape of the nose and the cheeks indicated, like the forehead's angle, the subject's social status and level of intelligence. A Roman nose was superior to a snub nose in its suggestion of firmness and power, and heavy jaws revealed a latent sensuality and coarseness".
Among Native Americans
The aquiline nose was deemed a distinctive feature of some Native Americans
|
https://en.wikipedia.org/wiki/Xat%C3%B3
|
Xató () is a typical Catalan dish. It is a sauce made with almonds, hazelnuts, breadcrumbs, vinegar, garlic, olive oil, salt, and the nyora pepper. The sauce is often served with an endive salad prepared with anchovy, tuna and dried and salted cod (bacallà).
The "Xató Route" is formed by the following Catalan towns: Canyelles, Calafell, Cubelles, Cunit, El Vendrell, Sant Pere de Ribes, Sitges and Vilanova i la Geltrú. There is a recipe for each town on the 'Xató route'.
Catalonia
The origin of xató lies in the world of wine. When the new wine was about to be tasted, a key ceremony in the whole process took place: fitting a small tap (l'aixetó) that allowed the wine to be drawn from the cask. This moment marked the beginning of the new wine festival, a celebration accompanied by a meal made up of salty ingredients such as fish, which were found in the houses of local farmers and fishermen, served with leaves of the vegetable corresponding to the winter season and a salad with a special sauce. This ritual meal accompanying the ceremony of tapping the wine cask is the origin of the current xató.
Nevertheless, the paternity of this traditional dish in the Penedès and Garraf regions continues to be disputed. Currently, practically all the towns of the Gran Penedès have their own variant of the recipe, and traditional xatonades, popular gatherings at which participants taste this dish, have become common in the region.
Ambassador of Xató
1998-1999 : Ferran Adrià
1999-2000 : Xavier Mestres
2000-2001 : Carles Gaig
2001-2002 : Jordi LP
2002-2003 : La Cubana
2003-2004 : Toni Albà
2004-2005 : Rosa Andreu
2005-2006 : Pere Tàpies
2006-2007 : Lax'n'busto
2007-2008 : Anna Barrachina
2008-2009 : Montserrat Estruch
2009-2010 : Oriol Llavina
See also
Salsa Romesco, a Catalan nut and red pepper-based sauce
List of almond dishes
External links
Recipe, in Spanish
Recipe, translated from Spanish
Officia
|
https://en.wikipedia.org/wiki/Mixed%20connective%20tissue%20disease
|
Mixed connective tissue disease, commonly abbreviated as MCTD, is an autoimmune disease characterized by the presence of elevated blood levels of a specific autoantibody, now called anti-U1 ribonucleoprotein (RNP) together with a mix of symptoms of systemic lupus erythematosus (SLE), scleroderma, and polymyositis. The idea behind the "mixed" disease is that this specific autoantibody is also present in other autoimmune diseases such as systemic lupus erythematosus, polymyositis, scleroderma, etc. MCTD was characterized as an individual disease in 1972 by Sharp et al., and the term was introduced by Leroy in 1980.
It is sometimes said to be the same as undifferentiated connective tissue disease, but other experts specifically reject this idea because undifferentiated connective tissue disease is not necessarily associated with serum antibodies directed against the U1-RNP, and MCTD is associated with a more clearly defined set of signs/symptoms.
Signs and symptoms
MCTD combines features of scleroderma, polymyositis, systemic lupus erythematosus, and rheumatoid arthritis (with some sources adding myositis, dermatomyositis, and inclusion body myositis) and is thus considered an overlap syndrome.
The initial clinical manifestations of MCTD usually are unspecific. They can consist of general malaise, arthralgias, myalgias, and fever. The specific signs to suspect this disease is the presence of positive antinuclear antibodies (ANA), specifically anti-RNP, associated with Raynaud's phenomenon. Almost every organ can be affected by MCTD. Raynaud's phenomenon is the most common presenting symptom seen in patients, with arthralgia and swollen hands being the second and third most common respectively. With patients that meet full criteria for MCTD, arthritis is the most common symptom with Raynaud's, swollen hands, leukopenia/lymphopenia, and heartburn following in descending order. A 2016 epidemiological population based study found 3.6 years to be the average amount of ti
|
https://en.wikipedia.org/wiki/Digital%20cross-connect%20system
|
A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams, as well as newer SONET/SDH bit streams.
DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels.
It is important to realize that while DCS devices "switch" traffic, they are not packet switches—they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently, and operate on micro- or millisecond time spans.
DCS units are also sometimes colloquially called "DACS" units, after a proprietary brand name of DCS units created and sold by AT&T's Western Electric division, now Alcatel-Lucent.
Modern digital access and cross-connect systems are not limited to the T-carrier system, and may accommodate high data rates such as those of SONET.
Transmuxing
Transmuxing (transmux: transcode multiplexing) is a telecommunications signaling format change between two signaling methods, typically between synchronous optical network (SONET) signals and various time-division multiplexing (TDM) signals. Transmuxing changes the “container” without changing the “contents.” It provides the carrier the capability to move a telecommunications signal from one logical TDM circuit to another within SONET without physically breaking down the
|
https://en.wikipedia.org/wiki/Flags%20of%20Oceania
|
This is a gallery of national flags of Oceania.
Flags of Oceanian sovereign states
Flags of Oceanian dependencies and other territories
Flags of Oceanian sub-divisions
States of Australia
Territories of Australia
Associated states of New Zealand
Regions of New Zealand
Components of the Federated States of Micronesia
Components of French Polynesia
States of the United States
Flags of Oceanian cities
Flags of cities with over 1 million inhabitants.
Historical flags
See also
Lists of flags of Oceanian countries
List of Australian flags
List of Fijian flags
List of Nauruan flags
List of New Zealand flags
List of Palauan flags
List of Papua New Guinean flags
List of Samoan flags
List of Vanuatuan flags
Notes
|
https://en.wikipedia.org/wiki/Gonfalon
|
The gonfalon, gonfanon, gonfalone (from the early Italian confalone) is a type of heraldic flag or banner, often pointed, swallow-tailed, or with several streamers, and suspended from a crossbar in an identical manner to the ancient Roman vexillum. It was first adopted by Italian medieval communes, and later, by local guilds, corporations and districts. The difference between a gonfalon with long tails and a standard is that a gonfalon displays the device on the non-tailed area, and the standard displays badges down the whole length of the flag.
Background
A gonfalon can include a badge or coat of arms, or decoration. Today, every Italian comune (municipality) has a gonfalon sporting its coat of arms. The gonfalon has long been used for ecclesiastical ceremonies and processions. The papal "ombrellino", a symbol of the pope, is often mistakenly called "gonfalone" by the Italians because the pope's ceremonial umbrella was often depicted on the banner.
Gonfalone was originally the name given to a neighbourhood meeting in medieval Florence, each neighbourhood having its own flag and coat of arms, leading to the word gonfalone eventually becoming associated with the flag.
Gonfalons are also used in some university ceremonies, such as those at The College of New Jersey, University of Chicago,
Rowan University, Rutgers University, Princeton University, University of Toronto, Loyola University New Orleans, the University of St. Thomas, and the University of Western Ontario.
A Gonfalon of State (Dutch: Rijksvaandel or Rijksbanier) is part of the Regalia of the Netherlands. The banner is made of silk and it has been painted with the sovereign's coat of arms. The Gonfalon of State is only used when a new king or queen is sworn in.
A picture of a gonfalon is itself a heraldic charge in the coat of arms of the Counts Palatine of Tübingen and their cadet branches.
Religious significance
These religious objects consisted of a cloth, usually of canvas but occasionally of sil
|
https://en.wikipedia.org/wiki/Horse%20behavior
|
Horse behavior is best understood from the view that horses are prey animals with a well-developed fight-or-flight response. Their first reaction to a threat is often to flee, although sometimes they stand their ground and defend themselves or their offspring in cases where flight is untenable, such as when a foal would be threatened.
Nonetheless, because of their physiology horses are also suited to a number of work and entertainment-related tasks. Humans domesticated horses thousands of years ago, and they have been used by humans ever since. Through selective breeding, some breeds of horses have been bred to be quite docile, particularly certain large draft horses. On the other hand, most light horse riding breeds were developed for speed, agility, alertness, and endurance; building on natural qualities that extended from their wild ancestors.
Horses' instincts can be used to human advantage to create a bond between human and horse. These techniques vary, but are part of the art of horse training.
The "fight-or-flight" response
Horses evolved from small mammals whose survival depended on their ability to flee from predators (for example: wolves, big cats, bears). This survival mechanism still exists in the modern domestic horse. Humans have removed many predators from the life of the domestic horse; however, its first instinct when frightened is to escape. If running is not possible, the horse resorts to biting, kicking, striking or rearing to protect itself. Many of the horse's natural behavior patterns, such as herd-formation and social facilitation of activities, are directly related to their being a prey species.
The fight-or-flight response involves nervous impulses which result in hormone secretions into the bloodstream. When a horse reacts to a threat, it may initially "freeze" in preparation to take flight. The fight-or-flight reaction begins in the amygdala, which triggers a neural response in the hypothalamus. The initial reaction is followed b
|
https://en.wikipedia.org/wiki/Early%20effect
|
The Early effect, named after its discoverer James M. Early, is the variation in the effective width of the base in a bipolar junction transistor (BJT) due to a variation in the applied base-to-collector voltage. A greater reverse bias across the collector–base junction, for example, increases the collector–base depletion width, thereby decreasing the width of the charge carrier portion of the base.
Explanation
In Figure 1, the neutral (i.e. active) base is green, and the depleted base regions are hashed light green. The neutral emitter and collector regions are dark blue and the depleted regions hashed light blue. Under increased collector–base reverse bias, the lower panel of Figure 1 shows a widening of the depletion region in the base and the associated narrowing of the neutral base region.
The collector depletion region also increases under reverse bias, more than does that of the base, because the collector is less heavily doped than the base. The principle governing these two widths is charge neutrality. The narrowing of the collector does not have a significant effect as the collector is much longer than the base. The emitter–base junction is unchanged because the emitter–base voltage is the same.
Base-narrowing has two consequences that affect the current:
There is a lesser chance for recombination within the "smaller" base region.
The charge gradient is increased across the base, and consequently, the current of minority carriers diffusing across the base to the collector increases.
Both these factors increase the collector or "output" current of the transistor with an increase in the collector voltage, but only the second is called Early effect. This increased current is shown in Figure 2. Tangents to the characteristics at large voltages extrapolate backward to intercept the voltage axis at a voltage called the Early voltage, often denoted by the symbol VA.
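A sketch of the standard textbook large-signal approximation that captures this behavior, $I_C = I_S\, e^{V_{BE}/V_T}(1 + V_{CE}/V_A)$; all parameter values below are illustrative, and the formula is the common first-order model rather than anything specific to this article.

```python
import math

I_S = 1e-15   # saturation current, A (illustrative)
V_T = 0.026   # thermal voltage at room temperature, V
V_A = 80.0    # Early voltage, V (illustrative)

def collector_current(v_be: float, v_ce: float) -> float:
    # Early effect enters through the (1 + V_CE / V_A) factor.
    return I_S * math.exp(v_be / V_T) * (1 + v_ce / V_A)

# The output characteristic tilts upward with V_CE; extrapolating its
# tangent backward intercepts the voltage axis at -V_A.
for v_ce in (1.0, 5.0, 10.0):
    print(v_ce, collector_current(0.65, v_ce))
```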
Large-signal model
In the forward active region the Early e
|
https://en.wikipedia.org/wiki/Collagenase
|
Collagenases are enzymes that break the peptide bonds in collagen. They assist in destroying extracellular structures in the pathogenesis of bacteria such as Clostridium. They are considered a virulence factor, facilitating the spread of gas gangrene. They normally target the connective tissue in muscle cells and other body organs.
Collagen, a key component of the animal extracellular matrix, is made through cleavage of pro-collagen by collagenase once it has been secreted from the cell. This stops large structures from forming inside the cell itself.
In addition to being produced by some bacteria, collagenase can be made by the body as part of its normal immune response. This production is induced by cytokines, which stimulate cells such as fibroblasts and osteoblasts, and can cause indirect tissue damage.
Therapeutic uses
Collagenases have been approved for medical uses for:
treatment of Dupuytren's contracture and Peyronie's disease (Xiaflex).
wound healing (Santyl)
cellulite (Qwo)
The MEROPS M9 family
This group of metallopeptidases constitutes the MEROPS peptidase family M9, subfamilies M9A and M9B (microbial collagenase, clan MA(E)). The protein fold of the peptidase domain for members of this family resembles that of thermolysin, the type example for clan MA and the predicted active site residues for members of this family and thermolysin occur in the motif HEXXH.
Microbial collagenases have been identified from bacteria of both the Vibrio and Clostridium genera. Collagenase is used during bacterial attack to degrade the collagen barrier of the host during invasion. Vibrio bacteria are sometimes used in hospitals to remove dead tissue from burns and ulcers. Clostridium histolyticum is a pathogen that causes gas gangrene; nevertheless, the isolated collagenase has been used to treat bed sores. Collagen cleavage occurs at the Xaa–Gly bond in Vibrio bacteria and at Gly–Xaa bonds in Clostridium collagenases.
Analysis of the primary structure of the gene product from C
|
https://en.wikipedia.org/wiki/Dermott%27s%20law
|
Dermott's law is an empirical formula for the orbital period of major satellites orbiting planets in the Solar System. It was identified by the celestial mechanics researcher Stanley Dermott in the 1960s and takes the form:
T(n) = T(0)·C^n, for n = 1, 2, 3, 4, …
where T(n) is the orbital period of the nth satellite, T(0) is of the order of days, and C is a constant of the satellite system in question. Specific values are:
Jovian system: T(0) = 0.444 d, C = 2.03
Saturnian system: T(0) = 0.462 d, C = 1.59
Uranian system: T(0) = 0.760 d, C = 1.80
Such power-laws may be a consequence of collapsing-cloud models of planetary and satellite systems possessing various symmetries; see Titius-Bode law. They may also reflect the effect of resonance-driven commensurabilities in the various systems.
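As a quick numerical illustration, the sketch below evaluates the formula with the Jovian constants quoted above, assuming (as one common indexing convention) that the Galilean moons occupy n = 2 through 5; the observed periods are approximate:

    def dermott_period(n, t0, c):
        """Orbital period predicted by Dermott's law, T(n) = T(0) * C**n, in days."""
        return t0 * c ** n

    # Jovian system: T(0) = 0.444 d, C = 2.03; Galilean moons assumed at n = 2..5.
    observed = {"Io": 1.77, "Europa": 3.55, "Ganymede": 7.15, "Callisto": 16.69}
    for n, (name, t_obs) in enumerate(observed.items(), start=2):
        print(f"{name}: predicted {dermott_period(n, 0.444, 2.03):.2f} d, observed {t_obs} d")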
|
https://en.wikipedia.org/wiki/Commensurability%20%28astronomy%29
|
Commensurability is the property of two orbiting objects, such as planets, satellites, or asteroids, whose orbital periods are in a rational proportion.
Examples include the 2:3 commensurability between the orbital periods of Neptune and Pluto, the 3:4 commensurability between the orbital periods of the Saturnian satellites Titan and Hyperion, the orbital periods associated with the Kirkwood gaps in the asteroid belt relative to that of Jupiter, and the 2:1 commensurability between Gliese 876 b and Gliese 876 c.
Commensurabilities are normally the result of an orbital resonance, rather than being due to coincidence.
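A ratio of observed periods can be checked for a low-order commensurability in a few lines; a minimal sketch follows, using approximate period values:

    from fractions import Fraction

    t_neptune = 164.8   # orbital period in years (approximate)
    t_pluto = 247.9

    # Find the simplest small-denominator rational close to the period ratio.
    ratio = Fraction(t_neptune / t_pluto).limit_denominator(10)
    print(ratio)  # 2/3, the Neptune-Pluto commensurability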
See also
Harmonic
Ratio
|
https://en.wikipedia.org/wiki/Cash-flow%20return%20on%20investment
|
Cash-flow return on investment (CFROI) is a valuation model that assumes the stock market sets prices based on cash flow, not on corporate performance and earnings.
For the corporation, it is essentially the internal rate of return (IRR). CFROI is compared to a hurdle rate to determine whether an investment or product is performing adequately. The hurdle rate is the total cost of capital for the corporation, calculated from a mix of the cost of debt financing plus investors' expected return on equity investments. The CFROI must exceed the hurdle rate to satisfy both the debt financing and the investors' expected return.
Michael J. Mauboussin, in his 2006 book More Than You Know: Finding Financial Wisdom in Unconventional Places, quoted an analysis by Credit Suisse First Boston finding that, measured by CFROI, the performance of companies tends to converge after five years in terms of their survival rates.
The CFROI for a firm or a division can then be written as follows:
CFROI = (gross cash flow − economic depreciation) / gross investment
The economic depreciation is the annuity that sets aside, at the cost of capital, enough to replace the assets at the end of their life:
economic depreciation = Kc × kc / [(1 + kc)^n − 1]
where n is the expected life of the asset, Kc is the replacement cost in current dollars, and kc is the cost of capital.
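A minimal sketch of these calculations, assuming the annuity formulation above (all input numbers are made up for illustration):

    def economic_depreciation(replacement_cost, cost_of_capital, life_years):
        """Annuity that, compounded at the cost of capital, funds asset replacement."""
        kc, n = cost_of_capital, life_years
        return replacement_cost * kc / ((1 + kc) ** n - 1)

    def cfroi(gross_cash_flow, gross_investment, replacement_cost, cost_of_capital, life_years):
        ed = economic_depreciation(replacement_cost, cost_of_capital, life_years)
        return (gross_cash_flow - ed) / gross_investment

    # Illustrative numbers only: $100M invested, $150M replacement cost after 10 years,
    # $20M annual gross cash flow, 8% cost of capital.
    print(cfroi(20e6, 100e6, 150e6, 0.08, 10))  # ≈ 0.096, to be compared against the hurdle rate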
See also
Return of capital
|
https://en.wikipedia.org/wiki/Incidental%20medical%20findings
|
Incidental medical findings are previously undiagnosed medical or psychiatric conditions that are discovered unintentionally and during evaluation for a medical or psychiatric condition. Such findings may occur in a variety of settings, including routine medical care, during biomedical research, during post-mortem autopsy, or during genetic testing.
Medical imaging
An incidentaloma is a tumor found by coincidence which is often benign and does not cause any clinically significant symptoms; however a small percentage do turn out to be malignant. Incidentalomas are common, with up to 7% of all patients over 60 harboring a benign growth, often of the adrenal gland, which is detected when diagnostic imaging is used for the analysis of unrelated symptoms.
As up to 37% of patients receiving a whole-body CT scan may have abnormal findings that need further evaluation, and with the increase of "whole-body CT scanning" as part of health screening programs, the chance of finding incidentalomas is expected to increase.
Neuroimaging
Incidental findings in neuroimaging are common, with the prevalence of neoplastic incidental brain findings increasing with age.
Even in healthy subjects acting as controls in research, incidental findings are not rare. As most neuroimaging studies are performed in adults, less is known about the prevalence of incidental findings in children. A 2017 study of nearly 4000 children aged 8 to 12 reported that approximately 1 in 200 showed asymptomatic incidental findings that required clinical follow-up.
Pituitary adenomas are tumors that occur in the pituitary gland, and account for about 15% of intracranial neoplasms. They often remain undiagnosed, and are often an incidental finding during autopsy. Microadenomas (<10 mm) have an estimated prevalence of 16.7% (14.4% in autopsy studies and 22.5% in radiologic studies).
Genetic testing
Unintentional genetic findings (aka "incidentalomes") are more commonly encountered with the advent of biomed
|
https://en.wikipedia.org/wiki/Inexact%20differential
|
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form. In contrast, an integral of an exact differential is always path independent since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. I.e., its value cannot be inferred just by looking at the initial and final states of a given system. Inexact differentials are primarily used in calculations involving heat and work because they are path functions, not state functions.
Definition
An inexact differential δQ is a differential for which the integral over two paths with the same end points is different. Specifically, there exist integrable paths γ₁ and γ₂ with the same start and end points such that ∫_{γ₁} δQ ≠ ∫_{γ₂} δQ.
In this case, we denote the integrals as ΔQ_{γ₁} and ΔQ_{γ₂} respectively, to make explicit the path dependence of the change in the quantity Q we are considering.
More generally, an inexact differential is a differential form which is not an exact differential, i.e., for all functions f, δQ ≠ df.
The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials since their variation is inconsistent along different paths. This stipulation of path independence is a necessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path in between two points defined by a function.
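Path dependence is easy to exhibit numerically. The sketch below integrates the (inexact) one-form x dy along two different paths from (0, 0) to (1, 1) and obtains two different values; the form and the paths are chosen purely for illustration:

    import numpy as np

    def integrate_x_dy(path):
        """Midpoint-rule line integral of the one-form x dy along a polyline of (x, y) points."""
        x, y = path[:, 0], path[:, 1]
        x_mid = 0.5 * (x[1:] + x[:-1])
        return float(np.sum(x_mid * np.diff(y)))

    t = np.linspace(0.0, 1.0, 1001)
    # Path A: along the x-axis to (1, 0), then straight up to (1, 1).
    path_a = np.column_stack([np.concatenate([t, np.ones_like(t)]),
                              np.concatenate([np.zeros_like(t), t])])
    # Path B: up the y-axis to (0, 1), then straight across to (1, 1).
    path_b = np.column_stack([np.concatenate([np.zeros_like(t), t]),
                              np.concatenate([t, np.ones_like(t)])])
    print(integrate_x_dy(path_a), integrate_x_dy(path_b))  # ≈ 1.0 and 0.0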
Notation
Thermodynamics
Instead of the differential symbol d, the symbol δ is used, a convention wh
|
https://en.wikipedia.org/wiki/Sensitivity%20and%20specificity
|
Sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives:
Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.
If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnostic and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa.
A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.
A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc.
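In code, both quantities fall directly out of the confusion-matrix counts; a minimal sketch with made-up counts:

    def sensitivity(tp, fn):
        """True positive rate: P(positive test | condition present)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """True negative rate: P(negative test | condition absent)."""
        return tn / (tn + fp)

    # Illustrative counts: 90 true positives, 10 false negatives,
    # 950 true negatives, 50 false positives.
    print(sensitivity(90, 10))   # 0.90
    print(specificity(950, 50))  # 0.95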
The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947.
There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymo
|
https://en.wikipedia.org/wiki/Heteroduplex%20analysis
|
Heteroduplex analysis (HDA) is a method in biochemistry used since 1992 to detect point mutations in DNA (deoxyribonucleic acid). Heteroduplexes are dsDNA molecules that have one or more mismatched base pairs, whereas homoduplexes are dsDNA molecules that are perfectly paired. The method depends on the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of amplified wild-type and mutant DNA, heteroduplexes form between mutant and wild-type strands, while homoduplexes form between identical strands. There are two types of heteroduplexes, depending on the type and extent of the mutation. Small deletions or insertions create bulge-type heteroduplexes, which are stable and can be visualized by electron microscopy. Single-base substitutions create less stable bubble-type heteroduplexes, which, because of their low stability, are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene.
|
https://en.wikipedia.org/wiki/Annexin%20A5%20affinity%20assay
|
In molecular biology, an annexin A5 affinity assay is a test to quantify the number of cells undergoing apoptosis. The assay uses the protein annexin A5 to tag apoptotic and dead cells, and the numbers are then counted using either flow cytometry or a fluorescence microscope.
The annexin A5 protein binds to apoptotic cells in a calcium-dependent manner via phosphatidylserine-containing membrane surfaces, which are normally present only on the inner leaflet of the membrane.
Background
Apoptosis is a form of programmed cell death that is used by the body to remove unwanted, damaged, or senescent cells from tissues. Removal of apoptotic cells is carried out via phagocytosis by white blood cells such as macrophages and dendritic cells. Phagocytic white blood cells recognize apoptotic cells by their exposure of negatively charged phospholipids (phosphatidylserine) on the cell surface.
In normal cells, the negative phospholipids reside on the inner side of the cellular membrane while the outer surface of the membrane is occupied by uncharged phospholipids. After a cell has entered apoptosis, the negatively charged phospholipids are transported to the outer cell surface by a hypothetical protein known as scramblase. Phagocytic white blood cells express a receptor that can bind to and detect the negatively charged phospholipids on the apoptotic cell surfaces. After detection the apoptotic cells are removed.
Detection of cell death with annexin A5
In healthy individuals, apoptotic cells are rapidly removed by phagocytes. However, in pathological processes, the removal of apoptotic cells may be delayed or even absent. Dying cells in tissue can be detected with annexin A5. Labeling of annexin A5 with fluorescent or radioactive molecules makes it possible to detect binding of labeled annexin A5 to the cell surface of apoptotic cells. After binding to the phospholipid surface, annexin A5 assembles into a trimeric cluster. This trimer consists of three annexin A5 molecules that ar
|
https://en.wikipedia.org/wiki/Alcohol%20consumption%20recommendations
|
Recommendations for consumption of the drug alcohol (also known formally as ethanol) vary from recommendations to be alcohol-free to daily or weekly drinking "safe limits" or maximum intakes. Many governmental agencies and organizations have issued guidelines. These recommendations concerning maximum intake are distinct from any legal restrictions, for example countries with drunk driving laws or countries that have prohibited alcohol.
General recommendations
These guidelines apply to men, and women who are neither pregnant nor breastfeeding.
Alcohol-free recommendations
The World Health Organization published a statement in The Lancet Public Health in April 2023 that "there is no safe amount that does not affect health".
The 2023 Nordic Nutrition Recommendations state: "Since no safe limit for alcohol consumption can be provided, the recommendation in NNR2023 is that everyone should avoid drinking alcohol."
The American Heart Association recommends that those who do not already consume alcoholic beverages should not start doing so because of the negative long-term effects of alcohol consumption.
The Canadian Centre on Substance Use and Addiction states "Not drinking has benefits, such as better health, and better sleep."
Alcohol intake recommendations by country
Some governments set the same recommendation for both sexes, while others give separate limits. The guidelines give drink amounts in a variety of formats, such as standard drinks, fluid ounces, or milliliters, but have been converted to grams of ethanol for ease of comparison.
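The conversion itself is simple arithmetic: multiply the beverage volume by its alcohol fraction and by the density of ethanol (about 0.789 g/mL). A minimal sketch:

    ETHANOL_DENSITY_G_PER_ML = 0.789

    def ethanol_grams(volume_ml, abv_percent):
        """Grams of pure ethanol in a drink of the given volume and strength."""
        return volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML

    print(round(ethanol_grams(330, 5.0), 1))   # 330 mL beer at 5% ABV  -> ~13.0 g
    print(round(ethanol_grams(150, 12.0), 1))  # 150 mL wine at 12% ABV -> ~14.2 g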
Overall, the daily limits range from 10–37 g per day for men and 10–16 g per day for women. Weekly limits range from 27–170 g/week for men and 27–140 g/week for women. The weekly limits are lower than seven times the daily limits, meaning intake on a particular day may be higher than one-seventh of the weekly amount, but consumption on other days of the week should then be lower. The limits for women are consistently lower than those for m
|
https://en.wikipedia.org/wiki/Minimum%20inhibitory%20concentration
|
In microbiology, the minimum inhibitory concentration (MIC) is the lowest concentration of a chemical, usually a drug, which prevents visible in vitro growth of bacteria or fungi. MIC testing is performed in both diagnostic and drug discovery laboratories.
The MIC is determined by preparing a dilution series of the chemical, adding agar or broth, then inoculating with bacteria or fungi, and incubating at a suitable temperature. The value obtained is largely dependent on the susceptibility of the microorganism and the antimicrobial potency of the chemical, but other variables can affect results too. The MIC is often expressed in micrograms per milliliter (μg/mL) or milligrams per liter (mg/L).
In diagnostic labs, MIC test results are used to grade the susceptibility of microbes. These grades are assigned based on agreed upon values called breakpoints. Breakpoints are published by standards development organizations such as the U.S. Clinical and Laboratory Standards Institute (CLSI), the British Society for Antimicrobial Chemotherapy (BSAC) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST). The purpose of measuring MICs and grading microbes is to enable physicians to prescribe the most appropriate antimicrobial treatment.
The first step in drug discovery is often measurement of the MICs of biological extracts, isolated compounds or large chemical libraries against bacteria and fungi of interest. MIC values provide a quantitative measure of an extract or compound’s antimicrobial potency. The lower the MIC, the more potent the antimicrobial. When in vitro toxicity data is available, MICs can also be used to calculate selectivity index values, a measure of off-target to target toxicity.
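Reading a MIC off a completed dilution series amounts to taking the lowest concentration that shows no visible growth; a minimal sketch follows, using a hypothetical two-fold series with growth flags as would be scored by eye or by turbidity:

    def mic(results):
        """Lowest concentration with no visible growth, or None if growth at all levels.

        results: iterable of (concentration_ug_per_ml, visible_growth) pairs.
        """
        inhibitory = [c for c, growth in results if not growth]
        return min(inhibitory) if inhibitory else None

    # Hypothetical two-fold dilution series, scored after incubation:
    series = [(64, False), (32, False), (16, False), (8, True), (4, True), (2, True)]
    print(mic(series))  # 16 (μg/mL)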
History
After the discovery and commercialization of antibiotics, microbiologist, pharmacologist, and physician Alexander Fleming developed the broth dilution technique using the turbidity of the broth for assessment. This is commonly believed to be
|
https://en.wikipedia.org/wiki/Vagovagal%20reflex
|
Vagovagal reflex refers to gastrointestinal tract reflex circuits where afferent and efferent fibers of the vagus nerve coordinate responses to gut stimuli via the dorsal vagal complex in the brain. The vagovagal reflex controls contraction of the gastrointestinal muscle layers in response to distension of the tract by food. This reflex also allows for the accommodation of large amounts of food in the gastrointestinal tracts.
The vagus nerve, composed of both sensory afferents and parasympathetic efferents, carries signals from stretch receptors, osmoreceptors, and chemoreceptors to the dorsal vagal complex, where the signal may be further transmitted to autonomic centers in the medulla. Efferent fibers of the vagus then carry signals to the gastrointestinal tract, as far as two-thirds of the way along the transverse colon (coinciding with the second GI watershed point).
Function
The vagovagal reflex is active during the receptive relaxation of the stomach in response to swallowing of food (prior to it reaching the stomach). When food enters the stomach a "vagovagal" reflex goes from the stomach to the brain, and then back again to the stomach causing active relaxation of the smooth muscle in the stomach wall. If vagal innervation is interrupted then intra-gastric pressure increases. This is a potential cause of vomiting due to the inability of the proximal stomach smooth muscle to undergo receptive relaxation.
The vagal afferents are activated during the gastric phase of digestion when the corpus and fundus of the stomach are distended secondary to the entry of a food bolus. The stimulation of the mechanical receptors located in the gastric mucosa stimulates the vagus afferents. The completion of the reflex circuit by vagus efferents leads to the stimulation of postganglionic muscarinic nerves. These nerves release acetylcholine to stimulate two end effects. One, the parietal cells in the body of the stomach are stimulated to release H+. Two, the ECL cells of the lamina propria of t
|
https://en.wikipedia.org/wiki/Quotition%20and%20partition
|
In arithmetic, quotition and partition are two ways of viewing fractions and division.
In quotition division one asks, "how many parts are there?"; while in partition division one asks, "what is the size of each part?".
For example, the expression 6 ÷ 2 can be construed in either of two ways:
"How many parts of size 2 must be added to get 6?" (Quotition division)
One can write 6 = 2 + 2 + 2.
Since it takes 3 parts, the conclusion is that 6 ÷ 2 = 3.
"What is the size of each of 2 equal parts whose sum is 6?" (Partition division)
One can write 6 = 3 + 3.
Since the size of each part is 3, the conclusion is that 6 ÷ 2 = 3.
It is a fact of elementary arithmetic that the numerical answer is always the same whichever way it is put: 6 ÷ 2 = 3. This is essentially equivalent to the commutativity of multiplication.
Division involves thinking about a whole in terms of its parts. One frequent special case of division, that into a natural number (positive integer) of equal parts, is known to teachers as partition or sharing: the whole is split into an integer number of equal parts. What quotition focuses on is explained by removing the word integer in the last sentence: allow the number of parts to be any fraction, and you may have a quotition instead of a partition, as the sketch after this paragraph illustrates for the integer case.
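The quotition reading translates directly into repeated subtraction, while partition is plain sharing; a small illustrative sketch:

    def quotition(total, part_size):
        """How many parts of size part_size fit into total (repeated subtraction)."""
        count = 0
        while total >= part_size:
            total -= part_size
            count += 1
        return count

    def partition(total, num_parts):
        """Size of each part when total is shared into num_parts equal parts."""
        return total / num_parts

    print(quotition(6, 2))  # 3 parts of size 2
    print(partition(6, 2))  # each of 2 parts has size 3.0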
See also
List of partition topics
|
https://en.wikipedia.org/wiki/Fullscreen%20%28aspect%20ratio%29
|
Fullscreen (or full screen) refers to the 4:3 (1.33:1) aspect ratio of early standard television screens and computer monitors. Widescreen ratios started to become more popular in the 1990s and 2000s.
Film originally created in the 4:3 aspect ratio does not need to be altered for full-screen release. In contrast, other aspect ratios can be converted to full screen using techniques such as pan and scan, open matte or reframing. In pan and scan, the 4:3 image is extracted from within the original frame by cropping the sides of the film. In open matte, the 4:3 image is extracted from parts of the original negative which were shot but not intended to be used for the theatrical release. In reframing, elements within the image are repositioned. Reframing is used for entirely CG movies, where the elements can be easily moved.
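A pan-and-scan crop keeps the full frame height and extracts a 4:3-wide window from it; a minimal sketch of the arithmetic, with illustrative frame sizes:

    def pan_and_scan_window(src_width, src_height, target_ratio=4 / 3):
        """Width of the 4:3 window cropped from a wider frame, plus the panning range."""
        crop_width = round(src_height * target_ratio)
        max_pan = src_width - crop_width   # how far the window can slide horizontally
        return crop_width, max_pan

    # A 1.85:1 frame, e.g. 1920 x 1038 pixels:
    print(pan_and_scan_window(1920, 1038))  # (1384, 536)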
History
Full-screen aspect ratios in standard television have been in use since the invention of moving picture cameras. Early computer monitors employed the same aspect ratio. The aspect ratio 4:3 was used for 35 mm films in the silent era. It is also very close to the 1.375:1 Academy ratio, defined by the Academy of Motion Picture Arts and Sciences as a standard after the advent of optical sound-on-film. By having TV match this aspect ratio, movies originally photographed on 35 mm film could be satisfactorily viewed on TV in the early days of television (i.e. the 1940s and the 1950s). When cinema attendance dropped, Hollywood created widescreen aspect ratios (such as 1.85:1) in order to differentiate the film industry from TV. However, at the start of the 21st century, broadcasters worldwide began phasing out the 4:3 standard entirely and manufacturers started to favor the 16:9 aspect ratio for modern high-definition television sets, broadcast cameras and computer monitors.
See also
Aspect ratio (image)
|
https://en.wikipedia.org/wiki/Intrusion%20tolerance
|
Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attacks. In that sense, it is also a computer security approach. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure.
Distributed computing
In distributed computing there are two major variants of intrusion tolerance mechanisms: mechanisms based on redundancy, such as Byzantine fault tolerance, and mechanisms based on intrusion detection (as implemented in intrusion detection systems) and intrusion reaction.
Intrusion-tolerant server architectures
Intrusion-tolerance has started to influence the design of server architectures in academic institutions, and industry. Examples of such server architectures include KARMA, Splunk IT Service Intelligence (ITSI), project ITUA, and the practical Byzantine Fault Tolerance (pBFT) model.
See also
Intrusion detection system evasion techniques
|
https://en.wikipedia.org/wiki/Catenin%20beta-1
|
Catenin beta-1, also known as β-catenin (beta-catenin), is a protein that in humans is encoded by the CTNNB1 gene.
β-Catenin is a dual function protein, involved in regulation and coordination of cell–cell adhesion and gene transcription. In Drosophila, the homologous protein is called armadillo. β-Catenin is a subunit of the cadherin protein complex and acts as an intracellular signal transducer in the Wnt signaling pathway. It is a member of the catenin protein family and homologous to γ-catenin, also known as plakoglobin. β-Catenin is widely expressed in many tissues. In cardiac muscle, β-catenin localizes to adherens junctions in intercalated disc structures, which are critical for electrical and mechanical coupling between adjacent cardiomyocytes.
Mutations and overexpression of β-catenin are associated with many cancers, including hepatocellular carcinoma, colorectal carcinoma, lung cancer, malignant breast tumors, ovarian and endometrial cancer. Alterations in the localization and expression levels of β-catenin have been associated with various forms of heart disease, including dilated cardiomyopathy. β-Catenin is regulated and destroyed by the beta-catenin destruction complex, and in particular by the adenomatous polyposis coli (APC) protein, encoded by the tumour-suppressing APC gene. Therefore, genetic mutation of the APC gene is also strongly linked to cancers, and in particular colorectal cancer resulting from familial adenomatous polyposis (FAP).
Discovery
β-Catenin was initially discovered in the early 1990s as a component of a mammalian cell adhesion complex: a protein responsible for cytoplasmatic anchoring of cadherins. But very soon, it was realized that the Drosophila protein armadillo – implicated in mediating the morphogenic effects of Wingless/Wnt – is homologous to the mammalian β-catenin, not just in structure but also in function. Thus, β-catenin became one of the first example
|
https://en.wikipedia.org/wiki/Mr.%20Blue%20Sky
|
"Mr. Blue Sky" is a song by the Electric Light Orchestra (ELO), featured on the band's seventh studio album Out of the Blue (1977). Written and produced by frontman Jeff Lynne, the song forms the fourth and final track of the "Concerto for a Rainy Day" suite on side three of the original double album. "Mr. Blue Sky" was the second single to be taken from Out of the Blue, peaking at number 6 in the UK Singles Chart and number 35 in the US Billboard Charts.
Promotional copies were released on blue vinyl, like the album from which the single was issued. Due to its popularity and frequent use in multiple television shows and movies, it has sometimes been described as the band's signature song.
Inspiration
In a BBC Radio interview, Lynne talked about writing "Mr. Blue Sky" after locking himself away in a Swiss chalet and attempting to write ELO's follow-up to A New World Record:
The song's arrangement has been called "Beatlesque", bearing similarities to Beatles songs "Martha My Dear" and "A Day in the Life" while harmonically it shares its unusual first four chords and harmonic rhythm with "Yesterday". The song's piano and drum intro is borrowed from the Kinks' 1968 song "Do You Remember Walter".
Arrangement
The arrangement makes prominent use of a cowbell-like sound, which is credited on the album to percussionist Bev Bevan as that of a "fire extinguisher". When the song is performed live, a drumstick strikes the side of a fire extinguisher, producing the sound.
Describing the song for the BBC, Dominic King said:
Lots of Gibb Brothers' vocal inflexions and Beatles' arrangement quotes (Penny Lane bell, Pepper panting, Abbey Road arpeggio guitars). But this fabulous madness creates its own wonder – the bendy guitar solo, funky cello stop-chorus, and the most freakatastic vocoder since Sparky's Magic Piano. Plus, the musical ambush on "way" at 2.51 still thrills. And that's before the Swingle Singers/RKO Tarzan movie/Rachmaninoff symphonic finale gets underway. K
|
https://en.wikipedia.org/wiki/Cryptogenic%20species
|
A cryptogenic species ("cryptogenic" being derived from Greek "κρυπτός", meaning hidden, and "γένεσις", meaning origin) is a species whose origins are unknown. The cryptogenic species can be an animal or plant, including other kingdoms or domains, such as fungi, algae, bacteria, or even viruses.
In ecology, a cryptogenic species is one which may be either a native species or an introduced species, clear evidence for either origin being absent. An example is the Northern Pacific seastar (Asterias amurensis) in Alaska and Canada.
In palaeontology, a cryptogenic species is one which appears in the fossil record without clear affinities to an earlier species.
See also
Cosmopolitan distribution
Cryptozoology
|
https://en.wikipedia.org/wiki/Smart%20contract
|
A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of need for trusted intermediators, arbitration costs, and fraud losses, as well as the reduction of malicious and accidental exceptions. Smart contracts are commonly associated with cryptocurrencies, and the smart contracts introduced by Ethereum are generally considered a fundamental building block for decentralized finance (DeFi) and NFT applications.
Vending machines are mentioned as the oldest piece of technology equivalent to smart contract implementation. The original Ethereum white paper by Vitalik Buterin in 2014 described the Bitcoin protocol as a weak version of the smart contract concept as originally defined by Nick Szabo, and proposed a stronger version based on the Solidity language, which is Turing complete. Since Bitcoin, various cryptocurrencies have supported programming languages which allow for more advanced smart contracts between untrusted parties.
A smart contract should not be confused with a smart legal contract, which refers to a traditional, natural-language, legally-binding agreement that has selected terms expressed and implemented in machine-readable code.
Etymology
Smart contracts were first proposed in the early 1990s by Nick Szabo, who coined the term, using it to refer to "a set of promises, specified in digital form, including protocols within which the parties perform on these promises". In 1998, the term was used to describe objects in the rights management service layer of the Stanford Infobus, which was a part of the Stanford Digital Library Project.
Legal status of smart contracts
A smart contract does not typically constitute a valid binding agreement at law, although a smart legal contract is intended to be both executable by a machine and legally enforceable.
Smar
|
https://en.wikipedia.org/wiki/Leeuwenhoek%20Lecture
|
The Leeuwenhoek Lecture is a prize lecture of the Royal Society to recognize achievement in microbiology. The prize was originally given in 1950 and awarded annually, but from 2006 to 2018 was given triennially. From 2018 it will be awarded biennially.
The prize is named after the Dutch microscopist Antonie van Leeuwenhoek and was instituted in 1948 from a bequest from George Gabb. A gift of £2000 is associated with the lecture.
Leeuwenhoek Lecturers
The following is a list of Leeuwenhoek Lecture award winners along with the title of their lecture:
21st Century
2024 Joanne Webster, for her achievements in advancing control of disease in humans and animals caused by parasites in Asia and Africa
2022 Sjors Scheres, for ground-breaking contributions and innovations in image analysis and reconstruction methods in electron cryo-microscopy, enabling the structure determination of complex macromolecules of fundamental biological and medical importance to atomic resolution
2020 Geoffrey L. Smith, for his studies of poxviruses which has had major impact in wider areas, notably vaccine development, biotechnology, host-pathogen interactions and innate immunity
2018 Sarah Cleaveland, Can we make rabies history? Realising the value of research for the global elimination of rabies
2015 Jeffrey Errington, for his seminal discoveries in relation to the cell cycle and cell morphogenesis in bacteria
2012 Brad Amos, How new science is transforming the optical microscope
2010 Robert Gordon Webster, Pandemic Influenza: one flu over the cuckoo's nest
2006 Richard Anthony Crowther, Microscopy goes cold: frozen viruses reveal their structural secrets.
2005 Keith Chater, Streptomyces inside out: a new perspective on the bacteria that provide us with antibiotics.
2004 David Sherratt, A bugs life
2003 Brian Spratt, Bacterial populations and bacterial disease
2002 Stephen West, DNA repair from microbes to man
2001 Robin Weiss, From Pan to pandemic: animal to human infection
|
https://en.wikipedia.org/wiki/List%20of%20leaf%20vegetables
|
This is a list of vegetables which are grown or harvested primarily for the consumption of their leafy parts, either raw or cooked. Many vegetables with leaves that are consumed in small quantities as a spice such as oregano, for medicinal purposes such as lime, or used in infusions such as tea, are not included in this list.
List
Key
Citations marked with Ecoport are from the Ecoport Web site, an ecology portal developed in collaboration with the FAO.
Those marked with GRIN are from the GRIN Taxonomy of Food Plants.
Sources marked with Duke are from James Duke's book Handbook of Energy Crops.
See also
List of vegetables
List of foods
List of vegetable dishes
|
https://en.wikipedia.org/wiki/Hyper%20Sports
|
Hyper Sports, known in Japan as Hyper Olympic '84, is an Olympic-themed sports video game released by Konami for arcades in 1984. It is the sequel to 1983's Track & Field and features seven new Olympic events. Like its predecessor, Hyper Sports has two run buttons and one action button per player. The Japanese release of the game sported an official license for the 1984 Summer Olympics.
Gameplay
The gameplay is much the same as Track & Field in that the player competes in an event and tries to score the most points based on performance criteria, and also by beating the computer entrants in that event. Also, the player tries to exceed a qualification time, distance, or score to advance to the next event. In Hyper Sports, if all of the events are passed successfully, the player advances to the next round of the same events which are faster and harder to qualify for.
The events changed to include these new sports:
Swimming - swimming speed is controlled by the two run buttons, and breathing is controlled by the action button when prompted by the swimmer on screen. There is one re-do if a player fouls by launching before the gun, but only one "run" at the qualifying time.
Skeet shooting - selecting left or right shot via the two run buttons while a clay-bird is in the sight. There are three rounds to attempt to pass the qualifying score. If a perfect score is attained then a different pattern follows allowing for a higher score.
Long horse - speed to run at horse is computer controlled, player jumps and pushes off horse via the action button, and rotates as many times as possible via run buttons (and tries to land straight up on feet). There are three attempts at the qualifying score.
Archery - firing of the arrow controlled by action button; the elevation angle is controlled by depressing the action button and releasing at the proper time. There are three attempts at passing the qualifying score.
Triple jump - speed is controlled by the run buttons, jump and angle are controlled by actio
|
https://en.wikipedia.org/wiki/Batman%3A%20Contagion
|
Contagion is the name of a story arc that ran through the various Batman comic book series. It concerns the outbreak of a lethal disease in Gotham City, and Batman's attempts to combat it. The events of this story led into Batman: Legacy and Batman: Cataclysm, which itself leads into Batman: No Man's Land. It ran from March through April 1996.
Much of the plot centers on a gated community in the middle of Gotham City, whose wealthy residents believe they can protect themselves from the plague by sealing themselves inside, only to discover that one of their number is the plague's first carrier. In this, the story parallels the plot of Edgar Allan Poe's short story "The Masque of the Red Death", which one of the community's dying residents mentions.
Plot
Batman: Shadow of the Bat #48
Azrael sends a report to Batman that a plague connected to The Sacred Order of Saint Dumas is on its way to Gotham City. Robin finds that a private plane has just landed in Gotham from Africa. The passenger, Peter Maris, enters his exclusive condominium complex, Babylon Towers. Believing that the plague is about to infect Gotham, he proposes to the other homeowners that they dismiss their servants and seal the building, which is entirely self-sufficient, and they agree. What Maris does not know is that he is already infected, and has passed the infection to his pilot, who joins the servants entering the city. Batman infiltrates a military facility where the Ebola Gulf-A virus, also known as "the Clench", was being studied. The military's head of research, General Derwent, has been accidentally infected and is slowly dying.
Detective Comics #695 / Robin #27 / Catwoman #31
Batman and Robin trace the original source of the outbreak to Babylon Towers, eavesdropping while Peter Maris, now realizing that he is infected, tells the other residents about a survivor from a previous outbreak in Greenland. The residents post an enormous reward for his live capture. As a result, when Robin and Al
|
https://en.wikipedia.org/wiki/Proof%20of%20the%20Euler%20product%20formula%20for%20the%20Riemann%20zeta%20function
|
Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737.
The Euler product formula
The Euler product formula for the Riemann zeta function reads
ζ(s) = Σ_{n≥1} 1/n^s = Π_p 1/(1 − p^{−s}),
where the left hand side equals the Riemann zeta function:
ζ(s) = Σ_{n≥1} 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ⋯
and the product on the right hand side extends over all prime numbers p:
Π_p 1/(1 − p^{−s}) = [1/(1 − 2^{−s})] · [1/(1 − 3^{−s})] · [1/(1 − 5^{−s})] ⋯
Proof of the Euler product formula
This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage:
ζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ⋯
(1/2^s) ζ(s) = 1/2^s + 1/4^s + 1/6^s + 1/8^s + ⋯
Subtracting the second equation from the first we remove all elements that have a factor of 2:
(1 − 1/2^s) ζ(s) = 1 + 1/3^s + 1/5^s + 1/7^s + 1/9^s + ⋯
Repeating for the next term:
(1/3^s)(1 − 1/2^s) ζ(s) = 1/3^s + 1/9^s + 1/15^s + 1/21^s + ⋯
Subtracting again we get:
(1 − 1/3^s)(1 − 1/2^s) ζ(s) = 1 + 1/5^s + 1/7^s + 1/11^s + ⋯
where all elements having a factor of 3 or 2 (or both) are removed.
It can be seen that the right side is being sieved. Repeating infinitely for 1/p^s where p is prime, we get:
⋯ (1 − 1/5^s)(1 − 1/3^s)(1 − 1/2^s) ζ(s) = 1
Dividing both sides by everything but the ζ(s) we obtain:
ζ(s) = 1 / [(1 − 1/2^s)(1 − 1/3^s)(1 − 1/5^s)(1 − 1/7^s) ⋯]
This can be written more concisely as an infinite product over all primes p:
ζ(s) = Π_{p prime} 1/(1 − p^{−s}).
To make this proof rigorous, we need only observe that when Re(s) > 1, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for ζ(s).
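The identity is easy to check numerically; the sketch below compares a truncated Dirichlet sum with a truncated Euler product at s = 2, where both should approach ζ(2) = π²/6:

    import math

    def primes_up_to(limit):
        """Simple sieve of Eratosthenes."""
        sieve = [True] * (limit + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    s = 2
    dirichlet_sum = sum(1.0 / n ** s for n in range(1, 100001))
    euler_product = 1.0
    for p in primes_up_to(1000):
        euler_product *= 1.0 / (1.0 - p ** (-s))

    print(dirichlet_sum, euler_product, math.pi ** 2 / 6)  # all ≈ 1.6449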
The case s = 1
An interesting result can be found for ζ(1), the harmonic series. The same sieving argument gives
⋯ (1 − 1/5)(1 − 1/3)(1 − 1/2) ζ(1) = 1,
which can also be written as
ζ(1) = 1 / [(1 − 1/2)(1 − 1/3)(1 − 1/5) ⋯],
which is
ζ(1) = (2/1) · (3/2) · (5/4) · (7/6) ⋯ = Π_p p/(p − 1).
As ζ(1) = 1 + 1/2 + 1/3 + ⋯, we thus have
1 + 1/2 + 1/3 + ⋯ = Π_p p/(p − 1).
While the ratio test is inconclusive for the left-hand side, it may be shown divergent by bounding with logarithms. Likewise, for the right-hand side, an infinite product of reals greater than one does not by itself guarantee divergence; for example, an infinite product of factors of the form 1 + 1/n² converges. Instead, the denominator may be written in terms of the primorial, so that the divergence becomes clear from the divergence of the series of reciprocal primes.
Another proof
Each factor (for a given prime p) in the product above can be expanded to a geometric se
|
https://en.wikipedia.org/wiki/Thermostability
|
In materials science and molecular biology, thermostability is the ability of a substance to resist irreversible change in its chemical or physical structure, often by resisting decomposition or polymerization, at a high relative temperature.
Thermostable materials may be used industrially as fire retardants. A thermostable plastic, an uncommon and unconventional term, is more likely to refer to a thermosetting plastic that cannot be reshaped when heated than to a thermoplastic that can be remelted and recast.
Thermostability is also a property of some proteins. To be a thermostable protein means to be resistant to changes in protein structure due to applied heat.
Thermostable proteins
Most life-forms on Earth live at temperatures of less than 50 °C, commonly from 15 to 50 °C. Within these organisms are macromolecules (proteins and nucleic acids) which form the three-dimensional structures essential to their enzymatic activity. Above the native temperature of the organism, thermal energy may cause unfolding and denaturation, as heat can disrupt the intramolecular bonds in the tertiary and quaternary structure. This unfolding results in a loss of enzymatic activity, which is understandably deleterious to continuing life-functions. An example is the denaturation of proteins in albumen from a clear, nearly colourless liquid to an opaque white, insoluble gel.
Proteins capable of withstanding such high temperatures, compared to proteins that cannot, are generally from microorganisms that are hyperthermophiles. Such organisms can withstand temperatures above 50 °C, as they usually live within environments of 85 °C and above. Certain thermophilic life-forms exist which can withstand temperatures above this, and have corresponding adaptations to preserve protein function at these temperatures. These can include altered bulk properties of the cell to stabilize all proteins, and specific changes to individual proteins. Comparing homologous proteins present in
|
https://en.wikipedia.org/wiki/AES51
|
AES51 is a standard first published by the Audio Engineering Society in June 2006 that specifies a method of carrying Asynchronous Transfer Mode (ATM) cells over an Ethernet physical layer, intended in particular for use with AES47 to carry the AES3 digital audio transport structure. The purpose of this is to provide an open, Ethernet-based standard approach to the networking of linear (uncompressed) digital audio with extremely high quality of service alongside standard Internet Protocol connections.
This standard specifies a method, also known as "ATM-E", of carrying ATM cells over hardware specified for IEEE 802.3 (Ethernet). It is intended as a companion standard to AES47 (Transmission of digital audio over ATM networks), to provide a standard method of carrying ATM cells and real-time clock over hardware specified for Ethernet.
|
https://en.wikipedia.org/wiki/MPEG%20elementary%20stream
|
An elementary stream (ES) as defined by the MPEG communication protocol is usually the output of an audio encoder or video encoder. An ES contains only one kind of data (e.g. audio, video, or closed caption). An elementary stream is often referred to as "elementary", "data", "audio", or "video" bitstreams or streams. The format of the elementary stream depends upon the codec or data carried in the stream, but will often carry a common header when packetized into a packetized elementary stream.
Header for MPEG-2 video elementary stream
General layout of MPEG-1 audio elementary stream
The digitized sound signal is divided up into blocks of 384 samples in Layer I and 1152 samples in Layers II and III. The sound sample block is encoded within an audio frame:
header
error check
audio data
ancillary data
The header of a frame contains general information such as the MPEG layer, the sampling frequency, the number of channels, whether the frame is CRC-protected, and whether the sound is the original.
Although most of this information may be the same for all frames, MPEG decided to give each audio frame such a header in order to simplify synchronization and bitstream editing.
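For concreteness, the sketch below decodes the main fields of a 4-byte MPEG-1 audio frame header. The bit layout follows the published standard, but the function is an illustrative simplification (reserved index values, free-format bitrates, and other corner cases are not handled):

    def parse_mpeg1_audio_header(data):
        """Decode selected fields of a 4-byte MPEG-1 audio frame header."""
        h = int.from_bytes(data[:4], "big")
        if (h >> 21) & 0x7FF != 0x7FF:
            raise ValueError("frame sync not found")
        if (h >> 19) & 0x3 != 0x3:
            raise ValueError("not an MPEG-1 header")
        layer = {3: 1, 2: 2, 1: 3}[(h >> 17) & 0x3]   # bits '11'=Layer I, '10'=II, '01'=III
        return {
            "layer": layer,
            "crc_protected": ((h >> 16) & 0x1) == 0,   # 0 means a CRC follows the header
            "bitrate_index": (h >> 12) & 0xF,
            "sampling_rate_hz": {0: 44100, 1: 48000, 2: 32000}[(h >> 10) & 0x3],
            "channel_mode": ("stereo", "joint stereo", "dual channel", "mono")[(h >> 6) & 0x3],
            "original": bool((h >> 2) & 0x1),
        }

    # A typical Layer III header: 44.1 kHz, stereo.
    print(parse_mpeg1_audio_header(bytes([0xFF, 0xFB, 0x90, 0x00])))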
See also
MP3
Packetized elementary stream
MPEG program stream
MPEG transport stream
External links
ISO/IEC 11172-3:1993: Information technology -- Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s -- Part 3: Audio
MPEG
|
https://en.wikipedia.org/wiki/Stompers%20%28toy%29
|
Stompers are battery-powered toy cars that use a single AA battery and feature four-wheel drive. They are driven by a single motor that turns both axles. They were the first battery-powered, electric, true 4WD toys. Stompers were created in 1980 by A. Eddy Goldfarb and sold by Schaper Toys. Later, in the United Kingdom, Corgi Toys marketed identical toys in Corgi labeled packaging called Trekkers but made by Schaper. Genuine Stompers were sold by various companies around the globe and were also made by Schaper. There were similar products manufactured by Soma and LJN (Rough Riders). Both companies were involved in lawsuits by Goldfarb and Schaper. Settlements were made and the companies continued their line of toys. As of 2019, Goldfarb continues to live and work at his design studio in Southern California.
History of production
Generation I (Schaper)
Stompers debuted in 1980. Schaper's 1980 catalog showed five Stomper trucks: the Chevrolet K-10 Scottsdale, Chevrolet Blazer, Dodge Warlock, Ford Bronco, and Jeep Honcho. The Stunt Set and Wild Mountain play sets were the only other Stompers products in that year's catalog. The earliest Stompers have clips inside the body that attach to the sides of the chassis; they are known as "side clips". They also came with a set of foam tires. Nineteen eighty-one brought five new Stomper trucks: the Chevrolet LUV, Datsun Li'l Hustler, Jeep Renegade, Subaru BRAT, and Toyota SR5. The Stunt Set and Wild Mountain set also returned, though different pieces were shown in the 1981 catalog. The short-lived Stomper SSC Super Cycles also debuted in 1981. The trucks were also sold with an additional set of rubber tires so that they could be driven outdoors.
The Jeep Cherokee and Scrambler were the new four-wheel-drive trucks for 1982. Fun x4s ("Exclusively designed from the real street hot-rods!") debuted in 1982, consisting of the AMC (American Motors) SX/4, two Chevrolets (van and 1956 Nomad), Jeep CJ, Subaru hatchback, and Volkswage
|
https://en.wikipedia.org/wiki/Exciter%20%28effect%29
|
An exciter (also called a harmonic exciter or aural exciter) is an audio signal processing technique used to enhance a signal by dynamic equalization, phase manipulation, harmonic synthesis of (usually) high frequency signals, and through the addition of subtle harmonic distortion. Dynamic equalization involves variation of the equalizer characteristics in the time domain as a function of the input. Due to the varying nature, noise is reduced compared to static equalizers. Harmonic synthesis involves the creation of higher order harmonics from the fundamental frequency signals present in the recording. As noise is usually more prevalent at higher frequencies, the harmonics are derived from a purer frequency band resulting in clearer highs. Exciters are also used to synthesize harmonics of low frequency signals to simulate deep bass in smaller speakers.
Originally made in valve (tube) based equipment, they are now implemented as part of a digital signal processor, often trying to emulate analogue exciters. Exciters are mostly found as plug-ins for sound editing software and in sound enhancement processors.
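As a rough illustration of the "isolate highs, synthesize harmonics, mix back in" structure described above (not the Aphex circuit; the filter, waveshaper, and parameter values here are arbitrary choices), one might write:

    import numpy as np

    def exciter(signal, sample_rate, cutoff_hz=3000.0, drive=4.0, mix=0.2):
        """Toy harmonic exciter: high-pass, distort to create harmonics, blend back."""
        # One-pole high-pass filter isolates the band to be "excited".
        rc = 1.0 / (2.0 * np.pi * cutoff_hz)
        alpha = rc / (rc + 1.0 / sample_rate)
        highs = np.zeros_like(signal)
        for n in range(1, len(signal)):
            highs[n] = alpha * (highs[n - 1] + signal[n] - signal[n - 1])
        # Soft-clipping waveshaper generates the new harmonic content.
        harmonics = np.tanh(drive * highs)
        # Blend a small amount of the synthesized harmonics back into the dry signal.
        return signal + mix * harmonics

    sr = 44100
    t = np.arange(sr) / sr
    out = exciter(np.sin(2 * np.pi * 440.0 * t), sr)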
Aphex aural exciter
The Aphex Aural Exciter was one of the first exciter effects, developed in the mid-1970s by Aphex Electronics. It adds phase shift and musically related synthesized harmonics to audio signals. The first units were available exclusively on a rental basis of $30 per minute of finished recorded time. In the 1970s, certain recording artists, including Anne Murray, Neil Diamond, Jackson Browne, The Four Seasons, Olivia Newton-John, Linda Ronstadt and James Taylor, stated in their liner notes "This album was recorded using the Aphex Aural Exciter."
Aphex started selling the professional units, and introduced two low-cost models: Type B and Type C. The Aural Exciter circuit is now licensed by a growing list of manufacturers, including Yamaha, MacKenzie, Gentner, E-mu Systems and Bogen.
|
https://en.wikipedia.org/wiki/HiSoft%20Systems
|
HiSoft Systems is a software company based in the UK, creator of a range of programming tools for microcomputers in the 1980s and 1990s.
Products
Their first products were Pascal and Assembler implementations for the NASCOM 1 and 2 kit-based computers, followed by Pascal and C implementations for home microcomputers, as well as a BASIC compiler for the same platforms and a C compiler for CP/M. While the home-computer compilers were typical products for those platforms, with integrated editor, compiler and runtime environment fitting in RAM together with the program's source, the C compiler for CP/M was typical for this operating system: batch-operated, with separate compilation and linking stages.
Their most well-known products were the Devpac assembler IDE environments (earlier known as GenST and GenAm for the Atari ST and Amiga, respectively). The Devpac IDE was a full editor/assembler/debugger environment written entirely in 68k assembler and was a favourite tool among programmers on the Atari GEM platform.
HiSoft also sold HiSoft BASIC and Power BASIC, HiSoft C Interpreter for the Atari ST, Aztec C, Personal Pascal, and FTL Modula-2. They also produced WERCS, the WIMP Environment Resource Construction Set.
Background
The business was created in 1980 and was based in Dunstable, Bedfordshire before relocating to the village of Greenfield in the same county.
In November 2001, HiSoft's staff were employed by Maxon Computer Limited, the UK arm of MAXON Computer GmbH. to work on Cinema 4D.
David Link, the founder and owner, ran a café in the village of Emsworth for a year until July 2007, and a restaurant/bar/guest house in Shanklin, Isle of Wight, from 2010 until January 2015.
|
https://en.wikipedia.org/wiki/Open%20book%20decomposition
|
In mathematics, an open book decomposition (or simply an open book) is a decomposition of a closed oriented 3-manifold M into a union of surfaces (necessarily with boundary) and solid tori. Open books have relevance to contact geometry, with a famous theorem of Emmanuel Giroux (given below) that shows that contact geometry can be studied from an entirely topological viewpoint.
Definition and construction
Definition. An open book decomposition of a 3-dimensional manifold M is a pair (B, π) where
B is an oriented link in M, called the binding of the open book;
π: M \ B → S1 is a fibration of the complement of B such that for each θ ∈ S1, π−1(θ) is the interior of a compact surface Σ ⊂ M whose boundary is B. The surface Σ is called the page of the open book.
This is the special case m = 3 of an open book decomposition of an m-dimensional manifold, for any m.
The definition for general m is similar, except that the surface with boundary (Σ, B) is replaced by an (m − 1)-manifold with boundary (P, ∂P). Equivalently, the open book decomposition can be thought of as a homeomorphism of M to the quotient space
where f:P → P is a self-homeomorphism preserving the boundary. This quotient space is called a relative mapping torus.
When Σ is an oriented compact surface with n boundary components and φ: Σ → Σ is a homeomorphism which is the identity near the boundary, we can construct an open book by first forming the mapping torus Σφ. Since φ is the identity on ∂Σ, ∂Σφ is the trivial circle bundle over a union of circles, that is, a union of tori; one torus for each boundary component. To complete the construction, solid tori are glued to fill in the boundary tori so that each circle S1 × {p} ⊂ S1×∂D2 is identified with the boundary of a page. In this case, the binding is the collection of n cores S1×{q} of the n solid tori glued into the mapping torus, for arbitrarily chosen q ∈ D2. It is known that any open book can be constructed this way. As the only information used in
|
https://en.wikipedia.org/wiki/Andrzej%20Bia%C5%82ynicki-Birula
|
Andrzej Białynicki-Birula (26 December 1935 – 19 April 2021) was a Polish mathematician, best known for his work on algebraic geometry. He was considered one of the pioneers of differential algebra. He was a member of the Polish Academy of Sciences.
Białynicki-Birula was born in Nowogródek, Poland (now Navahrudak, Belarus). His elder brother, Iwo Białynicki-Birula, was born two years earlier and is a theoretical physicist and a fellow member of the Polish Academy of Sciences. He received his Ph.D. from the University of California, Berkeley in 1960. His thesis was written under the direction of Gerhard Hochschild. From 1970, he was Professor of Mathematics at Warsaw University.
See also
List of Polish mathematicians
|
https://en.wikipedia.org/wiki/Uranium%20in%20the%20environment
|
Uranium in the environment is a global health concern, and comes from both natural and man-made sources. Mining, phosphates in agriculture, weapons manufacturing, and nuclear power are sources of uranium in the environment.
In the natural environment, radioactivity of uranium is generally low, but uranium is a toxic metal that can disrupt normal functioning of the kidney, brain, liver, heart, and numerous other systems. Chemical toxicity can cause public health issues when uranium is present in groundwater, especially if concentrations in food and water are increased by mining activity. The biological half-life (the average time it takes for the human body to eliminate half the amount in the body) for uranium is about 15 days.
Uranium's radioactivity can present health and environmental issues in the case of nuclear waste produced by nuclear power plants or weapons manufacturing.
Uranium is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238). The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects.
Natural occurrence
Uranium is a naturally occurring element found in low levels within all rock, soil, and water. It is the highest-numbered element found naturally in significant quantities on Earth. According to the United Nations Scientific Committee on the Effects of Atomic Radiation, the normal concentration of uranium in soil is 300 μg/kg to 11.7 mg/kg.
It is considered to be more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten and is about as abundant as tin, arsenic or molybdenum. It is found in many minerals including uraninite (most common uranium ore), autunite, uranophane, torbernite, and coffinite. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercial
|
https://en.wikipedia.org/wiki/Contrast%20%28vision%29
|
Contrast is the difference in luminance or colour that makes an object (or its representation in an image or display) visible on a background of different luminance or color. The human visual system is more sensitive to contrast than to absolute luminance; we can perceive the world similarly regardless of the huge changes in illumination over the day or from place to place. The maximum contrast of an image is the contrast ratio or dynamic range. Images with a contrast ratio close to their medium's maximum possible contrast ratio experience a conservation of contrast, wherein any increase in contrast in some parts of the image must necessarily result in a decrease in contrast elsewhere. Brightening an image will increase contrast in dark areas but decrease contrast in bright areas, while darkening the image will have the opposite effect. Bleach bypass destroys contrast in both the darkest and brightest parts of an image while enhancing luminance contrast in areas of intermediate brightness.
Biological contrast sensitivity
Campbell and Robson (1968) showed that the human contrast sensitivity function shows a typical band-pass filter shape peaking at around 4 cycles per degree, with sensitivity dropping off either side of the peak. This can be observed by changing one's viewing distance from a "sweep grating" (shown below) showing many bars of a sinusoidal grating that go from high to low contrast along the bars, and go from narrow (high spatial frequency) to wide (low spatial frequency) bars across the width of the grating. In the figure below, at an ordinary viewing distance, the bars in the middle appear to be the longest, because of their optimal spatial frequency, whereas at a far viewing distance, the longest visible bars shift to the originally wide bars (now with the same spatial frequency as the middle bars at reading distance).
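A sweep grating of this kind (a Campbell–Robson-style chart) can be generated in a few lines; the sketch below assumes spatial frequency rising exponentially from left to right and contrast rising exponentially from top to bottom, with all parameter values arbitrary:

    import numpy as np

    def sweep_grating(width=512, height=512, f_min=1.0, f_max=64.0, c_min=0.005, c_max=1.0):
        """Campbell-Robson-style chart: spatial frequency sweeps along x, contrast along y."""
        x = np.linspace(0.0, 1.0, width)
        y = np.linspace(0.0, 1.0, height)
        k = np.log(f_max / f_min)
        # Phase is the integral of the exponentially increasing instantaneous frequency.
        phase = 2.0 * np.pi * f_min * (np.exp(k * x) - 1.0) / k
        grating = np.sin(phase)                  # shape (width,)
        contrast = c_min * (c_max / c_min) ** y  # shape (height,)
        return 0.5 + 0.5 * contrast[:, None] * grating[None, :]

    img = sweep_grating()  # luminance values in [0, 1], ready to display or save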
The high-frequency cut-off represents the optical limitations of the visual system's ability to resolve detail and is typically ab
|
https://en.wikipedia.org/wiki/Contrast%20%28statistics%29
|
In statistics, particularly in analysis of variance and linear regression, a contrast is a linear combination of variables (parameters or statistics) whose coefficients add up to zero, allowing comparison of different treatments.
Definitions
Let θ₁, θ₂, …, θₜ be a set of variables, either parameters or statistics, and let a₁, a₂, …, aₜ be known constants. The quantity Σᵢ aᵢθᵢ is a linear combination. It is called a contrast if Σᵢ aᵢ = 0. Furthermore, two contrasts, Σᵢ aᵢθᵢ and Σᵢ bᵢθᵢ, are orthogonal if Σᵢ aᵢbᵢ = 0.
Examples
Let us imagine that we are comparing four means, μ₁, μ₂, μ₃, μ₄. The following table describes three possible contrasts:

    Contrast   μ₁     μ₂     μ₃     μ₄
    c₁          1     −1      0      0
    c₂          0      0      1     −1
    c₃         1/2    1/2   −1/2   −1/2
The first contrast allows comparison of the first mean with the second, the second contrast allows comparison of the third mean with the fourth, and the third contrast allows comparison of the average of the first two means with the average of the last two.
In a balanced one-way analysis of variance, using orthogonal contrasts has the advantage of completely partitioning the treatment sum of squares into non-overlapping additive components that represent the variation due to each contrast. Consider the numbers above: each of the rows sums up to zero (hence they are contrasts). If we multiply each element of the first and second row and add those up, this again results in zero, thus the first and second contrast are orthogonal and so on.
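These checks are one-liners in code; a minimal sketch using the three contrasts above (the sample means at the end are made up for illustration):

    import numpy as np

    def is_contrast(coeffs, tol=1e-12):
        """A linear combination is a contrast if its coefficients sum to zero."""
        return abs(np.sum(coeffs)) < tol

    def are_orthogonal(c1, c2, tol=1e-12):
        """Two contrasts are orthogonal if the cross-products of their coefficients sum to zero."""
        return abs(np.dot(c1, c2)) < tol

    c1 = np.array([1.0, -1.0, 0.0, 0.0])
    c2 = np.array([0.0, 0.0, 1.0, -1.0])
    c3 = np.array([0.5, 0.5, -0.5, -0.5])

    print(all(is_contrast(c) for c in (c1, c2, c3)))                               # True
    print(are_orthogonal(c1, c2), are_orthogonal(c1, c3), are_orthogonal(c2, c3))  # True True True

    # Estimated value of a contrast, given sample means:
    means = np.array([5.1, 4.9, 6.2, 3.8])
    print(float(c3 @ means))  # average of first two means minus average of last two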
Sets of contrasts
Orthogonal contrasts are a set of contrasts in which, for any distinct pair, the sum of the cross-products of the coefficients is zero (assume sample sizes are equal). Although there are potentially infinite sets of orthogonal contrasts, within any given set there will always be a maximum of exactly k – 1 possible orthogonal contrasts (where k is the number of group means available).
Polynomial contrasts are a special set of orthogonal contrasts that test polynomial patterns in data with more than two means (e.g., linear, quadratic, cubic, quartic, etc.).
Orthonormal contrasts are orthogonal contrasts which satisfy the additional condition that, for each cont
|
https://en.wikipedia.org/wiki/List%20of%20object%E2%80%93relational%20mapping%20software
|
This is a list of well-known object–relational mapping software.
Java
Apache Cayenne, open-source for Java
Apache OpenJPA, open-source for Java
DataNucleus, open-source JDO and JPA implementation (formerly known as JPOX)
Ebean, open-source ORM framework
EclipseLink, Eclipse persistence platform
Enterprise JavaBeans (EJB)
Enterprise Objects Framework, Mac OS X/Java, part of Apple WebObjects
Hibernate, open-source ORM framework, widely used
Java Data Objects (JDO)
JOOQ Object Oriented Querying (jOOQ)
Kodo, commercial implementation of both Java Data Objects and Java Persistence API
TopLink by Oracle
iOS
Core Data by Apple for Mac OS X and iOS
.NET
Base One Foundation Component Library, free or commercial
Dapper, open source
Entity Framework, included in .NET Framework 3.5 SP1 and above
iBATIS, free open source, maintained by ASF but now inactive.
LINQ to SQL, included in .NET Framework 3.5
NHibernate, open source
nHydrate, open source
Quick Objects, free or commercial
Objective-C, Cocoa
Enterprise Objects, one of the first commercial OR mappers, available as part of WebObjects
Core Data, object graph management framework with several persistent stores, ships with Mac OS X and iOS
Perl
DBIx::Class
PHP
Laravel, framework that contains an ORM called "Eloquent", an ActiveRecord implementation
Doctrine, open source ORM for PHP 5.2.3, 5.3.X, 7.4.X; free software (MIT)
CakePHP, ORM and framework for PHP 5, open source (scalars, arrays, objects); based on database introspection, no class extending
CodeIgniter, framework that includes an ActiveRecord implementation
Yii, ORM and framework for PHP 5, released under the BSD license. Based on the ActiveRecord pattern
FuelPHP, ORM and framework for PHP 5.3, released under the MIT license. Based on the ActiveRecord pattern.
Laminas, framework that includes a table data gateway and row data gateway implementations
Propel, ORM and query-toolkit for PHP 5, inspired by Apache Torque, free software, MIT
Qcodo, ORM and framework
|
https://en.wikipedia.org/wiki/Eduardo%20D.%20Sontag
|
Eduardo Daniel Sontag (born April 16, 1951, in Buenos Aires, Argentina) is an Argentine-American mathematician, and distinguished university professor at Northeastern University, who works in the fields of control theory, dynamical systems, systems molecular biology, cancer and immunology, theoretical computer science, neural networks, and computational biology.
Biography
Sontag received his Licenciado degree from the mathematics department at the University of Buenos Aires in 1972, and his Ph.D. in Mathematics under Rudolf Kálmán at the Center for Mathematical Systems Theory at the University of Florida in 1976.
From 1977 to 2017, he was with the department of mathematics at Rutgers, The State University of New Jersey, where he was a Distinguished Professor of Mathematics as well as a Member of the Graduate Faculty of the Department of Computer Science and the Graduate Faculty of the Department of Electrical and Computer Engineering, and a Member of the Rutgers Cancer Institute of NJ. In addition, Dr. Sontag served as the head of the undergraduate Biomathematics Interdisciplinary Major, director of the Center for Quantitative Biology, and director of graduate studies of the Institute for Quantitative Biomedicine. In January 2018, Dr. Sontag was appointed as a University Distinguished Professor in the Department of Electrical and Computer Engineering and the Department of BioEngineering at Northeastern University, where he is also an affiliate member of the Department of Mathematics and the Department of Chemical Engineering. Since 2006, he has been a research affiliate at the Laboratory for Information and Decision Systems, MIT, and since 2018 he has been a member of the faculty in the Program in Therapeutic Science, Laboratory for Systems Pharmacology, at Harvard Medical School.
Eduardo Sontag has authored over five hundred research papers and monographs and book chapters in the above areas with about 60,000 citations and an h-index of 104. He is in the edit
|
https://en.wikipedia.org/wiki/Bioluminescence%20imaging
|
Bioluminescence imaging (BLI) is a technology developed over the past decades (1990s and onward) that allows for the noninvasive study of ongoing biological processes. Recently, bioluminescence tomography (BLT) has become possible and several systems have become commercially available. In 2011, PerkinElmer acquired one of the most popular lines of optical imaging systems with bioluminescence from Caliper Life Sciences.
Background
Bioluminescence is the process of light emission in living organisms. Bioluminescence imaging utilizes native light emission from one of several kinds of organisms which bioluminesce by means of luciferase enzymes. The three main sources are the North American firefly, the sea pansy (and related marine organisms), and bacteria like Photorhabdus luminescens and Vibrio fischeri. The DNA encoding the luminescent protein is incorporated into the laboratory animal either via a viral vector or by creating a transgenic animal. Rodent models of cancer spread, e.g. mouse models of breast cancer metastasis, can be studied through bioluminescence imaging.
Systems derived from the three groups above differ in key ways:
Firefly luciferase requires D-luciferin to be injected into the subject prior to imaging. The peak emission wavelength is about 560 nm. Due to the attenuation of blue-green light in tissues, the red-shift (compared to the other systems) of this emission makes detection of firefly luciferase much more sensitive in vivo.
Renilla luciferase (from the Sea pansy) requires its substrate, coelenterazine, to be injected as well. As opposed to luciferin, coelenterazine has a lower bioavailability (likely due to MDR1 transporting it out of mammalian cells). Additionally, the peak emission wavelength is about 480 nm.
Bacterial luciferase has an advantage in that the lux operon used to express it also encodes the enzymes required for substrate biosynthesis. Although originally believed to be functional only in prokaryotic organisms, where it is
|
https://en.wikipedia.org/wiki/Moisture%20sorption%20isotherm
|
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
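For reference, the BET equation in its commonly cited linearized form (stated here from general knowledge of BET theory, not quoted from this article) relates the amount adsorbed $v$ to the relative pressure $p/p_0$ (the relative humidity, in the case of water vapor) through the monolayer capacity $v_m$ and the BET constant $c$:

$$\frac{p/p_0}{v\,\left(1 - p/p_0\right)} = \frac{1}{v_m c} + \frac{c - 1}{v_m c} \cdot \frac{p}{p_0}$$

Plotting the left-hand side against $p/p_0$ gives a straight line from whose slope and intercept $v_m$ and $c$ can be recovered.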
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
|
https://en.wikipedia.org/wiki/Academic%20detailing
|
Academic detailing is "university or non-commercial-based educational outreach." The process involves face-to-face education of prescribers by trained health care professionals, typically pharmacists, physicians, or nurses. The goal of academic detailing is to improve prescribing of targeted drugs to be consistent with medical evidence from randomized controlled trials, which ultimately improves patient care and can reduce health care costs. A key component of non-commercial or university-based academic detailing programs is that they (academic detailers/clinical educators, management, staff, program developers, etc.) do not have any financial links to the pharmaceutical industry.
Academic detailing has been studied for over 25 years and has been shown to be effective at improving prescribing of targeted medications about 5% from baseline. Though it is primarily used to affect prescribing, it is also used to educate providers regarding other non-drug interventions, such as screening guidelines.
Organizations
Many academic detailing programs exist around the world. In the United States, university-based state programs exist in Vermont, Oregon and South Carolina. The nonprofit organization Alosa Health runs an academic detailing program in Pennsylvania, Massachusetts and Washington, DC called the Independent Drug Information Service (IDIS). The National Resource Center for Academic Detailing (NaRCAD), funded by the Agency for Healthcare Research and Quality (AHRQ) in 2010, was created to help organizations with limited resources to establish and improve their own programs and to create a network of academic detailing programs. The U.S. Department of Veterans Affairs Pharmacy Benefits Management pilot tested the National Academic Detailing Service in 2010 to enhance veterans' outcomes by empowering clinicians and promoting the use of evidence-based treatments delivered by clinical pharmacy specialists. After the pilot, in March 2015, the Interim Under Secret
|
https://en.wikipedia.org/wiki/TDtv
|
TDtv combines IPWireless commercial UMTS TD-CDMA solution and 3GPP Release 6 Multimedia Broadcast Multicast Service (MBMS) to deliver Mobile TV. TDtv operates in the universal unpaired 3G spectrum bands that are available worldwide at 1900 MHz and 2100 MHz. It allows UMTS operators to fully utilize their existing spectrum and base stations to offer mobile TV and multimedia packages without affecting other voice and data 3G services.
External links
NextWave Wireless (dead link due to the merger of the company)
Streaming television
|
https://en.wikipedia.org/wiki/Cooperative%20multitasking
|
Cooperative multitasking, also known as non-preemptive multitasking, is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process. Instead, in order to run multiple applications concurrently, processes voluntarily yield control periodically or when idle or logically blocked. This type of multitasking is called cooperative because all programs must cooperate for the scheduling scheme to work.
In this scheme, the process scheduler of an operating system is known as a cooperative scheduler whose role is limited to starting the processes and letting them return control back to it voluntarily.
This is related to the asynchronous programming approach.
Usage
Although it is rarely used as the primary scheduling mechanism in modern operating systems, it is widely used in memory-constrained embedded systems and in specific applications such as CICS or the JES2 subsystem. Cooperative multitasking was the primary scheduling scheme for 16-bit applications employed by Microsoft Windows before Windows 95 and Windows NT, and by the classic Mac OS. Windows 9x used non-preemptive multitasking for 16-bit legacy applications, and the PowerPC versions of Mac OS X prior to Leopard used it for classic applications. NetWare, which is a network-oriented operating system, used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used on RISC OS systems.
Cooperative multitasking is used with await in languages, such as JavaScript or Python, that feature a single-threaded event-loop in their runtime. This contrasts with operating system cooperative multitasking as await is scoped only to the function or block, meaning other tasks may run concurrently in other parts of the code while a single function is waiting. In most modern languages, async and await are implemented as coroutines.
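A minimal sketch in Python's asyncio illustrates the style (illustrative code, not from the article; any single-threaded event-loop runtime behaves similarly):

```python
import asyncio

# Two tasks share one thread; each voluntarily yields control at every
# `await`, which is the cooperative scheduling point.

async def worker(name: str, delay: float) -> None:
    for i in range(3):
        print(f"{name}: step {i}")
        await asyncio.sleep(delay)   # yield: the event loop may switch tasks here

async def main() -> None:
    # Run both workers concurrently; neither is ever preempted mid-statement.
    await asyncio.gather(worker("A", 0.1), worker("B", 0.15))

asyncio.run(main())
```

If a worker never reached an `await`, it would monopolize the loop, which is precisely the failure mode discussed under Problems below.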
Problems
As a cooperatively multitasked system relies on each process regularly giving up t
|
https://en.wikipedia.org/wiki/Egli%20model
|
The Egli model is a terrain model for radio frequency propagation. This model, which was first introduced by John Egli in his 1957 paper, was derived from real-world data on UHF and VHF television transmissions in several large cities. It predicts the total path loss for a point-to-point link. Typically used for outdoor line-of-sight transmission, this model provides the path loss as a single quantity.
Applicable to/under conditions
The Egli model is typically suitable for cellular communication scenarios where one antenna is fixed and another is mobile. The model is applicable to scenarios where the transmission has to go over an irregular terrain. However, the model does not take into account travel through some vegetative obstruction, such as trees or shrubbery.
Coverage
Frequency: The model is typically applied to VHF and UHF spectrum transmissions.
Mathematical formulation
The Egli model is formally expressed as:

$$P_R = G_B \, G_M \left( \frac{h_B \, h_M}{d^2} \right)^2 \left( \frac{40}{f} \right)^2 P_T$$

Where,
$P_R$ = Receive power [W]
$P_T$ = Transmit power [W]
$G_B$ = Absolute gain of the base station antenna
$G_M$ = Absolute gain of the mobile station antenna
$h_B$ = Height of the base station antenna [m]
$h_M$ = Height of the mobile station antenna [m]
$d$ = Distance from base station antenna [m]
$f$ = Frequency of transmission [MHz]
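Transcribing the formula directly into code gives a simple estimator. The following sketch assumes the units listed above and uses hypothetical example values; it is an illustration, not an official implementation:

```python
import math

def egli_received_power(pt_w, gb, gm, hb_m, hm_m, d_m, f_mhz):
    # P_R = G_B * G_M * (h_B*h_M / d^2)^2 * (40/f)^2 * P_T  (ratio form above)
    return pt_w * gb * gm * (hb_m * hm_m / d_m**2) ** 2 * (40.0 / f_mhz) ** 2

# Hypothetical link: 10 W transmitter, unity antenna gains, 30 m base and
# 3 m mobile antenna heights, 5 km range at 450 MHz.
pr = egli_received_power(10.0, 1.0, 1.0, 30.0, 3.0, 5000.0, 450.0)
loss_db = 10 * math.log10(10.0 / pr)   # total path loss in dB
print(f"received power: {pr:.3e} W, path loss: {loss_db:.1f} dB")
```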
Limitations
This model predicts the path loss as a whole and does not subdivide the loss into free space loss and other losses.
See also
Longley–Rice model
ITU terrain model
International Telecommunication Union
|
https://en.wikipedia.org/wiki/ITU%20terrain%20model
|
The ITU terrain loss model is a radio propagation model that provides a method to predict the median path loss for a telecommunication link. Developed on the basis of diffraction theory, this model predicts the path loss as a function of the height of path blockage and the First Fresnel zone for the transmission link.
Applicable to / under conditions
This model is applicable on any terrain.
This model accounts for obstructions in the middle of the telecommunication link, and therefore, is suitable to be used inside cities as well as in open fields.
Coverage
Frequency: Any
Distance: Any
Mathematical formulation
The model is mathematically formulated as:

$$A = 10 - 20\,C_N, \qquad C_N = \frac{h}{F_1}, \qquad h = h_L - h_O, \qquad F_1 = 17.3\,\sqrt{\frac{d_1 \, d_2}{f \, d}}$$

Where,
$A$ = Additional loss (in excess of free-space loss) due to diffraction (dB)
$C_N$ = Normalized terrain clearance
$h$ = The height difference (negative in the case that the LOS path is completely obscured) (m)
$h_L$ = Height of the line-of-sight link (m)
$h_O$ = Height of the obstruction (m)
$F_1$ = Radius of the first Fresnel zone (m)
$d_1$ = Distance of obstruction from one terminal (km)
$d_2$ = Distance of obstruction from the other terminal (km)
$f$ = Frequency of transmission (GHz)
$d$ = Distance from transmitter to receiver (km)
To use the model, one computes the additional loss A for each path obstruction. These losses are summed and then added to the predicted line-of-sight path loss, which is calculated using the Friis transmission equation or a similar theoretical or empirical model.
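A sketch of this procedure for a single obstruction follows (hypothetical link parameters; the free-space term from the Friis equation must still be added separately):

```python
import math

def fresnel_radius_m(d1_km, d2_km, f_ghz, d_km):
    # First Fresnel zone radius at the obstruction: F1 = 17.3*sqrt(d1*d2/(f*d)) [m]
    return 17.3 * math.sqrt(d1_km * d2_km / (f_ghz * d_km))

def itu_terrain_loss_db(h_los_m, h_obs_m, d1_km, d2_km, f_ghz, d_km):
    # Additional diffraction loss A = 10 - 20*C_N [dB]; per the model's
    # limitations, results below about 6 dB should be discarded.
    h = h_los_m - h_obs_m                # clearance; negative if path is obscured
    c_n = h / fresnel_radius_m(d1_km, d2_km, f_ghz, d_km)
    return 10.0 - 20.0 * c_n

# Hypothetical 10 km, 2.4 GHz link; obstruction 4 km from one terminal,
# reaching 5 m above the line of sight.
a = itu_terrain_loss_db(h_los_m=20.0, h_obs_m=25.0,
                        d1_km=4.0, d2_km=6.0, f_ghz=2.4, d_km=10.0)
print(f"additional diffraction loss: {a:.1f} dB")
```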
Limitations
This model is considered valid for losses over 15 dB and may be valid for losses as low as 6 dB. In the event that the loss is less than 6 dB or is negative (i.e., gain), this A-value should be discarded.
This model's output is only as good as the data on which it is based and the LOS model it is used to correct.
See also
Egli model
Radio propagation model
Longley–Rice model
|
https://en.wikipedia.org/wiki/Kauffman%E2%80%93White%20classification
|
The Kauffmann–White classification or Kauffmann and White classification scheme is a system that classifies the genus Salmonella into serotypes, based on surface antigens. It is named after Philip Bruce White and Fritz Kauffmann. First the "O" antigen type is determined based on oligosaccharides associated with lipopolysaccharide. Then the "H" antigen is determined based on flagellar proteins (H is short for the German Hauch meaning "breath" or "mist"; O stands for German ohne meaning "without"). Since Salmonella typically exhibit phase variation between two motile phenotypes, different "H" antigens may be expressed. Salmonella that can express only one "H" antigen phase are termed monophasic, whilst isolates that lack any "H" antigen expression are termed non-motile. Pathogenic strains of Salmonella Typhi, Salmonella Paratyphi C, and Salmonella Dublin carry the capsular "Vi" antigen (Vi for virulence), which is a special subtype of the capsule's K antigen (from the German word Kapsel meaning capsule).
Kauffmann–White classification for Salmonella
Salmonella (species) serotype (O antigen) : (H1 antigen) : (H2 antigen)
Examples
Salmonella enterica serotype Typhimurium 1,4,5,12:i:1,2
monophasic variant of Salmonella Typhimurium 1,4,5,12:i:-
Antigens in brackets are those that are rarely expressed in that serovar.
The cost of maintaining a full set of antisera precludes all but reference laboratories from performing a complete serological identification of salmonella isolates. Most laboratories stock only a limited range of antisera, and the choice of stock sera is largely determined by the nature of the specimens to be processed.
Representative stock of antisera
A common set of working antisera is shown below:
Laboratories that are likely to investigate typhoid also carry antiserum raised against the Vi antigen.
A set of "Rapid Diagnostic Sera" is also held and is used for determination of common speci
|
https://en.wikipedia.org/wiki/Moving%20magnet%20and%20conductor%20problem
|
The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest". However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer.
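A short worked sketch (standard textbook reasoning, not quoted from this article) shows why the two descriptions agree at low velocity. In the magnet frame, a charge $q$ moving with the conductor at velocity $\mathbf{v}$ feels the magnetic Lorentz force; in the conductor frame, the transformed fields (to first order in $v/c$) supply an electric field that produces the same force:

$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B} \quad \text{(magnet frame)}, \qquad \mathbf{E}' \approx \mathbf{v} \times \mathbf{B}, \quad \mathbf{F}' = q\,\mathbf{E}' = q\,\mathbf{v} \times \mathbf{B} \quad \text{(conductor frame)}$$

Either way the charge experiences the same force, so the measured current is frame-independent.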
This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity.
Introduction
Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem.
An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised th
|
https://en.wikipedia.org/wiki/Neuroectoderm
|
Neuroectoderm (or neural ectoderm or neural tube epithelium) consists of cells derived from the ectoderm. Formation of the neuroectoderm is the first step in the development of the nervous system. The neuroectoderm receives bone morphogenetic protein-inhibiting signals from proteins such as noggin, which leads to the development of the nervous system from this tissue. Histologically, these cells are classified as pseudostratified columnar cells.
After recruitment from the ectoderm, the neuroectoderm undergoes three stages of development: transformation into the neural plate, transformation into the neural groove (with associated neural folds), and transformation into the neural tube. After formation of the tube, the brain forms into three sections: the hindbrain, the midbrain, and the forebrain.
The types of neuroectoderm include:
Neural crest
pigment cells in the skin
ganglia of the autonomic nervous system
dorsal root ganglia
facial cartilage
aorticopulmonary septum of the developing heart and lungs
ciliary body of the eye
adrenal medulla
Neural tube
brain (rhombencephalon, mesencephalon and prosencephalon)
spinal cord and motor neurons
retina
posterior pituitary
See also
Neural plate
Neuroectodermal neoplasm
Neuroepithelial cell
|
https://en.wikipedia.org/wiki/Bretschneider%27s%20formula
|
In geometry, Bretschneider's formula is a mathematical expression for the area of a general quadrilateral.
It works on both convex and concave quadrilaterals (but not crossed ones), whether they are cyclic or not.
History
The German mathematician Carl Anton Bretschneider discovered the formula in 1842. The formula was also derived in the same year by the German mathematician Karl Georg Christian von Staudt.
Formulation
Bretschneider's formula is expressed as:

$$K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \cdot \cos^2\!\left(\frac{\alpha + \gamma}{2}\right)}$$

Here, $a$, $b$, $c$, $d$ are the sides of the quadrilateral, $s$ is the semiperimeter, and $\alpha$ and $\gamma$ are any two opposite angles, since $\cos^2\!\frac{\alpha+\gamma}{2} = \cos^2\!\frac{\beta+\delta}{2}$ as long as $\alpha+\beta+\gamma+\delta = 2\pi$.
Proof
Denote the area of the quadrilateral by $K$. Then we have

$$K = \text{area of } \triangle ADB + \text{area of } \triangle BDC = \frac{1}{2}\,ad \sin \alpha + \frac{1}{2}\,bc \sin \gamma.$$

Therefore

$$4K^2 = (ad)^2 \sin^2 \alpha + (bc)^2 \sin^2 \gamma + 2abcd \sin \alpha \sin \gamma.$$

The law of cosines implies that

$$a^2 + d^2 - 2ad \cos \alpha = b^2 + c^2 - 2bc \cos \gamma,$$

because both sides equal the square of the length of the diagonal $BD$. This can be rewritten as

$$\frac{(a^2 + d^2 - b^2 - c^2)^2}{4} = (ad \cos \alpha - bc \cos \gamma)^2.$$

Adding this to the above formula for $4K^2$ yields

$$4K^2 + \frac{(a^2 + d^2 - b^2 - c^2)^2}{4} = (ad)^2 + (bc)^2 - 2abcd \cos(\alpha + \gamma).$$

Note that $\cos(\alpha + \gamma) = 2\cos^2\!\frac{\alpha+\gamma}{2} - 1$ (a trigonometric identity true for all $\frac{\alpha+\gamma}{2}$).

Following the same steps as in Brahmagupta's formula, this can be written as

$$16K^2 = (a+b+c-d)(a+b-c+d)(a-b+c+d)(-a+b+c+d) - 16abcd \cos^2\!\frac{\alpha+\gamma}{2}.$$

Introducing the semiperimeter

$$s = \frac{a+b+c+d}{2},$$

the above becomes

$$16K^2 = 16(s-a)(s-b)(s-c)(s-d) - 16abcd \cos^2\!\frac{\alpha+\gamma}{2},$$

and Bretschneider's formula follows after taking the square root of both sides:

$$K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \cdot \cos^2\!\left(\frac{\alpha+\gamma}{2}\right)}.$$

The second form is given by using the cosine half-angle identity

$$\cos^2\!\frac{\alpha+\gamma}{2} = \frac{1 + \cos(\alpha+\gamma)}{2},$$

yielding

$$K = \sqrt{(s-a)(s-b)(s-c)(s-d) - \tfrac{1}{2}\,abcd\,\bigl(1 + \cos(\alpha+\gamma)\bigr)}.$$
Emmanuel García has used the generalized half angle formulas to give an alternative proof.
Related formulae
Bretschneider's formula generalizes Brahmagupta's formula for the area of a cyclic quadrilateral, which in turn generalizes Heron's formula for the area of a triangle.
The trigonometric adjustment in Bretschneider's formula for non-cyclicality of the quadrilateral can be rewritten non-trigonometrically in terms of the sides and the diagonals $p$ and $q$ to give

$$K = \frac{1}{4}\sqrt{4p^2 q^2 - \left(b^2 + d^2 - a^2 - c^2\right)^2}.$$
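As a quick numerical check, the formula can be evaluated directly; this sketch (not from the article) verifies it on a unit square, where the angle term vanishes:

```python
import math

def bretschneider_area(a, b, c, d, alpha_deg, gamma_deg):
    # K = sqrt((s-a)(s-b)(s-c)(s-d) - a*b*c*d*cos^2((alpha+gamma)/2))
    s = (a + b + c + d) / 2.0
    half = math.radians(alpha_deg + gamma_deg) / 2.0
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d)
                     - a * b * c * d * math.cos(half) ** 2)

# Unit square: opposite angles sum to 180 degrees, so cos(90 deg) = 0 and
# the correction term drops out, leaving the Brahmagupta/Heron value 1.
print(bretschneider_area(1, 1, 1, 1, 90, 90))   # -> 1.0
```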
Notes
|
https://en.wikipedia.org/wiki/Hesperocyon
|
Hesperocyon is an extinct genus of canids (subfamily Hesperocyoninae, family Canidae) that was endemic to North America, ranging from southern Canada to Colorado. It appeared during the Uintan to Bridgerian ages (NALMA) of the Mid-Eocene, 42.5 Ma to 31.0 Ma (AEO). Hesperocyon existed for approximately 11.5 million years.
Taxonomy
Hesperocyon was assigned to Borophagini by Wang et al. in 1999 and was the earliest of the canids to evolve after the Caniformia-Feliformia split some 42 million years ago. Fossil evidence dates Hesperocyon gregarius to at least 37 mya, but the oldest Hesperocyon has been dated at 39.74 mya from the Duchesnean North American land mammal age.
The Canidae subfamily Hesperocyoninae probably arose out of Hesperocyon to become the first of the three great dog groups: Hesperocyoninae (~40–30 Ma), Borophaginae (~36–2 Ma), and the Caninae lineage that led to the present-day canids (including grey wolves, foxes, coyotes, jackals and dogs). At least 28 known species of Hesperocyoninae evolved out of Hesperocyon, including those in the following five genera: Ectopocynus (32–19 mya), Osbornodon (32–18 mya), Paraenhydrocyon (20–25 mya), Mesocyon (31–15 mya) and Enhydrocyon (31–15 mya).
Morphology
This early canine looked more like a civet or a small raccoon. Its body and tail were long and flexible, while its limbs were weak and short. Still, the build of its ossicles and the distribution of its teeth showed it was a canid. It may have been an omnivore, unlike the hypercarnivorous Borophaginae that later split from this canid lineage.
Fossil record
The oldest fossil evidence was recovered from Saskatchewan dating from 42.5 mya to 31.0 Ma. The youngest fossil was recovered from the Dog Jaw Butte site, Goshen County, Wyoming dating to the Arikareean age (NALMA) of the Oligocene and Miocene 42.5 mya—31.0 Ma. (AEO).
|
https://en.wikipedia.org/wiki/Toilet%20brush
|
A toilet brush is a tool for cleaning a toilet bowl.
Generally the toilet brush is used with toilet cleaner or bleach. The toilet brush can be used to clean the upper area of the toilet, around the bowl. However, it cannot be used to clean very far into the toilet's U-bend and should not be used to clean the toilet seat.
In many cultures it is considered impolite to clean away biological debris without the use of chemical toilet cleaning products, as this can leave residue on the bristles. By contrast, others consider it impolite not to clean away biological debris immediately using the toilet brush.
A typical toilet brush consists of a hard bristled end, usually with a rounded shape and a long handle. Today toilet brushes are commonly made of plastic, but were originally made of wood with pig bristles or from the hair of horses, oxen, squirrels and badgers. The brush is typically stored in a holder, but in some cases completely hidden in a tube.
An electric toilet brush is a little different from a normal toilet brush. The bristles are fastened to the rotor of a motor, which works much like an electric toothbrush. The power supply is attached without any metal contact, via electromagnetic induction.
In recent years, there has been a general shift in design with a new emphasis on ergonomically designed brushes. Further design enhancements have included innovative holders that snap shut around the bristled end, thereby preventing the release of smells, germs and other unpleasantries.
Further development of the traditional toilet brush focus on the risk of germ incubation within the brush holder. A toilet brush has been patented which introduces a reservoir of anti-bacterial fluid, allowing the brush to be dipped and sanitized after each use.
The first successful artificial Christmas tree was made from brush bristles by Addis using the same machinery used to manufacture its toilet brushes. The trees were made from the same animal-hair bristles used in the b
|
https://en.wikipedia.org/wiki/Pancreatic%20veins
|
In human anatomy, the pancreatic veins consist of several small blood vessels which drain the body and tail of the pancreas, and open into the trunk of the great pancreatic vein.
|
https://en.wikipedia.org/wiki/Pancreaticoduodenal%20veins
|
The pancreaticoduodenal veins accompany their corresponding arteries: the superior pancreaticoduodenal artery and the inferior pancreaticoduodenal artery; the lower of the two frequently joins the right gastroepiploic vein.
|
https://en.wikipedia.org/wiki/Right%20gastroepiploic%20vein
|
The right gastroepiploic vein (right gastroomental vein) is a blood vessel that drains blood from the greater curvature and left part of the body of the stomach into the superior mesenteric vein. It runs from left to right along the greater curvature of the stomach between the two layers of the greater omentum, along with the right gastroepiploic artery.
As a tributary of the superior mesenteric vein, it is a part of the hepatic portal system.
|
https://en.wikipedia.org/wiki/Pancreatic%20branches%20of%20splenic%20artery
|
The pancreatic branches or pancreatic arteries are numerous small vessels derived from the splenic artery as it runs behind the upper border of the pancreas, supplying its body and tail.
One of these, larger than the rest, is sometimes given off near the tail of the pancreas; it runs from left to right near the posterior surface of the gland, following the course of the pancreatic duct, and is called the greater pancreatic artery.
These vessels anastomose with the pancreatic branches of the superior and inferior pancreaticoduodenal artery that are given off by the gastroduodenal artery and superior mesenteric artery respectively.
Branches
There are four main pancreatic branches of the splenic artery:
Greater pancreatic artery
Dorsal pancreatic artery
Inferior pancreatic artery (aka transverse pancreatic artery)
Caudal pancreatic artery
|
https://en.wikipedia.org/wiki/Obturator%20veins
|
The obturator vein begins in the upper portion of the adductor region of the thigh and enters the pelvis through the upper part of the obturator foramen, in the obturator canal.
It runs backward and upward on the lateral wall of the pelvis below the obturator artery, and then passes between the ureter and the hypogastric artery, to end in the hypogastric vein.
It has an anterior and posterior branch (similar to obturator artery).
Additional images
|
https://en.wikipedia.org/wiki/Plantar%20digital%20veins
|
The plantar digital veins arise from plexuses on the plantar surfaces of the digits, and, after sending intercapitular veins to join the dorsal digital veins, unite to form four metatarsal veins.
|
https://en.wikipedia.org/wiki/Accessory%20cephalic%20vein
|
The accessory cephalic vein is a variable vein that passes along the radial border of the forearm to join the cephalic vein distal/inferior to the elbow. It may arise from a dorsal forearm venous plexus, or from the ulnar/medial side of the dorsal venous network of hand. In some cases the accessory cephalic springs from the cephalic above the wrist and joins it again higher up. A large oblique branch frequently connects the basilic and cephalic veins on the back of the forearm.
See also
Cephalic vein
|
https://en.wikipedia.org/wiki/Internal%20cerebral%20veins
|
The internal cerebral veins are two veins included in the group of deep cerebral veins that drain the deep parts of the hemispheres; each internal cerebral vein is formed near the interventricular foramina by the union of the superior thalamostriate vein and the superior choroid vein.
They run backward parallel with one another, between the layers of the tela chorioidea of the third ventricle, and beneath the splenium of the corpus callosum, where they unite to form a short trunk, the great cerebral vein of Galen; just before their union each receives the corresponding basal vein.
|
https://en.wikipedia.org/wiki/Cerebral%20veins
|
In human anatomy, the cerebral veins are blood vessels in the cerebral circulation which drain blood from the cerebrum of the human brain. They are divisible into external (superficial cerebral veins) and internal (internal cerebral veins) groups according to the outer or inner parts of the hemispheres they drain into.
External veins
The external cerebral veins known as the superficial cerebral veins are the superior cerebral veins, inferior cerebral veins, and middle cerebral veins. The superior cerebral veins on the upper side surfaces of the hemispheres drain into the superior sagittal sinus.
The superior cerebral veins include the superior anastomotic vein.
Internal veins
The internal cerebral veins are also known as the deep cerebral veins and drain the deep internal parts of the hemispheres.
|
https://en.wikipedia.org/wiki/Cerebellar%20veins
|
The cerebellar veins are veins which drain the cerebellum. They consist of the superior cerebellar veins and the inferior cerebellar veins (dorsal cerebellar veins). The superior cerebellar veins drain to the straight sinus and the internal cerebral veins. The inferior cerebellar veins drain to the transverse sinus, the superior petrosal sinus, and the occipital sinus.
Structure
The superior cerebellar veins pass partly forward and medialward, across the superior cerebellar vermis. They end in the straight sinus, and the internal cerebral veins, partly lateralward to the transverse and superior petrosal sinuses.
The inferior cerebellar veins are larger. They end in the transverse sinus, the superior petrosal sinus, and the occipital sinus.
Clinical significance
The cerebellar veins may be affected by infarction or thrombosis. They may be the draining site of abnormal fistulas.
Additional images
|
https://en.wikipedia.org/wiki/Fibroepithelial%20neoplasm
|
A fibroepithelial neoplasm (or tumor) is a biphasic tumor consisting of epithelial tissue and stromal or mesenchymal tissue. Such tumors may be benign or malignant.
Examples include:
Brenner tumor of the ovary
Fibroadenoma of the breast
Phyllodes tumor of the breast
|
https://en.wikipedia.org/wiki/Regeneron%20Pharmaceuticals
|
Regeneron Pharmaceuticals, Inc. is an American biotechnology company headquartered in Westchester County, New York. The company was founded in 1988. Originally focused on neurotrophic factors and their regenerative capabilities, which gave rise to its name, the company then branched out into the study of both cytokine and tyrosine kinase receptors, leading to its first product, a VEGF-trap.
Company history
The company was founded by CEO Leonard Schleifer and scientist George Yancopoulos in 1988.
Regeneron has developed aflibercept, a VEGF inhibitor, and rilonacept, an interleukin-1 blocker. VEGF is a protein that normally stimulates the growth of blood vessels, and interleukin-1 is a protein that is normally involved in inflammation.
On March 26, 2012, Bloomberg announced that Sanofi and Regeneron were in development of a new drug that would help reduce cholesterol up to 72% more than its competitors. The new drug would target the PCSK9 gene.
In July 2015, the company announced a new global collaboration with Sanofi to discover, develop, and commercialize new immuno-oncology drugs, which could generate more than $2 billion for Regeneron, with $640 million upfront, $750 million for proof-of-concept data, and $650 million from the development of REGN2810. REGN2810 was later named cemiplimab. In 2019, Regeneron Pharmaceuticals was named the 7th-best stock of the 2010s, with a total return of 1,457%. Regeneron Pharmaceuticals was home to the two highest-paid pharmaceutical executives as of 2020.
In October 2017, Regeneron made a deal with the Biomedical Advanced Research and Development Authority (BARDA) that the U.S. government would fund 80% of the costs for Regeneron to develop and manufacture antibody-based medications, which subsequently, in 2020, included their COVID-19 treatments, and Regeneron would retain the right to set prices and control production. This deal was criticized in The New York Times. Such deals are not unusual for routin
|
https://en.wikipedia.org/wiki/Neurotrophic%20factors
|
Neurotrophic factors (NTFs) are a family of biomolecules – nearly all of which are peptides or small proteins – that support the growth, survival, and differentiation of both developing and mature neurons. Most NTFs exert their trophic effects on neurons by signaling through tyrosine kinases, usually a receptor tyrosine kinase. In the mature nervous system, they promote neuronal survival, induce synaptic plasticity, and modulate the formation of long-term memories. Neurotrophic factors also promote the initial growth and development of neurons in the central nervous system and peripheral nervous system, and they are capable of regrowing damaged neurons in test tubes and animal models. Some neurotrophic factors are also released by the target tissue in order to guide the growth of developing axons. Most neurotrophic factors belong to one of three families: (1) neurotrophins, (2) glial cell-line derived neurotrophic factor family ligands (GFLs), and (3) neuropoietic cytokines. Each family has its own distinct cell signaling mechanisms, although the cellular responses elicited often do overlap.
Currently, neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration. In studies, neurotrophic factors are normally used in conjunction with other techniques such as biological and physical cues created by the addition of cells and specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows for the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems, they are loosely immobilized such that they can be selectively released at specified times and in specified amounts.
List of neurotrophic factors
Although more information is being discovered about neurotrophic factors, their classification is based on different cellular mechanis
|
https://en.wikipedia.org/wiki/Bromine%20monochloride
|
Bromine monochloride, also called bromine(I) chloride, bromochloride, and bromine chloride, is an interhalogen inorganic compound with chemical formula BrCl. It is a very reactive golden yellow gas with boiling point 5 °C and melting point −66 °C. Its CAS number is 13863-41-7, and its EINECS number is 237-601-4. It is a strong oxidizing agent. Its molecular structure in the gas phase was determined by microwave spectroscopy; the Br-Cl bond has a length of re = 2.1360376(18) Å. Its crystal structure was determined by single crystal X-ray diffraction; the bond length in the solid state is 2.179(2) Å and the shortest intermolecular interaction is r(Cl···Br) = 3.145(2) Å.
Uses
Bromine monochloride is used in analytical chemistry to determine low levels of mercury, quantitatively oxidizing mercury in the sample to the Hg(II) state.
A common use of bromine monochloride is as an algaecide, fungicide, and disinfectant of industrial recirculating cooling water systems.
Addition of bromine monochloride is used in some types of Li-SO2 batteries to increase voltage and energy density.
See also
List of highly toxic gases
Interhalogen compounds
|
https://en.wikipedia.org/wiki/University%20of%20Michigan%20Biological%20Station
|
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
Since the 2000s, UMBS has increasingly focused on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two-and-a-half-mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River, supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS, is largely within and along the boundary of the University of Michigan
|
https://en.wikipedia.org/wiki/MAPK/ERK%20pathway
|
The MAPK/ERK pathway (also known as the Ras-Raf-MEK-ERK pathway) is a chain of proteins in the cell that communicates a signal from a receptor on the surface of the cell to the DNA in the nucleus of the cell.
The signal starts when a signaling molecule binds to the receptor on the cell surface and ends when the DNA in the nucleus expresses a protein and produces some change in the cell, such as cell division. The pathway includes many proteins, such as mitogen-activated protein kinases (MAPKs), originally called extracellular signal-regulated kinases (ERKs), which communicate by adding phosphate groups to a neighboring protein (phosphorylating it), thereby acting as an "on" or "off" switch.
When one of the proteins in the pathway is mutated, it can become stuck in the "on" or "off" position, a necessary step in the development of many cancers. In fact, components of the MAPK/ERK pathway were first discovered in cancer cells, and drugs that reverse the "on" or "off" switch are being investigated as cancer treatments.
Background
The signal that starts the MAPK/ERK pathway is the binding of extracellular mitogen to a cell surface receptor. This allows a Ras protein (a Small GTPase) to swap a GDP molecule for a GTP molecule, flipping the "on/off switch" of the pathway. The Ras protein can then activate MAP3K (e.g., Raf), which activates MAP2K, which activates MAPK. Finally, MAPK can activate a transcription factor, such as Myc. This process is described in more detail below.
Ras activation
Receptor-linked tyrosine kinases, such as the epidermal growth factor receptor (EGFR), are activated by extracellular ligands, such as the epidermal growth factor (EGF). Binding of EGF to the EGFR activates the tyrosine kinase activity of the cytoplasmic domain of the receptor. The EGFR becomes phosphorylated on tyrosine residues. Docking proteins such as GRB2 contain an SH2 domain that binds to the phosphotyrosine residues of the activated receptor. GRB2 binds to the guanine nuc
|
https://en.wikipedia.org/wiki/Echelon%20Corporation
|
Echelon Corporation was an American company which designed control networks to connect machines and other electronic devices, for the purposes of sensing, monitoring and control. Echelon is now owned by Adesto Technologies.
History
Echelon was founded in February 1988 in Palo Alto, California by Clifford "Mike" Markkula Jr. The chief executive was M. Kenneth Oshman.
Echelon's LonWorks platform for control networking was released in 1990 for use in the building, industrial, transportation, and home automation markets. At their initial public offering on March 31, 1998, their shares were listed on the NASDAQ exchange with the symbol ELON.
Started in 2003, Echelon's Networked Energy Services system was an open metering service. Echelon provides the underlying network technology for the world's largest Advanced Metering Infrastructure (AMI) in Italy with over 27 million connected electricity meters.
Based on the experiences with this installation, Echelon developed the NES (Networked Energy Services) System (including smart meters, data concentrators and a head-end data collection system) in October 2014 with about 3.5 million devices installed.
In August 2014, after quarterly revenues dropped from $24.8 million to $15 million, Echelon announced it was leaving the smart-grid business, shifting its entire corporate focus to the Internet of things as a market for its technology. Echelon committed to only support existing customers, but not grow the grid business, and to potentially seek the sale of its grid business.
Echelon is based in Santa Clara, California, with international offices in China, France, Germany, Italy, Hong Kong, Japan, Korea, The Netherlands, and the United Kingdom.
On June 29, 2018, Adesto Technologies announced its intention to acquire Echelon for $45 million. The acquisition was completed on September 14, 2018.