https://en.wikipedia.org/wiki/Ribosome%20biogenesis
|
Ribosome biogenesis is the process of making ribosomes. In prokaryotes, this process takes place in the cytoplasm with the transcription of many ribosome gene operons. In eukaryotes, it takes place both in the cytoplasm and in the nucleolus. It involves the coordinated function of over 200 proteins in the synthesis and processing of the three prokaryotic or four eukaryotic rRNAs, as well as assembly of those rRNAs with the ribosomal proteins. Most of the proteins involved fall into various energy-consuming enzyme families, including ATP-dependent RNA helicases, AAA-ATPases, GTPases, and kinases. About 60% of a cell's energy is spent on ribosome production and maintenance.
Ribosome biogenesis is a very tightly regulated process, and it is closely linked to other cellular activities like growth and division.
Some have speculated that in the origin of life, ribosome biogenesis predates cells, and that genes and cells evolved to enhance the reproductive capacity of ribosomes.
Ribosomes
Ribosomes are the macromolecular machines that are responsible for mRNA translation into proteins. The eukaryotic ribosome, also called the 80S ribosome, is made up of two subunits – the large 60S subunit (which contains the 25S [in plants] or 28S [in mammals], 5.8S, and 5S rRNA and 46 ribosomal proteins) and a small 40S subunit (which contains the 18S rRNA and 33 ribosomal proteins). The ribosomal proteins are encoded by ribosomal genes.
Prokaryotes
There are 52 genes that encode the ribosomal proteins, and they can be found in 20 operons within prokaryotic DNA. Regulation of ribosome synthesis hinges on the regulation of the rRNA itself.
First, a reduction in aminoacyl-tRNA will cause the prokaryotic cell to respond by lowering transcription and translation. This occurs through a series of steps, beginning with stringent factors binding to ribosomes and catalyzing the reaction:
GTP + ATP → pppGpp + AMP
The γ-phosphate is then removed, and ppGpp will bind to and inhibit RNA polymerase.
|
https://en.wikipedia.org/wiki/Ramberg%E2%80%93Osgood%20relationship
|
The Ramberg–Osgood equation was created to describe the nonlinear relationship between stress and strain—that is, the stress–strain curve—in materials near their yield points. It is especially applicable to metals that harden with plastic deformation (see work hardening), showing a smooth elastic–plastic transition. As it is a phenomenological model, checking the fit of the model against actual experimental data for the particular material of interest is essential.
In its original form, the equation for strain (deformation) is
$$\varepsilon = \frac{\sigma}{E} + K \left( \frac{\sigma}{E} \right)^{n}$$
where
$\varepsilon$ is strain,
$\sigma$ is stress,
$E$ is Young's modulus, and
$K$ and $n$ are constants that depend on the material being considered. In this form, $K$ and $n$ are not the same as the constants commonly seen in the Hollomon equation.
The equation essentially assumes that the elastic portion of the stress–strain curve, $\varepsilon_e = \sigma/E$, can be modeled with a line, while the plastic portion, $\varepsilon_p = K(\sigma/E)^n$, can be modeled with a power law. The elastic and plastic components are summed to find the total strain.
The first term on the right side, $\sigma/E$, is equal to the elastic part of the strain, while the second term, $K(\sigma/E)^n$, accounts for the plastic part, the parameters $K$ and $n$ describing the hardening behavior of the material. Introducing the yield strength of the material, $\sigma_0$, and defining a new parameter, $\alpha$, related to $K$ as $\alpha = K(\sigma_0/E)^{n-1}$, it is convenient to rewrite the term on the extreme right side as follows:
$$K \left( \frac{\sigma}{E} \right)^{n} = \alpha \, \frac{\sigma}{E} \left( \frac{\sigma}{\sigma_0} \right)^{n-1}$$
Replacing it in the first expression, the Ramberg–Osgood equation can be written as
$$\varepsilon = \frac{\sigma}{E} + \alpha \, \frac{\sigma}{E} \left( \frac{\sigma}{\sigma_0} \right)^{n-1}$$
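The final form above is straightforward to evaluate numerically. The following is a minimal sketch; the material values are illustrative placeholders, not fitted constants for any particular alloy.

```python
import numpy as np

def ramberg_osgood_strain(sigma, E, sigma_0, alpha, n):
    """Total strain: elastic term sigma/E plus the power-law plastic term."""
    sigma = np.asarray(sigma, dtype=float)
    elastic = sigma / E
    plastic = alpha * (sigma / E) * (sigma / sigma_0) ** (n - 1)
    return elastic + plastic

# Illustrative values only: E = 200 GPa, yield strength 250 MPa.
eps = ramberg_osgood_strain(sigma=250e6, E=200e9, sigma_0=250e6, alpha=0.5, n=5)
print(f"total strain at the yield stress: {float(eps):.6f}")  # 0.00125 elastic + 0.000625 plastic
```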
Hardening behavior and yield offset
In the last form of the Ramberg–Osgood model, the hardening behavior of the material depends on the material constants $\alpha$ and $n$. Due to the power-law relationship between stress and plastic strain, the Ramberg–Osgood model implies that plastic strain is present even for very low levels of stress. Nevertheless, for low applied stresses and for the commonly used values of the material constants $\alpha$ and $n$, the plastic strain remains negligible compared to the elastic strain. On the other hand
|
https://en.wikipedia.org/wiki/Koszul%E2%80%93Tate%20resolution
|
In mathematics, a Koszul–Tate resolution or Koszul–Tate complex of the quotient ring R/M is a projective resolution of it as an R-module which also has a structure of a dg-algebra over R, where R is a commutative ring and M ⊂ R is an ideal. They were introduced by Tate (1957) as a generalization of the Koszul resolution for the quotient R/(x1, ..., xn) of R by a regular sequence of elements. The construction was later used to calculate BRST cohomology. The differential of this complex is called the Koszul–Tate derivation or Koszul–Tate differential.
Construction
First suppose for simplicity that all rings contain the rational numbers Q. Assume we have a graded supercommutative ring X, so that
$ab = (-1)^{\deg(a)\deg(b)}\,ba$,
with a differential d, with
$d(ab) = d(a)\,b + (-1)^{\deg(a)}\,a\,d(b)$,
and x ∈ X is a homogeneous cycle (dx = 0). Then we can form a new ring
Y = X[T]
of polynomials in a variable T, where the differential is extended to T by
dT=x.
(The polynomial ring is understood in the super sense, so if T has odd degree then T2 = 0.) The result of adding the element T is to kill off the element of the homology of X represented by x, and Y is still a supercommutative ring with derivation.
A Koszul–Tate resolution of R/M can be constructed as follows. We start with the commutative ring R (graded so that all elements have degree 0). Then add new variables as above of degree 1 to kill off all elements of the ideal M in the homology. Then keep on adding more and more new variables (possibly an infinite number) to kill off all homology of positive degree. We end up with a supercommutative graded ring with derivation d whose homology is just R/M.
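As a concrete illustration (a standard special case, not taken from the text above): if M is generated by a regular sequence, the process stops after the first step and reproduces the ordinary Koszul resolution. For R = Q[x, y] and M = (x, y), adjoin two odd variables T1, T2 of degree 1 with
$$d(T_1) = x, \qquad d(T_2) = y, \qquad d(T_1 T_2) = x\,T_2 - y\,T_1.$$
Since (x, y) is a regular sequence, Y = R[T1, T2] already has homology R/(x, y) = Q concentrated in degree 0, so no further variables need to be added.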
If we are not working over a field of characteristic 0, the construction above still works, but it is usually neater to use the following variation of it. Instead of using polynomial rings X[T], one can use a "polynomial ring with divided powers" X〈T〉, which has a basis of elements
$T^{(i)}$ for $i \ge 0$,
where
$T^{(i)} T^{(j)} = \frac{(i+j)!}{i!\,j!}\, T^{(i+j)}.$
|
https://en.wikipedia.org/wiki/Gallium%28III%29%20selenide
|
Gallium(III) selenide (Ga2Se3) is a chemical compound. It has a defect sphalerite (cubic form of ZnS) structure. It is a p-type semiconductor.
It can be formed by direct union of the elements. It hydrolyses slowly in water and quickly in mineral acids to form toxic hydrogen selenide gas. The reducing capabilities of the selenide ion make it vulnerable to oxidizing agents. It is therefore advised that it not come into contact with bases.
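For the hydrolysis mentioned above, a plausible balanced equation (assuming gallium hydroxide as the gallium-containing product, which the text does not state) is
$$\mathrm{Ga_2Se_3} + 6\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{Ga(OH)_3} + 3\,\mathrm{H_2Se}$$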
References
Selenides
Gallium compounds
Semiconductor materials
|
https://en.wikipedia.org/wiki/Saybolt%20universal%20viscosity
|
Saybolt universal viscosity (SUV), and the related Saybolt FUROL viscosity (SFV), are specific standardised tests producing measures of kinematic viscosity. FUROL is an acronym for fuel and road oil. Saybolt universal viscosity is specified by ASTM D2161. Both tests have been rendered obsolete by other measures of kinematic viscosity, but their results are still quoted widely in technical literature.
In both tests, the time taken for 60 ml of the liquid, held at a specific temperature, to flow through a calibrated tube is measured, using a Saybolt viscometer. The Saybolt universal viscosity test is performed at 100 °F (37.8 °C) or, more recently, at 40 °C. The Saybolt FUROL viscosity test is performed at 122 °F (50 °C) or, more recently, at 50 °C, and uses a larger calibrated tube. This provides for the testing of more viscous fluids, with the result being approximately one tenth of the universal viscosity.
The test results are specified in seconds (s), more often than not referencing the test: Saybolt universal seconds (SUS); seconds, Saybolt universal (SSU); seconds, Saybolt universal viscosity (SSUV); Saybolt FUROL seconds (SFS); and seconds, Saybolt FUROL (SSF). The precise temperature at which the test is performed is often specified as well.
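For rough conversions to kinematic viscosity in centistokes, a widely quoted empirical fit exists; the coefficients below are indicative values from general references, not the official ASTM D2161 conversion tables.

```python
def sus_to_centistokes(t_sus: float) -> float:
    """Approximate kinematic viscosity (cSt) from Saybolt universal seconds.

    A widely quoted empirical fit near 100 °F; treat the coefficients as
    indicative only, not as the official ASTM D2161 tables.
    """
    if t_sus <= 100.0:
        return 0.226 * t_sus - 195.0 / t_sus
    return 0.220 * t_sus - 135.0 / t_sus

print(f"{sus_to_centistokes(60.0):.1f} cSt")  # a 60 s flow time -> ~10.3 cSt
```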
References
External links
Online viscosity converter
Measurement apparatus
Useful Saybolt reference
About Saybolt units
Viscosity
Petroleum engineering
Units of measurement
|
https://en.wikipedia.org/wiki/Executor%20%28software%29
|
Executor is a software application that allows Motorola 68000-based classic Mac OS programs to be run on various x86-based operating systems. Executor was created by ARDI (Abacus Research and Development, Inc.). Development of Executor was indefinitely postponed in 2005; in 2008 it was made available as open-source software.
Overview
Unlike true Macintosh emulators, Executor requires no startup ROM images or other Apple intellectual property. Executor, much like Wine for running Windows applications on Unix-like platforms, translates Macintosh Toolbox API calls and QuickDraw routines into equivalent Win32 or POSIX API calls. The MS-DOS version of Executor runs using the CWSDPMI protected-mode DOS extender.
Executor translates 68k big-endian binary code into x86 little-endian binary code. Executor can only run Macintosh programs designed to run on 68000-based Macintosh hardware. Executor can mimic either Macintosh System 7.0.0 or, for older applications that are incompatible with System 7.0.0, System 6.0.7.
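The byte-order problem is easy to see in isolation. This sketch (not Executor's actual code) reads the 68k RTS opcode, 0x4E75, from a big-endian byte stream on a little-endian host:

```python
import struct

# The 68k instruction stream is big-endian; an emulator on a little-endian
# x86 host must byte-swap each word it fetches. struct makes the byte order
# explicit: '>' reads big-endian, '<' reads little-endian.
code = bytes([0x4E, 0x75])            # 0x4E75 is the 68k RTS opcode
(wrong,) = struct.unpack("<H", code)  # naive little-endian fetch
(right,) = struct.unpack(">H", code)  # correct big-endian fetch
print(hex(wrong), hex(right))         # 0x754e 0x4e75
```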
Due to the GUI-oriented nature of classic Mac OS applications, Executor has its own GUI environment known as Browser. Browser loosely mimics the classic Mac OS desktop and the Finder application, without features such as the trash can or Mac OS control panels. The default Apple menu does not exist in Browser either, but is replaced with a rough equivalent; running Mac applications will have Apple menu functions available. Executor does not support networking of any type, including AppleTalk. Executor also lacks the ability to run components (such as extensions or control panels) that are highly integrated with classic Mac OS versions. Due to the differences between the actual Mac OS ROM and the emulation provided by Executor, other compatibility issues exist. For example, Heise magazine reported issues with installation of many programs, and with running early versions of StarWriter and Adobe PageMill. However, once insta
|
https://en.wikipedia.org/wiki/SGI%20VPro
|
VPro, also known as Odyssey, is a computer graphics architecture for Silicon Graphics workstations. First released on the Octane2, it was subsequently used on the Fuel, Tezro workstations and the Onyx visualization systems, where it was branded InfinitePerformance.
VPro provides some advanced capabilities such as per-pixel lighting, also known as Phong shading (through the SGIX_fragment_lighting extension), and 48-bit RGBA color. On the other hand, later designs suffered from constrained bandwidth and poorer texture mapping performance compared to competing GPU solutions, which rapidly caught up to SGI in the market.
Four different Odyssey-based VPro graphics board revisions existed, designated V6, V8, V10 and V12. The first series were the V6 and V8, with 32 MB and 128 MB of RAM respectively; the V10 and V12 had double the geometry performance of the older V6/V8, but were otherwise similar. The V6 and V10 can have up to 8 MB of RAM allocated to textures, while the V8 and V12 can have up to 108 MB of RAM used for textures. The V10 and V12 boards used in Fuel, Tezro and Onyx 3000 computers use a different XIO connector than the cards used in Octane2 workstations.
The VPro graphics subsystem consists of an SGI proprietary chip set and associated software. The chip set consists of the buzz ASIC, the pixel blaster and jammer (PB&J) ASIC, and associated SDRAM. The buzz ASIC is a single-chip graphics pipeline. It operates at 251 MHz and contains on-chip SRAM. The buzz ASIC has three interfaces:
Host (16-bit, 400-MHz peer-to-peer XIO link)
SDRAM (The SDRAM is 32 MB (V6 or V10) or 128 MB (V8 or V12); the memory bus operates at half the speed of the buzz ASIC.)
PB&J ASIC
As a result of a patent infringement settlement, SGI acquired rights to some of the Nvidia Quadro GPUs and released VPro-branded products (the V3, VR3, V7 and VR7) based on these (the GeForce 256, Quadro, Quadro 2 MXR, and Quadro 2 Pro, respectively). These cards share nothing with the original Odyssey line
|
https://en.wikipedia.org/wiki/Indium%28III%29%20oxide
|
Indium(III) oxide (In2O3) is a chemical compound, an amphoteric oxide of indium.
Physical properties
Crystal structure
Amorphous indium oxide is insoluble in water but soluble in acids, whereas crystalline indium oxide is insoluble in both water and acids. The crystalline form exists in two phases, the cubic (bixbyite type) and rhombohedral (corundum type). Both phases have a band gap of about 3 eV. The parameters of the cubic phase are listed in the infobox.
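A band gap of about 3 eV places the absorption edge in the near ultraviolet, which one can check with the standard photon-energy relation (a quick illustrative calculation, not from the article):

```python
# lambda = h*c / E; with E in electronvolts and lambda in nanometres this
# reduces to lambda ≈ 1239.84 / E.
def gap_to_wavelength_nm(e_gap_ev: float) -> float:
    return 1239.84 / e_gap_ev

print(f"{gap_to_wavelength_nm(3.0):.0f} nm")  # ~413 nm: a ~3 eV gap absorbs in the violet/UV
```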
The rhombohedral phase is produced at high temperatures and pressures or when using non-equilibrium growth methods. It has space group $R\bar{3}c$ (No. 167), Pearson symbol hR30, a = 0.5487 nm, b = 0.5487 nm, c = 1.4510 nm, Z = 6 and a calculated density of 7.31 g/cm³.
Conductivity and magnetism
Thin films of chromium-doped indium oxide (In2−xCrxO3) form a magnetic semiconductor displaying high-temperature ferromagnetism, single-phase crystal structure, and semiconductor behavior with a high concentration of charge carriers. It has possible applications in spintronics as a material for spin injectors.
Thin polycrystalline films of indium oxide doped with Zn2+ are highly conductive (conductivity ~10⁵ S/m) and even superconductive at liquid-helium temperatures. The superconducting transition temperature Tc depends on the doping and film structure and is below 3.3 K.
Synthesis
Bulk samples can be prepared by heating indium(III) hydroxide or the nitrate, carbonate or sulfate.
Thin films of indium oxide can be prepared by sputtering of indium targets in an argon/oxygen atmosphere. They can be used as diffusion barriers ("barrier metals") in semiconductors, e.g. to inhibit diffusion between aluminium and silicon.
Monocrystalline nanowires can be synthesized from indium oxide by laser ablation, allowing precise diameter control down to 10 nm. Field-effect transistors have been fabricated from such nanowires. Indium oxide nanowires can serve as sensitive and specific redox protein sensors. The sol–gel method is another way to prepare nanowi
|
https://en.wikipedia.org/wiki/Indium%28III%29%20sulfide
|
Indium(III) sulfide (Indium sesquisulfide, Indium sulfide (2:3), Indium (3+) sulfide) is the inorganic compound with the formula In2S3.
It has a "rotten egg" odor characteristic of sulfur compounds, and produces hydrogen sulfide gas when reacted with mineral acids.
Three different structures ("polymorphs") are known: yellow α-In2S3 has a defect cubic structure, red β-In2S3 has a defect spinel (tetragonal) structure, and γ-In2S3 has a layered structure. The red β form is considered to be the most stable form at room temperature, although the yellow form may be present depending on the method of production. In2S3 is attacked by acids and by sulfide. It is slightly soluble in Na2S.
Indium sulfide was the first indium compound ever described, being reported in 1863. Reich and Richter determined the existence of indium as a new element from the sulfide precipitate.
Structure and properties
In2S3 features tetrahedral In(III) centers linked to four sulfido ligands.
α-In2S3 has a defect cubic structure. The polymorph undergoes a phase transition at 420 °C and converts to the spinel structure of β-In2S3. Another phase transition at 740 °C produces the layered γ-In2S3 polymorph.
β-In2S3 has a defect spinel structure. The sulfide anions are closely packed in layers, with octahedrally-coordinated In(III) cations present within the layers, and tetrahedrally-coordinated In(III) cations between them. A portion of the tetrahedral interstices are vacant, which leads to the defects in the spinel.
β-In2S3 has two subtypes. In the T-In2S3 subtype, the tetragonally-coordinated vacancies are in an ordered arrangement, whereas the vacancies in C-In2S3 are disordered. The disordered subtype of β-In2S3 shows activity for photocatalytic H2 production with a noble metal cocatalyst, but the ordered subtype does not.
β-In2S3 is an n-type semiconductor with an optical band gap of 2.1 eV. It has been proposed to replace the hazardous cadmium sulfide, CdS, as a buffer layer in so
|
https://en.wikipedia.org/wiki/Indium%28III%29%20telluride
|
Indium(III) telluride (In2Te3) is an inorganic compound. A black solid, it is sometimes described as an intermetallic compound, because it has properties that are both metal-like and salt-like. It is a semiconductor that has attracted occasional interest for its thermoelectric and photovoltaic applications; however, no applications have been implemented commercially.
Preparation and reactions
A conventional route entails heating the elements in a sealed tube:
2 In + 3 Te → In2Te3
Indium(III) telluride reacts with strong acids to produce hydrogen telluride.
Further reading
References
Tellurides
Indium compounds
Semiconductor materials
|
https://en.wikipedia.org/wiki/Marlyn%20Meltzer
|
Marlyn Wescoff Meltzer (1922 – December 7, 2008) was an American mathematician and computer programmer, and one of the six original programmers of ENIAC, the first general-purpose electronic digital computer.
Early life
Meltzer was born Marlyn Wescoff in Philadelphia in 1922. She graduated from Temple University in 1942.
Career
Meltzer was hired by the Moore School of Engineering after graduating to perform weather calculations, mainly because she knew how to operate an adding machine; in 1943, she was hired to perform calculations for ballistics trajectories. At the time, this was accomplished using manual desktop mechanical calculators. In 1945, she was selected to become one of the six original programmers of the Electronic Numerical Integrator and Computer (ENIAC).
ENIAC
Meltzer, alongside Kathleen Antonelli, Jean Jennings Bartik, Frances Elizabeth Holberton, Frances Spence and Ruth Teitelbaum, was one of the original six programmers of ENIAC, a project that began in secret at the Moore School of Electrical Engineering at the University of Pennsylvania in 1943.
ENIAC was a huge machine full of black panels and switches, containing 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and approximately 5,000,000 hand-soldered joints. It weighed more than 30 short tons, occupied 167 m² and consumed 150 kW of electricity. Its huge power requirement led to a rumor that the lights across Philadelphia would dim every time it was switched on.
ENIAC was unveiled to the public on February 14, 1946, making headlines across the country.
Although the women were mentioned in "Woman of the ENIAC" coverage at the time, little recognition was given to their work on the computer, with attention focused instead on the male engineers who built the machine. Meltzer resigned from the team in 1947, before ENIAC was relocated to the Aberdeen Proving Ground, in order to get married.
In 1997, Meltzer was inducted into the Women in Technology International Hall of Fame, along with th
|
https://en.wikipedia.org/wiki/Elkies%20trinomial%20curves
|
In number theory, the Elkies trinomial curves are certain hyperelliptic curves constructed by Noam Elkies which have the property that rational points on them correspond to trinomial polynomials giving an extension of Q with particular Galois groups.
One curve, C168, gives Galois group PSL(2,7) from a polynomial of degree seven, and the other, C1344, gives Galois group AL(8), the semidirect product of a 2-elementary group of order eight acted on by PSL(2,7), a transitive permutation subgroup of order 1344 of the symmetric group on eight roots.
The equation of the curve C168 is:
The curve is a plane algebraic curve model for a Galois resolvent for the trinomial polynomial equation $x^7 + bx + c = 0$. If there exists a point (x, y) on the (projectivized) curve, there is a corresponding pair (b, c) of rational numbers, such that the trinomial polynomial either factors or has Galois group PSL(2,7), the finite simple group of order 168. The curve has genus two, and so by Faltings's theorem there are only a finite number of rational points on it. These rational points were proven by Nils Bruin, using the computer program Kash, to be the only ones on C168, and they give only four distinct trinomial polynomials with Galois group PSL(2,7): $x^7 - 7x + 3$ (the Trinks polynomial), $(1/11)x^7 - 14x + 32$ (the Erbach–Fisher–McKay polynomial), and two new polynomials with Galois group PSL(2,7).
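A quick sanity check one can run on the quoted trinomials is irreducibility over Q, a necessary (though far from sufficient) condition for realizing PSL(2,7); the sketch below uses SymPy rather than Kash, the program actually used in the proof.

```python
from sympy import Poly, Rational, symbols

x = symbols("x")
# The two explicitly quoted trinomials with Galois group PSL(2,7).
trinks = Poly(x**7 - 7*x + 3, x, domain="QQ")
efm = Poly(Rational(1, 11)*x**7 - 14*x + 32, x, domain="QQ")
for p in (trinks, efm):
    print(p.as_expr(), "irreducible over Q:", p.is_irreducible)
```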
On the other hand, the equation of curve C1344 is:
Once again the genus is two, and by Faltings's theorem the list of rational points is finite. It is thought that the only rational points on it correspond to the polynomials $x^8 + 16x + 28$, $x^8 + 576x + 1008$, and $19453x^8 + 19x + 2$, which have Galois group AL(8), and $x^8 + 324x + 567$, which comes from two different rational points and has Galois group PSL(2,7) again, this time as the Galois group of a polynomial of degree eight.
References
Galois theory
Number theory
Algebraic curves
|
https://en.wikipedia.org/wiki/SNP%20array
|
In molecular biology, an SNP array is a type of DNA microarray which is used to detect polymorphisms within a population. A single nucleotide polymorphism (SNP), a variation at a single site in DNA, is the most frequent type of variation in the genome. Around 335 million SNPs have been identified in the human genome, 15 million of which are present at frequencies of 1% or higher across different populations worldwide.
Principles
The basic principles of SNP arrays are the same as those of other DNA microarrays: the convergence of DNA hybridization, fluorescence microscopy, and solid-surface DNA capture. The three mandatory components of an SNP array are:
An array containing immobilized allele-specific oligonucleotide (ASO) probes.
Fragmented nucleic acid sequences of target, labelled with fluorescent dyes.
A detection system that records and interprets the hybridization signal.
The ASO probes are often chosen based on sequencing of a representative panel of individuals: positions found to vary in the panel at a specified frequency are used as the basis for probes. SNP chips are generally described by the number of SNP positions they assay. Two probes must be used for each SNP position to detect both alleles; if only one probe were used, experimental failure would be indistinguishable from homozygosity of the non-probed allele.
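The two-probe logic can be illustrated with a toy genotype caller; the intensity thresholds below are arbitrary illustrative values, not those of any real platform.

```python
# Toy two-probe genotype caller: one probe per allele; the fraction of signal
# on the B probe separates the three genotypes.
def call_genotype(intensity_a: float, intensity_b: float) -> str:
    total = intensity_a + intensity_b
    if total < 100:                # neither probe hybridized
        return "no call (possible experimental failure)"
    theta = intensity_b / total    # allele-B signal fraction
    if theta < 0.25:
        return "AA"
    if theta > 0.75:
        return "BB"
    return "AB"

print(call_genotype(900, 50), call_genotype(480, 510), call_genotype(20, 30))
```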
Applications
An SNP array is a useful tool for studying slight variations between whole genomes. The most important clinical applications of SNP arrays are for determining disease susceptibility and for measuring the efficacy of drug therapies designed specifically for individuals. In research, SNP arrays are most frequently used for genome-wide association studies. Each individual has many SNPs. SNP-based genetic linkage analysis can be used to map disease loci, and determine disease susceptibility genes in individuals. The combination of SNP maps and high density SNP arrays allows SNPs to be used as markers for genetic diseases that hav
|
https://en.wikipedia.org/wiki/NASA%20Space%20Science%20Data%20Coordinated%20Archive
|
The NASA Space Science Data Coordinated Archive (NSSDCA) serves as the permanent archive for NASA space science mission data. "Space science" includes astronomy and astrophysics, solar and space plasma physics, and planetary and lunar science. As the permanent archive, NSSDCA teams with NASA's discipline-specific space science "active archives" which provide access to data to researchers and, in some cases, to the general public. NSSDCA also serves as NASA's permanent archive for space physics mission data. It provides access to several geophysical models and to data from some non-NASA missions. NSSDCA was called the National Space Science Data Center (NSSDC) prior to March 2015.
NSSDCA supports active space physics and astrophysics researchers. Web-based services allow the NSSDCA to support the general public, in the form of information about spacecraft and access to digital versions of selected imagery. NSSDCA also provides access to portions of its database, which contains information about the data archived at NSSDCA (and, in some cases, at other facilities), the spacecraft that generate space science data, and the experiments that generate those data. NSSDCA services also include data management standards and technologies.
NSSDCA is part of the Solar System Exploration Data Services Office (SSEDSO) in the Solar System Exploration Division at NASA's Goddard Space Flight Center. NSSDCA is sponsored by the Heliophysics Division of NASA's Science Mission Directorate. NSSDCA acts in concert with various NASA discipline data systems in providing certain data and services.
Overview
NSSDCA was first established (as NSSDC) at Goddard Space Flight Center in 1966. NSSDCA's staff consists largely of physical scientists, computer scientists, analysts, programmers, and data technicians. Staffing level, including civil service and onsite contractors, has ranged between 15 and 100 over the life of NSSDCA. Early in its life, NSSDCA accumulated data primari
|
https://en.wikipedia.org/wiki/Dowling%20geometry
|
In combinatorial mathematics, a Dowling geometry, named after Thomas A. Dowling, is a matroid associated with a group. There is a Dowling geometry of each rank for each group. If the rank is at least 3, the Dowling geometry uniquely determines the group. Dowling geometries have a role in matroid theory as universal objects (Kahn and Kung, 1982); in that respect they are analogous to projective geometries, but based on groups instead of fields.
A Dowling lattice is the geometric lattice of flats associated with a Dowling geometry. The lattice and the geometry are mathematically equivalent: knowing either one determines the other. Dowling lattices, and by implication Dowling geometries, were introduced by Dowling (1973a,b).
A Dowling lattice or geometry of rank n of a group G is often denoted Qn(G).
The original definitions
In his first paper (1973a) Dowling defined the rank-n Dowling lattice of the multiplicative group of a finite field F. It is the set of all those subspaces of the vector space Fn that are generated by subsets of the set E that consists of vectors with at most two nonzero coordinates. The corresponding Dowling geometry is the set of 1-dimensional vector subspaces generated by the elements of E.
In his second paper (1973b) Dowling gave an intrinsic definition of the rank-n Dowling lattice of any finite group G. Let S be the set {1,...,n}. A G-labelled set (T, α) is a set T together with a function α: T → G. Two G-labelled sets, (T, α) and (T, β), are equivalent if there is a group element, g, such that β = gα.
An equivalence class is denoted [T, α].
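The equivalence of G-labelled sets is easy to test computationally. A minimal sketch, taking G to be the cyclic group Z_n under addition (an illustrative choice) and a labelling to be a dict from elements of T to group elements:

```python
# (T, alpha) ~ (T, beta) iff beta = g*alpha for some g in G; in Z_n this
# means beta[t] = alpha[t] + g (mod n) for a single g and all t.
def equivalent(alpha: dict, beta: dict, n: int) -> bool:
    if alpha.keys() != beta.keys():
        return False
    if not alpha:
        return True                 # empty labellings are trivially equivalent
    t0 = next(iter(alpha))
    g = (beta[t0] - alpha[t0]) % n  # the only possible group element g
    return all((alpha[t] + g) % n == beta[t] for t in alpha)

print(equivalent({1: 0, 2: 3}, {1: 2, 2: 0}, n=5))  # True, with g = 2
```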
A partial G-partition of S is a set γ = {[B1,α1], ..., [Bk,αk]} of equivalence classes of G-labelled sets such that B1, ..., Bk are nonempty subsets of S that are pairwise disjoint. (k may equal 0.)
A partial G-partition γ is said to be ≤ another one, γ*, if
every block of the second is a union of blocks of the first, and
for each Bi contained in B*j, αi is equivalent to the restriction of α*j to Bi.
|
https://en.wikipedia.org/wiki/Gain%20graph
|
A gain graph is a graph whose edges are labelled "invertibly", or "orientably", by elements of a group G. This means that, if an edge e in one direction has label g (a group element), then in the other direction it has label g⁻¹. The label function φ therefore has the property that it is defined differently, but not independently, on the two different orientations, or directions, of an edge e. The group G is called the gain group, φ is the gain function, and the value φ(e) is the gain of e (in some indicated direction). A gain graph is a generalization of a signed graph, where the gain group G has only two elements. See Zaslavsky (1989, 1991).
A gain should not be confused with a weight on an edge, whose value is independent of the orientation of the edge.
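A minimal data-structure sketch of this invertible labelling, using the integers under addition as the gain group (so the inverse gain is the negation):

```python
# Each edge is stored once, in one chosen direction, with its gain; reading
# the edge against that direction returns the inverse gain, as required.
class GainGraph:
    def __init__(self):
        self.edges = {}              # (u, v) -> gain, in the stored direction

    def add_edge(self, u, v, gain):
        self.edges[(u, v)] = gain

    def gain(self, u, v):
        if (u, v) in self.edges:
            return self.edges[(u, v)]
        return -self.edges[(v, u)]   # inverse element in the additive group

g = GainGraph()
g.add_edge(1, 2, 5)
print(g.gain(1, 2), g.gain(2, 1))    # 5 -5
```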
Applications
Some reasons to be interested in gain graphs are their connections to network flow theory in combinatorial optimization, to geometry, and to physics.
The mathematics of a network with gains, or generalized network, is connected with the frame matroid of the gain graph.
Suppose we have some hyperplanes in $\mathbb{R}^n$ given by equations of the form $x_j = g x_i$. The geometry of the hyperplanes can be treated by using the following gain graph: the vertex set is {1, 2, ..., n}, and there is an edge ij with gain g (in the direction from i to j) for each hyperplane with equation $x_j = g x_i$. These hyperplanes are treated through the frame matroid of the gain graph (Zaslavsky 2003).
Or, suppose we have hyperplanes given by equations of the form $x_j = x_i + g$. The geometry of these hyperplanes can be treated by using the gain graph with the same vertex set and an edge ij with gain g (in the direction from i to j) for each hyperplane with equation $x_j = x_i + g$. These hyperplanes are studied via the lift matroid of the gain graph (Zaslavsky 2003).
Suppose the gain group has an action on a set Q. Assigning an element $s_i$ of Q to each vertex gives a state of the gain graph. An edge is satisfied if, for each edge i
|
https://en.wikipedia.org/wiki/Halpin%E2%80%93Tsai%20model
|
The Halpin–Tsai model is a mathematical model for the prediction of the elasticity of a composite material based on the geometry and orientation of the filler and the elastic properties of the filler and matrix. The model is based on the self-consistent field method, although it is often considered to be empirical.
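A sketch of the commonly quoted Halpin–Tsai form, E_c = E_m (1 + ζηφ) / (1 − ηφ) with η = (E_f/E_m − 1) / (E_f/E_m + ζ); the glass/epoxy numbers below are illustrative assumptions, not data from the references.

```python
def halpin_tsai(e_fiber: float, e_matrix: float, zeta: float, phi: float) -> float:
    """Composite modulus E_c = E_m*(1 + zeta*eta*phi)/(1 - eta*phi), where
    eta = (E_f/E_m - 1)/(E_f/E_m + zeta); zeta encodes filler geometry and
    orientation, and phi is the filler volume fraction."""
    ratio = e_fiber / e_matrix
    eta = (ratio - 1.0) / (ratio + zeta)
    return e_matrix * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Glass/epoxy-like illustration: E_f = 70 GPa, E_m = 3 GPa, 30% filler.
print(f"{halpin_tsai(70e9, 3e9, zeta=2.0, phi=0.3) / 1e9:.2f} GPa")  # ~6.2 GPa
```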
See also
Cadec-online.com implements the Halpin–Tsai model among others.
References
J. C. Halpin Effect of Environmental Factors on Composite Materials, US Air Force Material Laboratory, Technical Report AFML-TR-67-423, June 1969
J. C. Halpin and J. L. Kardos, Halpin-Tsai equations: A review, Polymer Engineering and Science, 1976, v. 16, no. 5, pp. 344–352
Halpin-Tsai model on about.com
Composite materials
Continuum mechanics
Materials science
|
https://en.wikipedia.org/wiki/EtherCAT
|
EtherCAT (Ethernet for Control Automation Technology) is an Ethernet-based fieldbus system developed by Beckhoff Automation. The protocol is standardized in IEC 61158 and is suitable for both hard and soft real-time computing requirements in automation technology.
The goal during development of EtherCAT was to apply Ethernet to automation applications requiring short data update times (also called cycle times; ≤ 100 μs) with low communication jitter (for precise synchronization purposes; ≤ 1 μs) and reduced hardware costs. Typical application fields for EtherCAT are machine controls (e.g. semiconductor tools, metal forming, packaging, injection molding, assembly systems, printing machines, robotics); it is also used in remote-controlled hump yard facilities in the railroad industry.
Alternative technologies for networking in the industrial environment are EtherNet/IP, Profinet and Profibus.
Features
Principles
With EtherCAT, the standard Ethernet packet or frame (according to IEEE 802.3) is no longer received, interpreted, and copied as process data at every node. The EtherCAT slave devices read the data addressed to them while the telegram passes through the device, processing data "on the fly". In other words, real-time data and messages are prioritized over more general, less time-sensitive or heavy-load data.
Similarly, input data are inserted while the telegram passes through. A frame is not completely received before being processed; instead processing starts as soon as possible. Sending is also conducted with a minimum delay of small bit times. Typically the entire network can be addressed with just one frame.
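To make the framing concrete, here is a hedged sketch of packing one EtherCAT datagram into the payload carried by Ethertype 0x88a4 (described under Protocol below); the exact bit packing follows commonly published field layouts and should be treated as an assumption to verify against the specification.

```python
import struct

ETHERTYPE_ETHERCAT = 0x88A4  # carried directly in a standard Ethernet frame

def ethercat_payload(datagram: bytes) -> bytes:
    # 2-byte frame header: bits 0-10 length, bit 11 reserved, bits 12-15 type
    # (type 1 = EtherCAT command datagrams); fields are little-endian.
    frame_header = struct.pack("<H", (len(datagram) & 0x07FF) | (1 << 12))
    return frame_header + datagram

# One datagram: command byte, index, 32-bit address, 11-bit length field,
# interrupt field, payload, and a trailing 16-bit working counter.
payload = b"\x00" * 4
datagram = struct.pack("<BBIHH", 0x04, 0x00, 0x00000000,
                       len(payload) & 0x07FF, 0x0000) + payload + b"\x00\x00"
print(ethercat_payload(datagram).hex())
```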
ISO/OSI Reference Model
Protocol
The EtherCAT protocol is optimized for process data and is transported directly within the standard IEEE 802.3 Ethernet frame using Ethertype 0x88a4. It may consist of several sub-telegrams, each serving a particular memory area of the logical process images that can be up to 4 gigabytes in size. The data sequence is indep
|
https://en.wikipedia.org/wiki/International%20Federation%20of%20Automatic%20Control
|
The International Federation of Automatic Control (IFAC), founded in September 1957 in France, is a multinational federation of 49 national member organizations (NMO), each one representing the engineering and scientific societies concerned with automatic control in its own country.
The aim of the Federation is to promote the science and technology of control in the broadest sense in all systems, whether, for example, engineering, physical, biological, social or economic, in both theory and application. IFAC is also concerned with the impact of control technology on society.
IFAC pursues its purpose by organizing technical meetings, by publications, and by any other means consistent with its constitution and which will enhance the interchange and circulation of information on automatic control activities.
International World Congresses are held every three years. Between congresses, IFAC sponsors many symposia, conferences and workshops covering particular aspects of automatic control.
The official journals of IFAC are Automatica, Control Engineering Practice, Annual Reviews in Control, Journal of Process Control, Engineering Applications of Artificial Intelligence, the Journal of Mechatronics, Nonlinear Analysis: Hybrid Systems, and the IFAC Journal of Systems and Control.
Awards
IFAC Fellows
Major Medals
Giorgio Quazza Medal
Nathaniel B. Nichols Medal
Industrial Achievement Award
Manfred Thoma Medal
High Impact Paper Award
Automatica Prize Paper Award
Control Engineering Practice Prize Paper Award
Journal of Process Control Prize Paper Award
Engineering Applications of Artificial Intelligence Prize Paper Award
Mechatronics Journal Prize Paper Award
Congress Applications Paper Prize
IFAC Congress Young Author Prize
Control Engineering Textbook Prize
Congress Poster Paper Prize
Outstanding Service Award
See also
American Automatic Control Council
Harold Chestnut
Israel Association for Automatic Control
Karl Reinisch
Li Huatian
John C. Loz
|
https://en.wikipedia.org/wiki/Application%20virtualization
|
Application virtualization is a software technology that encapsulates computer programs from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees.
In this context, the term "virtualization" refers to the artifact being encapsulated (application), which is quite different from its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).
Description
Full application virtualization requires a virtualization layer. Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all disk operations of virtualized applications and transparently redirects them to a virtualized location, often a single file. The application remains unaware that it accesses a virtual resource instead of a physical one. Since the application is now working with one file instead of many files spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side (a toy sketch of the redirection idea follows the list below). Examples of this technology for the Windows platform include:
Cameyo
Ceedo
Citrix XenApp
Microsoft App-V
Numecent Cloudpaging
Oracle Secure Global Desktop
Sandboxie
Turbo (software) (formerly Spoon and Xenocode)
Symantec Workspace Virtualization
VMware ThinApp
V2 Cloud
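As noted above, here is a toy illustration of the redirection idea (a generic sketch, not the mechanism of any listed product): file opens are intercepted and remapped into a hypothetical per-application virtual root.

```python
import builtins
import os

VIRTUAL_ROOT = "/tmp/virtual_app_root"     # hypothetical sandbox location
_real_open = builtins.open

def virtualized_open(path, *args, **kwargs):
    # Remap every path into the virtual root, creating directories on demand,
    # so the program "sees" its usual paths while writing somewhere else.
    mapped = os.path.join(VIRTUAL_ROOT, os.path.abspath(path).lstrip("/"))
    os.makedirs(os.path.dirname(mapped), exist_ok=True)
    return _real_open(mapped, *args, **kwargs)

builtins.open = virtualized_open
with open("/etc/myapp.conf", "w") as fh:   # actually lands under VIRTUAL_ROOT
    fh.write("setting=1\n")
builtins.open = _real_open                 # restore the real open
```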
Benefits
Application virtualization allows applications to run in environments that do not suit the native application. For example, Wine allows some Microsoft Windows applications to run on Linux.
Application virtualization reduces system integration and administration costs by maintaining a common software baseline across multipl
|
https://en.wikipedia.org/wiki/Daniel%20Kleitman
|
Daniel J. Kleitman (born October 4, 1934) is an American mathematician and professor of applied mathematics at MIT. His research interests include combinatorics, graph theory, genomics, and operations research.
Biography
Kleitman was born in 1934 in Brooklyn, New York, the younger of Bertha and Milton Kleitman's two sons. His father was a lawyer who after WWII became a commodities trader and investor. In 1942 the family moved to Morristown, New Jersey, and he graduated from Morristown High School in 1950.
Kleitman then attended Cornell University, from which he graduated in 1954, and received his PhD in Physics from Harvard University in 1958 under Nobel Laureates Julian Schwinger and Roy Glauber. He is the "k" in G. W. Peck, a pseudonym for a group of six mathematicians that includes Kleitman. Formerly a physics professor at Brandeis University, Kleitman was encouraged by Paul Erdős to change his field of study to mathematics. Perhaps humorously, Erdős once asked him, "Why are you only a physicist?"
Kleitman joined the applied mathematics faculty at MIT in 1966, and was promoted to professor in 1969.
Kleitman coauthored at least six papers with Erdős, giving him an Erdős number of 1.
He was a math advisor and extra for the film Good Will Hunting. Since Minnie Driver, who appeared in Good Will Hunting, also appeared in Sleepers with Kevin Bacon, Kleitman has a Bacon number of 2. Adding the two numbers results in an Erdős–Bacon number of 3, which is a tie with Bruce Reznick for the lowest number anyone has.
Personal life
On July 26, 1964 Kleitman married Sharon Ruth Alexander. They have three children.
Selected publications
See also
Kleitman–Wang algorithms
Littlewood–Offord problem
References
External links
Kleitman's homepage
(article available on Douglas West's web page, University of Illinois at Urbana–Champaign)
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
American operations researchers
Harvard Unive
|
https://en.wikipedia.org/wiki/Mixed%20flow%20compressor
|
A mixed flow compressor, or diagonal compressor, combines axial and radial components to produce a diagonal airflow compressor stage. The exit mean radius is greater than at the inlet, like a centrifugal design, but the flow tends to exit in an axial rather than radial direction. This eliminates the need for a relatively large diameter exit diffuser associated with centrifugal compressors. The impeller can be machined from solid using NC machines, in much the same way as that of a centrifugal design.
Diagonal compressors were widely experimented with during and just after World War II, but did not see much service use. A diagonal-flow compressor has been featured since 2001 in the Pratt & Whitney Canada PW600 series turbofan engines used in the Phenom 100, Eclipse 500, Cessna Citation Mustang and other very light jet aircraft.
See also
Gas compressor
Gas compressors
References
|
https://en.wikipedia.org/wiki/Boolean%20delay%20equation
|
A Boolean delay equation (BDE) is an evolution rule for the state of dynamical variables whose values may be represented by a finite discrete set of states, such as 0 and 1. As a novel type of semi-discrete dynamical system, Boolean delay equations (BDEs) are models with Boolean-valued variables that evolve in continuous time. Since most phenomena are at present too complex to be modeled by partial differential equations (as continuous infinite-dimensional systems), BDEs are intended as a (heuristic) first step on the challenging road to further understanding and modeling them. For instance, one can mention complex problems in fluid dynamics, climate dynamics, solid-earth geophysics, and many problems elsewhere in the natural sciences where much of the discourse is still conceptual.
One example of a BDE is the ring oscillator equation $x(t) = \overline{x(t-1)}$, which produces periodic oscillations. More complex equations can display richer behavior, such as nonperiodic and chaotic (deterministic) behavior.
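The ring-oscillator BDE can be simulated on a fixed time grid; the step size and horizon below are arbitrary choices.

```python
# Fixed-grid simulation of x(t) = NOT x(t - 1): the delay of 1 time unit
# spans `lag` steps of size dt.
dt = 0.1
lag = int(round(1.0 / dt))
history = [0] * lag                   # initial condition on the interval [-1, 0)
for _ in range(60):
    history.append(1 - history[-lag]) # Boolean NOT of the delayed value
print(history[lag:])                  # alternating blocks of ten 1s and ten 0s
```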
External links
Boolean Delay Equations: A New Type of Dynamical Systems and Its Applications to Climate and Earthquakes
References
Dynamical systems
Mathematical modeling
|
https://en.wikipedia.org/wiki/Space%20Communications%20Protocol%20Specifications
|
The Space Communications Protocol Specifications (SCPS) are a set of extensions to existing protocols and new protocols developed by the Consultative Committee for Space Data Systems (CCSDS) to improve performance of Internet protocols in space environments. The SCPS protocol stack consists of:
SCPS-FP—A set of extensions to FTP to make it more bit-efficient and to add advanced features such as record update within a file and integrity checking on file transfers.
SCPS-TP—A set of TCP options and sender-side modifications to improve TCP performance in stressed environments including long delays, high bit error rates, and significant asymmetries. The SCPS-TP options are TCP options registered with the Internet Assigned Numbers Authority (IANA) and hence SCPS-TP is compatible with other well-behaved TCP implementations.
SCPS-SP—A security protocol comparable to IPsec
SCPS-NP—A bit-efficient network protocol analogous to but not interoperable with IP
The SCPS protocol that has seen the most use commercially is SCPS-TP, usually deployed as a Performance Enhancing Proxy (PEP) to improve TCP performance over satellite links.
External links
www.scps.org is a web page devoted to the SCPS protocols and contains links to the protocol specifications, briefing material, and test results.
The Open Channel Foundation distributes a free reference implementation of the SCPS protocols that includes a transport-layer PEP application.
CCSDS.org is the main web page for the CCSDS.
Space standards
Networking standards
Consultative Committee for Space Data Systems
Internet protocols
|
https://en.wikipedia.org/wiki/V%2B
|
V+ (previously known as TVDrive) is a set-top box for Virgin Media's Virgin TV service, which provides personal video recording (PVR) and high definition (HD) functionality to customers who subscribe to the service. Virgin TV have taken a different approach from rival Sky's Sky+ and later Sky+ HD services, by implementing a rental scheme for the V+ Box. When Virgin TV was launched, there was an installation charge (waived under certain circumstances) and a monthly charge for all customers with a discount for XL customers. On 1 June 2007 pricing was revised, with all customers paying a one-off set-up fee and TV M and L customers paid a monthly charge, while TV XL customers had no extra charges. Various deals to lower the set-up fee have been made available to all customers in order to compete with rival Sky.
The V+ set-top box is technically on lease, still owned by Virgin Media, who provide technical support for it free of charge if a problem occurs for the life of a contract. Should the customer downgrade from the V+ service, the recording functions of the V+ box and access to all high definition channels and on demand content will be blocked, effectively acting as a standard V Box. As of Q1 2010, there were a total of 939,900 V+ customers, representing 25% of all Virgin TV subscribers.
History
The V+ Box derives from Telewest's silver TVDrive, and was initially only available to Telewest cable customers. The TVDrive began roll-out on 1 December 2005 on a commercial pilot basis before a full launch in March 2006, becoming the first HD service in the UK. Due to the merger between NTL and Telewest, the TVDrive was made available to NTL cable customers in the Teesside and Glasgow areas on 16 November 2006, in turn NTL ceased development of their MPEG-4 compatible HD PVR, the Scientific-Atlanta Explorer 8450DVB. In January 2007, NTL:Telewest began renting the set-top box nationwide and since the licensing of the Virgin Media name, it became officially available in a
|
https://en.wikipedia.org/wiki/Kolk%20%28vortex%29
|
A kolk (colc) is an underwater vortex created when rapidly rushing water passes an underwater obstacle in boundary areas of high shear. High-velocity gradients produce a violently rotating column of water, similar to a tornado. Kolks can pluck multiple-ton blocks of rock and transport them in suspension for thousands of metres.
Kolks leave clear evidence in the form of plucked-bedrock pits, called rock-cut basins or kolk lakes and downstream deposits of gravel-supported blocks that show percussion but no rounding.
Examples
Kolks were first identified by the Dutch, who observed kolks hoisting several-ton blocks of riprap from dikes and transporting them away, suspended above the bottom. The Larrelt kolk near Emden appeared during the 1717 Christmas flood which broke through a long section of the dyke. The newly formed body of water measured roughly 500 × 100 m and was 25 m deep. In spite of the repair to the dyke, another breach occurred in 1721, which produced more kolks between 15 and 18 m deep. In 1825 during the February flood near Emden, a kolk of 31 m depth was created. The soil was saturated from here for a further 5 km inland.
Kolks are credited with creating the pothole-like features in the highly jointed basalts in the channeled scablands of the Columbia Basin region in Eastern Washington. Depressions were scoured out within the scablands that resemble virtually circular steep-sided potholes. Examples from the Missoula floods in this area include:
The region below Dry Falls includes a number of lakes scoured out by kolks.
Sprague Lake is a kolk-formed basin created by a flow estimated to be wide and deep.
The Alberton Narrows on the Clark Fork River show evidence that kolks plucked boulders from the canyon and deposited them in a rock and gravel bar immediately downstream of the canyon.
The south wall of Hellgate Canyon in Montana shows the rough-plucked surface characteristic of kolk-eroded rock.
Both the walls of the Wallula Gap and the
|
https://en.wikipedia.org/wiki/Defective%20matrix
|
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.
An n × n defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ. If the algebraic multiplicity of λ exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with λ), then λ is said to be a defective eigenvalue. However, every eigenvalue with algebraic multiplicity m always has m linearly independent generalized eigenvectors.
A Hermitian matrix (or the special case of a real symmetric matrix) or a unitary matrix is never defective; more generally, a normal matrix (which includes Hermitian and unitary as special cases) is never defective.
Jordan block
Any nontrivial Jordan block of size 2 × 2 or larger (that is, not completely diagonal) is defective. (A diagonal matrix is a special case of the Jordan normal form, with all Jordan blocks trivial, of size 1 × 1, and is not defective.) For example, the n × n Jordan block
$$J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
has an eigenvalue, $\lambda$, with algebraic multiplicity n (or greater if there are other Jordan blocks with the same eigenvalue), but only one distinct eigenvector, $e_1 = (1, 0, \ldots, 0)^T$. The other canonical basis vectors $e_2, \ldots, e_n$ form a chain of generalized eigenvectors such that $(J - \lambda I)\,e_{k+1} = e_k$ for $k = 1, \ldots, n-1$.
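Numerically, the missing eigenvector directions are visible in the output of a standard eigensolver, as in this small NumPy check:

```python
import numpy as np

# A 3x3 Jordan block with eigenvalue 2: the algebraic multiplicity is 3, but
# the computed eigenvectors are (numerically) parallel, exposing the missing
# eigenvector directions of a defective matrix.
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(J)
print(eigenvalues)                          # [2. 2. 2.]
print(np.linalg.matrix_rank(eigenvectors))  # typically 1: one independent eigenvector
```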
Any defective matrix has a nontrivial Jordan normal form, which is as close as one can come to diagon
|
https://en.wikipedia.org/wiki/FERMIAC
|
The Monte Carlo trolley, or FERMIAC, was an analog computer invented by physicist Enrico Fermi to aid in his studies of neutron transport.
Operation
The FERMIAC employed the Monte Carlo method to model neutron transport in various types of nuclear systems. Given an initial distribution of neutrons, the goal of the process is to develop numerous "neutron genealogies", or models of the behavior of individual neutrons, including each collision, scattering, and fission. When a fission occurs, the number of emerging neutrons is predicted, and the behavior of each of these neutrons is eventually modeled in the same manner as the first. At each stage, pseudo-random numbers are used to make decisions that affect the behavior of each neutron.
The FERMIAC used this method to create two-dimensional neutron genealogies on a scale diagram of a nuclear device. A series of drums on the device were set according to the material being crossed and a random choice between fast and slow neutrons. Random numbers also determined the direction of travel and the distance until the next collision. Once the drums were set, the trolley was rolled across the diagram, drawing a path as it went. Any time a change in material was indicated on the diagram, the drum settings were adjusted accordingly before continuing.
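The sampling loop the trolley mechanized is easy to sketch in software. The cross section and event probabilities below are made-up illustrative numbers, and the geometry is reduced to one dimension:

```python
import random

# Toy 1-D neutron "genealogy" in the spirit of the FERMIAC: sample an
# exponential free path, then scatter, cause fission, or be absorbed.
SIGMA_TOTAL = 1.0                # mean free path = 1 / SIGMA_TOTAL
P_SCATTER, P_FISSION = 0.6, 0.1  # the remainder (0.3) is absorption

def follow_neutron(rng, x=0.0, direction=1.0):
    """Track one neutron, returning its collision sites and final fate."""
    path = [x]
    while True:
        x += direction * rng.expovariate(SIGMA_TOTAL)  # distance to next collision
        path.append(x)
        u = rng.random()                               # decide the collision type
        if u < P_SCATTER:
            direction = rng.choice([-1.0, 1.0])        # "isotropic" scattering in 1-D
        elif u < P_SCATTER + P_FISSION:
            return path, "fission"                     # would spawn new genealogies
        else:
            return path, "absorbed"

print(follow_neutron(random.Random(42)))
```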
History
In the early 1930s, Italian physicist Enrico Fermi led a team of young scientists, dubbed the "Via Panisperna boys", in their now-famous experiments in nuclear physics. During this time, Fermi developed "statistical sampling" techniques that he effectively employed to predict the results of experiments.
Years later, in 1946, Fermi participated in the initial review of results from the ENIAC. Among the others present was Los Alamos mathematician Stanislaw Ulam, who was familiar with the use of statistical sampling techniques similar to those previously developed by Fermi. Such techniques had mainly fallen out of use, due to the long, repetitious calculations requ
|
https://en.wikipedia.org/wiki/Voltage%20converter
|
A voltage converter is an electric power converter which changes the voltage of an electrical power source. It may be combined with other components to create a power supply.
AC and DC
AC voltage conversion uses a transformer. Conversion from one DC voltage to another requires electronic circuitry (electromechanical equipment was required before the development of semiconductor electronics), like a DC-DC converter. Mains power (called household current in the US) is universally AC.
Practical voltage converters
Mains converters
A common use of a voltage converter is to allow appliances made for the mains voltage of one geographical region to operate in an area with a different voltage. Such a device may be called a voltage converter, power converter, travel adapter, etc. Most single-phase alternating-current electrical outlets in the world supply power at 210–240 V or at 100–120 V. A transformer or autotransformer can be used; (auto)transformers are inherently reversible, so the same transformer can be used to step the voltage up or step it down by the same ratio. Lighter and smaller devices can be made using electronic circuitry; reducing the voltage electronically is simpler and cheaper than increasing it. Small, inexpensive travel adapters are suitable for low-power devices such as electric shavers, but not, say, hairdryers; travel adapters usually include plug-end adapters for the different standards used in different countries. A transformer would be used for higher power.
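The transformer case reduces to the ideal turns-ratio relation V_s = V_p · N_s / N_p, as in this small sketch (the winding counts are arbitrary illustrative values):

```python
# Ideal transformer relation; run with the windings swapped, the same ratio
# steps the voltage back up, which is why (auto)transformers are reversible.
def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    return v_primary * n_secondary / n_primary

print(secondary_voltage(230.0, 1000, 500))  # 115.0: a 230 V to 115 V step-down
```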
Transformers do not change the frequency of electricity; in many regions with 100–120 V, electricity is supplied at 60 Hz, and 210–240 V regions tend to use 50 Hz. This may affect operation of devices which depend on mains frequency (some audio turntables and mains-only electric clocks, etc., although modern equipment is less likely to depend upon mains frequency). Equipment with high-powered motors or internal transformers designed to operate at 60 Hz may over
|
https://en.wikipedia.org/wiki/Wylie%20Dufresne
|
Wylie Dufresne (born 1970) is the chef and owner of Du's Donuts and the former chef and owner of the wd~50 and Alder restaurants in Manhattan. Dufresne is a leading American proponent of molecular gastronomy, the movement to incorporate science and new techniques in the preparation and presentation of food.
Early life
Born in 1970 in Providence, Rhode Island, Dufresne is a graduate of Friends Seminary and The French Culinary Institute (now known as The International Culinary Center) in New York. In 1992, he completed a B.A. in philosophy at Colby College in Waterville, Maine.
Career
From 1994 through 1999, he worked for Jean-Georges Vongerichten, where he was eventually named sous chef at Vongerichten's eponymous Jean Georges. In 1998 he was chef de cuisine at Vongerichten's Prime in The Bellagio, Las Vegas. In 1999, he left to become the first chef at 71 Clinton Fresh Food. In April 2003, he opened his 70-seat restaurant, wd~50 (named for the chef's initials and the street address, as well as a pun on WD-40) on Clinton Street on Manhattan's Lower East Side. In March 2013, he opened a second restaurant Alder in the East Village. wd-50 closed 30 November 2014 and Alder closed in August 2015.
Dufresne was a James Beard Foundation nominee for Rising Star Chef of the Year in 2000 and was chosen the same year by New York Magazine for their New York Awards. Food & Wine magazine named him one of America's Ten Best Chefs in 2001 and, in 2006, New York Magazine's Adam Platt placed wd-50 fourth in his list of New York's 101 best restaurants. He was awarded a star in Michelin's New York City Guide (the first Red Guide for North America) in 2006, 2007, and 2008, and was nominated for Best Chef New York by the James Beard Foundation. His signature preparations include Pickled Beef Tongue with Fried Mayonnaise and Carrot-Coconut Sunnyside-Up.
In 2006, Dufresne lost to Mario Batali on Iron Chef America. In 2007, he began making appearances as a judge on Bravo's Top Chef, which in
|
https://en.wikipedia.org/wiki/Muscle%20atrophy
|
Muscle atrophy is the loss of skeletal muscle mass. It can be caused by immobility, aging, malnutrition, medications, or a wide range of injuries or diseases that impact the musculoskeletal or nervous system. Muscle atrophy leads to muscle weakness and causes disability.
Disuse causes rapid muscle atrophy and often occurs during injury or illness that requires immobilization of a limb or bed rest. Depending on the duration of disuse and the health of the individual, this may be fully reversed with activity. Malnutrition first causes fat loss but may progress to muscle atrophy in prolonged starvation and can be reversed with nutritional therapy. In contrast, cachexia is a wasting syndrome caused by an underlying disease such as cancer that causes dramatic muscle atrophy and cannot be completely reversed with nutritional therapy. Sarcopenia is age-related muscle atrophy and can be slowed by exercise. Finally, diseases of the muscles such as muscular dystrophy or myopathies can cause atrophy, as well as damage to the nervous system such as in spinal cord injury or stroke. Thus, muscle atrophy is usually a finding (sign or symptom) in a disease rather than being a disease by itself. However, some syndromes of muscular atrophy are classified as disease spectrums or disease entities rather than as clinical syndromes alone, such as the various spinal muscular atrophies.
Muscle atrophy results from an imbalance between protein synthesis and protein degradation, although the mechanisms are incompletely understood and are variable depending on the cause. Muscle loss can be quantified with advanced imaging studies but this is not frequently pursued. Treatment depends on the underlying cause but will often include exercise and adequate nutrition. Anabolic agents may have some efficacy but are not often used due to side effects. There are multiple treatments and supplements under investigation but there are currently limited treatment options in clinical practice. Given the im
|
https://en.wikipedia.org/wiki/Space%20%28mathematics%29
|
In mathematics, a space is a set (sometimes called a universe) with some added structure.
While modern mathematics uses many types of spaces, such as Euclidean spaces, linear spaces, topological spaces, Hilbert spaces, or probability spaces, it does not define the notion of "space" itself.
A space consists of selected mathematical objects that are treated as points, and selected relationships between these points.
The nature of the points can vary widely: for example, the points can be elements of a set, functions on another space, or subspaces of another space. It is the relationships that define the nature of the space. More precisely, isomorphic spaces are considered identical, where an isomorphism between two spaces is a one-to-one correspondence between their points that preserves the relationships. For example, the relationships between the points of a three-dimensional Euclidean space are uniquely determined by Euclid's axioms, and all three-dimensional Euclidean spaces are considered identical.
Topological notions such as continuity have natural definitions in every Euclidean space.
However, topology does not distinguish straight lines from curved lines, and the relation between Euclidean and topological spaces is thus "forgetful". Relations of this kind are treated in more detail in the Section "Types of spaces".
It is not always clear whether a given mathematical object should be considered as a geometric "space", or an algebraic "structure". A general definition of "structure", proposed by Bourbaki, embraces all common types of spaces, provides a general definition of isomorphism, and justifies the transfer of properties between isomorphic structures.
History
Before the golden age of geometry
In ancient Greek mathematics, "space" was a geometric abstraction of the three-dimensional reality observed in everyday life. About 300 BC, Euclid gave axioms for the properties of space. Euclid built all of mathematics on these geometric foundations, goin
|
https://en.wikipedia.org/wiki/Random%20compact%20set
|
In mathematics, a random compact set is essentially a compact set-valued random variable. Random compact sets are useful in the study of attractors for random dynamical systems.
Definition
Let $(E, d)$ be a complete separable metric space. Let $\mathcal{K}$ denote the set of all compact subsets of $E$. The Hausdorff metric $h$ on $\mathcal{K}$ is defined by

$h(K_1, K_2) := \max\Big\{ \sup_{a \in K_1} \inf_{b \in K_2} d(a, b), \ \sup_{b \in K_2} \inf_{a \in K_1} d(a, b) \Big\}.$

$(\mathcal{K}, h)$ is also a complete separable metric space. The corresponding open subsets generate a σ-algebra on $\mathcal{K}$, the Borel sigma algebra $\mathcal{B}(\mathcal{K})$ of $\mathcal{K}$.

A random compact set is a measurable function $K$ from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into $(\mathcal{K}, \mathcal{B}(\mathcal{K}))$.

Put another way, a random compact set is a measurable function $K \colon \Omega \to 2^{E}$ such that $K(\omega)$ is almost surely compact and

$\omega \mapsto \inf_{b \in K(\omega)} d(x, b)$

is a measurable function for every $x \in E$.
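To make the Hausdorff metric concrete, here is a minimal sketch for finite point sets in the plane (finite sets are compact, so they are legitimate elements of $\mathcal{K}$; the function name and data are illustrative only):

```python
import math

def hausdorff(K1, K2):
    """Hausdorff distance between two finite subsets of R^2:
    h(K1, K2) = max( sup_a inf_b d(a, b), sup_b inf_a d(a, b) )."""
    sup_inf_1 = max(min(math.dist(a, b) for b in K2) for a in K1)
    sup_inf_2 = max(min(math.dist(a, b) for a in K1) for b in K2)
    return max(sup_inf_1, sup_inf_2)

# Two two-point sets lying on parallel lines at unit distance:
print(hausdorff([(0.0, 0.0), (1.0, 0.0)],
                [(0.0, 1.0), (1.0, 1.0)]))  # 1.0
```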
Discussion
Random compact sets in this sense are also random closed sets as in Matheron (1975). Consequently, under the additional assumption that the carrier space is locally compact, their distribution is given by the probabilities

$\mathbb{P}(X \cap K = \emptyset)$ for $K \in \mathcal{K}$.

(The distribution of a random compact convex set is also given by the system of all inclusion probabilities $\mathbb{P}(X \subset K)$.)
For $K = \{x\}$, the probability $\mathbb{P}(x \in X)$ is obtained, which satisfies

$\mathbb{P}(x \in X) = 1 - \mathbb{P}(x \notin X).$

Thus the covering function $p_X$ is given by

$p_X(x) = \mathbb{P}(x \in X)$ for $x \in E.$

Of course, $p_X$ can also be interpreted as the mean of the indicator function $\mathbf{1}_{x \in X}$:

$p_X(x) = \mathbb{E}\big[\mathbf{1}_{x \in X}\big].$

The covering function takes values between $0$ and $1$. The set of all $x \in E$ with $p_X(x) > 0$ is called the support of $X$. The set of all $x \in E$ with $p_X(x) = 1$ is called the kernel, the set of fixed points, or essential minimum $e(X)$. If $X_1, X_2, \ldots$ is a sequence of i.i.d. random compact sets, then almost surely

$\bigcap_{i=1}^{\infty} X_i = e(X)$

and $\bigcap_{i=1}^{n} X_i$ converges almost surely to $e(X)$ as $n \to \infty$.
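The identity $p_X(x) = \mathbb{E}[\mathbf{1}_{x \in X}]$ suggests a direct Monte Carlo estimate of the covering function. A sketch under a toy model, random unit intervals $[U, U+1]$ with $U$ uniform on $[0, 1]$ (the model and all names are assumptions made for illustration):

```python
import random

def sample_X():
    """Draw one random compact set: the interval [u, u+1], u ~ Uniform[0,1]."""
    u = random.random()
    return (u, u + 1.0)

def covering_function(x, n=100_000):
    """Monte Carlo estimate of p_X(x) = P(x in X) = E[1_{x in X}]."""
    hits = 0
    for _ in range(n):
        lo, hi = sample_X()
        if lo <= x <= hi:
            hits += 1
    return hits / n

print(covering_function(0.5))  # ~0.5, since x = 0.5 is covered iff u <= 0.5
print(covering_function(1.0))  # ~1.0: every interval contains 1, so 1 lies in the kernel e(X)
```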
References
Matheron, G. (1975) Random Sets and Integral Geometry. J.Wiley & Sons, New York.
Molchanov, I. (2005) The Theory of Random Sets. Springer, New York.
Stoyan D., and H.Stoyan (1994) Fractals, Random Shapes and Point Fields. John Wiley & Sons, Chichester, New York.
Random dynamical systems
Statistical randomness
|
https://en.wikipedia.org/wiki/Dead-end%20elimination
|
The dead-end elimination algorithm (DEE) is a method for minimizing a function over a discrete set of independent variables. The basic idea is to identify "dead ends", i.e., combinations of variables that are not necessary to define a global minimum, because there is always a way of replacing such a combination with a better or equivalent one. The search can then refrain from exploring such combinations further. Hence, dead-end elimination is a mirror image of dynamic programming, in which "good" combinations are identified and explored further.
Although the method itself is general, it has been developed and applied mainly to the problems of predicting and designing the structures of proteins. It is closely related to the notion of dominance in optimization, also known as substitutability in a constraint satisfaction problem. The original description and proof of the dead-end elimination theorem can be found in the 1992 paper by Desmet and co-workers.
Basic requirements
An effective DEE implementation requires four pieces of information:
A well-defined finite set of discrete independent variables
A precomputed numerical value (considered the "energy") associated with each element in the set of variables (and possibly with their pairs, triples, etc.)
A criterion or criteria for determining when an element is a "dead end", that is, when it cannot possibly be a member of the solution set
An objective function (considered the "energy function") to be minimized
Note that the criteria can easily be reversed to identify the maximum of a given function as well.
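The pieces listed above fit together as follows in the simplest (original, singles) elimination criterion: rotamer r at position i is a dead end if some alternative t at the same position is better even when r is given its best-case interactions and t its worst-case ones. A minimal sketch, assuming a pairwise energy model and invented data layouts (self_E and pair_E are illustrative, not from any library):

```python
def dee_pass(self_E, pair_E, positions):
    """One pass of the original singles dead-end elimination criterion.

    self_E[i][r]       -- self energy of rotamer r at position i
    pair_E[i][j][r][s] -- pair energy of rotamer r at i with rotamer s at j
    Rotamer r at i is eliminated if some t at i satisfies
      E(i_r) + sum_j min_s E(i_r, j_s)  >  E(i_t) + sum_j max_s E(i_t, j_s),
    i.e. r's best case is still worse than t's worst case.
    Returns the surviving rotamers per position.
    """
    alive = {i: set(range(len(self_E[i]))) for i in positions}
    for i in positions:
        others = [j for j in positions if j != i]

        def best(r):   # lower bound over conformations containing i_r
            return self_E[i][r] + sum(min(pair_E[i][j][r]) for j in others)

        def worst(t):  # upper bound over some conformation containing i_t
            return self_E[i][t] + sum(max(pair_E[i][j][t]) for j in others)

        for r in list(alive[i]):
            if any(t != r and best(r) > worst(t) for t in alive[i]):
                alive[i].discard(r)
    return alive
```

In practice such a pass is iterated, together with pairs criteria and the more powerful bounds of the later literature, until no further rotamers can be removed.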
Applications to protein structure prediction
Dead-end elimination has been used effectively to predict the structure of side chains on a given protein backbone structure by minimizing an energy function . The dihedral angle search space of the side chains is restricted to a discrete set of rotamers for each amino acid position in the protein (which is, obviously, of fixed length). The original DEE description included criteria for the elimination of single rot
|
https://en.wikipedia.org/wiki/Masking%20threshold
|
The masking threshold, within acoustics (a branch of physics that deals with topics such as vibration, sound, ultrasound, and infrasound), refers to a process whereby, if two sounds occur at the same time and one is louder than the other, a person may be unable to hear the softer sound because it is masked by the louder one.
So the masking threshold is the sound pressure level that a sound needs in order to be audible in the presence of another sound, called the "masker". This threshold depends upon the frequency, the type of masker, and the kind of sound being masked. The effect is strongest between two sounds close in frequency.
In the context of audio transmission, there are some advantages to being unable to perceive a sound. In audio encoding, for example, better compression can be achieved by omitting the inaudible tones. This requires fewer bits to encode the sound and reduces the size of the final file.
Applications in audio compression
It is uncommon to work with only one tone: most sounds are composed of multiple tones, so there can be many possible maskers at the same time. In this situation, the global masking threshold is computed using a high-resolution fast Fourier transform, over 512 or 1024 points, to determine the frequencies that make up the sound. Because there are bands that humans are unable to hear, it is necessary to know the signal level, the masker type, and the frequency band before computing the individual thresholds. To keep the masking threshold from falling below the threshold in quiet, the latter is included in the computation of the partial thresholds. This allows computation of the signal-to-mask ratio (SMR).
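As a toy illustration of this bookkeeping (not the MPEG psychoacoustic model itself: the single tonal masker, the flat 10 dB masking offset, and all signal parameters below are assumptions made for the sketch):

```python
import numpy as np

fs, n = 44100, 1024
t = np.arange(n) / fs
# A loud 1000 Hz tone (the masker) plus a quiet 1100 Hz tone nearby:
x = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 1100 * t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
level_db = 20 * np.log10(spectrum + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)

masker = np.argmax(level_db)               # strongest component = assumed masker
mask_threshold_db = level_db[masker] - 10  # assumed flat 10 dB masking offset

probe = np.argmin(np.abs(freqs - 1100))
smr = level_db[probe] - mask_threshold_db
print(f"SMR of the 1100 Hz tone: {smr:.1f} dB")
# A negative SMR means the tone lies below the masking threshold, so an
# encoder could quantize it coarsely (or drop it) without audible effect.
```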
The psychoacoustic model
The MPEG audio encoding process leverages the masking threshold. In this process, there is a block called "Psychoacoustic model". This is communicated with the band filter and the quantify block. The psychoacoustic model analyzes the samples sent to it by the filter band, computing the
|
https://en.wikipedia.org/wiki/Teliospore
|
A teliospore (sometimes called a teleutospore) is the thick-walled resting spore of some fungi (rusts and smuts), from which the basidium arises.
Development
They develop in telia (sing. telium or teliosorus).
The telial host is the primary host in heteroecious rusts. The aecial host is the alternate host (look for pycnia and aecia).
These terms apply when two hosts are required by a heteroecious rust fungus to complete its life cycle.
Morphology
Teliospores consist of one, two or more dikaryote cells.
Teliospores are often dark-coloured and thick-walled, especially in species where they overwinter (acting as chlamydospores).
Two-celled teliospores formerly defined the genus Puccinia. Here the wall is particularly thick at the tip of the terminal cell which extends into a beak in some species.
Teliospores consist of dikaryote cells. As the teliospore cells germinate, the nuclei undergo karyogamy and thereafter meiosis, giving rise to a four-celled basidium with haploid basidiospores.
See also
Chlamydospore
Urediniomycetes
Pycniospore
Aeciospore
Urediniospore
Ustilaginomycetes
Rust fungus: Spores
References
C.J. Alexopoulos, Charles W. Mims, M. Blackwell, Introductory Mycology, 4th ed. (John Wiley and Sons, Hoboken NJ, 2004)
Germ cells
Fungal morphology and anatomy
Mycology
|
https://en.wikipedia.org/wiki/Chlamydospore
|
A chlamydospore is the thick-walled large resting spore of several kinds of fungi, including Ascomycota such as Candida, Basidiomycota such as Panus, and various Mortierellales species. It is the life-stage which survives in unfavourable conditions, such as dry or hot seasons. Fusarium oxysporum which causes the plant disease Fusarium wilt is one which forms chlamydospores in response to stresses like nutrient depletion. Mycelia of the pathogen can survive in this manner and germinate in favorable conditions.
Chlamydospores are usually dark-coloured, spherical, and have a smooth (non-ornamented) surface. They are multicellular, with cells connected by pores in the septa between them.
Chlamydospores are a result of asexual reproduction (in which case they are conidia called chlamydoconidia) or, rarely, of sexual reproduction.
Teliospores are a special kind of chlamydospore formed by rusts and smuts.
See also
Conidium
Resting spore
Zygospore
References
External links
The chlamydospores of Candida albicans
Chlamydospore development
Germ cells
Fungal morphology and anatomy
Mycology
|
https://en.wikipedia.org/wiki/Phonovision
|
Phonovision was a patented concept to create pre-recorded mechanically scanned television recordings on gramophone records. Attempts at developing Phonovision were undertaken in the late 1920s in London by its inventor, Scottish television pioneer John Logie Baird. The objective was not simply to record video, but to record it synchronously, as Baird intended playback from an inexpensive playback device, which he called a 'Phonovisor'. Baird stated that he had several records made of the sound of the vision signal, but that the quality was poor. Unlike Baird's other experiments (including stereoscopy, colour and infra-red night-vision), there is no evidence of him having demonstrated playback of pictures, though he did play back the sound of the vision signal to audiences. Baird moved on, leaving behind several discs in the hands of museums and favoured company members. Until 1982, this was the extent of knowledge regarding Phonovision.
Discoveries and Restoration
From 1982, Donald F. McLean undertook a forensic-level investigation that identified a total of five different disc recordings, dated 1927-28, that closely aligned with the principles of Baird's Phonovision patents. In addition, study of the distortions in the recordings led to a new understanding of the mechanical problems Baird had encountered, explaining why these discs were never good enough for picture playback. The problems were largely corrected in software, and the resulting images are of far better quality than what would have been seen in Baird's laboratories at the time.
Despite its technical problems, Phonovision remains the very earliest means of recording a television signal. In a sense, it can be seen as the progenitor of other disc-based systems, such as the European TelDec system of the early 1970s and RCA's Capacitance Electronic Disc, known as SelectaVision.
The Experimental Phonovision Discs (1927-28)
The earliest surviving Phonovision disc depicts one of the dummy heads tha
|
https://en.wikipedia.org/wiki/Impedance%20cardiography
|
Impedance cardiography (ICG) is a non-invasive technology that measures the total electrical conductivity of the thorax and its changes over time in order to process, continuously, a number of cardiodynamic parameters, such as stroke volume (SV), heart rate (HR), cardiac output (CO), ventricular ejection time (VET) and pre-ejection period. It detects the impedance changes caused by a high-frequency, low-magnitude current flowing through the thorax between an additional two pairs of electrodes located outside of the measured segment. The sensing electrodes also detect the ECG signal, which is used as a timing clock of the system.
Introduction
Impedance cardiography (ICG), also referred to as electrical impedance plethysmography (EIP) or thoracic electrical bioimpedance (TEB), has been researched since the 1940s. NASA helped develop the technology in the 1960s. The use of impedance cardiography in psychophysiological research was pioneered by the publication of an article by Miller and Horvath in 1978. Subsequently, the recommendations of Miller and Horvath were confirmed by a standards group in 1990. A comprehensive list of references is available at ICG Publications. With ICG, four dual disposable sensors placed on the neck and chest are used to transmit and detect electrical and impedance changes in the thorax, which are in turn used to measure and calculate cardiodynamic parameters.
Process
Four pairs of electrodes are placed at the neck and the diaphragm level, delineating the thorax
High frequency, low magnitude current is transmitted through the chest in a direction parallel with the spine from the set of outside pairs
Current seeks path of least resistance: the blood filled aorta (the systolic phase signal) and both vena cava superior and inferior (the diastolic phase signal, mostly related to respiration)
The inside pairs, placed at the anatomic landmarks delineating thorax, sense the impedance signals and the ECG signal
ICG measures the baseline impedance (resistanc
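One classical way these measurements are combined into a stroke-volume estimate is the Kubicek equation; a sketch with purely illustrative input values (the numbers are assumptions, not calibration data):

```python
def stroke_volume_kubicek(rho, L, Z0, dZdt_max, lvet):
    """Kubicek-style impedance-cardiography estimate of stroke volume (mL).

    rho      -- electrical resistivity of blood, ohm*cm (often taken ~135)
    L        -- distance between the sensing electrode pairs, cm
    Z0       -- baseline thoracic impedance, ohm
    dZdt_max -- peak of the impedance derivative dZ/dt, ohm/s
    lvet     -- (left) ventricular ejection time, s
    """
    return rho * (L / Z0) ** 2 * dZdt_max * lvet

# Illustrative numbers only:
sv = stroke_volume_kubicek(rho=135, L=30, Z0=25, dZdt_max=1.2, lvet=0.3)
print(f"SV ~ {sv:.0f} mL")  # ~70 mL; cardiac output CO = SV * HR
```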
|
https://en.wikipedia.org/wiki/HACS
|
High Angle Control System (HACS) was a British anti-aircraft fire-control system employed by the Royal Navy from 1931 and used widely during World War II. HACS calculated the necessary deflection required to place an explosive shell in the location of a target flying at a known height, bearing and speed.
Early history
The HACS was first proposed in the 1920s and began to appear on Royal Navy (RN) ships in January 1930, when HACS I went to sea in . HACS I did not have any stabilization or power assist for director training. HACS III, which appeared in 1935, had provision for stabilization, was hydraulically driven, featured much improved data transmission, and introduced the HACS III table. The HACS III table (computer) had numerous improvements, including raising the maximum target speed to 350 knots, continuous automatic fuze prediction, improved geometry in the deflection screen, and provisions for gyro inputs to provide stabilization of data received from the director. The HACS was a control system, made possible by an effective data transmission network between an external gun director, a below-decks fire control computer, and the ship's medium-calibre anti-aircraft (AA) guns.
Development
Operation
The bearing and altitude of the target was measured directly on the UD4 Height Finder/Range Finder, a coincidence rangefinder located in the High Angle Director Tower (HADT). The direction of travel was measured by aligning a binocular graticule with the target aircraft fuselage. The early versions of HACS, Mk. I through IV, did not measure target speed directly, but estimated this value based on the target type. All of these values were sent via selsyn to the HACS in the High Angle Calculating Position (HACP) located below decks. The HACS used these values to calculate the range rate (often called rate along in RN parlance), which is the apparent target motion along the line of sight. This was also printed on a paper plot so that a range rate officer could ass
|
https://en.wikipedia.org/wiki/Comparison%20of%20version-control%20software
|
In software development, version control is a class of systems responsible for managing changes to computer programs or other collections of information such that revisions have a logical and consistent organization. The following tables include general and technical information on notable version control and software configuration management (SCM) software. For SCM software not suitable for source code, see Comparison of open-source configuration management software.
General information
Table explanation
Repository model describes the relationship between various copies of the source code repository. In a client–server model, users access a master repository via a client; typically, their local machines hold only a working copy of a project tree. Changes in one working copy must be committed to the master repository before they are propagated to other users. In a distributed model, repositories act as peers, and users typically have a local repository with version history available, in addition to their working copies.
Concurrency model describes how changes to the working copy are managed to prevent simultaneous edits from causing nonsensical data in the repository. In a lock model, changes are disallowed until the user requests and receives an exclusive lock on the file from the master repository. In a merge model, users may freely edit files, but are informed of possible conflicts upon checking their changes into the repository, whereupon the version control system may merge changes on both sides, or let the user decide when conflicts arise. Distributed version control systems usually use a merge concurrency model.
Technical information
Table explanation
Software: The name of the application that is described.
Programming language: The coding language in which the application is being developed
Storage Method: Describes the form in which files are stored in the repository. A snapshot indicates that a committed file(s) is stored in its entirety—usually
|
https://en.wikipedia.org/wiki/Body%20capacitance
|
Body capacitance is the physical property of the human body by which it acts as a capacitor. Like any other electrically conductive object, a human body can store electric charge if insulated. The actual amount of capacitance varies with the surroundings; it would be low when standing on top of a pole with nothing nearby, but high when leaning against an insulated but grounded large metal surface, such as a household refrigerator or a metal wall in a factory.
Properties
Synthetic fabrics and friction can charge a human body to about 3 kV. Low potentials may not have any notable effect, but some electronic devices can be damaged by voltages as modest as 100 volts. Electronics factories are very careful to prevent people from becoming charged up, and a whole branch of the electronics industry deals with preventing static charge build-up and protecting products against electrostatic discharge.
Notably, a combination of footwear with some sole materials, low humidity, and a dry carpet (synthetic fiber in particular) can cause footsteps to charge a person's body capacitance to as much as a few tens of kilovolts with respect to the earth. The human and surroundings then constitute a highly charged capacitor. A close approach to any conductive object connected to earth (ground) can create a shock, even a visible spark.
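A short worked example of the quantities involved (the capacitance and voltage below are assumed, representative values, not measurements):

```python
# Energy stored in a statically charged human body, modelled as a capacitor.
C = 150e-12   # assumed body capacitance: 150 pF
V = 15e3      # assumed charge-up from walking on dry carpet: 15 kV

Q = C * V             # stored charge, coulombs
E = 0.5 * C * V ** 2  # stored energy, joules
print(f"Q = {Q * 1e6:.2f} uC, E = {E * 1e3:.1f} mJ")
# Q = 2.25 uC, E ~ 16.9 mJ: a tiny amount of energy, but released within
# nanoseconds through a small contact area, hence the visible spark.
```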
Body capacitance was a significant nuisance when tuning the earliest radios; touching the tuning knob would couple the body capacitance into the tuning circuit, slightly changing its resonant frequency. However, body capacitance is very useful in the theremin, a musical instrument in which it causes slight frequency shifts of the instrument's internal oscillators: one of them changes the pitch, and the other causes the loudness (volume) to change smoothly between silence and full level.
Capacitance of a human body in normal surroundings is typically in the tens to low hundreds of picofarads, which is small by typical electronic standards. While humans are much larger th
|
https://en.wikipedia.org/wiki/Information%20sensitivity
|
Information sensitivity is the control of access to information or knowledge that might result in loss of an advantage or level of security if disclosed to others.
Loss, misuse, modification, or unauthorized access to sensitive information can adversely affect the privacy or welfare of an individual, trade secrets of a business or even the security and international relations of a nation depending on the level of sensitivity and nature of the information.
Non-sensitive information
Public information
This refers to information that is already a matter of public record or knowledge. With regard to government and private organizations, access to or release of such information may be requested by any member of the public, and there are often formal processes laid out for how to do so. The accessibility of government-held public records is an important part of government transparency, accountability to its citizens, and the values of democracy. Public records may furthermore refer to information about identifiable individuals that is not considered confidential, including but not limited to: census records, criminal records, sex offender registry files, and voter registration.
Routine business information
This includes business information that is not subjected to special protection and may be routinely shared with anyone inside or outside of the business.
Types of sensitive information
Confidential information is used in a general sense to mean sensitive information whose access is subject to restriction, and may refer to information about an individual as well as that which pertains to a business.
However, there are situations in which the release of personal information could have a negative effect on its owner. For example, a person trying to avoid a stalker will be inclined to further restrict access to such personal information. Furthermore, a person's SSN or SIN, credit card numbers, and other financial information may be considered private if their disclo
|
https://en.wikipedia.org/wiki/Tert-Butylhydroquinone
|
tert-Butylhydroquinone (TBHQ, tertiary butylhydroquinone) is a synthetic aromatic organic compound which is a type of phenol. It is a derivative of hydroquinone, substituted with a tert-butyl group.
Applications
Food preservative
In foods, TBHQ is used as a preservative for unsaturated vegetable oils and many edible animal fats. It does not cause discoloration even in the presence of iron, and does not change flavor or odor of the material to which it is added. It can be combined with other preservatives such as butylated hydroxyanisole (BHA). As a food additive, its E number is E319. It is added to a wide range of foods. Its primary advantage is extending storage life.
Other
In perfumery, it is used as a fixative to lower the evaporation rate and improve stability.
It is used industrially as a stabilizer to inhibit autopolymerization of organic peroxides.
It is used as an antioxidant in biodiesel.
It is also added to varnishes, lacquers, resins, and oil-field additives.
Safety and regulation
The European Food Safety Authority (EFSA) and the United States Food and Drug Administration (FDA) have evaluated TBHQ and determined that it is safe to consume at the concentration allowed in foods. The FDA and European Union both set an upper limit of 0.02% (200 mg/kg) of the oil or fat content in foods. At very high doses, it has some negative health effects on lab animals, such as producing precursors to stomach tumors and damage to DNA.
A number of studies have shown that prolonged exposure to very high doses of TBHQ may be carcinogenic, especially for stomach tumors. Other studies, however, have shown opposite effects including inhibition against HCA-induced carcinogenesis (by depression of metabolic activation) for TBHQ and other phenolic antioxidants (TBHQ was one of several, and not the most potent). The EFSA considers TBHQ to be noncarcinogenic. A 1986 review of scientific literature concerning the toxicity of TBHQ determined that a wide margin of safety
|
https://en.wikipedia.org/wiki/Chakana
|
The chakana (Andean cross, "stepped cross", "step motif" or "stepped motif") is a stepped cross motif used by the Inca and by pre-Inca Andean societies. The most commonly used variation of the symbol today consists of an equal-armed cross, indicating the cardinal points of the compass, with a superimposed square. Chakana means 'bridge' or 'to cross over' in Quechua. The Andean cross motif appears in pre-contact artifacts such as textiles and ceramics from cultures such as the Chavín, Wari, Ica, and Tiwanaku, but with no particular emphasis and no key or guide to a means of interpretation. The anthropologist Alan Kolata calls the Andean cross "one of the most ubiquitous, if least understood elements in Tiwanaku iconography". The Andean cross symbol has a long cultural tradition, spanning 4,000 years up to the Inca empire.
Andean cross with central eye motif
Ancient Tiwanaku Qirus sometimes bear Andean crosses with central eye motifs. The central eye sometimes is vertically divided. The anthropologist Scott C. Smith interprets the Andean cross motif as a top view of a platform mound (like the Akapana or Pumapunku). According to anthropologist Robin Beck the cross motif in Yaya-Mama stone carving may have been a precursor of the Tiwanaku Andean cross. Beck suggests that the Tiwanaku Andean cross is a representation of a "platform-chamber complex".
Historical evidence
The Andean cross is one of the oldest symbols in the Andes. It appears as a prominent element of the decoration of the Tello Obelisk, a decorated monolithic pillar discovered by Peruvian archaeologist Julio C. Tello at the Chavín culture site of Chavín de Huántar. Construction of Chavín de Huántar began around 1200 BCE and the site continued in use to about 400 BCE. The exact date of the Tello Obelisk is not known, but based on its style it probably dates to the middle of this range, around 800 BCE. The form of the Andean cross may be replicated in the Akapana, a large terraced platform mo
|
https://en.wikipedia.org/wiki/Grey%20noise
|
Grey noise is random noise whose frequency spectrum follows an equal-loudness contour (such as an inverted A-weighting curve).
The result is that grey noise contains all frequencies with equal loudness, as opposed to white noise, which contains all frequencies with equal energy. The difference between the two is the result of psychoacoustics, more specifically the fact that human hearing is more sensitive to some frequencies than others.
Since equal-loudness curves depend not only on the individual but also on the volume at which the noise is played back, there is no one true grey noise. A mathematically simpler and clearly defined approximation of an equal-loudness noise is pink noise which creates an equal amount of energy per octave, not per hertz (i.e. a logarithmic instead of a linear behavior), so pink noise is closer to "equally loud at all frequencies" than white noise is.
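A minimal sketch of one way to synthesize an approximation: shape white noise by the inverse of the A-weighting curve, boosting exactly the frequencies the ear is less sensitive to. The IEC 61672 A-weighting formula is standard; the 40 dB boost cap and the omitted 1 kHz normalization are assumptions made to keep the sketch simple:

```python
import numpy as np

def a_weight(f):
    """A-weighting magnitude response (linear scale), IEC 61672 formula."""
    f2 = f ** 2
    num = (12194.0 ** 2) * f2 ** 2
    den = ((f2 + 20.6 ** 2)
           * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    return num / den

fs, n = 44100, 2 ** 16
white = np.random.randn(n)

spectrum = np.fft.rfft(white)
f = np.fft.rfftfreq(n, 1 / fs)
w = np.maximum(a_weight(f), 1e-2)   # floor the weight: cap the boost at 40 dB

grey = np.fft.irfft(spectrum / w)   # inverse A-weighting shaping
grey /= np.max(np.abs(grey))        # normalize to [-1, 1] for playback
```

Since, as noted above, equal-loudness contours depend on the playback level, any such filter yields only one member of a family of possible "grey" spectra.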
See also
Colors of noise
References
Noise (electronics)
|
https://en.wikipedia.org/wiki/Blackbird%20%28software%29
|
Blackbird (formerly named FORscene) is an integrated internet video platform: video editing software covering non-linear editing and publishing for broadcast, web and mobile.
Designed by Blackbird plc to allow collaborative editing of video at resolutions of up to 540p and up to 60 frames per second on bandwidths as low as 2 Mbit/s, it is capable of video logging, reviewing, publishing and hosting through HD and 4K to UHD quality from original sources. The system is implemented as a mobile app for Android and iOS devices, a Java applet and a pure JavaScript web application as part of its user interface. The latter runs on platforms without application installation, codec installation, or machine configuration, and has Web 2.0 features.
Blackbird won the Royal Television Society's award for Technology in the post-production process in December 2005.
Usage
The Blackbird platform's functionality makes it suitable for multiple uses in the video editing workflow.
For editors and producers wanting to produce broadcast-quality output, Blackbird provides an environment for the early stages of post-production to happen remotely and cheaply (logging, shot selection, collaborative reviewing, rough cutting and offline editing, for example) and more recently fine cut editing. Blackbird then outputs instructions in standard formats which can be applied to the high-quality master-footage for detailed and high-quality editing prior to broadcast.
Other users want to prepare footage for publishing to lower-quality media - the small screens of mobile phones and video iPods, and to the web where bandwidth restricts the quality of video it is currently practical to output. For these users, all editing can be carried out in Blackbird, before publishing to social media and online video channels, OTT or commercial cloud storage. Video can also be saved in MPEG, Ogg, HTML5, podcasting formats as well as Blackbird's proprietary player.
The platform was reported in July 2012 as being u
|
https://en.wikipedia.org/wiki/D/U%20ratio
|
In the design of radio broadcast systems, especially television systems, the desired-to-undesired channel ratio (D/U ratio) is a measure of the strength of the broadcast signal for a particular channel compared with the strength of undesired broadcast signals in the same channel (e.g. from other nearby transmitting stations).
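Since the D/U ratio is simply a ratio of signal strengths, computing it is a one-liner once the powers (or logarithmic field strengths) are known; a sketch with made-up values (the threshold required for acceptable reception depends on the broadcast standard and is not shown):

```python
import math

def du_db_from_power(p_desired, p_undesired):
    """D/U ratio in dB from received co-channel signal powers (watts)."""
    return 10 * math.log10(p_desired / p_undesired)

def du_db_from_field(e_desired_dbuv, e_undesired_dbuv):
    """D/U in dB when field strengths are already logarithmic (dBuV/m)."""
    return e_desired_dbuv - e_undesired_dbuv

print(du_db_from_power(1e-9, 1e-11))  # 20.0 dB
print(du_db_from_field(60.0, 38.0))   # 22.0 dB
```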
See also
Signal-to-noise ratio
References
ATSC A/74 compliance and tuner design implications; eetimes.com
Engineering ratios
Noise (electronics)
|
https://en.wikipedia.org/wiki/Radium%20and%20radon%20in%20the%20environment
|
Radium and radon are important contributors to environmental radioactivity. Radon occurs naturally as a result of the decay of radioactive elements in soil, and it can accumulate in houses built on areas where such decay occurs. Radon is a major cause of cancer: it is estimated to contribute to about 2% of all cancer-related deaths in Europe.
Radium, like radon, is radioactive; it is found in small quantities in nature and is hazardous to life if exposure exceeds 20-50 mSv/year. Radium is a decay product of uranium and thorium. Radium may also be released into the environment by human activity, for example from improperly discarded products painted with radioluminescent paint.
Radium
In the oil and gas industries
Residues from the oil and gas industry often contain radium and its daughters. The sulfate scale from an oil well can be very radium rich. The water inside an oil field is often very rich in strontium, barium and radium while seawater is very rich in sulfate so if water from an oil well is discharged into the sea or mixed with seawater the radium is likely to be brought out of solution by the barium/strontium sulfate which acts as a carrier precipitate.
Radioluminescent (glow in the dark) products
Local contamination from radium-based radioluminescent paints having been improperly disposed of is not unknown.
In radioactive quackery
Eben Byers was a wealthy American socialite whose death in 1932 from using a radioactive quackery product called Radithor is a prominent example of a death caused by radium. Radithor contained ~1 μCi (40 kBq) of 226Ra and 1 μCi of 228Ra per bottle. Radithor was taken by mouth, and radium, being a calcium mimic, has a very long biological half-life in bone.
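The danger of ingestion follows from two half-lives working together: the physical half-life of 226Ra (about 1,600 years) means the activity barely decays within a human lifetime, and the long biological half-life in bone means the source stays in place. A short worked example (the 20-year retention window is an assumption for illustration):

```python
import math

RA226_HALF_LIFE_Y = 1600.0   # physical half-life of Ra-226, years

def remaining_fraction(t_years, half_life=RA226_HALF_LIFE_Y):
    """Fraction of Ra-226 atoms still undecayed after t_years."""
    return math.exp(-math.log(2) * t_years / half_life)

# 1 uCi = 37 kBq = 37,000 decays per second, essentially constant:
print(f"{remaining_fraction(20):.4f}")  # ~0.9914 left after 20 years
decays_per_year = 37_000 * 3600 * 24 * 365.25
print(f"{decays_per_year:.2e} decays/year per retained bottle-equivalent")
```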
Radon
Most of the dose is due to the decay of the polonium (218Po) and lead (214Pb) daughters of 222Rn. By controlling exposure to the daughters the radioactive dose to the skin and lungs can be reduced by at least 90%. This can be done by wearing a dust mask, and wearing a
|
https://en.wikipedia.org/wiki/Western%20Australian%20borders
|
The land border of the state of Western Australia (WA) bisects mainland Australia, nominally along 129th meridian east longitude (129° East). That land border divides WA from the Northern Territory (NT) and South Australia (SA). However, for various reasons, the actual border (as surveyed and marked or otherwise indicated on the ground) deviates from 129° East, and is not a single straight line.
The Western Australian town closest to the border is Kununurra, which is about west of the border with the NT. The settlement outside WA that is closest to the border is Border Village, SA, which adjoins the border; the centre of Border Village is about from the border, on the Eyre Highway.
Border delineation
In some cases, the physical signage and structures that mark the actual border deviate from the 129th meridian. The Northern Territory border with Western Australia and the South Australian border with Western Australia are displaced east–west by approximately , as a result of errors caused by the technical limits of surveying technology in the 1920s, when the current border was surveyed.
Consequently, since the 1920s, the border has included an approximately east–west "dog-leg", which runs along the 26th parallel south latitude (26° south), immediately west of Surveyor Generals Corner, the point at which WA officially meets both the NT and SA. In June 1968, monuments were erected to mark both ends of this east–west line.
History
1788–1825
In 1788 Governor Phillip claimed the continent of Australia only as far west as the 135th meridian east (135° east) in accordance with his commission. (26 January 1788 – MAP)
It has been suggested that the 1788 claim by the British of 135° east was in reference to Spain's claims under the Treaty of Tordesillas. Spain was seen as no longer having an interest in the area. On the other hand, the other signatories to the treaty, the Portuguese still had a presence in Macau and East Timor. Adoption of 135° east as a boundary wou
|
https://en.wikipedia.org/wiki/Grand%20potential
|
The grand potential or Landau potential or Landau free energy is a quantity used in statistical mechanics, especially for irreversible processes in open systems.
The grand potential is the characteristic state function for the grand canonical ensemble.
Definition
The grand potential is defined by

$\Phi_G := U - TS - \mu N,$

where U is the internal energy, T is the temperature of the system, S is the entropy, μ is the chemical potential, and N is the number of particles in the system.
The change in the grand potential is given by

$d\Phi_G = -S\,dT - P\,dV - N\,d\mu,$

where P is pressure and V is volume, using the fundamental thermodynamic relation (combined first and second thermodynamic laws)

$dU = T\,dS - P\,dV + \mu\,dN.$
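Spelling the differential out, a short derivation from the definition and the fundamental relation just quoted:

```latex
\begin{align}
  \Phi_G &= U - TS - \mu N \\
  d\Phi_G &= dU - T\,dS - S\,dT - \mu\,dN - N\,d\mu \\
          &= (T\,dS - P\,dV + \mu\,dN) - T\,dS - S\,dT - \mu\,dN - N\,d\mu \\
          &= -S\,dT - P\,dV - N\,d\mu .
\end{align}
```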
When the system is in thermodynamic equilibrium, ΦG is a minimum. This can be seen by considering that dΦG is zero if the volume is fixed and the temperature and chemical potential have stopped evolving.
Landau free energy
Some authors refer to the grand potential as the Landau free energy or Landau potential and write its definition as

$\Omega := F - \mu N = U - TS - \mu N,$

named after Russian physicist Lev Landau; depending on system stipulations, this may be a synonym for the grand potential. For homogeneous systems, one obtains $\Omega = -PV$.
Homogeneous systems (vs. inhomogeneous systems)
In the case of a scale-invariant type of system (where a system of volume $\lambda V$ has exactly the same set of microstates as $\lambda$ systems of volume $V$), when the system expands new particles and energy will flow in from the reservoir to fill the new volume with a homogeneous extension of the original system.
The pressure, then, must be constant with respect to changes in volume:

$\left( \frac{\partial P}{\partial V} \right)_{T, \mu} = 0,$

and all extensive quantities (particle number, energy, entropy, potentials, ...) must grow linearly with volume, e.g.

$N(T, \mu, \lambda V) = \lambda\, N(T, \mu, V).$

In this case we simply have $\Phi_G = -PV$, as well as the familiar relationship $G = \mu N$ for the Gibbs free energy.
The value of $\Phi_G$ can be understood as the work that can be extracted from the system by shrinking it down to nothing (putting all the particles and energy back into the reservoir). The fact that $\Phi_G = -PV$ is negative implies
|
https://en.wikipedia.org/wiki/System%20monitor
|
A system monitor is a hardware or software component used to monitor system resources and performance in a computer system.
Among the management issues regarding use of system monitoring tools are resource usage and privacy. Monitoring can track both input and output values and events of systems.
Overview
Software monitors occur more commonly, sometimes as a part of a widget engine. These monitoring systems are often used to keep track of system resources, such as CPU usage and frequency, or the amount of free RAM. They are also used to display items such as free space on one or more hard drives, the temperature of the CPU and other important components, and networking information including the system IP address and current rates of upload and download. Other possible displays may include the date and time, system uptime, computer name, username, hard drive S.M.A.R.T. data, fan speeds, and the voltages being provided by the power supply.
Less common are hardware-based systems monitoring similar information. Customarily these occupy one or more drive bays on the front of the computer case, and either interface directly with the system hardware or connect to a software data-collection system via USB. With either approach to gathering data, the monitoring system displays information on a small LCD panel or on series of small analog or LED numeric displays. Some hardware-based system monitors also allow direct control of fan speeds, allowing the user to quickly customize the cooling in the system.
A few very high-end models of hardware system monitor are designed to interface with only a specific model of motherboard. These systems directly utilize the sensors built into the system, providing more detailed and accurate information than less-expensive monitoring systems customarily provide.
Software monitoring
Software monitoring tools operate within the device they are monitoring.
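A minimal software monitor in Python, assuming the third-party psutil package is installed (the 5-second refresh and the chosen metrics are arbitrary):

```python
import time
import psutil  # cross-platform system statistics library

def snapshot():
    """Collect a few of the resources a typical system monitor displays."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),  # averaged over 1 s
        "ram_percent": psutil.virtual_memory().percent,
        "disk_free_gb": psutil.disk_usage("/").free / 2**30,
        "uptime_h": (time.time() - psutil.boot_time()) / 3600,
    }

while True:  # poll forever; stop with Ctrl+C
    print(snapshot())
    time.sleep(5)
```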
Hardware monitoring
Unlike software monitoring tools, hardware measurement too
|
https://en.wikipedia.org/wiki/Semiconductor%20device%20modeling
|
Semiconductor device modeling creates models for the behavior of the electrical devices based on fundamental physics, such as the doping profiles of the devices. It may also include the creation of compact models (such as the well known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. Normally it starts from the output of a semiconductor process simulation.
Introduction
The figure to the right provides a simplified conceptual view of “the big picture.” This figure shows two inverter stages and the resulting input-output voltage-time plot of the circuit. From the digital systems point of view the key parameters of interest are: timing delays, switching power, leakage current and cross-coupling (crosstalk) with other blocks. The voltage levels and transition speed are also of concern.
The figure also shows schematically the importance of Ion versus Ioff, which in turn is related to drive-current (and mobility) for the “on” device and several leakage paths for the “off” devices. Not shown explicitly in the figure are the capacitances—both intrinsic and parasitic—that affect dynamic performance.
The power scaling that is now a major driving force in the industry is reflected in the simplified equation shown in the figure; the critical parameters are capacitance, power supply voltage and clocking frequency. Key parameters that relate device behavior to system performance include the threshold voltage, driving current and subthreshold characteristics.
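The simplified relation behind that statement is usually written $P_{dynamic} = \alpha C V^2 f$; a worked sketch with illustrative numbers (the activity factor $\alpha$ and all values are assumptions):

```python
def dynamic_power(alpha, C, V, f):
    """Simplified dynamic switching power: P = alpha * C * V^2 * f.

    alpha -- activity factor (fraction of capacitance switched per cycle)
    C     -- total switched capacitance, farads
    V     -- supply voltage, volts
    f     -- clock frequency, hertz
    """
    return alpha * C * V ** 2 * f

# Illustrative numbers only; note the quadratic payoff of lowering V:
print(dynamic_power(0.1, 1e-9, 1.0, 2e9))  # 0.2 W
print(dynamic_power(0.1, 1e-9, 0.5, 2e9))  # 0.05 W: half the voltage, a quarter the power
```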
It is the confluence of system performance issues with the underlying technology and device design variables that results in the ongoing scaling laws that we now codify as Moore’s law.
Device modeling
The physics and modeling of devices in integrated circuits is dominated by MOS and bipolar transistor modeling. However, other devices are important, such as memory devices, that have rather different modeling requirements.
|
https://en.wikipedia.org/wiki/Coastline%20paradox
|
The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.
The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more accurate the measurement device, the closer results will be to the true length of the edge. When measuring a coastline, however, the closer measurement does not result in an increase in accuracy—the measurement only increases in length; unlike with the metal bar, there is no way to obtain a maximum value for the length of the coastline.
In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution.
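The effect is easy to reproduce numerically on a synthetic "coastline". A sketch using a Koch curve and a coarse divider (fixed-ruler) measurement; the recursion depth, ruler sizes, and function names are all arbitrary choices:

```python
import math

def koch(p, q, depth):
    """Vertices of a Koch curve from p to q (q itself excluded)."""
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a, b = (x0 + dx, y0 + dy), (x1 - dx, y1 - dy)
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    apex = (mx - (b[1] - a[1]) * math.sqrt(3) / 2,   # equilateral bump
            my + (b[0] - a[0]) * math.sqrt(3) / 2)
    return (koch(p, a, depth - 1) + koch(a, apex, depth - 1)
            + koch(apex, b, depth - 1) + koch(b, q, depth - 1))

def ruler_length(points, step):
    """Divider method: count placements of a ruler of fixed length."""
    total, anchor = 0.0, points[0]
    for pt in points[1:]:
        if math.dist(anchor, pt) >= step:
            total += step
            anchor = pt
    return total

curve = koch((0.0, 0.0), (1.0, 0.0), depth=7) + [(1.0, 0.0)]
for step in (0.3, 0.1, 0.03, 0.01, 0.003):
    print(f"ruler {step:>5}: measured length ~ {ruler_length(curve, step):.2f}")
# The measured length keeps growing as the ruler shrinks: Richardson's effect.
```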
Mathematical aspects
The basic concept of length originates from Euclidean distance. In Euclidean geometry, a straight line repr
|
https://en.wikipedia.org/wiki/Privilege%20bracketing
|
In computer security, privilege bracketing is a temporary increase in software privilege within a process to perform a specific function: the necessary privileges are assumed at the last possible moment and dismissed as soon as they are no longer strictly necessary. This ostensibly limits the fallout from erroneous code that unintentionally exploits more privilege than is merited. It is an example of the use of the principle of least privilege in defensive programming.
It should be distinguished from privilege separation, which is a much more effective security measure that separates the privileged parts of the system from its unprivileged parts by putting them into different processes, as opposed to switching between them within a single process.
A well-known example of privilege bracketing on Debian/Ubuntu is using the 'sudo' tool to temporarily acquire 'root' privileges for the duration of an administrative command. A Microsoft PowerShell equivalent is "Just In Time, Just Enough Admin".
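At the API level, the same pattern can be wrapped in a context manager. A minimal sketch for a set-user-ID root program on a POSIX system (the assumption, stated in the comments, is that the saved set-user-ID is root, so the process may switch its effective UID back and forth):

```python
import os
from contextlib import contextmanager

@contextmanager
def privileged(euid=0):
    """Privilege bracketing for a setuid-root process that normally runs
    with its effective UID dropped to the real, unprivileged user.
    Assumes the saved set-user-ID is root, so seteuid() can switch back."""
    unprivileged = os.geteuid()
    os.seteuid(euid)              # acquire privilege at the last moment
    try:
        yield
    finally:
        os.seteuid(unprivileged)  # dismiss it as soon as possible

# Hypothetical usage: only the bind itself runs with euid 0.
# with privileged():
#     sock.bind(("", 80))        # a privileged port on Linux
```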
See also
Privilege revocation (computing)
References
Computer security procedures
|
https://en.wikipedia.org/wiki/Mobile%20television
|
Mobile television is television watched on a small handheld or mobile device, typically developed for that purpose. It includes service delivered via mobile phone networks, received free-to-air via terrestrial television stations, or via satellite broadcast. Regular broadcast standards or special mobile TV transmission formats can be used. Additional features include downloading TV programs and podcasts from the Internet and storing programming for later viewing.
According to the Harvard Business Review, the growing adoption of smartphones allowed users to watch as much mobile video in three days of the 2010 Winter Olympics as they watched throughout the entire 2008 Summer Olympics, a five-fold increase. However, except in South Korea, consumer acceptance of broadcast mobile TV has been limited due to lack of compatible devices.
Early mobile TV receivers were based on old analog television systems. They were the earliest televisions that could be placed in a coat pocket. The first was the Panasonic IC TV MODEL TR-001, introduced in 1970. The second was sold to the public by Clive Sinclair in January 1977. It was called the Microvision or the MTV-1. It had a two-inch (50 mm) CRT screen and was also the first television which could pick up signals in multiple countries. It measured x × and was sold for less than £100 in the UK and for around $400 in the United States. The project took over ten years to develop and was funded by around £1.6 million in British government grants.
In 2002, South Korea was the first country to introduce commercial mobile TV via 2G CDMA IS95-C, and 3G (CDMA2000 1X EVDO) networks. In 2005, South Korea became the first country to broadcast satellite mobile TV via DMB (S-DMB) on May 1, and terrestrial DMB (T-DMB) on December 1. South Korea and Japan are developing the sector. Mobile TV services were launched in Hong Kong during March 2006 by the operator CSL on the 3G network. BT launched mobile TV in the United Kingdom in September 2006
|
https://en.wikipedia.org/wiki/SCARA
|
The SCARA is a type of industrial robot. The acronym stands for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm.
By virtue of the SCARA's parallel-axis joint layout, the arm is slightly compliant in the X-Y direction but rigid in the Z direction, hence the term selective compliance. This is advantageous for many types of assembly operations, for example, inserting a round pin in a round hole without binding.
The second attribute of the SCARA is the jointed two-link arm layout similar to human arms, hence the often-used term, articulated. This feature allows the arm to extend into confined areas and then retract or "fold up" out of the way. This is advantageous for transferring parts from one cell to another or for loading or unloading process stations that are enclosed.
SCARAs are generally faster than comparable Cartesian robot systems. Their single pedestal mount requires a small footprint and provides an easy, unhindered form of mounting. On the other hand, SCARAs can be more expensive than comparable Cartesian systems and the controlling software requires inverse kinematics for linear interpolated moves. However, this software typically comes with the SCARA and is usually transparent to the end-user.
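The inverse-kinematics computation mentioned above reduces, for the planar two-link portion of the arm, to a well-known closed-form solution. A sketch (link lengths and target are illustrative; real controllers add the Z axis, wrist rotation, and joint limits):

```python
import math

def scara_ik(x, y, l1, l2, elbow=+1):
    """Closed-form inverse kinematics of a planar two-link arm.

    Returns the shoulder and elbow angles (radians) placing the tool at
    (x, y); elbow = +1 or -1 selects between the two mirror solutions.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = elbow * math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# A linear interpolated move is a sequence of (x, y) waypoints, each one
# pushed through the solver -- the computation the controller hides:
print([round(a, 3) for a in scara_ik(0.3, 0.2, l1=0.25, l2=0.25)])
```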
Sankyo Seiki, Pentel and NEC presented the SCARA robot as a completely new concept for assembly robots in 1981. The robot was developed under the guidance of Hiroshi Makino, a professor at the University of Yamanashi. Its arm was rigid in the Z-axis and pliable in the XY-axes, which allowed it to adapt to holes in the XY-axes.
Gallery
Source:
See also
Articulated robot
Schoenflies displacement
References
External links
Why SCARA? A Case Study – A Comparison between 3-axis r-theta robot vs. 4-axis SCARA robot by Innovative Robotics, a division of Ocean Bay and Lake Company
-
Robotic manipulators
1981 in robotics
|
https://en.wikipedia.org/wiki/230%20%28number%29
|
230 (two hundred [and] thirty) is the natural number following 229 and preceding 231.
Additionally, 230 is:
a composite number, its divisors other than 1 and itself being 2, 5, 10, 23, 46, and 115.
a sphenic number because it is the product of 3 primes. It is also the first sphenic number to immediately precede another sphenic number.
palindromic and a repdigit in bases 22 (AA), 45 (55), 114 (22) and 229 (11).
a Harshad number in bases 2, 6, 10, 12, 23 (and 16 other bases).
a happy number.
a nontotient since there is no integer with 230 coprimes below it.
the sum of the coprime counts for the first 27 integers.
the aliquot sum of both 454 and 52441.
part of the 41-aliquot tree.
the maximal number of pieces that can be obtained by cutting an annulus with 20 cuts.
The aliquot sequence starting at 224 is: 224, 280, 440, 640, 890, 730, 602, 454, 230, 202, 104, 106, 56, 64, 63, 41, 1, 0.
There are 230 unique space groups describing all possible crystal symmetries.
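Several of the properties above are easy to verify mechanically; a sketch using the sympy library (the brute-force search bound for the nontotient check is an assumption chosen generously):

```python
from sympy import factorint, totient

n = 230

# Sphenic: a product of exactly three distinct primes.
f = factorint(n)
print(f, len(f) == 3 and all(e == 1 for e in f.values()))  # {2:1, 5:1, 23:1} True

# Happy: iterating the sum of squared digits reaches 1.
def is_happy(k):
    seen = set()
    while k != 1 and k not in seen:
        seen.add(k)
        k = sum(int(d) ** 2 for d in str(k))
    return k == 1
print(is_happy(n))  # True: 230 -> 13 -> 10 -> 1

# Nontotient: no m has exactly 230 integers below it coprime to it.
print(all(totient(m) != n for m in range(1, 10 * n)))  # True
```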
Integers between 231 and 239
231
232
233
234
235
236
237
238
239
References
Integers
|
https://en.wikipedia.org/wiki/240%20%28number%29
|
240 (two hundred [and] forty) is the natural number following 239 and preceding 241.
Additionally, 240 is:
a semiperfect number.
a concatenation of two of its proper divisors.
a highly composite number since it has 20 divisors total (1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 20, 24, 30, 40, 48, 60, 80, 120, and 240), more than any previous number.
a refactorable number or tau number, since it has 20 divisors and 20 divides 240.
a highly totient number, since it has 31 totient answers, more than any previous integer.
a pronic number since it can be expressed as the product of two consecutive integers, 15 and 16.
palindromic in bases 19 (CC), 23 (AA), 29 (88), 39 (66), 47 (55) and 59 (44).
a Harshad number in bases 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15 (and 73 other bases).
the aliquot sum of 120 and 57121.
part of the 12161-aliquot tree. The aliquot sequence starting at 120 is: 120, 240, 504, 1056, 1968, 3240, 7650, 14112, 32571, 27333, 12161, 1, 0.
There would be exactly 240 visible pieces of the 4D version of the Rubik's Revenge (the 4x4x4 Rubik's Cube): just as the 3D Rubik's Revenge has 64 − 8 = 56 visible pieces, a 4D Rubik's Revenge would have 256 − 16 = 240 visible pieces.
240 is the smallest number that can be expressed as a sum of consecutive primes in three different ways: 240 = 113 + 127 = 53 + 59 + 61 + 67 = 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43.
E8 has 240 roots.
There are 240 distinct solutions of the Soma cube puzzle.
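The consecutive-prime-sum property is pleasant to verify by brute force; a sketch using sympy (the enumeration simply slides a window of consecutive primes):

```python
from sympy import primerange

target = 240
primes = list(primerange(2, target + 1))

ways = []
for i in range(len(primes)):
    total = 0
    for j in range(i, len(primes)):
        total += primes[j]
        if total == target:
            ways.append(primes[i:j + 1])
            break
        if total > target:
            break

for w in ways:
    print(" + ".join(map(str, w)), "=", sum(w))
print(len(ways), "ways")  # 3, matching the three runs quoted above
```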
References
Integers
|
https://en.wikipedia.org/wiki/Sexual%20mimicry
|
Sexual mimicry occurs when one sex mimics the opposite sex in its behavior, appearance, or chemical signalling.
It is more commonly seen within invertebrate species, although sexual mimicry is also seen among vertebrates such as spotted hyenas.
Sexual mimicry is commonly used as a mating strategy to gain access to a mate, a defense mechanism to avoid more dominant individuals, or a survival strategy. It can also be a physical characteristic that establishes an individual's place in society. Sexual mimicry is employed differently across species and it is part of their strategy for survival and reproduction.
Examples of intraspecific sexual mimicry in animals include the spotted hyena, certain types of fish, passerine birds and some species of insect.
Interspecific sexual mimicry can also occur in some plant species, especially orchids. In plants employing sexual mimicry, flowers mimic mating signals of their pollinator insects. These insects are attracted and pollinate the flowers through pseudocopulations or other sexual behaviors performed on the flower.
Social systems
Sexual mimicry can play a role in the development of a species' social system. Perhaps the most extreme example of this can be seen in the spotted hyena, Crocuta crocuta. Female hyenas of all ages possess pseudomasculinized genitalia, including a pseudopenis formed from the clitoris, and a false scrotum formed from the labia. These characteristics likely initially evolved to reduce rates of intrasex aggression received by cub and juvenile females from adult females. However, the trait has evolved beyond its initial use to become highly relevant to spotted hyena communication. Subordinate hyenas will greet dominant individuals by erecting their penis or pseudopenis and allowing the dominant individual to lick it. This likely initially evolved as a means of discerning the sex of the subordinate individual, as the pseudopenis less closely resembles a genuine penis when erect, and tasting the a
|
https://en.wikipedia.org/wiki/Minifloat
|
In computing, minifloats are floating-point values represented with very few bits. Predictably, they are not well suited for general-purpose numerical calculations. They are used for special purposes, most often in computer graphics, where iterations are small and precision has aesthetic effects. Machine learning also uses similar formats like bfloat16. Additionally, they are frequently encountered as a pedagogical tool in computer-science courses to demonstrate the properties and structures of floating-point arithmetic and IEEE 754 numbers.
Minifloats with 16 bits are half-precision numbers (as opposed to single and double precision). There are also minifloats with 8 bits or even fewer.
Minifloats can be designed following the principles of the IEEE 754 standard. In this case they must obey the (not explicitly written) rules for the frontier between subnormal and normal numbers and must have special patterns for infinity and NaN. Normalized numbers are stored with a biased exponent. The new revision of the standard, IEEE 754-2008, has 16-bit binary minifloats.
Notation
A minifloat is usually described using a tuple of four numbers, (S, E, M, B):
S is the length of the sign field. It is usually either 0 or 1.
E is the length of the exponent field.
M is the length of the mantissa (significand) field.
B is the exponent bias.
A minifloat format denoted by (S, E, M, B) is, therefore, $S + E + M$ bits long.
In computer graphics minifloats are sometimes used to represent only integral values. If at the same time subnormal values should exist, the least subnormal number has to be 1. The bias value would in this case be $B = 1 - M$, assuming two special exponent values are used per IEEE.
The (S, E, M, B) notation can be converted to a (B, P, L, U) format as $(2, M + 1, -B + 2, 2^E - B - 1)$ (with IEEE use of exponents).
Example 8-bit float
A minifloat in 1 byte (8 bit) with 1 sign bit, 4 exponent bits and 3 significand bits (in short, a 1.4.3 minifloat) is demonstrated here. The exponent bias is defined as 7 to cente
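A decoder for this 1.4.3 layout makes the three cases (subnormal, normal, special) explicit; a minimal sketch following the IEEE-style conventions described above:

```python
def decode_143(byte):
    """Decode a 1.4.3 minifloat: 1 sign bit, 4 exponent bits,
    3 significand bits, exponent bias 7, IEEE-style specials."""
    s = (byte >> 7) & 0x1
    e = (byte >> 3) & 0xF
    m = byte & 0x7
    sign = -1.0 if s else 1.0
    if e == 0:                    # subnormal: no implicit leading 1
        return sign * (m / 8) * 2.0 ** (1 - 7)
    if e == 0xF:                  # all-ones exponent: infinity or NaN
        return sign * float("inf") if m == 0 else float("nan")
    return sign * (1 + m / 8) * 2.0 ** (e - 7)

print(decode_143(0b0_0001_000))  # 1.000 * 2^-6 = 0.015625 (least normal)
print(decode_143(0b0_0111_000))  # 1.000 * 2^0  = 1.0
print(decode_143(0b0_1110_111))  # 1.875 * 2^7  = 240.0 (largest finite)
```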
|
https://en.wikipedia.org/wiki/Georg%20Nees
|
Georg Nees (23 June 1926 – 3 January 2016) was a German academic who was a pioneer of computer art and generative graphics. He studied mathematics, physics and philosophy in Erlangen and Stuttgart and was a scientific advisor to SEMIOSIS, an international journal of semiotics and aesthetics. In 1977, he was appointed Honorary Professor of Applied Computer Science at the University of Erlangen. Nees is one of the "3N" computer pioneers, an abbreviation that has become acknowledged for Frieder Nake, Georg Nees and A. Michael Noll, whose computer graphics were created with digital computers.
Early life and studies
Georg Nees was born in 1926 in Nuremberg, where he spent his childhood. He showed scientific curiosity and interest in art from a young age and among his favorite pastimes were viewing art postcards and looking through a microscope. He attended a school in Schwabach near Nuremberg, graduating in 1945. From 1945 to 1951, he studied mathematics and physics at the University of Erlangen then worked as an industry mathematician for the Siemens Schuckertwerk in Erlangen from 1951 to 1985. There he started to write his first programs in 1959. The company was later incorporated into the Siemens AG.
From 1964 onwards, he studied philosophy at the Technische Hochschule Stuttgart (since 1967 the University of Stuttgart), under Max Bense. He received his doctorate with his thesis on Generative Computergraphik under Max Bense in 1969. His work is considered one of the first theses on Generative Computer Graphics. In 1969, his thesis was published as a book entitled "Generative Computergraphik" and also included examples of program code and graphics produced thereby. After his retirement in 1985 Nees worked as an author and in the field of computer art.
Computer art
In February 1965, Nees showed - as works of art - the world's first computer graphics created with a digital computer. The exhibition, titled computer graphik took place at the public premises of the "Stud
|
https://en.wikipedia.org/wiki/Partition%20topology
|
In mathematics, the partition topology is a topology that can be induced on any set $X$ by partitioning $X$ into disjoint subsets $P$; these subsets form the basis for the topology. There are two important examples which have their own names:
The odd–even topology is the topology where $X = \mathbb{N}$ and $P = \big\{ \{2k - 1, 2k\} : k \in \mathbb{N} \big\}$. Equivalently, $P = \big\{ \{1, 2\}, \{3, 4\}, \{5, 6\}, \ldots \big\}$.
The deleted integer topology is defined by letting $X = \bigcup_{n \in \mathbb{N}} (n - 1, n) \subseteq \mathbb{R}$ and $P = \big\{ (n - 1, n) : n \in \mathbb{N} \big\}$.
The trivial partitions yield the discrete topology (each point of $X$ is a set in $P$, so $P = \{ \{x\} : x \in X \}$) or the indiscrete topology (the entire set $X$ is in $P$, so $P = \{X\}$).
Any set $X$ with a partition topology generated by a partition $P$ can be viewed as a pseudometric space with a pseudometric given by:

$d(x, y) = \begin{cases} 0 & \text{if } x \text{ and } y \text{ lie in the same partition element,} \\ 1 & \text{otherwise.} \end{cases}$

This is not a metric unless $P$ yields the discrete topology.
The partition topology provides an important example of the independence of various separation axioms. Unless $P$ is trivial, at least one set in $P$ contains more than one point, and the elements of this set are topologically indistinguishable: the topology does not separate points. Hence $X$ is not a Kolmogorov space, nor a T1 space, a Hausdorff space or an Urysohn space. In a partition topology the complement of every open set is also open, and therefore a set is open if and only if it is closed. Therefore, $X$ is regular, completely regular, normal and completely normal. The quotient space $X / P$ carries the discrete topology.
See also
References
Topological spaces
|
https://en.wikipedia.org/wiki/Particular%20point%20topology
|
In mathematics, the particular point topology (or included point topology) is a topology where a set is open if it contains a particular point of the topological space. Formally, let X be any non-empty set and p ∈ X. The collection

$T = \{ S \subseteq X : p \in S \} \cup \{ \emptyset \}$

of subsets of X is the particular point topology on X. There are a variety of cases that are individually named:
If X has two points, the particular point topology on X is the Sierpiński space.
If X is finite (with at least 3 points), the topology on X is called the finite particular point topology.
If X is countably infinite, the topology on X is called the countable particular point topology.
If X is uncountable, the topology on X is called the uncountable particular point topology.
A generalization of the particular point topology is the closed extension topology. In the case when X \ {p} has the discrete topology, the closed extension topology is the same as the particular point topology.
This topology is used to provide interesting examples and counterexamples.
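As a concrete illustration (the three-point set below is a hypothetical example, not taken from the article), one can generate the particular point topology on a small finite set and verify the topology axioms directly:

```python
# Illustrative sketch: build the particular point topology on a small
# hypothetical set and check the topology axioms.
from itertools import combinations

X = frozenset({'p', 'a', 'b'})
p = 'p'

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Open sets: the empty set, plus every subset containing p.
T = {S for S in powerset(X) if not S or p in S}

# Axioms: X and the empty set are open; unions and intersections of open
# sets are open (pairwise checks suffice on a finite set).
assert frozenset() in T and X in T
assert all(U | V in T and U & V in T for U in T for V in T)
print(f"{len(T)} open sets out of {len(powerset(X))} subsets")  # 5 of 8
```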
Properties
Closed sets have empty interior
Given a nonempty open set A ⊆ X, every x ≠ p is a limit point of A. So the closure of any open set other than ∅ is X. No closed set other than X contains p, so the interior of every closed set other than X is ∅.
Connectedness Properties
Path and locally connected but not arc connected
For any x, y ∈ X, the function f: [0, 1] → X given by f(0) = x, f(1) = y, and f(t) = p for all t ∈ (0, 1) is a path. However, since {p} is open, the preimage of {p} under a continuous injection from [0, 1] would be an open single point of [0, 1], which is a contradiction.
Dispersion point, example of a set with a dispersion point
p is a dispersion point for X. That is, X \ {p} is totally disconnected.
Hyperconnected but not ultraconnected
Every non-empty open set contains p, and hence X is hyperconnected. But if a and b are in X such that p, a, and b are three distinct points, then {a} and {b} are disjoint closed sets and thus X is not ultraconnected. Note that if X is the Sierpiński space then no such a and b exist
|
https://en.wikipedia.org/wiki/FSU%20Young%20Scholars%20Program
|
FSU Young Scholars Program (YSP) is a six-week residential science and mathematics summer program for 40 high school students from Florida, USA, with significant potential for careers in the fields of science, technology, engineering and mathematics. The program was developed in 1983 and is currently administered by the Office of Science Teaching Activities in the College of Arts and Sciences at Florida State University (FSU).
Academic program
Each young scholar attends three courses in the fields of mathematics, science and computer programming. The courses are designed specifically for this program — they are neither high school nor college courses.
Research
Each student who attends YSP is assigned an independent research project (IRP) based on his or her interests. Students join the research teams of FSU professors, participating in scientific research for two days each week. The fields of study available include robotics, molecular biology, chemistry, geology, physics and zoology. At the conclusion of the program, students present their projects in an academic conference, documenting their findings and explaining their projects to both students and faculty.
Selection process
YSP admits students who have completed the eleventh grade in a Florida public or private high school. A few exceptionally qualified and mature tenth graders have been selected in past years, though this is quite rare.
All applicants must have completed pre-calculus and maintain at least a 3.0 unweighted GPA to be considered for acceptance. Additionally, students must have scored at the 90th percentile or better in science or mathematics on a nationally standardized exam, such as the SAT, PSAT, ACT or PLAN. Students are required to submit an application package, including high school transcripts and a letter of recommendation.
Selection is extremely competitive, as there are typically over 200 highly qualified applicants competing for only 40 positions. The majority of past participant
|
https://en.wikipedia.org/wiki/Gradient%20theorem
|
The gradient theorem, also known as the fundamental theorem of calculus for line integrals, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the second fundamental theorem of calculus to any curve in a plane or space (generally n-dimensional) rather than just the real line.
For $\varphi : U \subseteq \mathbb{R}^n \to \mathbb{R}$ as a differentiable function and $\gamma$ as any continuous curve in $U$ which starts at a point $\mathbf{p}$ and ends at a point $\mathbf{q}$, then
$$\int_{\gamma} \nabla \varphi(\mathbf{r}) \cdot d\mathbf{r} = \varphi(\mathbf{q}) - \varphi(\mathbf{p}),$$
where $\nabla \varphi$ denotes the gradient vector field of $\varphi$.
The gradient theorem implies that line integrals through gradient fields are path-independent. In physics this theorem is one of the ways of defining a conservative force. By placing $\varphi$ as potential, $\nabla \varphi$ is a conservative field. Work done by conservative forces does not depend on the path followed by the object, but only the end points, as the above equation shows.
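A quick numerical experiment makes the path-independence concrete. In the sketch below (illustrative only; the scalar field phi and the curve r are invented examples), a Riemann-sum approximation of the line integral of the gradient agrees with the difference of the endpoint values:

```python
# Illustrative numerical check: approximate the line integral of grad(phi)
# along a curve and compare with phi(q) - phi(p). Example field and curve
# are hypothetical.
import numpy as np

def phi(x, y):
    return x**2 * y + np.sin(y)

def grad_phi(x, y):
    return np.array([2 * x * y, x**2 + np.cos(y)])

def r(t):
    """A curve from p = r(0) to q = r(1)."""
    return np.array([np.cos(np.pi * t), t**2])

t = np.linspace(0.0, 1.0, 20001)
pts = np.array([r(ti) for ti in t])
dr = np.diff(pts, axis=0)                 # chord vectors along the curve
mid = 0.5 * (pts[1:] + pts[:-1])          # midpoints for the Riemann sum
line_integral = sum(grad_phi(*m) @ d for m, d in zip(mid, dr))

endpoint_difference = phi(*r(1.0)) - phi(*r(0.0))
print(line_integral, endpoint_difference)  # agree to high precision
```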
The gradient theorem also has an interesting converse: any path-independent vector field can be expressed as the gradient of a scalar field. Just like the gradient theorem itself, this converse has many striking consequences and applications in both pure and applied mathematics.
Proof
If $\varphi$ is a differentiable function from some open subset $U \subseteq \mathbb{R}^n$ to $\mathbb{R}$ and $\mathbf{r}$ is a differentiable function from some closed interval $[a, b]$ to $U$ (note that $\mathbf{r}$ is differentiable at the interval endpoints $a$ and $b$; to do this, $\mathbf{r}$ is defined on an interval that is larger than and includes $[a, b]$), then by the multivariate chain rule, the composite function $\varphi \circ \mathbf{r}$ is differentiable on $[a, b]$:
$$\frac{d}{dt} (\varphi \circ \mathbf{r})(t) = \nabla \varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t)$$
for all $t$ in $[a, b]$. Here the $\cdot$ denotes the usual inner product.
Now suppose the domain $U$ of $\varphi$ contains the differentiable curve $\gamma$ with endpoints $\mathbf{p}$ and $\mathbf{q}$. (This is oriented in the direction from $\mathbf{p}$ to $\mathbf{q}$.) If $\mathbf{r}$ parametrizes $\gamma$ for $t$ in $[a, b]$ (i.e., $\mathbf{r}$ represents $\gamma$ as a function of $t$), then
$$\int_{\gamma} \nabla \varphi(\mathbf{u}) \cdot d\mathbf{u} = \int_a^b \nabla \varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t) \, dt = \int_a^b \frac{d}{dt} \varphi(\mathbf{r}(t)) \, dt = \varphi(\mathbf{r}(b)) - \varphi(\mathbf{r}(a)) = \varphi(\mathbf{q}) - \varphi(\mathbf{p}),$$
where the definition of a line integral is used in the first equality, the above equation is used in the second equality, and the second fundamental theorem of calculus
|
https://en.wikipedia.org/wiki/OPIE%20Authentication%20System
|
OPIE is the initialism of "One-time Passwords In Everything". OPIE is a mature, Unix-like login and password package, installed on both the server and the client, which makes untrusted networks safer against password-sniffing packet-analysis software like dSniff and safer against shoulder surfing. It works by defeating replay attacks: after OPIE is installed, the same password is never used twice.
OPIE implements a one-time password (OTP) scheme based on S/KEY, which requires a secret passphrase (not echoed) to generate a password for the current session, or a list of passwords that can be printed and carried on one's person.
OPIE uses an MD4 or MD5 hash function to generate its passwords.
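The hash-chain idea behind S/KEY-style OTPs can be sketched as follows. This is an illustration only, not OPIE's actual code: real S/KEY folds each hash to 64 bits and encodes passwords as short words, which this sketch omits, and the passphrase and seed below are made up:

```python
# Illustrative sketch of an S/KEY-style one-time password chain.
import hashlib

def otp_list(passphrase: str, seed: str, n: int):
    """Build a chain H(s), H(H(s)), ... and hand it out in reverse, so a
    captured password never reveals the next one to be used."""
    value = hashlib.md5((seed + passphrase).encode()).digest()
    chain = []
    for _ in range(n):
        value = hashlib.md5(value).digest()
        chain.append(value.hex())
    return list(reversed(chain))

passwords = otp_list("correct horse battery", "ke4452", n=5)

def verify(submitted_hex: str, last_used_hex: str) -> bool:
    """Server-side check: hashing the submitted password once must yield
    the previously used (stored) password."""
    digest = hashlib.md5(bytes.fromhex(submitted_hex)).hexdigest()
    return digest == last_used_hex

# passwords[0] is used first; the next login submits passwords[1], which
# hashes to passwords[0], and so on down the printed list.
assert verify(passwords[1], passwords[0])
```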
OPIE can restrict its logins based on IP address. It uses its own passwd and login modules.
If the Enter key is pressed at the password prompt, it will turn echo on, so what is being typed can be seen when entering an unfamiliar password from a printout.
OPIE can improve security when accessing online banking at conferences, hotels and airports. Some countries require banks to implement OTP.
OPIE shipped with DragonFly BSD, FreeBSD and OpenSUSE. It can be installed on a Unix-like server and clients for improved security.
The commands are
opiepasswd
opiekey
See also
OTPW
External links
OPIE @ Linux wiki
Opie text from FreeBSD Manual
Cryptographic software | Password authentication
|
https://en.wikipedia.org/wiki/Crystal%20engineering
|
Crystal engineering studies the design and synthesis of solid-state structures with desired properties through deliberate control of intermolecular interactions. It is an interdisciplinary academic field, bridging solid-state and supramolecular chemistry.
The main engineering strategies currently in use are hydrogen bonding, halogen bonding and coordination bonding. These may be understood with key concepts such as the supramolecular synthon and the secondary building unit.
History of term
The term 'crystal engineering' was first used in 1955 by R. Pepinsky but the starting point is often credited to Gerhard Schmidt in connection with photodimerization reactions in crystalline cinnamic acids. Since this initial use, the meaning of the term has broadened considerably to include many aspects of solid state supramolecular chemistry. A useful modern definition is that provided by Gautam Desiraju, who in 1988 defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties." Since many of the bulk properties of molecular materials are dictated by the manner in which the molecules are ordered in the solid state, it is clear that an ability to control this ordering would afford control over these properties.
Non-covalent control of structure
Crystal engineering relies on noncovalent bonding to achieve the organization of molecules and ions in the solid state. Much of the initial work on purely organic systems focused on the use of hydrogen bonds, although coordination and halogen bonds provide additional control in crystal design.
Molecular self-assembly is at the heart of crystal engineering, and it typically involves an interaction between complementary hydrogen bonding faces or a metal and a ligand. "Supramolecular synthons" are building blocks that are common to many structures and hence can be use
|
https://en.wikipedia.org/wiki/Fujitsu%20Eagle
|
The M2351 "Eagle" was a hard disk drive manufactured by Fujitsu with an SMD interface that was used on many servers in the mid-1980s. It offered an unformatted capacity of 470 MB in (6U) of 19-inch rack space, at a retail price of about US$10,000.
The data density, access speed, reliability, use of a standard interface, and price point combined to make it a very popular product used by many system manufacturers, such as Sun Microsystems. The Eagle was also popular at installations of DEC VAX systems, as third-party storage systems were often dramatically more cost-effective and space-dense than those vendor-supplied.
The model 2351A incorporated eleven platters rotating at 3,960 rpm, taking half a minute to spin up. The Eagle used smaller platters than most of its competitors, which still used the 14-inch standard set in 1962 by the IBM 1311. One moving head accessed each data surface (20 total); one more head was dedicated to the servo mechanism. The model 2351AF added 60 fixed heads (20 surfaces × 3 cylinders) for access to a separate area of 1.7 MB.
The Eagle achieved a data transfer rate of 1.8 MB/s (a contemporary PC disk would only deliver 0.4 MB/s).
Power consumption (of the drive alone) was about 600 watts.
Notes
External links
Computer storage devices
Fujitsu
|
https://en.wikipedia.org/wiki/Bicycle%20and%20motorcycle%20dynamics
|
Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics falls under a branch of physics known as classical mechanics. Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration. The study of these motions began in the late 19th century and continues today.
Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from and more difficult to study than other wheeled vehicles such as dicycles, tricycles, and quadracycles. As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute in varying degrees to this self-stability, but long-standing hypotheses and claims that any single effect, such as gyroscopic or trail, is solely responsible for the stabilizing force have been discredited.
While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, called countersteering. Countersteering skill is usually acquired by motor learning and executed via procedural memory rather than by conscious thought. Unlike other wheeled vehicles, the primary control input on bikes is
|
https://en.wikipedia.org/wiki/Fetch%21%20with%20Ruff%20Ruffman
|
Fetch! with Ruff Ruffman (sometimes shortened as Fetch!) is an American live-action/animated television series that aired on PBS Kids Go! and is largely targeted toward children ages 6–10. It is a reality-game show that is hosted by an animated anthropomorphic dog named Ruff Ruffman who dispenses challenges to the show's real-life contestants. The series ran for five seasons and 100 episodes from May 29, 2006, to November 4, 2010, on PBS, with 30 contestants in that time. Although a sixth season was planned, with auditions taking place in January 2010, WGBH announced on June 14, 2010, that due to lack of funding, the series would end. In June 2008, the series received its first Emmy for Best Original Song for its theme.
Synopsis
Fetch! is a reality-based game show where young contestants (ages 10–14) take on various challenges to gain points. During these challenges, the contestants must complete a variety of tasks assigned to them ahead of time (and on the fly) by Ruff and surrogates, depending on the situation. There is also an educational component, as contestants often must learn something (i.e. Astronomy, Puzzles, Carpentry, Engineering, Food Science, Biology, Chemistry, Physics, Mathematics, etc.) in order to complete the task.
Not all contestants leave the studio each episode to complete tasks. "As determined by the Fetch 3000" (according to Ruff), the contestants who remain behind in the studio participate in the "Half-Time Quiz Show", in which Ruff asks them up to ten questions, within a time limit, based on the activities of the contestants out on challenges. Out on challenges, contestants have the potential to earn up to 100 points. The contestants in the studio have a chance to win a maximum of 50 points in the "Half-Time Quiz Show". The show has a Fetch Fairness Guarantee: every contestant will "compete for the same number of points" through thirteen challenges and six "Half-Time Quiz Shows" before the final episode. Additionally, Ruff assig
|
https://en.wikipedia.org/wiki/Indian%20Railway%20Service%20of%20Mechanical%20Engineering
|
The Indian Railway Service of Mechanical Engineering, abbreviated as IRSME, is one of the group 'A' central engineering services of the Indian railways. The officers of this service are responsible for managing the Mechanical Engineering Division of the Indian Railways. Till 2019, IRSME officers were drawn from the Combined Engineering Service Examination (ESE) conducted by Union Public Service Commission. All appointments to the Group 'A' services are made by the president of India.
Recruitment
There are two modes of recruitment to IRSME Group 'A':
50% through direct recruitment through the annual Civil Services Examination conducted by UPSC.
50% through promotion from Group 'B' officers of Mechanical departments of the Zonal Railways.
Current cadre strength of IRSME officers is around 1700, serving in 68 divisions and 3 Production units across 17 Zonal Railways in India and the Railway Board.
Previous modes of recruitment
Engineering Services Examination: Incumbents who were graduates in engineering used to be selected by the Union Public Service Commission, the apex gazetted recruitment body of the Government of India. In 2020 the Railways separated itself from the Engineering Services Exam (ESE) and created the Indian Railway Management Service (IRMS). Recruitment was earlier through the UPSC Engineering Services Exam for engineers; after a two-year halt, it has since 2022 been through the UPSC Civil Services Exam, an all-India written test followed by an interview for selected candidates. Earlier only the top rankers in mechanical engineering had the chance to join the IRSME cadre, but it has now become a general cadre, open to all Civil Services aspirants.
Special Class Railway Apprentice examination Special Class Railway Apprentice (SCRA) was a programme by which candidates are selected by the Union Public Service Commission (UPSC) India, to train in the undergraduate program in mechanical engineering at the Indian Railways Institute of Mechanical and Electrica
|
https://en.wikipedia.org/wiki/Exterior%20gateway%20protocol
|
An exterior gateway protocol is an IP routing protocol used to exchange routing information between autonomous systems. This exchange is crucial for communications across the Internet. Notable exterior gateway protocols include Exterior Gateway Protocol (EGP), now obsolete, and Border Gateway Protocol (BGP).
By contrast, an interior gateway protocol is a type of protocol used for exchanging routing information between gateways (commonly routers) within an autonomous system (for example, a system of corporate local area networks). This routing information can then be used to route network-level protocols like IP.
References
Internet protocols
Internet Standards
Routing protocols
|
https://en.wikipedia.org/wiki/Engineering%20Services%20Examination
|
The Engineering Services Examination (ESE) is a combined standardized exam conducted annually by the Union Public Service Commission (UPSC) to recruit officers to various engineering services under the Government of India. It is held in four categories: Civil, Mechanical, Electrical, and Electronics & Telecommunication Engineering. The exam has three stages comprising objective, subjective and personality tests. Informally, the various services are often collectively known as Indian Engineering Services (IES).
Officers recruited through ESE are mandated to manage and conduct activities in diverse technical fields. Government infrastructure includes railways, roads, defence, manufacturing, inspection, supply, construction, public works, power, and telecommunications. Appointments to these services are made by the President of India.
List of Services
Civil Engineering
Mechanical Engineering
Electrical Engineering
Electronics and Telecommunication Engineering
Functions of officers
The work performed by these officers largely depends on their engineering branch and service (or cadre). However, they can move to any cadre, organization, agency, department, ministry or public sector undertaking of the government of India. They are appointed to posts analogous to their present one, either on a fixed-term deputation basis (at least five years and extensible, after which the officer returns to their parent cadre) or an absorption basis where the official leaves the parent cadre for the new one.
Eligibility
Candidates must be a citizen of India or Nepal or a subject of Bhutan, or a person of Indian origin who has migrated from Pakistan, Bangladesh, Myanmar, Sri Lanka, Kenya, Uganda, Tanzania, Zambia, Malawi, Zaire, Ethiopia or Vietnam with the intention of permanently settling in India.
The minimum educational requirement is a bachelor's degree in engineering (B.E. or B.Tech) from a recognised university or the equivalent. An M.Sc. degree or equivalent with wirele
|
https://en.wikipedia.org/wiki/Chrome%20S20%20series
|
Chrome 20 Series is a graphics accelerator by S3 Graphics, the successor of GammaChrome S18.
Overview
The Chrome 20 series was introduced on March 11, 2005, with the Chrome S25 and Chrome S27 as launch products. Similar to the GammaChrome S18 PCI Express which preceded it, S20 was marketed to the low and mid range of the graphics card market, with the Chromotion 3 Video Engine and low power consumption as main selling points.
The S20 series marked S3's first products utilizing Fujitsu's 90 nm process. This enabled a significant increase in clock speeds over prior S3 products: the Chrome 20 series could use 32–256 MiB of GDDR1 or GDDR3 memory at a maximum of 700 MHz, or 64–512 MiB of GDDR2 memory at a maximum of 500 MHz. The S20 was also S3's first GDDR3-enabled product, with the memory interface supporting 32-, 64-, or 128-bit GDDR1, GDDR2 or GDDR3 memory.
Similar to Radeon X1000 series, texturing units and raster operators are separated from pixel shaders. Chrome 20 has 4 vertex shaders, 8 pixel shaders, 4 texturing units, 4 raster operators.
The display controller now integrates a single-link TMDS transmitter, with support for dual-link using external transmitters.
Other new features include the multi-GPU technology MultiChrome, as well as AcceleRAM.
Chromotion 3.0
This revision of the Chromotion Engine adds support for nonlinear video scaling, commonly used by widescreen television sets. The TV encoder now supports 18 DTV ATSC formats.
MultiChrome
MultiChrome is a technique to couple multiple graphics chips for better performance. It was first used in the Chrome S20 series and later in Chrome 400 series graphics processors. The methods used by MultiChrome are alternate frame rendering and split frame rendering. The technology is comparable to NVIDIA's SLI and ATI/AMD's CrossFire multi-video-adapter solutions.
At the moment, due to the speed of the Chrome S20 and Chrome 400 chipsets, no special connectors are required to bridge both cards to activate MultiChrome. Also, unlike NVid
|
https://en.wikipedia.org/wiki/Controllability%20Gramian
|
In control theory, we may need to find out whether or not a system such as
$$\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t), \qquad \mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)$$
is controllable, where $A$, $B$, $C$ and $D$ are, respectively, $n \times n$, $n \times p$, $q \times n$ and $q \times p$ matrices for a system with $p$ inputs, $n$ state variables and $q$ outputs.
One of the many ways one can achieve such goal is by the use of the Controllability Gramian.
Controllability in LTI Systems
Linear Time Invariant (LTI) Systems are those systems in which the parameters $A$, $B$, $C$ and $D$ are invariant with respect to time.
One can observe if the LTI system is or is not controllable simply by looking at the pair $(A, B)$. Then, we can say that the following statements are equivalent:
1. The pair $(A, B)$ is controllable.
2. The matrix
$$W_c(t) = \int_0^t e^{A\tau} B B^T e^{A^T \tau} \, d\tau$$
is nonsingular for any $t > 0$.
3. The controllability matrix
$$\mathcal{C} = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix}$$
has rank n.
4. The matrix
$$\begin{bmatrix} A - \lambda I & B \end{bmatrix}$$
has full row rank at every eigenvalue $\lambda$ of $A$.
If, in addition, all eigenvalues of $A$ have negative real parts ($A$ is stable), and the unique solution of the Lyapunov equation
$$A W_c + W_c A^T + B B^T = 0$$
is positive definite, the system is controllable. The solution $W_c$ is called the Controllability Gramian and can be expressed as
$$W_c = \int_0^{\infty} e^{A\tau} B B^T e^{A^T \tau} \, d\tau.$$
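As a hedged illustration (the example matrices are invented, and SciPy's Lyapunov solver is just one convenient way to obtain the Gramian), the equation can be solved numerically and cross-checked against the rank test:

```python
# Illustrative sketch: compute the controllability Gramian of a stable LTI
# system by solving A Wc + Wc A^T = -B B^T, then cross-check with the rank
# of the controllability matrix. Example matrices are made up.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  1.0],
              [ 0.0, -2.0]])   # stable: eigenvalues -1 and -2
B = np.array([[0.0],
              [1.0]])

# Wc solves the continuous Lyapunov equation A Wc + Wc A^T + B B^T = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)

# For stable A, positive definiteness of Wc is equivalent to controllability.
assert np.all(np.linalg.eigvalsh(Wc) > 0)

# Equivalent test: rank [B, AB] == n.
ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(ctrb) == A.shape[0]
```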
In the following section we are going to take a closer look at the Controllability Gramian.
Controllability Gramian
The controllability Gramian can be found as the solution of the Lyapunov equation given by
$$A W_c + W_c A^T = -B B^T.$$
In fact, we can see that if we take
$$W_c = \int_0^{\infty} e^{A\tau} B B^T e^{A^T \tau} \, d\tau$$
as a solution, we are going to find that:
$$A W_c + W_c A^T = \int_0^{\infty} \frac{d}{d\tau} \left( e^{A\tau} B B^T e^{A^T \tau} \right) d\tau = \left[ e^{A\tau} B B^T e^{A^T \tau} \right]_0^{\infty} = -B B^T,$$
where we used the fact that $e^{A\tau} \to 0$ as $\tau \to \infty$ for stable $A$ (all its eigenvalues have negative real part). This shows us that $W_c$ is indeed the solution for the Lyapunov equation under analysis.
Properties
We can see that $B B^T$ is a symmetric matrix; therefore, so is $W_c$.
We can use again the fact that, if $A$ is stable (all its eigenvalues have negative real part), to show that $W_c$ is unique. In order to prove so, suppose we have two different solutions for
$$A W_c + W_c A^T + B B^T = 0,$$
and they are given by $W_{c1}$ and $W_{c2}$. Then we have:
$$A (W_{c1} - W_{c2}) + (W_{c1} - W_{c2}) A^T = 0.$$
Multiplying by $e^{A t}$ on the left and by $e^{A^T t}$ on the right would lead us to
$$e^{A t} \left[ A (W_{c1} - W_{c2}) + (W_{c1} - W_{c2}) A^T \right] e^{A^T t} = \frac{d}{dt} \left[ e^{A t} (W_{c1} - W_{c2}) e^{A^T t} \right] = 0.$$
Integrating from $0$ to $\infty$:
$$\left[ e^{A t} (W_{c1} - W_{c2}) e^{A^T t} \right]_0^{\infty} = 0,$$
using the fact that $e^{A t} \to 0$ as $t \to \infty$:
$$0 - (W_{c1} - W_{c2}) = 0.$$
In other words, $W_c$ has to be unique.
Also, we can see that
$$\mathbf{x}^T W_c \mathbf{x} = \int_0^{\infty} \mathbf{x}^T e^{A\tau} B B^T e^{A^T \tau} \mathbf{x} \, d\tau = \int_0^{\infty} \left\| B^T e^{A^T \tau} \mathbf{x} \right\|^2 d\tau$$
is positi
|
https://en.wikipedia.org/wiki/Observability%20Gramian
|
In control theory, we may need to find out whether or not a system such as
$$\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t), \qquad \mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)$$
is observable, where $A$, $B$, $C$ and $D$ are, respectively, $n \times n$, $n \times p$, $q \times n$ and $q \times p$ matrices.
One of the many ways one can achieve such goal is by the use of the Observability Gramian.
Observability in LTI Systems
Linear Time Invariant (LTI) Systems are those systems in which the parameters $A$, $B$, $C$ and $D$ are invariant with respect to time.
One can determine if the LTI system is or is not observable simply by looking at the pair $(A, C)$. Then, we can say that the following statements are equivalent:
1. The pair $(A, C)$ is observable.
2. The matrix
$$W_o(t) = \int_0^t e^{A^T \tau} C^T C e^{A \tau} \, d\tau$$
is nonsingular for any $t > 0$.
3. The observability matrix
$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ C A^{n-1} \end{bmatrix}$$
has rank n.
4. The matrix
$$\begin{bmatrix} A - \lambda I \\ C \end{bmatrix}$$
has full column rank at every eigenvalue $\lambda$ of $A$.
If, in addition, all eigenvalues of $A$ have negative real parts ($A$ is stable) and the unique solution of
$$A^T W_o + W_o A + C^T C = 0$$
is positive definite, then the system is observable. The solution $W_o$ is called the Observability Gramian and can be expressed as
$$W_o = \int_0^{\infty} e^{A^T \tau} C^T C e^{A \tau} \, d\tau.$$
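By duality with the controllability case, the Observability Gramian can be computed with the same Lyapunov solver by swapping $A \to A^T$ and $B \to C^T$. The sketch below uses invented example matrices:

```python
# Illustrative sketch (example matrices are made up): the observability
# Gramian solves A^T Wo + Wo A = -C^T C.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  1.0],
              [ 0.0, -2.0]])   # stable
C = np.array([[1.0, 0.0]])

Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
assert np.all(np.linalg.eigvalsh(Wo) > 0)  # positive definite => observable

# Cross-check with the observability matrix rank test: rank [C; CA] == n.
obsv = np.vstack([C, C @ A])
assert np.linalg.matrix_rank(obsv) == A.shape[0]
```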
In the following section we are going to take a closer look at the Observability Gramian.
Observability Gramian
The Observability Gramian can be found as the solution of the Lyapunov equation given by
$$A^T W_o + W_o A = -C^T C.$$
In fact, we can see that if we take
$$W_o = \int_0^{\infty} e^{A^T \tau} C^T C e^{A \tau} \, d\tau$$
as a solution, we are going to find that:
$$A^T W_o + W_o A = \int_0^{\infty} \frac{d}{d\tau} \left( e^{A^T \tau} C^T C e^{A \tau} \right) d\tau = \left[ e^{A^T \tau} C^T C e^{A \tau} \right]_0^{\infty} = -C^T C,$$
where we used the fact that $e^{A\tau} \to 0$ as $\tau \to \infty$ for stable $A$ (all its eigenvalues have negative real part). This shows us that $W_o$ is indeed the solution for the Lyapunov equation under analysis.
Properties
We can see that $C^T C$ is a symmetric matrix; therefore, so is $W_o$.
We can use again the fact that, if $A$ is stable (all its eigenvalues have negative real part), to show that $W_o$ is unique. In order to prove so, suppose we have two different solutions for
$$A^T W_o + W_o A + C^T C = 0,$$
and they are given by $W_{o1}$ and $W_{o2}$. Then we have:
$$A^T (W_{o1} - W_{o2}) + (W_{o1} - W_{o2}) A = 0.$$
Multiplying by $e^{A^T t}$ on the left and by $e^{A t}$ on the right would lead us to
$$e^{A^T t} \left[ A^T (W_{o1} - W_{o2}) + (W_{o1} - W_{o2}) A \right] e^{A t} = \frac{d}{dt} \left[ e^{A^T t} (W_{o1} - W_{o2}) e^{A t} \right] = 0.$$
Integrating from $0$ to $\infty$:
$$\left[ e^{A^T t} (W_{o1} - W_{o2}) e^{A t} \right]_0^{\infty} = 0,$$
using the fact that $e^{A t} \to 0$ as $t \to \infty$:
$$0 - (W_{o1} - W_{o2}) = 0.$$
In other words, $W_o$ has to be unique.
Also, we can see that
$$\mathbf{x}^T W_o \mathbf{x} = \int_0^{\infty} \mathbf{x}^T e^{A^T \tau} C^T C e^{A \tau} \mathbf{x} \, d\tau = \int_0^{\infty} \left\| C e^{A \tau} \mathbf{x} \right\|^2 d\tau$$
is positive for any $\mathbf{x} \neq 0$ (assuming the non-degenerate case where $C e^{A \tau} \mathbf{x}$ is not identically zero), and that makes
|
https://en.wikipedia.org/wiki/Rocky%20shore
|
A rocky shore is an intertidal area of seacoasts where solid rock predominates. Rocky shores are biologically rich environments, and are a useful "natural laboratory" for studying intertidal ecology and other biological processes. Due to their high accessibility, they have been well studied for a long time and their species are well known.
Marine life
Many factors favour the survival of life on rocky shores. Temperate coastal waters are mixed by waves and convection, maintaining adequate availability of nutrients. Also, the sea brings plankton and broken organic matter in with each tide. The high availability of light (due to low depths) and nutrient levels means that primary productivity of seaweeds and algae can be very high. Human actions can also benefit rocky shores due to nutrient runoff.
Despite these favourable factors, there are also a number of challenges to marine organisms associated with the rocky shore ecosystem. Generally, the distribution of benthic species is limited by salinity, wave exposure, temperature, desiccation and general stress. The constant threat of desiccation during exposure at low tide can result in dehydration. Hence, many species have developed adaptations to prevent this drying out, such as the production of mucous layers and shells. Many species use shells and holdfasts to provide stability against strong wave actions. There are also a variety of other challenges such as temperature fluctuations due to tidal flow (resulting in exposure), changes in salinity and various ranges of illumination. Other threats include predation from birds and other marine organisms, as well as the effects of pollution.
Ballantine Scale
The Ballantine scale is a biologically defined scale for measuring the degree of exposure level of wave action on a rocky shore. Devised in 1961 by W. J. Ballantine, then at the zoology department of Queen Mary University of London, London, U.K., the scale is based on the observation that where shoreline species a
|
https://en.wikipedia.org/wiki/GUS%20reporter%20system
|
The GUS reporter system (GUS: β-glucuronidase) is a reporter gene system, particularly useful in plant molecular biology and microbiology. Several kinds of GUS reporter gene assay are available, depending on the substrate used. The term GUS staining refers to the most common of these, a histochemical technique.
Purpose
The purpose of this technique is to analyze the activity of a gene transcription promoter (in terms of expression of a so-called reporter gene under the regulatory control of that promoter) either in a quantitative manner, involving some measure of activity, or qualitatively (on versus off) through visualization of its activity in different cells, tissues, or organs. The technique utilizes the uidA gene of Escherichia coli, which codes for the enzyme, β-glucuronidase; this enzyme, when incubated with specific colorless or non-fluorescent substrates, can convert them into stable colored or fluorescent products. The presence of the GUS-induced color indicates where the gene has been actively expressed. In this way, strong promoter activity produces much staining and weak promoter activity produces less staining.
The uidA gene can also be fused to a gene of interest, creating a gene fusion. The insertion of the uidA gene will cause production of GUS, which can then be detected using various glucuronides as substrates.
Substrates
There are different glucuronides that can be used as substrates for β-glucuronidase, depending on the type of detection needed (histochemical, spectrophotometric, fluorimetric). The most common substrate for GUS histochemical staining is 5-bromo-4-chloro-3-indolyl glucuronide (X-Gluc). X-Gluc is hydrolyzed by GUS into the product 5,5'-dibromo-4,4'-dichloro-indigo (diX-indigo). DiX-indigo will appear blue, and can be seen using light microscopy. This process is analogous to hydrolysis of X-gal by beta-galactosidase to produce blue cells as is commonly practiced in bacterial reporter gene assays.
For other typ
|
https://en.wikipedia.org/wiki/Sequential%20equilibrium
|
Sequential equilibrium is a refinement of Nash equilibrium for extensive form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each
of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.
Consistent assessments
The formal definition of a strategy being sensible given a belief is straightforward; the strategy should simply maximize expected payoff in every information set. It is also straightforward to define what a sensible belief should be for those information sets that are reached with positive probability given the strategies; the beliefs should be the conditional probability distribution on the nodes of the information set, given that it is reached. This entails the application of Bayes' rule.
It is far from straightforward to define what a sensible belief should be for those information sets that are reached with probability zero, given the strategies. Indeed, this is the main conceptual contribution of Kreps and Wilson. Their consistency requirement is the following: The assessment should be a limit point of a sequence of totally mixed strategy profiles and associated sensible beliefs, in the above straightforward sense.
Relationship to other equilibrium refinements
Sequential equilibrium is a further refinement of subgame perfect equilibrium and even perfect Bayesian equilibrium. It is itself refined by extensive-form trembling
|
https://en.wikipedia.org/wiki/Quasi-perfect%20equilibrium
|
Quasi-perfect equilibrium is a refinement of Nash Equilibrium for extensive form games due to Eric van Damme.
Informally, a player playing by a strategy from a quasi-perfect equilibrium takes observed as well as potential future mistakes of his opponents into account but assumes that he himself will not make a mistake in the future, even if he observes that he has done so in the past.
Quasi-perfect equilibrium is a further refinement of sequential equilibrium. It is itself refined by normal form proper equilibrium.
Mertens' voting game
It has been argued by Jean-François Mertens that quasi-perfect equilibrium is superior to Reinhard Selten's notion of extensive-form trembling hand perfect equilibrium as a quasi-perfect equilibrium is guaranteed to describe admissible behavior. In contrast, for a certain two-player voting game no extensive-form trembling hand perfect equilibrium describes admissible behavior for both players.
The voting game suggested by Mertens may be described as follows:
Two players must elect one of them to perform an effortless task. The task may be performed either correctly or incorrectly. If it is performed correctly, both players receive a payoff of 1, otherwise both players receive a payoff of 0. The election is by a secret vote. If both players vote for the same player, that player gets to perform the task. If each player votes for himself, the player to perform the task is chosen at random but is not told that he was elected this way. Finally, if each player votes for the other, the task is performed by somebody else, with no possibility of it being performed incorrectly.
In the unique quasi-perfect equilibrium for the game, each player votes for himself and, if elected, performs the task correctly. This is also the unique admissible behavior. But in any extensive-form trembling hand perfect equilibrium, at least one of the players believes that
he is at least as likely as the other player to tremble and perform the task incorrec
|
https://en.wikipedia.org/wiki/Secondary%20treatment
|
Secondary treatment (mostly biological wastewater treatment) is the removal of biodegradable organic matter (in solution or suspension) from sewage or similar kinds of wastewater. The aim is to achieve a certain degree of effluent quality in a sewage treatment plant suitable for the intended disposal or reuse option. A "primary treatment" step often precedes secondary treatment, whereby physical phase separation is used to remove settleable solids. During secondary treatment, biological processes are used to remove dissolved and suspended organic matter measured as biochemical oxygen demand (BOD). These processes are performed by microorganisms in a managed aerobic or anaerobic process depending on the treatment technology. Bacteria and protozoa consume biodegradable soluble organic contaminants (e.g. sugars, fats, and organic short-chain carbon molecules from human waste, food waste, soaps and detergent) while reproducing to form cells of biological solids. Secondary treatment is widely used in sewage treatment and is also applicable to many agricultural and industrial wastewaters.
Secondary treatment systems are classified as fixed-film or suspended-growth systems, and as aerobic versus anaerobic. Fixed-film or attached growth systems include trickling filters, constructed wetlands, bio-towers, and rotating biological contactors, where the biomass grows on media and the sewage passes over its surface. The fixed-film principle has further developed into moving bed biofilm reactors (MBBR) and Integrated Fixed-Film Activated Sludge (IFAS) processes. Suspended-growth systems include activated sludge, which is an aerobic treatment system, based on the maintenance and recirculation of a complex biomass composed of micro-organisms (bacteria and protozoa) able to absorb and adsorb the organic matter carried in the wastewater. Constructed wetlands are also being used. An example for an anaerobic secondary treatment system is the upflow anaerobic sludge blanket reactor.
|
https://en.wikipedia.org/wiki/Steroidogenic%20acute%20regulatory%20protein
|
The steroidogenic acute regulatory protein, commonly referred to as StAR (STARD1), is a transport protein that regulates cholesterol transfer within the mitochondria, which is the rate-limiting step in the production of steroid hormones. It is primarily present in steroid-producing cells, including theca cells and luteal cells in the ovary, Leydig cells in the testis and cell types in the adrenal cortex.
Function
Cholesterol needs to be transferred from the outer mitochondrial membrane to the inner membrane where cytochrome P450scc enzyme (CYP11A1) cleaves the cholesterol side chain, which is the first enzymatic step in all steroid synthesis. The aqueous phase between these two membranes cannot be crossed by the lipophilic cholesterol, unless certain proteins assist in this process. A number of proteins have historically been proposed to facilitate this transfer including: sterol carrier protein 2 (SCP2), steroidogenic activator polypeptide (SAP), peripheral benzodiazepine receptor (PBR or translocator protein, TSPO), and StAR. It is now clear that this process is primarily mediated by the action of StAR.
The mechanism by which StAR causes cholesterol movement remains unclear as it appears to act from the outside of the mitochondria and its entry into the mitochondria ends its function. Various hypotheses have been advanced. Some involve StAR transferring cholesterol itself like a shuttle. While StAR may bind cholesterol itself, the exorbitant number of cholesterol molecules that the protein transfers would indicate that it would have to act as a cholesterol channel instead of a shuttle. Another notion is that it causes cholesterol to be kicked out of the outer membrane to the inner (cholesterol desorption). StAR may also promote the formation of contact sites between the outer and inner mitochondrial membranes to allow cholesterol influx. Another suggests that StAR acts in conjunction with PBR, causing the movement of Cl− out of the mitochondria to facilitat
|
https://en.wikipedia.org/wiki/Myerson%E2%80%93Satterthwaite%20theorem
|
The Myerson–Satterthwaite theorem is an important result in mechanism design and the economics of asymmetric information, and named for Roger Myerson and Mark Satterthwaite. Informally, the result says that there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss.
The Myerson–Satterthwaite theorem is among the most remarkable and universally applicable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics. It is, however, much less famous than those results or Arrow's earlier result on the impossibility of satisfactory electoral systems.
Notation
There are two agents: Sally (the seller) and Bob (the buyer). Sally holds an item that is valuable for both her and Bob. Each agent values the item differently: Bob values it as $v_B$ and Sally as $v_S$. Each agent knows his/her own valuation with certainty, but knows the valuation of the other agent only probabilistically:
For Sally, the value of Bob is represented by a probability density function $f_B$ which is positive in a range $[\underline{B}, \overline{B}]$. The corresponding cumulative distribution function is $F_B$.
For Bob, the value of Sally is represented by a probability density function $f_S$ which is positive in a range $[\underline{S}, \overline{S}]$. The corresponding cumulative distribution function is $F_S$.
A direct bargaining mechanism is a mechanism which asks each agent to report his/her valuation of the item, then decides whether the item will be traded and at what price. Formally, it is represented by two functions:
The trade-probability function, $t(v_B', v_S')$, determines the probability that the item will be transferred from the seller to the buyer (in a deterministic mechanism, this probability is either 0 or 1, but the formalism also allows random mechanisms).
The price function, $p(v_B', v_S')$, determines the price that Bob should pay to Sally. Note that the reported values are marked by a prime since they do not equal the
|
https://en.wikipedia.org/wiki/Self-consistent%20mean%20field%20%28biology%29
|
The self-consistent mean field (SCMF) method is an adaptation of mean field theory used in protein structure prediction to determine the optimal amino acid side chain packing given a fixed protein backbone. It is faster but less accurate than dead-end elimination and is generally used in situations where the protein of interest is too large for the problem to be tractable by DEE.
General principles
Like dead-end elimination, the SCMF method explores conformational space by discretizing the dihedral angles of each side chain into a set of rotamers for each position in the protein sequence. The method iteratively develops a probabilistic description of the relative population of each possible rotamer at each position, and the probability of a given structure is defined as a function of the probabilities of its individual rotamer components.
The basic requirements for an effective SCMF implementation are:
A well-defined finite set of discrete independent variables
A precomputed numerical value (considered the "energy") associated with each element in the set of variables, and associated with each binary element pair
An initial probability distribution describing the starting population of each individual rotamer
A way of updating rotamer energies and probabilities as a function of the mean-field energy
The process is generally initialized with a uniform probability distribution over the rotamers—that is, if there are p rotamers at the kth position in the protein, then the probability of any individual rotamer is 1/p. The conversion between energies and probabilities is generally accomplished via the Boltzmann distribution, which introduces a temperature factor (thus making the method amenable to simulated annealing). Lower temperatures increase the likelihood of converging to a single solution, rather than to a small subpopulation of solutions.
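The update loop can be sketched as follows. This is an illustrative toy, not a production implementation: the one-body and two-body energies are random stand-ins for a precomputed rotamer energy table, and the temperature is fixed rather than annealed:

```python
# Illustrative sketch of an SCMF-style update: rotamer probabilities are
# refreshed from mean-field energies via a Boltzmann distribution.
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_rot = 4, 3          # positions in the chain, rotamers per position
E_self = rng.normal(size=(n_pos, n_rot))                # one-body energies
E_pair = rng.normal(size=(n_pos, n_rot, n_pos, n_rot))  # two-body energies
kT = 1.0

# Start from a uniform distribution: probability 1/p for each of p rotamers.
prob = np.full((n_pos, n_rot), 1.0 / n_rot)

for _ in range(100):
    # Mean-field energy of rotamer r at position i: its own energy plus
    # pair energies with every other position, weighted by current probs.
    E_mf = E_self.copy()
    for i in range(n_pos):
        for j in range(n_pos):
            if i != j:
                E_mf[i] += E_pair[i, :, j, :] @ prob[j]
    new = np.exp(-E_mf / kT)
    new /= new.sum(axis=1, keepdims=True)  # Boltzmann-normalize per position
    if np.abs(new - prob).max() < 1e-9:    # self-consistency reached
        break
    prob = new
```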
Mean-field energies
The energy of an individual rotamer is dependent on the "mean-field" energy of the other positions—tha
|
https://en.wikipedia.org/wiki/Harish-Chandra%20isomorphism
|
In mathematics, the Harish-Chandra isomorphism, introduced by Harish-Chandra (1951), is an isomorphism of commutative rings constructed in the theory of Lie algebras. The isomorphism maps the center $Z(U(\mathfrak{g}))$ of the universal enveloping algebra $U(\mathfrak{g})$ of a reductive Lie algebra $\mathfrak{g}$ to the elements $S(\mathfrak{h})^W$ of the symmetric algebra $S(\mathfrak{h})$ of a Cartan subalgebra $\mathfrak{h}$ that are invariant under the Weyl group $W$.
Introduction and setting
Let $\mathfrak{g}$ be a semisimple Lie algebra, $\mathfrak{h}$ its Cartan subalgebra and let $\lambda, \mu \in \mathfrak{h}^*$ be two elements of the weight space (where $\mathfrak{h}^*$ is the dual of $\mathfrak{h}$) and assume that a set of positive roots $\Phi^+$ has been fixed. Let $V_\lambda$ and $V_\mu$ be highest weight modules with highest weights $\lambda$ and $\mu$ respectively.
Central characters
The $\mathfrak{g}$-modules $V_\lambda$ and $V_\mu$ are representations of the universal enveloping algebra $U(\mathfrak{g})$ and its center acts on the modules by scalar multiplication (this follows from the fact that the modules are generated by a highest weight vector). So, for $v \in V_\lambda$ and $z \in Z(U(\mathfrak{g}))$,
$$z \cdot v = \chi_\lambda(z) v,$$
and similarly for $V_\mu$, where the functions $\chi_\lambda, \chi_\mu$ are homomorphisms from $Z(U(\mathfrak{g}))$ to scalars called central characters.
Statement of Harish-Chandra theorem
For any $\lambda, \mu \in \mathfrak{h}^*$, the characters $\chi_\lambda = \chi_\mu$ if and only if $\lambda + \delta$ and $\mu + \delta$ are on the same orbit of the Weyl group of $\mathfrak{g}$, where $\delta$ is the half-sum of the positive roots, sometimes known as the Weyl vector.
Another closely related formulation is that the Harish-Chandra homomorphism from the center of the universal enveloping algebra $Z(U(\mathfrak{g}))$ to $S(\mathfrak{h})^W$ (the elements of the symmetric algebra of the Cartan subalgebra fixed by the Weyl group) is an isomorphism.
Explicit isomorphism
More explicitly, the isomorphism can be constructed as the composition of two maps, one from $Z(U(\mathfrak{g}))$ to $S(\mathfrak{h})$ and another from $S(\mathfrak{h})$ to itself.
The first is a projection $\gamma : Z(U(\mathfrak{g})) \to S(\mathfrak{h})$. For a choice of positive roots $\Phi^+$, defining
$$\mathfrak{n} = \bigoplus_{\alpha \in \Phi^+} \mathfrak{g}_\alpha, \qquad \mathfrak{n}^- = \bigoplus_{\alpha \in \Phi^+} \mathfrak{g}_{-\alpha}$$
as the corresponding positive nilpotent subalgebra and negative nilpotent subalgebra respectively, due to the Poincaré–Birkhoff–Witt theorem there is a decomposition
$$U(\mathfrak{g}) = U(\mathfrak{h}) \oplus \left( \mathfrak{n}^- U(\mathfrak{g}) + U(\mathfrak{g}) \mathfrak{n} \right).$$
If $z$ is central, then in fact
$$z \in U(\mathfrak{h}) \oplus \left( \mathfrak{n}^- U(\mathfrak{g}) \cap U(\mathfrak{g}) \mathfrak{n} \right).$$
The restriction of the projection to the centre is $\gamma : Z(U(\mathfrak{g})) \to S(\mathfrak{h})$, and it is a homomorphism of algebras. This is related to the central chara
|
https://en.wikipedia.org/wiki/Differential%20variational%20inequality
|
In mathematics, a differential variational inequality (DVI) is a dynamical system that incorporates ordinary differential equations and variational inequalities or complementarity problems.
DVIs are useful for representing models involving both dynamics and inequality constraints. Examples of such problems include, for example, mechanical impact problems, electrical circuits with ideal diodes, Coulomb friction problems for contacting bodies, and dynamic economic and related problems such as dynamic traffic networks and networks of queues (where the constraints can either be upper limits on queue length or that the queue length cannot become negative). DVIs are related to a number of other concepts including differential inclusions, projected dynamical systems, evolutionary inequalities, and parabolic variational inequalities.
Differential variational inequalities were first formally introduced by Pang and Stewart, whose definition should not be confused with the differential variational inequality used in Aubin and Cellina (1984).
Differential variational inequalities have the form: find $x$ and $u$ such that
$$\dot{x}(t) = f(t, x(t), u(t)), \qquad x(t_0) = x_0,$$
$$(v - u(t))^T F(t, x(t), u(t)) \geq 0$$
for every $v \in K$ and almost all $t$; $K$ a closed convex set, where $F$ is a given function.
Closely associated with DVIs are dynamic/differential complementarity problems: if $K$ is a closed convex cone, then the variational inequality is equivalent to the complementarity problem:
$$K \ni u(t) \perp F(t, x(t), u(t)) \in K^*,$$
where $K^*$ is the dual cone of $K$.
Examples
Mechanical Contact
Consider a rigid ball of radius $r$ falling from a height towards a table. Assume that the forces acting on the ball are gravitation and the contact forces of the table preventing penetration. Then the differential equation describing the motion is
$$m \ddot{x}(t) = -m g + N(t),$$
where $m$ is the mass of the ball, $x(t)$ is the height of its center, $N(t)$ is the contact force of the table, and $g$ is the gravitational acceleration. Note that both $x$ and $N$ are a priori unknown. While the ball and the table are separated, there is no contact force. There cannot be penetration (for a rigid ball and a rigid table), so $x(t) \geq r$ for all $t$. If $x(t) > r$ then $N(t) = 0$. On the other hand,
|
https://en.wikipedia.org/wiki/Lithoautotroph
|
A lithoautotroph is an organism which derives energy from reactions of reduced compounds of mineral (inorganic) origin. Two types of lithoautotrophs are distinguished by their energy source; photolithoautotrophs derive their energy from light, while chemolithoautotrophs (chemolithotrophs or chemoautotrophs) derive their energy from chemical reactions. Chemolithoautotrophs are exclusively microbes. Photolithoautotrophs include macroflora such as plants; these do not possess the ability to use mineral sources of reduced compounds for energy. Most chemolithoautotrophs belong to the domain Bacteria, while some belong to the domain Archaea. Lithoautotrophic bacteria can only use inorganic molecules as substrates in their energy-releasing reactions. The term "lithotroph" is from Greek lithos (λίθος) meaning "rock" and trophos (τροφός) meaning "consumer"; literally, it may be read "eaters of rock". The "lithotroph" part of the name refers to the fact that these organisms use inorganic elements/compounds as their electron source, while the "autotroph" part of the name refers to their carbon source being CO2. Many lithoautotrophs are extremophiles, but this is not universally so, and some can be found to be the cause of acid mine drainage.
Lithoautotrophs are extremely specific in their source of reduced compounds. Thus, despite the diversity in using inorganic compounds that lithoautotrophs exhibit as a group, one particular lithoautotroph would use only one type of inorganic molecule to get its energy. A chemolithotrophic example is the anaerobic ammonia-oxidizing (anammox) bacteria, which use ammonia and nitrite to produce N2. Additionally, in July 2020, researchers reported the discovery of chemolithoautotrophic bacterial cultures that feed on the metal manganese after performing unrelated experiments, and named the bacterial species Candidatus Manganitrophus noduliformans and Ramlibacter lithotrophicus.
Metabolism
Some chemolithotrophs use redox half-reactions with low re
|
https://en.wikipedia.org/wiki/Midas%20XL8
|
The Midas XL8 was the first digital mixing console produced by Midas, previously a leading manufacturer of analogue mixing consoles for live sound. The introduction of the console came after years of digital console competition by Yamaha, Digidesign, and DiGiCo.
References
Digital audio
|
https://en.wikipedia.org/wiki/Comparison%20of%20file%20systems
|
The following tables compare general and technical information for a number of file systems.
General information
Metadata
Features
File capabilities
Block capabilities
Note that in addition to the below table, block capabilities can be implemented below the file system layer in Linux (LVM, cryptsetup) or Windows (Volume Shadow Copy Service, SECURITY), etc.
Resize capabilities
"online" and "offline" are synonymous with "mounted" and "not mounted".
Allocation and layout policies
OS support
Limits
While storage devices usually have their size expressed in powers of 10 (for instance a 1 TB Solid State Drive will contain at least 1,000,000,000,000 (1012, 10004) bytes), filesystem limits are invariably powers of 2, so usually expressed with IEC prefixes. For instance, a 1 TiB limit means 240, 10244 bytes. Approximations (rounding down) using power of 10 are also given below to clarify.
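The arithmetic is easy to check directly; the following snippet compares IEC (power-of-two) prefixes with SI (power-of-ten) prefixes:

```python
# Quick arithmetic check (illustrative): IEC power-of-two prefixes versus
# SI power-of-ten prefixes for storage sizes.
for power, (iec, si) in enumerate([("KiB", "kB"), ("MiB", "MB"),
                                   ("GiB", "GB"), ("TiB", "TB")], start=1):
    binary = 1024 ** power
    decimal = 1000 ** power
    print(f"1 {iec} = {binary:,} bytes = {binary / decimal:.4f} {si}")
# The last line prints: 1 TiB = 1,099,511,627,776 bytes = 1.0995 TB,
# so a marketed "1 TB" drive holds about 9% less than a 1 TiB limit.
```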
See also
List of file systems
Comparison of file archivers
List of archive formats
Comparison of archive formats
Notes
References
External links
A speed comparison of filesystems on Linux 2.4.5 (archived)
Filesystems (ext3, reiser, xfs, jfs) comparison on Debian Etch (April 23, 2006)
Block allocation strategies of various filesystems
What are the (dis)advantages of ext4, ReiserFS, JFS, and XFS? - Unix & Linux Stack Exchange
File systems
|
https://en.wikipedia.org/wiki/Michael%20Fourman
|
Michael Paul Fourman FBCS FRSE (born 12 September 1950) is Professor of Computer Systems at the University of Edinburgh in Scotland, UK, and was Head of the School of Informatics from 2001 to 2009.
Fourman has worked on applications of logic in computer science, artificial intelligence, and cognitive science – more specifically, formal models of digital systems, system design tools, proof assistants, categorical semantics and propositional planning.
Qualifications
Fourman received a BSc in Mathematics and Philosophy from the University of Bristol in 1971, then his MSc in Mathematical Logic from the University of Oxford in 1972. He wrote his DPhil thesis Connections between Category Theory and Logic under the supervision of Dana Scott at Oxford, defending his thesis in 1974.
Career
He continued to work with Scott as an SRC postdoctoral Research Fellow and Junior Research Fellow of Wolfson College, in Oxford, until 1976, when he moved to the USA, first as a Visiting Assistant Professor of Mathematics at Clark University in Worcester, Massachusetts, then, from 1977 to 1982, as JF Ritt Assistant Professor of Mathematics at Columbia University in New York.
In 1983 he moved, with a Science and Engineering Research Council Fellowship, to the Department of Electronic and Electrical Engineering at Brunel University. He was appointed to a Readership, and then to the Chair of Formal Systems, at Brunel in 1986.
Fourman was co-founder and Technical Director of Abstract Hardware Limited (AHL), a company formed in 1986. He was central in the development of the LAMBDA system (Logic And Mathematics Behind Design Automation) to aid hardware design, a tool implemented in the SML programming language and marketed by AHL. He left the company in 1997.
In 1988 he joined the Laboratory for Foundations of Computer Science at the University of Edinburgh, and was appointed to the Chair of Computer Systems in the Department of Computer Science.
In 1998 he was founding Head of the Divisi
|
https://en.wikipedia.org/wiki/Supervisory%20circuit
|
Supervisory circuits are electronic circuits that monitor one or more parameters of systems such as power supplies and microprocessors which must be maintained within certain limits, and take appropriate action if a parameter goes out of bounds, creating an unacceptable or dangerous situation.
Supervisory circuits are known by a variety of names, including battery monitors, power supply monitors, supply supervisory circuits, and reset circuits.
Thermal protection
A thermal protection circuit consists of a temperature-monitoring circuit and a control circuit. The control circuit may either shut down the circuitry it is protecting, reduce the power available in order to avoid overheating, or notify the system (software or user). These circuits may be quite complex, programmable and software-run, or simple with predefined limits.
Overvoltage and undervoltage protection
Voltage protection circuits protect circuitry from either overvoltage or undervoltage; either of these situations can have detrimental effects. Supervisory circuits that specifically focus on voltage regulation are often sold as supply voltage supervisors and will reset the protected circuit when the voltage returns to operating range.
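The supervisor's behavior can be sketched as a comparator with hysteresis. The thresholds below are invented for the example and do not come from any particular part's datasheet:

```python
# Illustrative model (hypothetical thresholds): a supply-voltage supervisor
# that asserts reset outside the operating window and releases it, with
# hysteresis, once the supply recovers.
UNDER, OVER = 4.5, 5.5   # operating window, volts
HYST = 0.1               # hysteresis to avoid chattering near a threshold

def supervise(samples):
    reset = False
    for v in samples:
        if v < UNDER or v > OVER:
            reset = True                     # out of bounds: assert reset
        elif UNDER + HYST <= v <= OVER - HYST:
            reset = False                    # comfortably back in range
        # within the hysteresis band: hold the previous state
        yield v, reset

for v, reset in supervise([5.0, 4.4, 4.52, 4.7, 5.0]):
    print(f"{v:4.2f} V reset={'asserted' if reset else 'released'}")
```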
Two types of overvoltage protection devices are currently used: clamping, which passes through voltages up to a certain level, and foldback, which shunts voltage away from the load. The shunting creates a short circuit which removes power from the protected circuitry. In certain applications this circuitry can reset itself after the dangerous condition has passed.
Fire alarm systems
Fire alarm systems often supervise inputs and outputs with an end-of-line device such as a resistor or capacitor. The system looks for changes in resistance or capacitance values to determine whether the circuit is in an abnormal condition.
References
See also
From the Littelfuse Electronics Designer guide. PDF contains a good introduction to circuit protection.
Electronic circuits
|
https://en.wikipedia.org/wiki/Geographic%20Locator%20Codes
|
Worldwide Geographic Location Codes (GLCs) list the number and letter codes federal agencies should use in designating geographic locations anywhere in the United States or abroad in computer programs. Use of standard codes facilitates the interchange of machine-readable data from agency to agency within the federal community and between federal offices and state and local groups. These codes are also used by some companies as a coding standard as well, especially those that must deal with federal, state and local governments for such things as taxes. The GLCs are administered by the U.S. General Services Administration (GSA).
External links
US General Services Administration site
General Services Administration
Geocodes
|
https://en.wikipedia.org/wiki/Coxeter%E2%80%93Dynkin%20diagram
|
In geometry, a Coxeter–Dynkin diagram (or Coxeter diagram, Coxeter graph) is a graph with numerically labeled edges (called branches) representing the spatial relations between a collection of mirrors (or reflecting hyperplanes). It describes a kaleidoscopic construction: each graph "node" represents a mirror (domain facet) and the label attached to a branch encodes the dihedral angle order between two mirrors (on a domain ridge), that is, the amount by which the angle between the reflective planes can be multiplied to get 180 degrees. An unlabeled branch implicitly represents order-3 (60 degrees), and each pair of nodes that is not connected by a branch at all (such as non-adjacent nodes) represents a pair of mirrors at order-2 (90 degrees).
Each diagram represents a Coxeter group, and Coxeter groups are classified by their associated diagrams.
Dynkin diagrams are closely related objects, which differ from Coxeter diagrams in two respects: firstly, branches labeled "4" or greater are directed, while Coxeter diagrams are undirected; secondly, Dynkin diagrams must satisfy an additional (crystallographic) restriction, namely that the only allowed branch labels are 2, 3, 4, and 6. Dynkin diagrams correspond to and are used to classify root systems and therefore semisimple Lie algebras.
Description
Branches of a Coxeter–Dynkin diagram are labeled with a rational number p, representing a dihedral angle of 180°/p. When p = 2 the angle is 90° and the mirrors have no interaction, so the branch can be omitted from the diagram. If a branch is unlabeled, it is assumed to have p = 3, representing an angle of 60°. Two parallel mirrors have a branch marked with "∞". In principle, n mirrors can be represented by a complete graph in which all n(n − 1)/2 branches are drawn. In practice, nearly all interesting configurations of mirrors include a number of right angles, so the corresponding branches are omitted.
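A diagram of this kind is simply a labeled graph, and it determines a Coxeter matrix. The sketch below is an illustration (the encoding of a diagram as a dictionary of labeled branches is the example's own convention, not a standard API); it builds the matrix for a three-mirror diagram with branches labeled 4 and 3:

```python
# Illustrative sketch: turn a Coxeter diagram (nodes + labeled branches)
# into its Coxeter matrix M, where M[i][j] is the order of the product of
# reflections i and j: 1 on the diagonal, 2 for unconnected nodes (90°),
# 3 for an unlabeled branch (60°), and the branch label otherwise.
def coxeter_matrix(n_nodes, branches):
    M = [[2] * n_nodes for _ in range(n_nodes)]  # default: commuting mirrors
    for i in range(n_nodes):
        M[i][i] = 1                              # a reflection squared is identity
    for (i, j), label in branches.items():
        M[i][j] = M[j][i] = label
    return M

# A linear diagram: node0 --4-- node1 --3-- node2 (no branch between 0 and 2).
example = coxeter_matrix(3, {(0, 1): 4, (1, 2): 3})
for row in example:
    print(row)
# [[1, 4, 2], [4, 1, 3], [2, 3, 1]]
```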
Diagrams can be labeled by their graph structure. The first forms studied by Ludwig Sch
|
https://en.wikipedia.org/wiki/List%20of%20paleoethnobotanists
|
The following is a list of paleoethnobotanists.
Amy Bogaard
Gayle J. Fritz
Dorian Fuller
Christine A. Hastorf
Andreas G. Heiss
Hans Helbaek
Gordon Hillman
Maria Hopf
Stefanie Jacomet
Glynis Jones
Mordechai Kislev
Udelgard Körber-Grohne
Naomi F. Miller
Klaus Oeggl
Deborah M. Pearsall
Dolores Piperno
Jane Renfrew
Irwin Rovner
Marijke van der Veen
Willem van Zeist
George Willcox
Ulrich Willerding
Daniel Zohary
See also
List of plant scientists
Paleoethnobotany
External links
List of archaeobotanists at the Open Directory
Paleoethnobotanists
Paleoethnobotanist
Archaeobotanists
|
https://en.wikipedia.org/wiki/Titanium%28III%29%20oxide
|
Titanium(III) oxide is the inorganic compound with the formula Ti2O3. A black semiconducting solid, it is prepared by reducing titanium dioxide with titanium metal at 1600 °C.
Ti2O3 adopts the Al2O3 (corundum) structure. It is reactive with oxidising agents. At around 200 °C, it undergoes a transition from semiconducting to metallic conduction. Titanium(III) oxide occurs naturally as tistarite, an extremely rare mineral.
Other titanium(III) oxides include LiTi2O4 and LiTiO2.
References
Titanium(III) compounds
Sesquioxides
Transition metal oxides
Semiconductor materials
|
https://en.wikipedia.org/wiki/Tungsten%20disilicide
|
Tungsten silicide (WSi2) is an inorganic compound, a silicide of tungsten. It is an electrically conductive ceramic material.
Chemistry
Tungsten silicide can react violently with substances such as strong acids, fluorine, oxidizers, and interhalogens.
Applications
It is used in microelectronics as a contact material, with resistivity of 60–80 μΩ·cm; it forms at 1000 °C. It is often used as a shunt over polysilicon lines to increase their conductivity and increase signal speed. Tungsten silicide layers can be prepared by chemical vapor deposition, e.g. using monosilane or dichlorosilane with tungsten hexafluoride as source gases. The deposited film is non-stoichiometric and requires annealing to convert it to the more conductive stoichiometric form. Tungsten silicide is a replacement for earlier tungsten films. It is also used as a barrier layer between silicon and other metals, e.g. tungsten.
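As a rough worked example of what the quoted resistivity means for an interconnect, a minimal sketch (the 100 nm film thickness is a hypothetical value; only the 60–80 μΩ·cm range comes from the text):

```python
# Sheet resistance R_s = resistivity / thickness for a thin WSi2 film.
# The resistivity range is from the text; the 100 nm film thickness is
# a hypothetical illustration value.

thickness_m = 100e-9                  # 100 nm film (assumed)
for rho_uohm_cm in (60.0, 80.0):
    rho_ohm_m = rho_uohm_cm * 1e-8    # 1 uOhm*cm = 1e-8 Ohm*m
    r_sheet = rho_ohm_m / thickness_m
    print(f"rho = {rho_uohm_cm} uOhm*cm -> R_s = {r_sheet:.1f} Ohm/sq")
# -> 6.0 and 8.0 Ohm/sq for a 100 nm film
```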
Tungsten silicide is also of value in microelectromechanical systems, where it is mostly applied as thin films for the fabrication of microscale circuits. For such purposes, films of tungsten silicide can be plasma-etched using e.g. nitrogen trifluoride gas.
WSi2 performs well in applications as an oxidation-resistant coating. In particular, similarly to molybdenum disilicide (MoSi2), the high emissivity of tungsten disilicide makes this material attractive for high-temperature radiative cooling, with implications for heat shields.
References
Ceramic materials
Group 6 silicides
Refractory materials
Semiconductor materials
Tungsten compounds
|
https://en.wikipedia.org/wiki/Active%20queue%20management
|
In routers and switches, active queue management (AQM) is the policy of dropping packets inside a buffer associated with a network interface controller (NIC) before that buffer becomes full, often with the goal of reducing network congestion or improving end-to-end latency. This task is performed by the network scheduler, which for this purpose uses various algorithms such as random early detection (RED), Explicit Congestion Notification (ECN), or controlled delay (CoDel). RFC 7567 recommends active queue management as a best practice.
Overview
An Internet router typically maintains a set of queues, one or more per interface, that hold packets scheduled to go out on that interface. Historically, such queues use a drop-tail discipline: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise.
Active queue disciplines drop or mark packets before the queue is full. Typically, they operate by maintaining one or more drop/mark probabilities, and occasionally dropping or marking packets according to the probabilities before the queue is full.
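A minimal sketch of this idea in the style of random early detection (RED): the drop probability ramps from 0 to a maximum as the average queue length moves between two thresholds. All parameter values here are illustrative assumptions, not recommended settings:

```python
import random

# Minimal RED-style drop decision. The thresholds, max probability and
# EWMA weight are illustrative assumptions, not recommended settings.
MIN_TH, MAX_TH = 5.0, 15.0   # average-queue-length thresholds (packets)
MAX_P = 0.1                  # drop probability at MAX_TH
WEIGHT = 0.002               # EWMA weight for the average queue length

avg_qlen = 0.0

def should_drop(queue_len: int) -> bool:
    """Probabilistically drop (or mark) a packet before the queue is full."""
    global avg_qlen
    avg_qlen = (1 - WEIGHT) * avg_qlen + WEIGHT * queue_len
    if avg_qlen < MIN_TH:
        return False                      # queue short: never drop
    if avg_qlen >= MAX_TH:
        return True                       # persistently long: always drop
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p            # linearly ramped drop probability

if __name__ == "__main__":
    drops = sum(should_drop(12) for _ in range(10_000))
    print(f"dropped {drops} of 10000 packets at instantaneous queue length 12")
```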
Benefits
Drop-tail queues have a tendency to penalise bursty flows, and to cause global synchronisation between flows. By dropping packets probabilistically, AQM disciplines typically avoid both of these issues.
By providing endpoints with congestion indication before the queue is full, AQM disciplines are able to maintain a shorter queue length than drop-tail queues, which combats bufferbloat and reduces network latency.
Drawbacks
Early AQM disciplines (notably RED and SRED) require careful tuning of their parameters in order to provide good performance. These systems are not optimally behaved from a control theory perspective. Modern AQM disciplines (ARED, Blue, PI, CoDel, CAKE) are self-tuning, and can be run with their default parameters in most circumstances.
Network engineers have historically been trained to avoid packet loss, and have ther
|
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20determinants
|
In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If $A$ is an $n \times n$ matrix, where $a_{ij}$ is the entry in the $i$-th row and $j$-th column of $A$, the formula is

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},$$

where $\operatorname{sgn}$ is the sign function of permutations in the permutation group $S_n$, which returns $+1$ and $-1$ for even and odd permutations, respectively.
Another common notation used for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation, where it becomes

$$\det(A) = \varepsilon_{i_1 \cdots i_n} a_{1 i_1} \cdots a_{n i_n},$$

which may be more familiar to physicists.
Directly evaluating the Leibniz formula from the definition requires $\Omega(n! \cdot n)$ operations in general, that is, a number of operations asymptotically proportional to $n$ factorial, because $n!$ is the number of order-$n$ permutations. This is impractically difficult for even relatively small $n$. Instead, the determinant can be evaluated in $O(n^3)$ operations by forming the LU decomposition $A = PLU$ (typically via Gaussian elimination or similar methods), in which case $\det A = \pm \det L \cdot \det U$, the sign being that of the permutation matrix $P$, and the determinants of the triangular matrices $L$ and $U$ are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) The determinant can also be evaluated in fewer than $O(n^3)$ operations by reducing the problem to matrix multiplication, but most such algorithms are not practical.
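A direct transcription of the formula, useful only for small matrices given the factorial cost; a minimal sketch using itertools.permutations:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed by counting inversions:
    +1 for even permutations, -1 for odd ones."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant via the Leibniz formula: sum over all n! permutations
    sigma of sgn(sigma) * product over i of a[i][sigma(i)]."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= a[i][sigma[i]]
        total += term
    return total

print(det_leibniz([[1, 2], [3, 4]]))                    # -2
print(det_leibniz([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))   # 0
```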
Formal statement and proof
Theorem.
There exists exactly one function $F \colon K^{n \times n} \to K$ which is alternating multilinear w.r.t. columns and such that $F(I) = 1$.
Proof.
Uniqueness: Let $F$ be such a function, and let $A = (a_i^j)_{i,j}$ be an $n \times n$ matrix. Call $A^j$ the $j$-th column of $A$, i.e. $A^j = (a_i^j)_i$, so that $A = (A^1, \dots, A^n)$.
Also, let $E^k$ denote the $k$-th column vector of the identity matrix.
Now one writes each of the $A^j$'s in terms of the $E^k$, i.e.

$$A^j = \sum_{k=1}^{n} a_k^j E^k.$$

As $F$ is multilinear, one has

$$F(A) = F\left(\sum_{k_1=1}^{n} a_{k_1}^1 E^{k_1}, \dots, \sum_{k_n=1}^{n} a_{k_n}^n E^{k_n}\right) = \sum_{k_1,\dots,k_n=1}^{n} \left(\prod_{i=1}^{n} a_{k_i}^i\right) F(E^{k_1}, \dots, E^{k_n}).$$
From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutation
|
https://en.wikipedia.org/wiki/Kautsky%20effect
|
In biophysics, the Kautsky effect (also fluorescence transient, fluorescence induction or fluorescence decay) is a phenomenon consisting of a typical variation in the behavior of plant fluorescence when exposed to light. It was discovered in 1931 by H. Kautsky and A. Hirsch.
When dark-adapted photosynthesising cells are illuminated with continuous light, chlorophyll fluorescence displays characteristic changes in intensity accompanying the induction of photosynthetic activity.
Application of Kautsky effect
The quantum yield of photosynthesis, which is also the photochemical quenching of fluorescence, is calculated through the following equation:
Φp = (Fm-F0)/Fm = Fv/Fm
F0 is the minimal fluorescence intensity, measured with a short light flash that is too weak to drive photochemistry but still induces fluorescence. Fm is the maximum fluorescence that can be obtained from a sample, measured as the highest fluorescence intensity after a saturating flash. The difference between the two measured values is the variable fluorescence Fv.
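A trivial numeric sketch of the calculation; the F0 and Fm readings are hypothetical values in arbitrary fluorescence units:

```python
# Quantum yield of photochemistry from the fluorescence parameters:
# Phi_p = (Fm - F0)/Fm = Fv/Fm. The readings below are hypothetical
# values in arbitrary fluorescence units.
f0 = 300.0   # minimal fluorescence (weak measuring flash)
fm = 1500.0  # maximal fluorescence (saturating flash)

fv = fm - f0             # variable fluorescence
phi_p = fv / fm
print(f"Fv = {fv:.0f}, Phi_p = Fv/Fm = {phi_p:.2f}")  # 0.80
```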
Explanation
When a sample (leaf or algal suspension) is illuminated, the fluorescence intensity increases with a time constant in the microsecond or millisecond range. After a few seconds the intensity decreases and reaches a steady-state level. The initial rise of the fluorescence intensity is attributed to the progressive saturation of the reaction centers of photosystem II (PSII). Therefore, photochemical quenching decreases with the time of illumination, with a corresponding increase of the fluorescence intensity. The slow decrease of the fluorescence intensity at later times is caused, in addition to other processes, by non-photochemical quenching. Non-photochemical quenching is a protection mechanism in photosynthetic organisms, as they have to avoid the adverse effects of excess light. Which components contribute, and in which quantities, remains an active but controversial area of research. It is known that carotenoid
|