https://en.wikipedia.org/wiki/Fomivirsen
|
Fomivirsen (brand name Vitravene) is an antisense antiviral drug that was used in the treatment of cytomegalovirus (CMV) retinitis in immunocompromised patients, including those with AIDS. It was administered via intraocular injection.
It was discovered at the NIH and was licensed and initially developed by Isis Pharmaceuticals, which subsequently licensed it to Novartis. It was approved by the FDA for CMV retinitis in August 1998, making it the first antisense drug to win approval.
Novartis withdrew the marketing authorization in the EU in 2002 and in the US in 2006. The drug was withdrawn because, while there was a high unmet need for CMV treatments when the drug was initially discovered and developed, owing to the many cases arising in people with AIDS, the subsequent development of HAART dramatically reduced the number of cases of CMV.
It is an antisense oligonucleotide: a synthetic 21-mer oligonucleotide with phosphorothioate linkages (which are resistant to degradation by nucleases) with the sequence:
5'-GCG TTT GCT CTT CTT CTT GCG-3'
It blocks translation of viral mRNA by binding to the complementary sequence of the mRNA transcribed from the template segment of the key CMV gene UL123, which encodes the CMV protein IE2.
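This antisense relationship can be checked mechanically: the drug binds the mRNA region whose sequence is the reverse complement of the oligonucleotide, with DNA A pairing with RNA U. A minimal Python sketch; the printed target sequence is derived from the oligo itself, not taken from a sequence database:

# Derive the mRNA region that fomivirsen binds, as the reverse
# complement of the DNA oligonucleotide (DNA A pairs with RNA U).
DNA_TO_RNA = {"G": "C", "C": "G", "T": "A", "A": "U"}

def target_mrna(oligo_5to3):
    """Return the 5'->3' mRNA sequence complementary to a DNA oligo."""
    return "".join(DNA_TO_RNA[base] for base in reversed(oligo_5to3))

fomivirsen = "GCGTTTGCTCTTCTTCTTGCG"
print(target_mrna(fomivirsen))  # 5'-CGCAAGAAGAAGAGCAAACGC-3'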
|
https://en.wikipedia.org/wiki/Cytochalasin%20D
|
Cytochalasin D is a member of the class of mycotoxins known as cytochalasins. Cytochalasin D is an alkaloid produced by Helminthosporium and other molds.
Cytochalasin D is a cell-permeable and potent inhibitor of actin polymerization. It disrupts actin microfilaments and activates p53-dependent pathways, causing arrest of the cell cycle at the G1–S transition. It is believed to bind to the F-actin polymer and prevent polymerization of actin monomers.
|
https://en.wikipedia.org/wiki/64%2C079
|
64079 is the twenty-third Lucas number and is thus often written as L23. It is significant for being the first Lucas number Ln with prime index n that is itself not prime, after L3 = 4.
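Both facts are easy to verify computationally; a quick Python sketch with a naive trial-division primality test:

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def lucas(n):
    a, b = 2, 1  # L0, L1
    for _ in range(n):
        a, b = b, a + b
    return a

print(lucas(23))  # 64079
# Prime indices n <= 23 whose Lucas number is composite:
print([n for n in range(2, 24) if is_prime(n) and not is_prime(lucas(n))])
# [3, 23]  (L3 = 4 and L23 = 64079 = 139 * 461)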
Other uses
64079 is the ZIP code of Platte City and Tracy, Missouri.
|
https://en.wikipedia.org/wiki/Distributive%20category
|
In mathematics, a category is distributive if it has finite products and finite coproducts such that for every choice of objects $A, B, C$, the canonical map
$$[\mathrm{id}_A \times \iota_B,\ \mathrm{id}_A \times \iota_C] : A \times B + A \times C \to A \times (B + C)$$
is an isomorphism, and for all objects $A$, the canonical map $0 \to A \times 0$ is an isomorphism (where $0$ denotes the initial object). Equivalently, a category is distributive if for every object $A$ the endofunctor $A \times -$ preserves coproducts up to isomorphism. It follows that these isomorphisms and the aforementioned canonical maps are equal for each choice of objects.
In particular, if the functor $A \times -$ has a right adjoint (i.e., if the category is cartesian closed), it necessarily preserves all colimits, and thus any cartesian closed category with finite coproducts (i.e., any bicartesian closed category) is distributive.
Example
The category of sets is distributive. Let $A$, $B$, and $C$ be sets. Then
$$A \times (B \amalg C) \cong (A \times B) \amalg (A \times C),$$
where $\amalg$ denotes the coproduct in Set, namely the disjoint union, and $\cong$ denotes a bijection. In the case where $A$, $B$, and $C$ are finite sets, this result reflects the distributive property: the above sets each have cardinality $|A| \cdot (|B| + |C|) = |A| \cdot |B| + |A| \cdot |C|$.
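On finite sets the bijection can be exhibited directly. A small Python sketch (illustrative naming), modelling the disjoint union as a set of pairs tagged 0 or 1:

from itertools import product

def disjoint_union(s, t):
    # Tag elements by side so shared elements are not conflated.
    return {(0, x) for x in s} | {(1, y) for y in t}

A, B, C = {1, 2}, {"a"}, {"a", "b"}

lhs = set(product(A, disjoint_union(B, C)))                   # A x (B + C)
rhs = disjoint_union(set(product(A, B)), set(product(A, C)))  # (A x B) + (A x C)

# The canonical map sends (a, (tag, x)) to (tag, (a, x)).
canonical = {(a, (tag, x)): (tag, (a, x)) for (a, (tag, x)) in lhs}
assert set(canonical) == lhs and set(canonical.values()) == rhs
assert len(lhs) == len(rhs) == len(A) * (len(B) + len(C))     # 2 * (1 + 2) = 6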
The categories Grp and Ab are not distributive, even though they have both products and coproducts.
An even simpler category that has both products and coproducts but is not distributive is the category of pointed sets.
|
https://en.wikipedia.org/wiki/General%20Graphics%20Interface
|
General Graphics Interface (GGI) was a project that aimed to develop a reliable, stable and fast computer graphics system that works everywhere. The intent was to allow for any program using GGI to run on any computing platform supported by it, requiring at most a recompilation. GGI is free and open-source software, subject to the requirements of the MIT License.
The GGI project, and its related projects such as KGI, are generally acknowledged to be dead.
Goals
The project was originally started to make switching back and forth between virtual consoles, svgalib, and the X display server subsystems on Linux more reliable. The goals were:
Portability through a flexible and extensible API for the applications. This avoids bloat in the applications, since they pull in only what they use.
Portability across platforms and backends
Security in the sense of requiring as few privileges as possible
The GGI framework is implemented by a set of portable user-space libraries, with an array of different backends or targets (e.g. Linux framebuffer, X11, Quartz, DirectX), of which the two most fundamental are LibGII (for input-handling) and LibGGI (for graphical output). All other packages add features to these core libraries, and so depend on one or both of them.
Some targets talk to other targets. These are called pseudo targets. Pseudo targets can be combined and work like a pipeline.
For example, display-palemu emulates palette mode on truecolor modes. This allows users to run applications in palette mode even on machines where no palette mode would otherwise be available. display-tile splits a large virtual display into many smaller pieces, which can be spread across multiple monitors or even forwarded over a network.
History
Andreas Beck and Steffen Seeger founded The GGI Project in 1994 after some experimental precursors that were called "scrdrv".
Development of scrdrv was motivated by the problems caused by coexisting but not very well cooperating graphics environments
|
https://en.wikipedia.org/wiki/Jensen%27s%20formula
|
In the mathematical field known as complex analysis, Jensen's formula, introduced by Johan Jensen (1899), relates the average magnitude of an analytic function on a circle with the number of its zeros inside the circle. It forms an important statement in the study of entire functions.
Formal statement
Suppose that $f$ is an analytic function in a region in the complex plane which contains the closed disk $\mathbb{D}$ of radius $r$ about the origin, $a_1, a_2, \ldots, a_n$ are the zeros of $f$ in the interior of $\mathbb{D}$ (repeated according to their respective multiplicity), and that $f(0) \neq 0$.
Jensen's formula states that
$$\log |f(0)| = \sum_{k=1}^{n} \log \frac{|a_k|}{r} + \frac{1}{2\pi} \int_0^{2\pi} \log |f(re^{i\theta})| \, d\theta.$$
This formula establishes a connection between the moduli of the zeros of the function inside the disk and the average of $\log |f(z)|$ on the boundary circle $|z| = r$, and can be seen as a generalisation of the mean value property of harmonic functions. Namely, if $f$ has no zeros in $\mathbb{D}$, then Jensen's formula reduces to
$$\log |f(0)| = \frac{1}{2\pi} \int_0^{2\pi} \log |f(re^{i\theta})| \, d\theta,$$
which is the mean-value property of the harmonic function $\log |f(z)|$.
An equivalent statement of Jensen's formula that is frequently used is
$$\frac{1}{2\pi} \int_0^{2\pi} \log |f(re^{i\theta})| \, d\theta - \log |f(0)| = \int_0^r \frac{n(t)}{t} \, dt,$$
where $n(t)$ denotes the number of zeros of $f$ in the disc of radius $t$ centered at the origin.
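The identity is easy to check numerically for a concrete function. A sketch assuming NumPy, with $f(z) = (z - 0.3)(z + 0.25)(z - 2)$ on the unit disk, so the zeros inside are $0.3$ and $-0.25$:

import numpy as np

r = 1.0
zeros_inside = [0.3, -0.25]  # zeros of f with |a| < r

def f(z):
    return (z - 0.3) * (z + 0.25) * (z - 2.0)

# Average of log|f| over the boundary circle; for a periodic integrand a
# plain mean over evenly spaced angles is an accurate quadrature.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
avg = np.mean(np.log(np.abs(f(r * np.exp(1j * theta)))))

lhs = np.log(abs(f(0.0)))
rhs = sum(np.log(abs(a) / r) for a in zeros_inside) + avg
print(lhs, rhs)  # both approximately log(0.15) = -1.8971...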
Applications
Jensen's formula can be used to estimate the number of zeros of an analytic function in a circle. Namely, if $f$ is a function analytic in a disk of radius $R$ centered at $z_0$, with $f(z_0) \neq 0$, and if $|f|$ is bounded by $M$ on the boundary of that disk, then the number of zeros of $f$ in a circle of radius $r < R$ centered at the same point does not exceed
$$\frac{1}{\log(R/r)} \log \frac{M}{|f(z_0)|}.$$
Jensen's formula is an important statement in the study of value distribution of entire and meromorphic functions. In particular, it is the starting point of Nevanlinna theory, and it often appears in proofs of the Hadamard factorization theorem, which requires an estimate on the number of zeros of an entire function.
Generalizations
Jensen's formula may be generalized for functions which are merely meromorphic on $\mathbb{D}$. Namely, assume that
$$f(z) = \frac{g(z)}{h(z)},$$
where $g$ and $h$ are analytic functions in $\mathbb{D}$ having zeros at
$a_1, \ldots, a_n \in \mathbb{D} \setminus \{0\}$
and
$b_1, \ldots, b_m \in \mathbb{D} \setminus \{0\}$
respectively, so that $f(0) \neq 0, \infty$. Then Jensen's formula for meromorphic functions states that
$$\log |f(0)| = \log \left| r^{m-n} \frac{a_1 \cdots a_n}{b_1 \cdots b_m} \right| + \frac{1}{2\pi} \int_0^{2\pi} \log |f(re^{i\theta})| \, d\theta.$$
Jensen's
|
https://en.wikipedia.org/wiki/Finite%20difference%20method
|
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time interval (if applicable) are discretized, or broken into a finite number of steps, and the value of the solution at these discrete points is approximated by solving algebraic equations containing finite differences and values from nearby points.
Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of algebraic equations; for linear problems this system is linear and can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently which, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis.
Today, FDM are one of the most common approaches to the numerical solution of PDE, along with finite element methods.
Derive difference quotient from Taylor's polynomial
For an $n$-times differentiable function, by Taylor's theorem the Taylor series expansion is given as
$$f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + \cdots + \frac{f^{(n)}(x_0)}{n!} h^n + R_n(x),$$
where $n!$ denotes the factorial of $n$, and $R_n(x)$ is a remainder term, denoting the difference between the Taylor polynomial of degree $n$ and the original function. We will derive an approximation for the first derivative of the function $f$ by first truncating the Taylor polynomial plus remainder:
$$f(x_0 + h) = f(x_0) + f'(x_0) h + R_1(x).$$
Dividing across by $h$ gives:
$$\frac{f(x_0 + h)}{h} = \frac{f(x_0)}{h} + f'(x_0) + \frac{R_1(x)}{h}.$$
Solving for $f'(x_0)$:
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{R_1(x)}{h}.$$
Assuming that $R_1(x)$ is sufficiently small, the approximation of the first derivative of $f$ is:
$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}.$$
This is, not coincidentally, similar to the definition of the derivative, which is given as:
$$f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h},$$
except for the limit towards zero; the method is named after the finite size of $h$.
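A quick numerical illustration of this first-order approximation, using the forward difference to estimate the derivative of sin at $x_0 = 1$:

import math

def forward_difference(f, x0, h):
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
exact = math.cos(x0)  # the exact derivative of sin
for h in (0.1, 0.01, 0.001):
    approx = forward_difference(math.sin, x0, h)
    print(h, approx, abs(approx - exact))
# The error shrinks roughly linearly with h, as expected for a
# first-order approximation.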
Accuracy and order
The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding, and truncation or discretization error, the difference between the exact solution of the differential equation and the result the finite difference formula would give under perfect arithmetic.
|
https://en.wikipedia.org/wiki/Tumor%20M2-PK
|
Tumor M2-PK is a synonym for the dimeric form of the pyruvate kinase isoenzyme type M2 (PKM2), a key enzyme within tumor metabolism. Tumor M2-PK can be elevated in many tumor types, rather than being an organ-specific tumor marker such as PSA. Increased stool (fecal) levels are being investigated as a method of screening for colorectal tumors, and EDTA plasma levels are undergoing testing for possible application in the follow-up of various cancers.
Sandwich ELISAs based on two monoclonal antibodies which specifically recognize Tumor M2-PK (the dimeric form of M2-PK) are available for the quantification of Tumor M2-PK in stool and EDTA-plasma samples respectively. As a biomarker, the amount of Tumor M2-PK in stool and EDTA-plasma reflects the specific metabolic status of the tumors.
Early detection of colorectal tumors and polyps
M2-PK, as measured in feces, is a potential tumor marker for colorectal cancer. When measured in feces with a cutoff value of 4 U/ml, its sensitivity has been estimated to be 85% (with a 95% confidence interval of 65 to 96%) for colon cancer and 56% (confidence interval 41–74%) for rectal cancer. Its specificity is 95%.
The M2-PK test (an ELISA method) is not dependent on occult blood, so it can detect bleeding and non-bleeding bowel cancer, and also polyps, with high sensitivity and high specificity, although false positives may occur.
Most people are more willing to accept non-invasive preventive medical check-ups. Therefore, the measurement of tumor M2-PK in stool samples, with follow-up by colonoscopy to clarify tumor M2-PK-positive results, may prove to be an advance in the early detection of colorectal carcinomas. The CE-marked M2-PK test is available as an ELISA test for quantitative results or as a point-of-care test that gives results within minutes.
Tumor M2-PK is also useful in diagnosing lung cancer, where it performs better than the SCC and NSE tumor markers. With renal cell carcinoma (RCC), the M2-PK test has sensitivity
|
https://en.wikipedia.org/wiki/Call%20gate%20%28Intel%29
|
A call gate is a mechanism in Intel's x86 architecture for changing the privilege level of a process when it executes a predefined function call using a CALL FAR instruction.
Overview
Call gates are intended to allow less privileged code to call code with a higher privilege level. This type of mechanism is essential in modern operating systems that employ memory protection since it allows user applications to use kernel functions and system calls in a way that can be controlled by the operating system.
Call gates use a special selector value to reference a descriptor accessed via the Global Descriptor Table or the Local Descriptor Table, which contains the information needed for the call across privilege boundaries. This is similar to the mechanism used for interrupts.
Usage
Assuming a call gate has been set up already by the operating system kernel, code simply does a CALL FAR with the necessary segment selector (the offset field is ignored). The processor will perform a number of checks to make sure the entry is valid and the code was operating at sufficient privilege to use the gate. Assuming all checks pass, a new CS/EIP is loaded from the segment descriptor, and continuation information is pushed onto the stack of the new privilege level (old SS, old ESP, old CS, old EIP, in that order). Parameters may also be copied from the old stack to the new stack if needed. The number of parameters to copy is located in the call gate descriptor.
The kernel may return to the user space program by using a RET FAR instruction which pops the continuation information off the stack and returns to the outer privilege level.
Format of call gate descriptor
typedef struct _CALL_GATE
{
    USHORT OffsetLow;            // bits 0-15 of the entry-point offset
    USHORT Selector;             // target code-segment selector
    UCHAR NumberOfArguments:5;   // parameters copied to the new stack
    UCHAR Reserved:3;
    UCHAR Type:5;                // 01100 in i386, 00100 in i286
    UCHAR Dpl:2;                 // descriptor privilege level
    UCHAR Present:1;             // segment-present flag
    USHORT OffsetHigh;           // bits 16-31 of the entry-point offset
} CALL_GATE, *PCALL_GATE;
Previous use
Multics was the first user of call gates. The Honeywell 6180 had call gates
|
https://en.wikipedia.org/wiki/Bandwidth%20extension
|
Bandwidth extension of a signal is defined as the deliberate process of expanding the frequency range (bandwidth) in which the signal contains appreciable and useful content, and/or the frequency range in which its effects are perceptible. Its significant advancement in recent years has led to the technology being adopted commercially in several areas, including psychoacoustic bass enhancement of small loudspeakers and the high-frequency enhancement of coded speech and audio.
Bandwidth extension has been used in both speech and audio compression applications. The algorithms used in G.729.1 and Spectral Band Replication (SBR) are two of many examples of bandwidth extension algorithms currently in use. In these methods, the low band of the spectrum is encoded using an existing codec, whereas the high band is coarsely parameterized using fewer parameters. Many of these bandwidth extension algorithms make use of the correlation between the low band and the high band in order to predict the wider band signal from extracted lower-band features. Others encode the high band using very few bits. This is often sufficient since the ear is less sensitive to distortions in the high band compared to the low band.
Bass enhancement of small loudspeakers
Most often small loudspeakers are physically incapable of reproducing low frequency material. Using a psycho-acoustical phenomenon like the missing fundamental, perception of low frequencies can be greatly increased. By generating harmonics of lower frequencies and removing the lower frequencies themselves, the suggestion is created that these frequencies are still remaining in the signal. This process is usually applied through external equipment or embedded in the speaker system using a digital signal processor.
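As a rough illustration of the idea (a toy sketch assuming NumPy; real products use carefully tuned filters and nonlinearities to generate the harmonics):

import numpy as np

fs = 8000                # sample rate, Hz
t = np.arange(fs) / fs   # one second of samples
f0 = 50.0                # fundamental too low for a small speaker

# Synthesize a few harmonics of the (removed) 50 Hz fundamental; the ear
# tends to infer the missing 50 Hz pitch from the 100/150/200 Hz partials.
enhanced = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in (2, 3, 4))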
High frequency response can also be enhanced through generation of harmonics. Instead of mapping frequencies inside the reproducible region of the speaker, the speaker itself is used to generate frequencies outside the nor
|
https://en.wikipedia.org/wiki/Basic%20hostility
|
Basic hostility is a psychological concept first described by psychoanalyst Karen Horney. Horney described it as aggression which a child develops as a result of "basic evil". Horney generally defines basic evil as "invariably the lack of genuine warmth and affection". Basic evil includes the whole range of inappropriate parental behavior, from lack of affection to abuse. This situation of abuse and torment that cannot be avoided or escaped leaves children with a higher level of irritability. The same can be said for anxiety.
Background
Specifically, basic hostility pertains to a sense of anger and betrayal that a child feels towards his parents for their failure to provide a secure environment. Horney associated this concept with "basic anxiety", citing that the two are inseparably interwoven and are both offshoots of the "basic evil" of parental mistreatment. Their relationship can be explained in this manner: The existence of basic evil leads to basic hostility towards the parents and the world. Once such hostility is repressed it becomes basic anxiety or the feeling of being helpless.
The pattern of basic hostility
The child wants to leave, but cannot. Although the child wants to avoid the abuse, their parents are perpetrating it.
The child is dependent on their parents and therefore cannot move or back away.
The child therefore redirects their feelings and expressions of hostility toward people they do not depend on for support.
According to Horney, some children find basic hostility to be an aggressive coping strategy and continue using it to deal with life's problems.
See also
Basic anxiety
Disorganized attachment
|
https://en.wikipedia.org/wiki/AN/PYQ-10
|
The AN/PYQ-10 Simple Key Loader (SKL) is a ruggedized, portable, hand-held fill device, for securely receiving, storing, and transferring data between compatible cryptographic and communications equipment. The SKL was designed and built by Ralph Osterhout and then sold to Sierra Nevada Corporation, with software developed by Science Applications International Corporation (SAIC) under the auspices of the United States Army. It is intended to supplement and eventually replace the AN/CYZ-10 Data Transfer Device (DTD). The PYQ-10 provides all the functions currently resident in the CYZ-10 and incorporates new features that provide streamlined management of COMSEC key, Electronic Protection (EP) data, and Signal Operating Instructions (SOI). Cryptographic functions are performed by an embedded KOV-21 card developed by the National Security Agency (NSA). The AN/PYQ-10 supports both the DS-101 and DS-102 interfaces, as well as the KSD-64 Crypto Ignition Key. The SKL is backward-compatible with existing End Cryptographic Units (ECU) and forward-compatible with future security equipment and systems, including NSA's Key Management Infrastructure.
Between 2005 and 2007, the U.S. Army budget included funds for over 24,000 SKL units. The estimated price for FY07 was $1708 each. When released in May 2005, the price was $1695 each. This price includes the unit and the internal encryptor card.
|
https://en.wikipedia.org/wiki/7-Dehydrocholesterol%20reductase
|
7-Dehydrocholesterol reductase, also known as DHCR7, is a protein that in humans is encoded by the DHCR7 gene.
Function
The protein encoded by this gene is an enzyme catalyzing the production of cholesterol from 7-Dehydrocholesterol using NADPH.
The DHCR7 gene encodes delta-7-sterol reductase (EC 1.3.1.21), the ultimate enzyme of mammalian sterol biosynthesis that converts 7-dehydrocholesterol (7-DHC) to cholesterol. This enzyme removes the C(7-8) double bond introduced by the sterol delta8-delta7 isomerases. In addition, its role in drug-induced malformations is known: inhibitors of the last step of cholesterol biosynthesis such as AY9944 and BM15766 severely impair brain development.
Pathology
A deficiency is associated with Smith–Lemli–Opitz syndrome.
All house cats and dogs have higher-than-usual activity of this enzyme, causing an inability to synthesize vitamin D due to the lack of 7-dehydrocholesterol.
See also
Steroidogenic enzyme
|
https://en.wikipedia.org/wiki/Berlekamp%27s%20algorithm
|
In mathematics, particularly computational algebra, Berlekamp's algorithm is a well-known method for factoring polynomials over finite fields (also known as Galois fields). The algorithm consists mainly of matrix reduction and polynomial GCD computations. It was invented by Elwyn Berlekamp in 1967. It was the dominant algorithm for solving the problem until the Cantor–Zassenhaus algorithm of 1981. It is currently implemented in many well-known computer algebra systems.
Overview
Berlekamp's algorithm takes as input a square-free polynomial $f(x)$ (i.e. one with no repeated factors) of degree $n$ with coefficients in a finite field $\mathbb{F}_q$ and gives as output a polynomial $g(x)$ with coefficients in the same field such that $g(x)$ divides $f(x)$. The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of $f(x)$ into powers of irreducible polynomials (recalling that the ring of polynomials over a finite field is a unique factorization domain).
All possible factors of $f(x)$ are contained within the factor ring
$$R = \frac{\mathbb{F}_q[x]}{\langle f(x) \rangle}.$$
The algorithm focuses on polynomials $g(x) \in R$ which satisfy the congruence:
$$g(x)^q \equiv g(x) \pmod{f(x)}.$$
These polynomials form a subalgebra of $R$ (which can be considered as an $n$-dimensional vector space over $\mathbb{F}_q$), called the Berlekamp subalgebra. The Berlekamp subalgebra is of interest because the polynomials $g(x)$ it contains satisfy
$$f(x) = \prod_{s \in \mathbb{F}_q} \gcd(f(x), g(x) - s).$$
In general, not every GCD in the above product will be a non-trivial factor of , but some are, providing the factors we seek.
Berlekamp's algorithm finds polynomials $g(x)$ suitable for use with the above result by computing a basis for the Berlekamp subalgebra. This is achieved via the observation that the Berlekamp subalgebra is in fact the kernel of a certain $n \times n$ matrix over $\mathbb{F}_q$, which is derived from the so-called Berlekamp matrix of the polynomial, denoted $\mathcal{Q}$. If $\mathcal{Q} = [q_{i,j}]$ then $q_{i,j}$ is the coefficient of the $j$-th power term in the reduction of $x^{iq}$ modulo $f(x)$, i.e.:
$$x^{iq} \equiv q_{i,n-1} x^{n-1} + \cdots + q_{i,1} x + q_{i,0} \pmod{f(x)}.$$
With a certain polynomial $g(x) \in \mathbb{F}_q[x]$, say:
$$g(x) = g_{n-1} x^{n-1} + \cdots + g_1 x + g_0,$$
we may associate the row vector:
$$g = (g_0, g_1, \ldots, g_{n-1}).$$
It is relatively straightforward to see that the row
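To make the construction concrete, here is a minimal sketch (illustrative, not an optimized implementation) that builds the Berlekamp matrix $\mathcal{Q}$ for a small square-free polynomial over $\mathbb{F}_2$. Polynomials are represented as coefficient lists from lowest to highest degree:

p = 2
f = [0, 1, 0, 0, 1]  # f(x) = x^4 + x = x(x + 1)(x^2 + x + 1) over GF(2)

def polymod(a, m, p):
    """Reduce a modulo the monic polynomial m (coefficients mod p)."""
    a = [c % p for c in a]
    while len(a) >= len(m):
        c = a[-1]
        if c:
            shift = len(a) - len(m)
            for i in range(len(m)):
                a[shift + i] = (a[shift + i] - c * m[i]) % p
        a.pop()  # drop the (now zero) leading coefficient
    return a

def polymulmod(a, b, m, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return polymod(res, m, p)

n = len(f) - 1
xq = polymod([0] * p + [1], f, p)  # x^q mod f (here q = p = 2)
row, Q = [1], []
for i in range(n):                 # row i holds the coefficients of x^(i*q) mod f
    Q.append(row + [0] * (n - len(row)))
    row = polymulmod(row, xq, f, p)
print(Q)
# The Berlekamp subalgebra is the set of row vectors g with g Q = g,
# i.e. the left kernel of Q - I; for this f it has dimension 3,
# matching the three irreducible factors.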
|
https://en.wikipedia.org/wiki/Superoperator
|
In physics, a superoperator is a linear operator acting on a vector space of linear operators.
Sometimes the term refers more specifically to a completely positive map which also preserves or does not increase the trace of its argument. This specialized meaning is used extensively in the field of quantum computing, especially quantum programming, as such maps characterise mappings between density matrices.
The use of the super- prefix here is in no way related to its other use in mathematical physics. That is to say superoperators have no connection to supersymmetry and superalgebra which are extensions of the usual mathematical concepts defined by extending the ring of numbers to include Grassmann numbers. Since superoperators are themselves operators the use of the super- prefix is used to distinguish them from the operators upon which they act.
Left/Right Multiplication
Defining the left and right multiplication superoperators by $\mathcal{L}(A)[\rho] = A\rho$ and $\mathcal{R}(A)[\rho] = \rho A$ respectively, one can express the commutator as
$$[A, \rho] = \mathcal{L}(A)[\rho] - \mathcal{R}(A)[\rho].$$
Next we vectorize the matrix $\rho$, which is the mapping
$$\rho = \sum_{i,j} \rho_{ij} |i\rangle\langle j| \;\longrightarrow\; |\rho\rangle\rangle = \sum_{i,j} \rho_{ij}\, |i\rangle \otimes |j\rangle,$$
where $|\rho\rangle\rangle$ denotes a vector in the Fock–Liouville space.
The matrix representation of $\mathcal{L}(A)$ is then calculated by using the same mapping, indicating that $\mathcal{L}(A) = A \otimes \mathbb{1}$. Similarly, one can show that $\mathcal{R}(A) = \mathbb{1} \otimes A^T$. These representations allow us to calculate things like eigenvalues associated with superoperators. These eigenvalues are particularly useful in the field of open quantum systems, where the real parts of the Lindblad superoperator's eigenvalues will indicate whether a quantum system will relax or not.
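These identities are easy to verify numerically. A small NumPy sketch, using row-major vectorization so that $|i\rangle \otimes |j\rangle$ corresponds to flattening $\rho$ row by row:

import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

I = np.eye(n)
L = np.kron(A, I)    # left multiplication:  L vec(rho) = vec(A rho)
R = np.kron(I, A.T)  # right multiplication: R vec(rho) = vec(rho A)

vec = lambda M: M.reshape(-1)  # row-major vectorization
assert np.allclose(L @ vec(rho), vec(A @ rho))
assert np.allclose(R @ vec(rho), vec(rho @ A))
# The commutator superoperator is then L - R:
assert np.allclose((L - R) @ vec(rho), vec(A @ rho - rho @ A))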
Example von Neumann Equation
In quantum mechanics the Schrödinger equation, $i\hbar \frac{\partial}{\partial t} |\psi\rangle = \hat{H} |\psi\rangle$, expresses the time evolution of the state vector by the action of the Hamiltonian $\hat{H}$, which is an operator mapping state vectors to state vectors.
In the more general formulation of John von Neumann, statistical states and ensembles are expressed by density operators rather than state vectors.
In this context the time evolution of the density operator is expressed via the von Neumann e
|
https://en.wikipedia.org/wiki/Wireless%20supplicant
|
A wireless supplicant is a program that runs on a computer and is responsible for making login requests to a wireless network. It handles passing the login and encryption credentials to the authentication server. It also handles roaming from one wireless access point to another, in order to maintain connectivity.
See also
Supplicant
wpa_supplicant
Xsupplicant
|
https://en.wikipedia.org/wiki/Doppler%20echocardiography
|
Doppler echocardiography is a procedure that uses Doppler ultrasonography to examine the heart. An echocardiogram uses high frequency sound waves to create an image of the heart while the use of Doppler technology allows determination of the speed and direction of blood flow by utilizing the Doppler effect.
An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, any abnormal communications between the left and right side of the heart, any leaking of blood through the valves (valvular regurgitation), calculation of the cardiac output and calculation of the E/A ratio (a measure of diastolic dysfunction). Contrast-enhanced ultrasound, using gas-filled microbubble contrast media, can be used to improve velocity or other flow-related medical measurements.
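The velocity estimate follows from the standard Doppler equation, with $\Delta f$ the measured Doppler shift, $f_0$ the transmitted frequency, $c$ the speed of sound in tissue (about 1540 m/s), and $\theta$ the angle between the beam and the flow:
$$v = \frac{c \, \Delta f}{2 f_0 \cos\theta}$$
The factor of 2 accounts for the round trip of the reflected ultrasound, and the $\cos\theta$ term is why the beam should be kept as parallel to the flow as possible: as $\theta$ approaches 90°, small errors in the assumed angle produce large errors in the computed velocity.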
An advantage of Doppler echocardiography is that it can be used to measure blood flow within the heart without invasive procedures such as cardiac catheterization.
In addition, with slightly different filter/gain settings, the method can measure tissue velocities by tissue Doppler echocardiography. The combination of flow and tissue velocities can be used for estimating left ventricular filling pressure, although only under certain conditions.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives). However, the calculation result will end up identical.
This procedure is frequently used to examine children's hearts for heart disease because there is no age or size requirement.
2D Doppler imaging
Unlike 1D Doppler im
|
https://en.wikipedia.org/wiki/Cantor%E2%80%93Zassenhaus%20algorithm
|
In computational algebra, the Cantor–Zassenhaus algorithm is a method for factoring polynomials over finite fields (also called Galois fields).
The algorithm consists mainly of exponentiation and polynomial GCD computations. It was invented by David G. Cantor and Hans Zassenhaus in 1981.
It is arguably the dominant algorithm for solving the problem, having replaced the earlier Berlekamp's algorithm of 1967. It is currently implemented in many computer algebra systems.
Overview
Background
The Cantor–Zassenhaus algorithm takes as input a square-free polynomial $f(x)$ (i.e. one with no repeated factors) of degree $n$ with coefficients in a finite field $\mathbb{F}_q$ whose irreducible polynomial factors are all of equal degree (algorithms exist for efficiently factoring arbitrary polynomials into a product of polynomials satisfying these conditions; for instance, $f(x)/\gcd(f(x), f'(x))$ is a squarefree polynomial with the same factors as $f(x)$, so that the Cantor–Zassenhaus algorithm can be used to factor arbitrary polynomials). It gives as output a polynomial $g(x)$ with coefficients in the same field such that $g(x)$ divides $f(x)$. The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of $f(x)$ into powers of irreducible polynomials (recalling that the ring of polynomials over any field is a unique factorisation domain).
All possible factors of $f(x)$ are contained within the factor ring
$$R = \frac{\mathbb{F}_q[x]}{\langle f(x) \rangle}.$$
If we suppose that $f(x)$ has irreducible factors $p_1(x), p_2(x), \ldots, p_s(x)$, all of degree $d$, then this factor ring is isomorphic to the direct product of factor rings $S = \prod_{i=1}^{s} \frac{\mathbb{F}_q[x]}{\langle p_i(x) \rangle}$. The isomorphism from $R$ to $S$, say $\phi$, maps a polynomial $g(x) \in R$ to the $s$-tuple of its reductions modulo each of the $p_i(x)$, i.e. if:
$$g(x) \equiv g_1(x) \pmod{p_1(x)}, \quad \ldots, \quad g(x) \equiv g_s(x) \pmod{p_s(x)},$$
then $\phi(g(x)) = (g_1(x), \ldots, g_s(x))$. It is important to note the following at this point, as it shall be of critical importance later in the algorithm: since each $p_i(x)$ is irreducible, each of the factor rings in this direct sum is in fact a field, with $q^d$ elements.
Core result
The core result underlying the Cantor–Zassenhaus algorithm is the followi
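In the standard description of the algorithm, the resulting splitting step computes $\gcd(f(x), h(x)^{(q^d - 1)/2} - 1)$ for a trial polynomial $h(x)$ (normally chosen at random), which for many choices of $h$ yields a nontrivial factor when $q$ is odd. A toy, self-contained Python demonstration over $\mathbb{F}_5$, with polynomials as coefficient lists from lowest to highest degree:

p, d = 5, 1    # toy example: GF(5), equal-degree factors of degree d = 1
f = [1, 0, 1]  # f(x) = x^2 + 1 = (x - 2)(x - 3) over GF(5)

def polymod(a, m, p):
    """Remainder of a modulo the monic polynomial m, trailing zeros trimmed."""
    a = [c % p for c in a]
    while len(a) >= len(m):
        c = a[-1]
        if c:
            shift = len(a) - len(m)
            for i in range(len(m)):
                a[shift + i] = (a[shift + i] - c * m[i]) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def polymulmod(a, b, m, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return polymod(res, m, p)

def polypowmod(a, e, m, p):  # square-and-multiply exponentiation mod m
    out = [1]
    while e:
        if e & 1:
            out = polymulmod(out, a, m, p)
        a = polymulmod(a, a, m, p)
        e >>= 1
    return out

def polygcd(a, b, p):        # Euclidean algorithm over GF(p)
    while b:
        inv = pow(b[-1], -1, p)          # make the divisor monic
        b = [c * inv % p for c in b]
        a, b = b, polymod(a, b, p)
    return a

e = (p ** d - 1) // 2
h = [1, 1]                   # trial polynomial h(x) = x + 1
w = polypowmod(h, e, f, p) or [0]
w[0] = (w[0] - 1) % p        # w = h^e - 1 (mod f)
while w and w[-1] == 0:
    w.pop()
print(polygcd(f, w, p))      # [2, 1], i.e. x + 2 = x - 3: a nontrivial factor of f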
|
https://en.wikipedia.org/wiki/XTS-400
|
The XTS-400 is a multilevel secure computer operating system. It is multiuser and multitasking, and it uses multilevel scheduling in processing data and information. It works in networked environments and supports Gigabit Ethernet and both IPv4 and IPv6.
The XTS-400 is a combination of Intel x86 hardware and the Secure Trusted Operating Program (STOP) operating system. XTS-400 was developed by BAE Systems, and originally released as version 6.0 in December 2003.
STOP provides high-assurance security and was the first general-purpose operating system with a Common Criteria assurance level rating of EAL5 or above. The XTS-400 can host, and be trusted to separate, multiple, concurrent data sets, users, and networks at different sensitivity levels.
The XTS-400 provides both an untrusted environment for normal work and a trusted environment for administrative work and for privileged applications. The untrusted environment is similar to traditional Unix environments. It provides binary compatibility with Linux applications running most Linux commands and tools as well as most Linux applications without the need for recompiling. This untrusted environment includes an X Window System GUI, though all windows on a screen must be at the same sensitivity level.
To support the trusted environment and various security features, STOP provides a set of proprietary APIs to applications. In order to develop programs that use these proprietary APIs, a special software development environment (SDE) is needed. The SDE is also needed in order to port some complicated Linux/Unix applications to the XTS-400.
A new version of the STOP operating system, STOP 7, has since been introduced, with claims of improved performance and new features such as RBAC.
Uses
As a high-assurance, MLS system, XTS-400 can be used in cross-domain solutions, which typically need a piece of privileged software to be developed which can temporarily circumvent one or more security features in a controlled m
|
https://en.wikipedia.org/wiki/Codabar
|
Codabar is a linear barcode symbology developed in 1972 by Pitney Bowes Corp. It and its variants are also known as Codeabar, Ames Code, NW-7, Monarch, Code 2 of 7, Rationalized Codabar, ANSI/AIM BC3-1995 or USD-4. Although Codabar has not been registered for US federal trademark status, its hyphenated variant Code-a-bar is a registered trademark.
Codabar was designed to be accurately read even when printed on dot-matrix printers for multi-part forms such as FedEx airbills and blood bank forms, where variants are still in use. Although newer symbologies hold more information in a smaller space, Codabar has a large installed base in libraries. It is even possible to print Codabar codes using typewriter-like impact printers, which allows the creation of many codes with consecutive numbers without having to use computer equipment. After each printed code, the printer's stamp is mechanically turned to the next number, as in mechanical mile counters.
Check digit
Because Codabar is self-checking, most standards do not define a check digit.
Some standards that use Codabar will define a check digit, but the algorithm is not universal. For purely numerical data, such as the library barcode pictured above, the Luhn algorithm is popular.
When all 16 symbols are possible, a simple modulo-16 checksum is used. The values 10 through 19 are assigned to the symbols –$:/.+ABCD, respectively.
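A sketch of that modulo-16 checksum, using the symbol-to-value assignment just described (details, including whether start/stop characters are covered, vary between standards):

# Values 0-19 for the 20 Codabar symbols: digits first, then - $ : / . + A B C D
SYMBOLS = "0123456789-$:/.+ABCD"
VALUES = {c: i for i, c in enumerate(SYMBOLS)}

def mod16_check_symbol(data):
    """Check symbol chosen so the total of all symbol values is 0 (mod 16)."""
    total = sum(VALUES[c] for c in data)
    return SYMBOLS[(16 - total % 16) % 16]

print(mod16_check_symbol("137255"))  # '9': 1+3+7+2+5+5 = 23, and 23 + 9 = 32 = 0 (mod 16)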
Encoding
Each character comprises 7 elements, 4 bars and 3 spaces, and is separated from adjacent characters by an additional narrow space. Each can be either narrow (binary value 0) or wide (binary value 1). The width ratio between narrow and wide can be chosen between 1:2.25 and 1:3. The minimum narrow width varies with the specification, with the smallest being 0.0065 inches (0.165 mm), allowing 11 digits per inch to be encoded.
The characters are divided into three groups, based on the number of wide elements:
The basic 12 symbols (digits 0–9, dash, and $) are encoded us
|
https://en.wikipedia.org/wiki/Paracoccidioidomycosis
|
Paracoccidioidomycosis (PCM), also known as South American blastomycosis, is a fungal infection that can occur as a mouth and skin type, lymphangitic type, multi-organ involvement type (particularly lungs), or mixed type. If there are mouth ulcers or skin lesions, the disease is likely to be widespread. There may be no symptoms, or it may present with fever, sepsis, weight loss, large glands, or a large liver and spleen.
The cause is fungi in the genus Paracoccidioides, including Paracoccidioides brasiliensis and Paracoccidioides lutzii, acquired by breathing in fungal spores.
Diagnosis is by sampling of blood, sputum, or skin. The disease can appear similar to tuberculosis, leukaemia, and lymphoma. Treatment is with antifungals, typically itraconazole. For severe disease, treatment is with amphotericin B followed by itraconazole, or trimethoprim/sulfamethoxazole as an alternative.
It is endemic to Central and South America, and is considered a type of neglected tropical disease. In Brazil, the disease causes around 200 deaths per year.
Signs and symptoms
Asymptomatic lung infection is common, with fewer than 5% of infected individuals developing clinical disease.
It can occur as a mouth and skin type, lymphangitic type, multi-organ involvement type (particularly lungs), or mixed type. If there are mouth ulcers or skin lesions, the disease is likely to be widespread. There may be no symptoms, or it may present with fever, sepsis, weight loss, large glands, or a large liver and spleen.
Two presentations are known: the acute or subacute form, which predominantly affects children and young adults, and the chronic form, which predominantly affects adult men. Most patients are infected before age 20, although symptoms may present many years later.
Juvenile (acute/subacute) form
The juvenile, acute form is characterised by symptoms, such as fever, weight loss and feeling unwell together with enlarged lymph nodes and enlargement of the liver and spleen. This form is most of
|
https://en.wikipedia.org/wiki/Polish%20Enigma%20double
|
A Polish Enigma "double" was a machine produced by the Polish Cipher Bureau that replicated the German Enigma rotor cipher machine. The Enigma double was one result of Marian Rejewski's remarkable achievement of determining the wiring of the Enigma's rotors and reflectors.
First double
The Polish Cipher Bureau recognized that the Germans were using a new cipher. The Germans had mistakenly shipped a cipher machine to Poland; their attempts to recover a shipment raised the suspicions of Polish customs, and the Polish Cipher Bureau learned that the Germans were using an Enigma machine. The Bureau purchased a commercial Enigma machine, and it attempted but failed to break the cipher.
In December 1932, the Polish Cipher Bureau tasked Marian Rejewski with breaking the Enigma cipher machine. A French spy had obtained some material about the Enigma, and the French had provided the material to the Polish Cipher Bureau. By that time, the commercial Enigma had been extended to use a plugboard. Rejewski made rapid progress and was able to determine the wiring of the military Enigma. The Bureau modified its commercial Enigma rotors, reflector, and internal wiring to match the military Enigma. The commercial Enigma did not have a plugboard, but the plugboard could be simulated by relabeling the keys and the lamps. The result was the first Enigma double.
AVA
In February 1933, the Polish Cipher Bureau ordered fifteen "doubles" of the military Enigma machine from the AVA Radio Manufacturing Company, in Warsaw. Ultimately, about seventy such functional replicas were produced.
Precious gift
In August 1939, following a tripartite meeting of French, British, and Polish cryptologists held near Warsaw on 25 and 26 July, two Enigma replicas were passed to Poland's allies, one sent to Paris and one to London. Until then, German military Enigma traffic had defeated the British and French, and they had faced the disturbing prospect that German communications would remain "black"
|
https://en.wikipedia.org/wiki/Bi-amping%20and%20tri-amping
|
Bi-amping and tri-amping is the practice of using two or three audio amplifiers respectively to amplify different audio frequency ranges, with the amplified signals being routed to different speaker drivers, such as woofers, subwoofers and tweeters. With bi-amping and tri-amping, an audio crossover is used to divide a sound signal into different frequency ranges, each of which is then separately amplified and routed to separate speaker drivers. In powered speakers using bi-amping, multiple speaker drivers are in the same speaker enclosure. In some bi-amp set-ups, the drivers are in separate speaker enclosures, such as with home stereos that contain two speakers and a separate subwoofer.
Description
Bi-amping is the use of two channels of amplification to power each loudspeaker within an audio system. Tri-amping is the practice of connecting three channels of amplification to a loudspeaker unit: one to power the bass driver (woofer), one to power the mid-range and the third to power the treble driver (tweeter). The terms derive from the prefix bi- meaning 'two', tri- meaning 'three', and amp the abbreviation for amplifier.
Crossover
It differs from the conventional arrangement in which each channel of amplification powers a single speaker. Bi-amping typically consists of a crossover network and two or more drivers. With ordinary loudspeakers, a single amplifier can power the woofer, mid-range and tweeter through an audio crossover, which filters the signal into high-, medium- and low-frequencies (or high- and low-frequencies in 2-way speakers) – a mechanism that protects each driver from signals outside its frequency range. However, the passive crossover itself is inefficient, so splitting the frequencies electronically before they are amplified is a way to avoid this problem. In such a case, each amplifier powers a frequency range determined by an active crossover, feeding each of the drive units. The technique is primarily used in large-scale audio applications su
|
https://en.wikipedia.org/wiki/Cubic%20IDE
|
Cubic IDE is a modular integrated development environment (IDE) for AmigaOS (versions 3.5 and 3.9 only) and MorphOS. Its central editor is GoldED 8, which supports file-type-centric configuration.
The specific features for developers include syntax highlighting for several programming languages (e.g. Hollywood), folding, a symbol browser, a project explorer, an installation assistant (to create installations), support for creating Rexx macros and documentation, makefile generation, dialogs to set compiler options, automatic completion of OS symbols and clickable compiler output (jump to error). Compiler integration is available for popular C/C++ compilers for the supported platforms: GCC, vbcc, SAS/C and StormC3. Several free compilers for AmigaOS3, PowerUP, WarpOS and MorphOS are included and integrated into the development environment.
|
https://en.wikipedia.org/wiki/Gap%20gene
|
A gap gene is a type of gene involved in the development of the segmented embryos of some arthropods. Gap genes are defined by the effect of a mutation in that gene, which causes the loss of contiguous body segments, resembling a gap in the normal body plan. Each gap gene, therefore, is necessary for the development of a section of the organism.
Gap genes were first described by Christiane Nüsslein-Volhard and Eric Wieschaus in 1980. They used a genetic screen to identify genes required for embryonic development in the fruit fly Drosophila melanogaster. They found three genes – knirps, Krüppel and hunchback – where mutations caused deletion of particular stretches of segments. Later work identified more gap genes in the Drosophila early embryo – giant, huckebein and tailless. Further gap genes including orthodenticle and buttonhead are required for the development of the Drosophila head.
Once the gap genes had been identified at the molecular level it was found that each gap gene is expressed in a band in the early embryo generally correlated with the region that is absent in the mutant. In Drosophila the gap genes encode transcription factors, and they directly control the expression of another set of genes involved in segmentation, the pair-rule genes. The gap genes themselves are expressed under the control of maternal effect genes such as bicoid and nanos, and regulate each other to achieve their precise expression patterns.
Gene activation
Expression of tailless is activated by torso protein in the poles of the embryo. Tailless is also regulated in a complex manner by the maternal-effect gene bicoid.
Both embryonically transcribed hunchback and maternally transcribed hunchback are activated by bicoid protein in the anterior and are inhibited in the posterior by nanos protein. Embryonically transcribed hunchback protein is able to exhibit the same effects on Krüppel and knirps as maternally transcribed hunchback.
The Krüppel gene is activated when the bicoid
|
https://en.wikipedia.org/wiki/Torsten%20Carleman
|
Torsten Carleman (8 July 1892, Visseltofta, Osby Municipality – 11 January 1949, Stockholm), born Tage Gillis Torsten Carleman, was a Swedish mathematician, known for his results in classical analysis and its applications. As the director of the Mittag-Leffler Institute for more than two decades, Carleman was the most influential mathematician in Sweden.
Work
The dissertation of Carleman under Erik Albert Holmgren, as well as his work in the early 1920s, was devoted to singular integral equations. He developed the spectral theory of integral operators with Carleman kernels, that is, kernels $K(x, y)$ such that $K(y, x) = \overline{K(x, y)}$ for almost every $(x, y)$, and
$$\int |K(x, y)|^2 \, dy < \infty$$
for almost every $x$.
In the mid-1920s, Carleman developed the theory of quasi-analytic functions. He proved the necessary and sufficient condition for quasi-analyticity, now called the Denjoy–Carleman theorem. As a corollary, he obtained a sufficient condition for the determinacy of the moment problem. As one of the steps in the proof of the Denjoy–Carleman theorem, he introduced the Carleman inequality
$$\sum_{n=1}^{\infty} \left( a_1 a_2 \cdots a_n \right)^{1/n} \leq e \sum_{n=1}^{\infty} a_n,$$
valid for any sequence of non-negative real numbers $a_k$.
At about the same time, he established the Carleman formulae in complex analysis, which reconstruct an analytic function in a domain from its values on a subset of the boundary. He also proved a generalisation of Jensen's formula, now called the Jensen–Carleman formula.
In the 1930s, independently of John von Neumann, he discovered the mean ergodic theorem. Later, he worked in the theory of partial differential equations, where he introduced the Carleman estimates, and found a way to study the spectral asymptotics of Schrödinger operators.
In 1932, following the work of Henri Poincaré, Erik Ivar Fredholm, and Bernard Koopman, he devised the Carleman embedding (also called Carleman linearization), a way to embed a finite-dimensional system of nonlinear differential equations $u' = P(u)$ for $u : \mathbb{R} \to \mathbb{R}^k$, where the components of $P$ are polynomials in $u$, into an infinite-dimensional system of linear differential equations.
|
https://en.wikipedia.org/wiki/Gheorghe%20Vr%C4%83nceanu
|
Gheorghe Vrănceanu (June 30, 1900 – April 27, 1979) was a Romanian mathematician, best known for his work in differential geometry and topology. He was titular member of the Romanian Academy and vice-president of the International Mathematical Union.
Biography
He was born in 1900 in Valea Hogei, then a village in Vaslui County, now a component of Lipova commune, in Bacău County. He was the eldest of five children in his family. After attending primary school in his village and high school in Vaslui, he went to study mathematics at the University of Iași in 1919. There, he took courses with Vera Myller, Victor Vâlcovici, and Simion Stoilow. After graduating in 1922, he went in 1923 to the University of Göttingen, where he studied under David Hilbert. Thereafter, he went to the University of Rome, where he studied under Tullio Levi-Civita, obtaining his doctorate on November 5, 1924, with the thesis Sopra una teorema di Weierstrass e le sue applicazioni alla stabilita. The thesis defense committee was composed of 11 faculty members and was headed by Vito Volterra.
Vrănceanu returned to Iași, where he was appointed a lecturer at the university. In 1927–1928, he was awarded a Rockefeller Foundation scholarship to study in France and the United States, where he was in contact with Élie Cartan and Oswald Veblen. In 1929, he returned to Romania, and was appointed professor at the University of Cernăuți. In 1939, he moved to the University of Bucharest, where he was appointed Head of the Geometry and Topology department in 1948, a position he held until his retirement in 1970. His doctoral students include Henri Moscovici.
Vrănceanu was elected to the Romanian Academy as a corresponding member in 1946, then as a full member in 1955. From 1964 he was president of the Mathematics Section of the Romanian Academy. Also from 1964, he was an editor of the journal Revue Roumaine de mathématiques pures et appliquées, founded that year. At the International Congress of Mathemat
|
https://en.wikipedia.org/wiki/Sandbox%20%28software%20development%29
|
In the context of software development, including Web development, automation, and revision control, a sandbox is a testing environment that isolates untested code changes and outright experimentation from the production environment or repository.
Sandboxing protects "live" servers and their data, vetted source code distributions, and other collections of code, data and/or content, proprietary or public, from changes that could be damaging to a mission-critical system or which could simply be difficult to revert, regardless of the intent of the author of those changes. Sandboxes replicate at least the minimal functionality needed to accurately test the programs or other code under development (e.g. usage of the same environment variables as, or access to an identical database to that used by, the stable prior implementation intended to be modified; there are many other possibilities, as the specific functionality needs vary widely with the nature of the code and the application[s] for which it is intended).
The concept of sandboxing is built into revision control software such as Git, CVS and Subversion (SVN), in which developers "check out" a copy of the source code tree, or a branch thereof, to examine and work on. After the developer has fully tested the code changes in their own sandbox, the changes would be checked back into and merged with the repository and thereby made available to other developers or end users of the software.
By further analogy, the term "sandbox" can also be applied in computing and networking to other temporary or indefinite isolation areas, such as security sandboxes and search engine sandboxes (both of which have highly specific meanings), that prevent incoming data from affecting a "live" system (or aspects thereof) unless/until defined requirements or criteria have been met.
In web services
The term sandbox is commonly used for the development of Web services to refer to a mirrored production environment for use by external devel
|
https://en.wikipedia.org/wiki/Reversion%20%28software%20development%29
|
In software development (and, by extension, in content-editing environments, especially wikis, that make use of the software development process of revision control), reversion or reverting is the abandonment of one or more recent changes in favor of a return to a previous version of the material at hand (typically software source code in the context of application development; HTML, CSS or script code in the context of web development; or content and formatting thereof in the context of wikis).
A revert may be done for a wide variety of reasons, including: fixing errors introduced by previous edits; restoring the material to a state that was not contentious until new disputes can be resolved; undoing scope creep; regression testing; and even petty malice, vandalistic intent, or personal unhappiness with the author of a previous change. While the practice is generally agreed to be a sound and sometimes necessary one, particular instantiations of its use may be at least as controversial as the changes being reverted.
See also
Revision control
Sandbox (computer security)
Software development process
|
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20poems
|
See also
Lists of U.S. state insignia
|
https://en.wikipedia.org/wiki/Spatial%20descriptive%20statistics
|
Spatial descriptive statistics is the intersection of spatial statistics and descriptive statistics; these methods are used for a variety of purposes in geography, particularly in quantitative data analyses involving Geographic Information Systems (GIS).
Types of spatial data
The simplest forms of spatial data are gridded data, in which a scalar quantity is measured for each point in a regular grid of points, and point sets, in which a set of coordinates (e.g. of points in the plane) is observed. An example of gridded data would be a satellite image of forest density that has been digitized on a grid. An example of a point set would be the latitude/longitude coordinates of all elm trees in a particular plot of land. More complicated forms of data include marked point sets and spatial time series.
Measures of spatial central tendency
The coordinate-wise mean of a point set is the centroid, which solves the same variational problem in the plane (or higher-dimensional Euclidean space) that the familiar average solves on the real line — that is, the centroid has the smallest possible average squared distance to all points in the set.
Measures of spatial dispersion
Dispersion captures the degree to which points in a point set are separated from each other. For most applications, spatial dispersion should be quantified in a way that is invariant to rotations and reflections. Several simple measures of spatial dispersion for a point set can be defined using the covariance matrix of the coordinates of the points. The trace, the determinant, and the largest eigenvalue of the covariance matrix can be used as measures of spatial dispersion.
A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors.
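For concreteness, a sketch with NumPy computing the centroid and these dispersion measures for a small point set:

import numpy as np

points = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])

centroid = points.mean(axis=0)      # coordinate-wise mean

cov = np.cov(points, rowvar=False)  # 2x2 covariance matrix of the coordinates
trace_disp = np.trace(cov)                  # total variance
det_disp = np.linalg.det(cov)               # generalized variance
lambda_max = np.linalg.eigvalsh(cov).max()  # spread along the principal axis

# Average nearest-neighbor distance (a covariance-free dispersion measure)
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
avg_nearest_neighbor = d.min(axis=1).mean()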
Measures of spatial homogeneity
A homogeneous set of points in the plane is a set that is distributed such that approximately the same number of points occurs in any circular region of a given area.
|
https://en.wikipedia.org/wiki/Index%20case
|
The index case or patient zero is the first documented patient in a disease epidemic within a population, or the first documented patient included in an epidemiological study.
It can also refer to the first case of a condition or syndrome (not necessarily contagious) to be described in the medical literature, whether or not the patient is thought to be the first person affected. An index case can achieve the status of a "classic" case study in the literature, as did Phineas Gage, the first known person to exhibit a definitive personality change as a result of a brain injury.
Term
The index case may or may not indicate the source of the disease, the possible spread, or which reservoir holds the disease in between outbreaks, but may bring awareness of an emerging outbreak. Earlier cases may or may not be found and are labeled primary or coprimary, secondary, tertiary, etc. The term primary case can only apply to infectious diseases that spread from human to human, and refers to the person who first brings a disease into a group of people. In epidemiology, scientists and journalists alike often use the term index case for the individual known or believed to have been the first infected, or the source of the resulting outbreak in a population, although such a person would technically be the primary case.
Origin of patient zero
"Patient zero" was used to refer to the supposed source of HIV outbreak in the United States, flight attendant Gaëtan Dugas in the popular press, but the term's use was based on a misunderstanding (and Dugas was not the index case). In the 1984 study of Centers for Disease Control and Prevention (CDC), one of the earliest recorded HIV-patients was code-named "patient O", which stands for "patient out of California". The letter O, however, was interpreted by some readers of the report as the numeral 0. The designation patient zero (for Gaëtan Dugas) was subsequently propagated by the San Francisco Chronicle journalist Randy Shilts
|
https://en.wikipedia.org/wiki/Binary%20Land
|
Binary Land is a puzzle video game developed by Hudson Soft in 1983 for the MSX, FM-7, NEC PC-6001, and NEC PC-8801, and in 1985 for the Famicom. The MSX version saw release in Japan by Hudson Soft and in Europe by Kuma Computers Ltd in 1984.
While the Famicom version has 99 levels, there is no ending screen implemented in the game.
Gameplay
In the Famicom version of the game, players have to unite two penguins, Gurin (male) and Malon (female), who are in love. The MSX version features a human boy and a human girl; gameplay remains identical to the Famicom version. Players control Malon and Gurin simultaneously, with a timer adding to the difficulty. These penguins move in mirror images of each other. After completing the 17th stage, players have to start over again on stage 1. Je te veux by Erik Satie is the background music in the game during the levels. Upon uniting the two penguins, Beethoven's "Ode to Joy" is played.
A top-down view is used in the game, similar to the method used in The Adventures of Lolo. Standing in the penguins' paths are spiders, birds and other creatures which the player must attack with the penguins' spray. Cobwebs occupy some of the cells on the playing field, possibly slowing the player down long enough for a spider to kill either Gurin or Malon. With each level arranged on a 10-by-15 grid and a vertical wall separating the two penguins from each other, only the upper central cell is free for both characters to reach. This "free cell" always holds the caged heart that is required to complete the level. A row of walls surrounds each player in a maze-like environment. In order to make the challenges more difficult, they are unbalanced and offer a different experience for Gurin and Malon. Should Gurin or Malon become trapped in a cobweb, they are helpless until the player navigates the maze, crosses the free cell and then enters the other penguin's side of the maze to free them from the web with their spraycan.
Should the player reach a high enough s
|
https://en.wikipedia.org/wiki/Itsy%20Pocket%20Computer
|
The Itsy Pocket Computer is a small, low-power, handheld device with a highly flexible interface. It was designed at Digital Equipment Corporation's Western Research Laboratory to encourage novel user interface development—for example, it had accelerometers to detect movement and orientation as early as 1999.
Hardware
CPU: DEC StrongARM SA-1100 processor
Memory: 16 MB of DRAM, 4 MB of flash memory
Interfaces: I/O interfaces for audio input/output, IrDA, and an RS232 serial port
Small 320 x 200 pixel LCD touchscreen for display and user input
10 general purpose push-buttons for additional user input purposes
Power supply: Pair of standard AAA alkaline batteries
|
https://en.wikipedia.org/wiki/Chemical%20colitis
|
Chemical colitis is a type of colitis, an inflammation of the large intestine or colon, caused by the introduction of harsh chemicals to the colon by an enema or other procedure. Chemical colitis can resemble ulcerative colitis, infectious colitis and pseudomembranous colitis endoscopically.
Prior to 1950, hydrogen peroxide enemas were commonly used for certain conditions. This practice often resulted in chemical colitis.
Soap enemas may also cause chemical colitis.
Harsh chemicals, such as compounds used to clean colonoscopes, are sometimes accidentally introduced into the colon during colonoscopy or other procedures. This can also lead to chemical colitis.
Chemical colitis may trigger a flare of ulcerative colitis or Crohn's colitis. Symptoms of colitis are assessed using the Simple Clinical Colitis Activity Index.
|
https://en.wikipedia.org/wiki/Swap%20%28computer%20programming%29
|
In computer programming, the act of swapping two variables refers to mutually exchanging the values of the variables. Usually, this is done with the data in memory. For example, in a program, two variables may be defined thus (in pseudocode):
data_item x := 1
data_item y := 0
swap (x, y);
After swap() is performed, x will contain the value 0 and y will contain 1; their values have been exchanged. This operation may be generalized to other types of values, such as strings and aggregated data types. Comparison sorts use swaps to change the positions of data.
In many programming languages the swap function is built-in. In C++, overloads are provided allowing std::swap to exchange some large structures in O(1) time.
Using a temporary variable
The simplest and probably most widely used method to swap two variables is to use a third temporary variable:
define swap (x, y)
temp := x
x := y
y := temp
While this is conceptually simple and in many cases the only convenient way to swap two variables, it uses extra memory. Although this should not be a problem in most applications, the sizes of the values being swapped may be huge (which means the temporary variable may occupy a lot of memory as well), or the swap operation may need to be performed many times, as in sorting algorithms.
In addition, swapping two variables in object-oriented languages such as C++ may involve one call to the class constructor and destructor for the temporary variable, and three calls to the copy constructor. Some classes may allocate memory in the constructor and deallocate it in the destructor, thus creating expensive calls to the system. Copy constructors for classes containing a lot of data, e.g. in an array, may even need to copy the data manually.
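A concrete rendering of the pseudocode above in Python (illustrative only; Python passes object references, so the function returns the swapped pair rather than mutating its arguments, and the built-in tuple assignment makes the temporary implicit):

def swap_with_temp(x, y):
    temp = x      # hold the first value
    x = y         # overwrite with the second
    y = temp      # restore the saved first value
    return x, y

x, y = 1, 0
x, y = swap_with_temp(x, y)   # now x == 0 and y == 1
x, y = y, x                   # idiomatic Python: the right side is packed before assignment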
XOR swap
XOR swap uses the XOR operation to swap two numeric variables. It is generally touted to be faster than the naive method mentioned above; however it does have disadvantages. XOR swap is generally used to swap low-leve
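A minimal Python sketch of the technique (didactic only, since Python integers are immutable objects rather than raw registers):

def xor_swap(a: int, b: int):
    # relies on the identity (a ^ b) ^ b == a; in a language with raw
    # memory this must never be run when both operands alias the same
    # location, because a ^ a == 0 would destroy the value
    a ^= b   # a holds a XOR b
    b ^= a   # b becomes the original a
    a ^= b   # a becomes the original b
    return a, b

print(xor_swap(6, 9))   # (9, 6)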
|
https://en.wikipedia.org/wiki/Ajika
|
Ajika or adjika is a Georgian and Abkhazian hot, spicy, but subtly flavored dip, often used to flavor food. In 2018, the technology of ajika was inscribed on the Intangible Cultural Heritage of Georgia list.
Common varieties of ajika resemble Italian red pesto in appearance and consistency. Though it is usually red, green ajika is also made with unripe peppers.
See also
Erős Pista, a popular Hungarian pepper sauce.
Biber salçası, a hot or sweet pepper paste in Turkish cuisine
Muhammara or acuka, a hot pepper dip in Levantine cuisine
Harissa, a hot chili pepper paste in Maghreb cuisine
Zhug, a hot sauce in Middle Eastern cuisine, made from fresh hot peppers seasoned with coriander, garlic and various spices
List of dips
List of sauces
|
https://en.wikipedia.org/wiki/Time%20perception
|
The study of time perception or chronoception is a field within psychology, cognitive linguistics and neuroscience that refers to the subjective experience, or sense, of time, as measured by someone's own perception of the duration of indefinite, unfolding events. The perceived time interval between two successive events is referred to as perceived duration. Though directly experiencing or understanding another person's perception of time is not possible, perception can be objectively studied and inferred through a number of scientific experiments. Some temporal illusions help to expose the underlying neural mechanisms of time perception.
The ancient Greeks recognized the difference between chronological time (chronos) and subjective time (kairos).
Pioneering work on time perception, emphasizing species-specific differences, was conducted by Karl Ernst von Baer.
Theories
Time perception is typically categorized in three distinct ranges, because different ranges of duration are processed in different areas of the brain:
Sub-second timing or millisecond timing
Interval timing or seconds-to-minutes timing
Circadian timing
There are many theories and computational models for time perception mechanisms in the brain. William J. Friedman (1993) contrasted two theories of the sense of time:
The strength model of time memory. This posits a memory trace that persists over time, by which one might judge the age of a memory (and therefore how long ago the event remembered occurred) from the strength of the trace. This conflicts with the fact that memories of recent events may fade more quickly than more distant memories.
The inference model suggests the time of an event is inferred from information about relations between the event in question and other events whose date or time is known.
Another hypothesis involves the brain's subconscious tallying of "pulses" during a specific interval, forming a biological stopwatch. This theory proposes that the
|
https://en.wikipedia.org/wiki/Reversible-jump%20Markov%20chain%20Monte%20Carlo
|
In computational statistics, reversible-jump Markov chain Monte Carlo is an extension to standard Markov chain Monte Carlo (MCMC) methodology, introduced by Peter Green, which allows simulation of the posterior distribution on spaces of varying dimensions.
Thus, the simulation is possible even if the number of parameters in the model is not known.
Let
$n_m \in N_m = \{1, 2, \ldots, I\}$
be a model indicator and $M = \bigcup_{n_m = 1}^{I} \mathbb{R}^{d_m}$ the parameter space, whose number of dimensions $d_m$ depends on the model $n_m$. The model indication need not be finite. The stationary distribution is the joint posterior distribution of $(M, N_m)$ that takes the values $(m, n_m)$.
The proposal $m'$ can be constructed with a mapping $g_{1mm'}$ of $m$ and $u$, where $u$ is drawn from a random component $U$ with density $q$ on $\mathbb{R}^{d_{mm'}}$. The move to state $(m', n_m')$ can thus be formulated as
$(m', n_m') = (g_{1mm'}(m, u),\, n_m')$
The function
$g_{mm'} \colon (m, u) \mapsto (m', u') = \bigl(g_{1mm'}(m, u),\, g_{2mm'}(m, u)\bigr)$
must be one-to-one and differentiable, and have non-zero support:
$\operatorname{supp}(g_{mm'}) \neq \varnothing$
so that there exists an inverse function
$g_{mm'}^{-1} = g_{m'm}$
that is differentiable. Therefore, $(m, u)$ and $(m', u')$ must be of equal dimension, which is the case if the dimension criterion
$d_m + d_{mm'} = d_{m'} + d_{m'm}$
is met, where $d_{mm'}$ is the dimension of $u$. This is known as dimension matching.
If $\mathbb{R}^{d_m} \subset \mathbb{R}^{d_{m'}}$, then the dimension-matching condition can be reduced to
$d_m + d_{mm'} = d_{m'}$
with
$(m, u) = g_{m'm}(m').$
The acceptance probability will be given by
$a(m, m') = \min\left(1,\; \frac{p_{m'm}\, p_{m'}\, f_{m'}(m')}{p_{mm'}\, q_{mm'}(m, u)\, p_m\, f_m(m)} \left| \det \frac{\partial g_{mm'}(m, u)}{\partial (m, u)} \right| \right)$
where $|\cdot|$ denotes the absolute value and $p_m f_m$ is the joint posterior probability
$p_m f_m = c^{-1}\, p(y \mid m, n_m)\, p(m \mid n_m)\, p(n_m),$
where $c$ is the normalising constant.
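As an illustration only (a toy with made-up target densities, not Green's general algorithm in full), the sketch below jumps between a one-parameter and a two-parameter model using an identity dimension-matching map, so the Jacobian determinant is trivially 1:

import numpy as np

rng = np.random.default_rng(0)

def log_norm(x):
    """Log density of the standard normal, applied element-wise."""
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def log_post(k, theta):
    """Joint log posterior: equal model probabilities p(k) = 1/2 and a
    standard-normal posterior over theta within each model."""
    return np.log(0.5) + np.sum(log_norm(theta))

def rj_step(k, theta):
    """One trans-dimensional move between model 1 (theta in R^1) and
    model 2 (theta in R^2). The dimension-matching map is the identity
    (theta, u) <-> (theta_1, theta_2), so log|det Jacobian| = 0."""
    if k == 1:                                   # propose a "birth" 1 -> 2
        u = rng.standard_normal()                # u ~ q = N(0, 1)
        log_a = (log_post(2, np.array([theta[0], u]))
                 - log_post(1, theta) - log_norm(u))
        if np.log(rng.random()) < log_a:
            return 2, np.array([theta[0], u])
        return 1, theta
    else:                                        # propose a "death" 2 -> 1
        u = theta[1]                             # the reverse move recovers u
        log_a = (log_post(1, theta[:1]) + log_norm(u)
                 - log_post(2, theta))
        if np.log(rng.random()) < log_a:
            return 1, theta[:1]
        return 2, theta

# With these choices the acceptance ratio is exactly 1, so the chain
# should spend about half its time in each model.
k, theta = 1, np.array([0.0])
visits = {1: 0, 2: 0}
for _ in range(20000):
    k, theta = rj_step(k, theta)
    theta = rng.standard_normal(theta.shape)     # exact within-model refresh
    visits[k] += 1
print(visits)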
Software packages
There is an experimental RJ-MCMC tool available for the open source BUGS package.
The Gen probabilistic programming system automates the acceptance probability computation for user-defined reversible jump MCMC kernels as part of its Involution MCMC feature.
|
https://en.wikipedia.org/wiki/Superior%20rectal%20vein
|
The inferior mesenteric vein begins in the rectum as the superior rectal vein (superior hemorrhoidal vein), which has its origin in the hemorrhoidal plexus, and through this plexus communicates with the middle and inferior hemorrhoidal veins.
The superior rectal vein leaves the lesser pelvis and crosses the left common iliac vessels with the superior rectal artery, and is continued upward as the inferior mesenteric vein.
|
https://en.wikipedia.org/wiki/Solitary%20lymphatic%20nodule
|
The solitary lymphatic nodules (or solitary follicles) are structures found in the small intestine and large intestine.
Small intestine
The solitary lymphatic nodules are found scattered throughout the mucous membrane of the small intestine, but are most numerous in the lower part of the ileum.
Their free surfaces are covered with rudimentary villi, except at the summits, and each gland is surrounded by the openings of the intestinal glands.
Each consists of a dense interlacing retiform tissue closely packed with lymph-corpuscles, and permeated with an abundant capillary network.
The interspaces of the retiform tissue are continuous with larger lymph spaces which surround the gland, through which they communicate with the lacteal system.
They are situated partly in the submucous tissue, partly in the mucous membrane, where they form slight projections of its epithelial layer.
Large intestine
The solitary lymphatic nodules of the large intestine are most abundant in the cecum and vermiform process, but are irregularly scattered also over the rest of the intestine.
They are similar to those of the small intestine.
|
https://en.wikipedia.org/wiki/Spider%20behavior
|
Spider behavior refers to the range of behaviors and activities performed by spiders. Spiders are air-breathing arthropods that have eight legs and chelicerae with fangs that inject venom. They are the largest order of arachnids and rank seventh in total species diversity among all other groups of organisms, a diversity that is reflected in their wide range of behaviors.
Diet
Almost all known spider species are predators, mostly preying on insects and on other spiders, although a few species also take vertebrates such as frogs, lizards, fish, and even birds and bats. Spiders' guts are too narrow to take solids, and they liquidize their food by flooding it with digestive enzymes and grinding it with the bases of their pedipalps, as they do not have true jaws.
Though most known spiders are almost exclusively carnivorous, a few species, primarily of jumping spiders, supplement their diet with plant matter such as sap, nectar, and pollen. However, most of these spiders still need a mostly carnivorous diet to survive, and lab studies have shown that they become unhealthy when fed only plants. One exception is a species of jumping spider called Bagheera kiplingi, which is largely herbivorous, feeding mainly on the sugar rich Beltian bodies produced by acacia plants.
Capturing prey
Many spiders, but not all, build webs. Other spiders use a wide variety of methods to capture prey.
Web: There are several recognised types of spider web
Spiral orb webs, associated primarily with the family Araneidae
Tangle webs or cobwebs, associated with the family Theridiidae
Funnel webs,
Tubular webs, which run up the bases of trees or along the ground
Sheet webs
The net-casting spider weaves a small net which it attaches to its front legs. It then lurks in wait for potential prey and when such prey arrives, lunges forward to wrap its victim in the net, bite and paralyze it. Hence, this spider expends less energy catching prey than a primitive hunter and also avoids the energy loss of
|
https://en.wikipedia.org/wiki/Rob%20Eastaway
|
Rob Eastaway is an English author. He is active in the popularisation of mathematics and was awarded the Zeeman medal in 2017 for excellence in the promotion of maths. He is best known for his books, including the bestselling Why Do Buses Come in Threes? and Maths for Mums and Dads. His first book was What is a Googly?, an explanation of cricket for Americans and other newcomers to the game.
Eastaway is a keen cricketer and was one of the originators of the International Rankings of Cricketers. He is also a puzzle setter and adviser for New Scientist magazine and he has appeared frequently on BBC Radio 4 and 5 Live.
He is the director of Maths Inspiration, a national programme of maths lectures for teenagers which involves some of the UK’s leading maths speakers. He was president of the UK Mathematical Association for 2007/2008. He is a former pupil of The King's School, Chester, and has a degree in engineering and management science from the University of Cambridge.
Books
1992: What is a Googly?
1995: The Guinness Book of Mindbenders, co-author David Wells
1998: Why do Buses Come in Threes?, co-author Jeremy Wyndham, foreword by Tim Rice
1999: The Memory Kit
2002: How Long is a Piece of String?, co-author Jeremy Wyndham
2004: How to Remember
2005: How to Take a Penalty, co-author John Haigh
2007: How to Remember (Almost) Everything Ever
2007: Out of the Box
2008: How Many Socks Make a Pair?
2009: Improve Your Memory Today, with Dr Hilary Jones
2010: Maths for Mums and Dads, co-author Mike Askew
2011: The Hidden Mathematics of Sport (new edition of Beating the Odds)
2013: More Maths for Mums and Dads, co-author Mike Askew
2016: Maths on the Go, co-author Mike Askew
2017: Any ideas? Tips and Techniques to Help You Think Creatively
2018: 100 Maddening Mindbending Puzzles
2019: Maths On The Back of an Envelope
|
https://en.wikipedia.org/wiki/Pectoral%20fascia
|
The pectoral fascia is a thin lamina, covering the surface of the pectoralis major, and sending numerous prolongations between its fasciculi: it is attached, in the middle line, to the front of the sternum; above, to the clavicle; laterally and below it is continuous with the fascia of the shoulder, axilla, and thorax.
It is very thin over the upper part of the pectoralis major, but thicker in the interval between it and the latissimus dorsi, where it closes in the axillary space and forms the axillary fascia; it divides at the lateral margin of the latissimus dorsi into two layers, one of which passes in front of, and the other behind it; these proceed as far as the spinous processes of the thoracic vertebrae, to which they are attached.
As the fascia leaves the lower edge of the pectoralis major to cross the floor of the axilla it sends a layer upward under cover of the muscle; this lamina splits to envelop the pectoralis minor, at the upper edge of which it is continuous with the coracoclavicular fascia. The hollow of the armpit, seen when the arm is abducted, is produced mainly by the traction of this fascia on the axillary floor, and hence the lamina is sometimes named the suspensory ligament of the axilla.
At the lower part of the thoracic region the deep fascia is well-developed, and is continuous with the fibrous sheaths of the rectus abdominis.
|
https://en.wikipedia.org/wiki/Hericium%20erinaceus
|
Hericium erinaceus (also called lion's mane mushroom, mountain-priest mushroom, bearded tooth fungus, and bearded hedgehog) is an edible mushroom belonging to the tooth fungus group. Native to North America, Europe, and Asia, it can be identified by its long spines (longer than ), occurrence on hardwoods, and tendency to grow a single clump of dangling spines. The fruit bodies can be harvested for culinary use.
H. erinaceus can be mistaken for other species of Hericium, which grow across the same range. In the wild, these mushrooms are common during late summer and fall on hardwoods, particularly American beech and maple. Usually H. erinaceus is considered saprophytic, as it mostly feeds on dead trees. It can also be found on living trees, so it may be a tree parasite. This could indicate an endophytic habitat.
Description
The fruit bodies of H. erinaceus are large, irregular bulbous tubercules. They are in diameter, and are dominated by crowded, hanging, spore-producing spines, which are long or longer.
The hyphal system is monomitic, amyloid, and composed of thin- to thick-walled hyphae that are approximately 3–15 μm (microns) wide. The hyphae also contain clamped septa and gloeoplerous elements (filled with oily, resinous substances), which can come into the hymenium as gloeocystidia.
The basidia are 25–40 µm long and 5–7 µm wide, contain four spores each and possess a basal clamp. The white amyloid spores measure approximately 5–7 µm in length and 4–5 µm in width. The spore shape is described as subglobose to short ellipsoid and the spore surface is smooth to finely roughened.
Development
The fruit bodies of H. erinaceus are mainly produced annually from August to November in Europe. It was observed that H. erinaceus could fruit intermittently for 20 years on the same dead tree. It is hypothesized that H. erinaceus can survive for 40 years. The mating system of H. erinaceus species found in the U.S. was shown to be bifactorially heterothallic.
The mon
|
https://en.wikipedia.org/wiki/Web.com%20%281995%E2%80%932007%29
|
Web Internet LLC (and later Web.com Inc.) were formed in 1997 by Bill Bloomfield, then President of Web Service Company which was the second largest coin-operated laundry machine company in the U.S. and held a trademark on the "WEB" brand, resulting in the company's ownership of the Web.com domain. Web.com initially launched as a web portal, offering paid search results, a shopping directory, comparison shopping engine, as well as a free web-based @web.com email service in multiple languages, all of which proved unsuccessful. To spearhead growth and bring Web.com domain registration and hosting services to market, Will Pemble was hired as CEO in 1999. Mr. Pemble led the development of Web.com's domain name registration and web hosting services, which became the core product offerings of the company. In 2004, Will Pemble purchased the business from its parent company the Web Services Company, Inc. Shortly after purchasing Web.com, Mr. Pemble founded Perfect Privacy, LLC, a subsidiary of Web.com pioneering private domain name registration services to customers of Web.com.
Will Pemble sold Web.com to Interland, Inc. in 2005. Following the sale, Interland changed its name to Web.com. The company maintained services including do-it-yourself and professional website design, web hosting, e-commerce, web marketing, and e-mail. As of March 2007, there were approximately 166,000 paid hosting subscribers.
Along with various web products and services, Web.com provided small businesses, entrepreneurs and consumers with advice and tips for developing a strong online presence. It owned the brands Web.com, Interland, Trellix, and HostPro.
On September 30, 2007, Web.com merged with Website Pros, forming the new Web.com.
History
The company traces its corporate roots to MicronPC, a multi-billion-dollar PC manufacturer. MicronPC entered the web services industry with the acquisition of HostPro in 1999. Two years later, they merged with Interland, Inc., a public company based in Atlant
|
https://en.wikipedia.org/wiki/Alex%20Martelli
|
Alex Martelli (born October 5, 1955) is an Italian computer engineer and Fellow of the Python Software Foundation. Since early 2005 he has worked for Google, Inc. in Mountain View, California, for the first few years as "Über Tech Lead," then as "Senior Staff Engineer," and currently in charge of "long tail" community support for Google Cloud Platform.
He holds a Laurea in Electrical Engineering from Bologna University (1980); he is the author of Python in a Nutshell (recently out in a fourth edition, which Martelli wrote with three co-authors), co-editor of the Python Cookbook's first two editions, and has written other (mostly Python-related) materials. Martelli won the 2002 Activators' Choice Award, and the 2006 Frank Willison award for outstanding contributions to the Python community.
Before joining Google, Martelli spent a year designing computer chips with Texas Instruments; eight years with IBM Research, gradually shifting from hardware to software, and winning three Outstanding Technical Achievement Awards; 12 as Senior Software Consultant at think3, Inc., developing libraries, network protocols, GUI engines, event frameworks, and web access frontends; and three more as a freelance consultant, working mostly for Open End AB, a Python-centered software house (formerly known as Strakt AB) located in Gothenburg, Sweden.
He has taught courses on programming, development methods, object-oriented design, cloud computing, and numerical computing, at Ferrara University and other schools. Martelli was also the keynote speaker for the 2008 SciPy Conference, and various editions of Pycon APAC and Pycon Italia conferences.
Bibliography
|
https://en.wikipedia.org/wiki/Fermentation
|
Fermentation is a metabolic process that produces chemical changes in organic substances through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage. The science of fermentation is known as zymology.
In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically.
Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans.
Industrial fermentation is a broader term used for the process of applying microbes for the large-scale production of chemicals, biofuels, enzymes, proteins and pharmaceuticals.
Definitions and etymology
Below are some definitions of fermentation ranging from informal, general usages to more scientific definitions.
Preservation methods for food via microorganisms (general use).
Any large-scale microbial process occurring with or without air (common definition used in industry, also known as industrial fermentation).
Any process that produces alcoholic beverages or acidic dairy products (general use).
Any energy-releasing metabolic process that takes place only under anaerobic conditions (somewhat scientific).
Any metabolic process that releases energy from a sugar or other organic molecule, does not require oxygen or an electron transport system, and uses an organic molecule as the final electron acceptor (most scientific).
The word "ferment" is derived from the
|
https://en.wikipedia.org/wiki/Transcendental%20equation
|
In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, an equation in which at least one side describes a transcendental function.
Examples include: $x = e^{-x}$, $x = \cos x$, and $2^{x} = x^{2}$.
A transcendental equation need not be an equation between elementary functions, although most published examples are.
In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation.
Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations.
In general, however, only approximate solutions can be found.
Transformation into an algebraic equation
Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved.
Exponential equations
If the unknown, say x, occurs only in exponents:
applying the natural logarithm to both sides may yield an algebraic equation, e.g. $2^{x+1} = 3^{x}$ transforms to $(x+1)\ln 2 = x \ln 3$, which simplifies to $x(\ln 3 - \ln 2) = \ln 2$, which has the solution $x = \frac{\ln 2}{\ln 3 - \ln 2}.$
This will not work if addition occurs "at the base line", as in $2^{x} = 3^{x} + 1.$
if all "base constants" can be written as integer or rational powers of some number q, then substituting y=qx may succeed, e.g.
transforms, using y=2x, to which has the solutions , hence is the only real solution.
This will not work if squares or higher powers of x occur in an exponent, or if the "base constants" do not "share" a common q.
sometimes, substituting $y = xe^{x}$ may obtain an algebraic equation; after the solutions for $y$ are known, those for $x$ can be obtained by applying the Lambert W function, e.g.: $x^{2}e^{2x} = 4$ transforms to $y^{2} = 4$, which has the solutions $y = \pm 2$; the branch $xe^{x} = -2$ has no real solution, hence $x = W_{0}(2)$ is the only real solution, where $W_{0}$ denotes the real-valued principal branch of the multivalued Lambert W function.
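Both tricks can be reproduced with a computer algebra system. A small SymPy sketch, using the illustrative equations above:

import sympy as sp

x, y = sp.symbols("x y", real=True)

# substitution y = 2**x turns 4**x + 2**x - 2 = 0 into a quadratic in y
roots_y = sp.solve(sp.Eq(y**2 + y - 2, 0), y)        # [-2, 1]
real_x = [sp.log(r, 2) for r in roots_y if r > 0]    # only y > 0 maps back to a real x
print(real_x)                                        # [0]

# Lambert W: x*exp(x) = 2 has the closed-form solution x = W(2)
print(sp.solve(sp.Eq(x * sp.exp(x), 2), x))          # [LambertW(2)]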
Logarithmic equations
If the unknown x occurs only in arguments of a logarithm function:
applying exponentiation to both sides may yield an algebraic equation, e.g. $\log_{2} x + \log_{2}(x-1) = 1$ transforms, using exponentiation to base $2$, to $x(x-1) = 2$, which has the solutions $x = 2$ and $x = -1$; since the logarithms require $x > 1$, $x = 2$ is the only solution.
|
https://en.wikipedia.org/wiki/Fermentation%20in%20food%20processing
|
In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—under anaerobic (oxygen-free) conditions. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy.
The term "fermentation" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt.
Other widely consumed fermented foods include vinegar, olives, and cheese. More localised foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish.
History and prehistory
Brewing and winemaking
Natural fermentation precedes human history. Since ancient times, humans have exploited the fermentation process. The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC.
Discovery of the role of yeast
The French chemist Louis Pasteur founded zymology, when in 1856 he connected yeast to fermentation.
When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation wa
|
https://en.wikipedia.org/wiki/Viral%20entry
|
Viral entry is the earliest stage of infection in the viral life cycle, as the virus comes into contact with the host cell and introduces viral material into the cell. The major steps involved in viral entry are shown below. Despite the variation among viruses, there are several shared generalities concerning viral entry.
Reducing cellular proximity
How a virus enters a cell is different depending on the type of virus it is. A virus with a nonenveloped capsid enters the cell by attaching to the attachment factor located on a host cell. It then enters the cell by endocytosis or by making a hole in the membrane of the host cell and inserting its viral genome.
Cell entry by enveloped viruses is more complicated. Enveloped viruses enter the cell by attaching to an attachment factor located on the surface of the host cell. They then enter by endocytosis or a direct membrane fusion event. The fusion event is when the virus membrane and the host cell membrane fuse together allowing a virus to enter. It does this by attachment – or adsorption – onto a susceptible cell; a cell which holds a receptor that the virus can bind to. The receptors on the viral envelope effectively become connected to complementary receptors on the cell membrane. This attachment causes the two membranes to remain in mutual proximity, favoring further interactions between surface proteins. This is also the first requisite that must be satisfied before a cell can become infected. Satisfaction of this requisite makes the cell susceptible. Viruses that exhibit this behavior include many enveloped viruses such as HIV and herpes simplex virus.
These basic ideas extend to viruses that infect bacteria, known as bacteriophages (or simply phages). Typical phages have long tails used to attach to receptors on the bacterial surface and inject their viral genome.
Overview
Prior to entry, a virus must attach to a host cell. Attachment is achieved when specific proteins on the viral capsid or viral env
|
https://en.wikipedia.org/wiki/Biological%20engineering
|
Biological engineering or bioengineering is the application of principles of biology and the tools of engineering to create usable, tangible, economically viable products. Biological engineering employs knowledge and expertise from a number of pure and applied sciences, such as mass and heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable energy, ecological engineering, agricultural engineering, process engineering and catalysis, and other areas that improve the living standards of societies.
Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Bioengineering overlaps substantially with biotechnology and the biomedical sciences in a way analogous to how various other forms of engineering and technology relate to various other sciences (such as aerospace engineering and other space technology to kinetics and astrophysics).
In general, biological engineers attempt to either mimic biological systems to create products, or to modify and control biological systems. Working with doctors, clinicians, and researchers, bioengineers use traditional engineering principles and techniques to address biological processes, including ways to replace, augment, sustain, or predict chemical and mechanical processes.
History
Biological engineering is a science-based discipline founded upon the biological sciences in the same way that chemical engineering, electrical engineering, and mechanical engineering can be based upon chemistry, electricity and magnetism, and classical mechanics, respectively.
Before WWII, biological engineering had begun being recognized as a
|
https://en.wikipedia.org/wiki/Acute%20muscle%20soreness
|
Acute muscle soreness (AMS) is the pain felt in muscles during, and immediately (up to 24 hours) after, strenuous physical exercise. The pain appears within a minute of contracting the muscle and it will disappear within two or three minutes, or up to several hours, after relaxing it.
The following causes have been proposed for acute muscle soreness:
Accumulation of chemical end products of exercise in muscle cells such as lactic acid and H+
Muscle fatigue (the muscle tires and cannot contract any more)
There are no modern effective treatments for AMS.
Cause
Muscle soreness can stem from strain on the sarcomere, the muscle's functional unit. Nerve-driven activation of the unit accumulates calcium, which further degrades sarcomeres. This degradation initiates the body's inflammatory response, and the damaged tissue has to be supported by the surrounding connective tissues. The inflammatory cells and cytokines stimulate the pain receptors that cause the acute pain associated with AMS. Repair of the sarcomere and the surrounding connective tissue leads to delayed onset muscle soreness, which peaks between 24 and 72 hours after exercise.
AMS may also be caused by cramping following strenuous exercise, which has been theorized to be caused by two pathways:
Dehydration
Electrolyte imbalance
Dehydration
The dehydration theory states that the extracellular fluid (ECF) compartment contracts as a result of excessive sweating, and its volume decreases to the point where the muscles cramp until fluid can refill the compartment. Excessive sweating also underlies the electrolyte imbalance theory: sweating disturbs the body's electrolyte balance, which excites motor neurons and produces spontaneous discharge.
The feeling of soreness can also be attributed to a lack of contraction from the muscle, which can lead to overexertion of the muscle. The decrease in contraction has been theorized to have been caused by the high level of concentr
|
https://en.wikipedia.org/wiki/Binding%20potential
|
In pharmacokinetics and receptor-ligand kinetics the binding potential (BP) is a combined measure of the density of "available" neuroreceptors and the affinity of a drug to that neuroreceptor.
Description
Consider a ligand-receptor binding system. Ligand with a concentration $L$ associates with a receptor of concentration or availability $R$ to form a ligand-receptor complex with concentration $RL$. The binding potential is then the ratio of ligand-receptor complex to free ligand at equilibrium and in the limit of $L$ tending to 0, and is given the symbol $BP$:
$BP = \lim_{L \to 0} \frac{RL}{L}$
This quantity, originally defined by Mintun, describes the capacity of a receptor to bind ligand. It is a limit ($L \ll K_i$) of the general receptor association equation:
$\frac{RL}{L} = \frac{B_{\max}}{K_i + L}$
and is thus also equivalent to:
$BP = \frac{B_{\max}}{K_i}$
These equations apply equally whether measuring the total receptor density or the residual receptor density available after binding to a second ligand (the availability).
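A worked instance of the limit, with illustrative numbers rather than values from any particular study:
$BP = \lim_{L \to 0} \frac{B_{\max}}{K_i + L} = \frac{B_{\max}}{K_i}, \qquad B_{\max} = 100\ \mathrm{nM},\ K_i = 10\ \mathrm{nM} \;\Rightarrow\; BP = 10,$
a dimensionless product of receptor density and affinity ($1/K_i$).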
BP in Positron Emission Tomography
BP is a pivotal measure in the use of positron emission tomography (PET) to measure the density of "available" receptors, e.g. to assess the occupancy by drugs or to characterize neuropsychiatric diseases (yet, one should keep in mind that binding potential is a combined measure that depends on receptor density as well as on affinity). An overview of the related methodology is e.g. given in Laruelle et al. (2002).
Estimating BP with PET usually requires that a reference tissue is available. A reference tissue has negligible receptor density and its distribution volume should be the same as the distribution volume in the target region if all receptors were blocked. Although the BP can be measured in a relatively unbiased way by measuring the whole time course of labelled ligand association and blood radioactivity, this is practically not always necessary. Two other common measures have been derived, which involve assumptions, but result in measures that should correlate with BP: $BP_\mathrm{ND}$ and $BP_\mathrm{P}$.
: The "specific to nonspecific equilibrium p
|
https://en.wikipedia.org/wiki/Brush%20border
|
A brush border (striated border or brush border membrane) is the microvillus-covered surface of simple cuboidal and simple columnar epithelium found in different parts of the body. Microvilli are approximately 100 nanometers in diameter and their length varies from approximately 100 to 2,000 nanometers. Because individual microvilli are so small and are tightly packed in the brush border, individual microvilli can only be resolved using electron microscopes; with a light microscope they can usually only be seen collectively as a fuzzy fringe at the surface of the epithelium. This fuzzy appearance gave rise to the term brush border, as early anatomists noted that this structure appeared very much like the bristles of a paintbrush.
Brush border cells are found mainly in the following organs:
The small intestine tract: This is where absorption takes place. The brush borders of the intestinal lining are the site of terminal carbohydrate digestions. The microvilli that constitute the brush border have enzymes for this final part of digestion anchored into their apical plasma membrane as integral membrane proteins. These enzymes are found near to the transporters that will then allow absorption of the digested nutrients.
The kidney: Here the brush border is useful in distinguishing the proximal tubule (which possesses the brush border) from the distal convoluted tubule (which does not).
The large intestine also has microvilli on the surface of its enterocytes.
The brush border morphology increases a cell's surface area, a trait which is especially useful in absorptive cells. Cells that absorb substances need a large surface area in contact with the substance to be efficient.
In intestinal cells, the microvilli are referred to as brush border and are protoplasmic extensions contrary to villi which are submucosal folds, while in the kidneys, microvilli are referred to as striated border.
See also
List of distinct cell types in the adult human body
|
https://en.wikipedia.org/wiki/22q13%20deletion%20syndrome
|
22q13 deletion syndrome, also known as Phelan–McDermid syndrome (PMS), is a genetic disorder caused by deletions or rearrangements on the q terminal end (long arm) of chromosome 22. Any abnormal genetic variation in the q13 region that presents with significant manifestations (phenotype) typical of a terminal deletion may be diagnosed as 22q13 deletion syndrome. There is disagreement among researchers as to the exact definition of 22q13 deletion syndrome. The Developmental Synaptopathies Consortium defines PMS as being caused by SHANK3 mutations, a definition that appears to exclude terminal deletions. The requirement to include SHANK3 in the definition is supported by many but not by those who first described 22q13 deletion syndrome.
Prototypical terminal deletion of 22q13 can be uncovered by karyotype analysis, but many terminal and interstitial deletions are too small. The availability of DNA microarray technology for revealing multiple genetic problems simultaneously has been the diagnostic tool of choice. The falling cost for the whole exome sequencing and, eventually, whole genome sequencing, may replace DNA microarray technology for candidate evaluation. However, fluorescence in situ hybridization (FISH) tests remain valuable for diagnosing cases of mosaicism (mosaic genetics) and chromosomal rearrangements (e.g., ring chromosome, unbalanced chromosomal translocation). Although early researchers sought a monogenic (single gene genetic disorder) explanation, recent studies have not supported that hypothesis (see Etiology).
Signs and symptoms
Affected individuals present with a broad array of medical and behavioral manifestations (tables 1 and 2). Patients are consistently characterized by global developmental delay, intellectual disability, speech abnormalities, ASD-like behaviors, hypotonia and mild dysmorphic features. Table 1 summarizes the dysmorphic and medical conditions that have been reported in individuals with PMS. Table 2 summarizes the psychiatri
|
https://en.wikipedia.org/wiki/Submental%20triangle
|
The submental triangle (or suprahyoid triangle) is a division of the anterior triangle of the neck.
Boundaries
Its boundaries are:
Lateral (away from the midline), formed by the anterior belly of the digastricus
Medial (towards the midline), formed by the midline of the neck between the mandible and the hyoid bone
Inferior (below), formed by the body of the hyoid bone
Floor is formed by the mylohyoideus
Roof is formed by Investing layer of deep cervical fascia
Contents
It contains the submental lymph nodes (three or four in number), the submental veins, and the commencement of the anterior jugular veins.
(The contents of the triangle actually lie in the superficial fascia over the roof of the submental triangle.)
Additional images
See also
Anterior triangle of the neck
Submental space
|
https://en.wikipedia.org/wiki/Submandibular%20triangle
|
The submandibular triangle (or submaxillary or digastric triangle) corresponds to the region of the neck immediately beneath the body of the mandible.
Boundaries and coverings
It is bounded:
above, by the lower border of the body of the mandible, and a line drawn from its angle to the mastoid process;
below, by the posterior belly of the Digastricus; in front, by the anterior belly of the Digastricus.
It is covered by the integument, superficial fascia, Platysma, and deep fascia, ramifying in which are branches of the facial nerve and ascending filaments of the cutaneous cervical nerve.
Its floor is formed by the Mylohyoideus anteriorly, and by the hyoglossus posteriorly.
Triangles
Beclard Triangle
Lesser Triangle
Pirogoff Triangle
Divisions
It is divided into an anterior and a posterior part by the stylomandibular ligament.
Anterior part
The anterior part contains the submandibular gland, superficial to which is the anterior facial vein, while imbedded in the gland is the facial artery and its glandular branches.
Beneath the gland, on the surface of the Mylohyoideus, are the submental artery and the mylohyoid artery and nerve.
Posterior part
The posterior part of this triangle contains the external carotid artery, ascending deeply in the substance of the parotid gland.
This vessel lies here in front of, and superficial to, the internal carotid, being crossed by the facial nerve, and gives off in its course the posterior auricular, superficial temporal, and internal maxillary branches: more deeply are the internal carotid, the internal jugular vein, and the vagus nerve, separated from the external carotid by the Styloglossus and Stylopharyngeus, and the hypoglossal nerve
See also
Anterior triangle of the neck
Submandibular space
Additional images
Summary of contents
The following summarizes the important structures found in the submandibular triangle:
1. The external and internal carotid artery
2. The internal jugular vein
3. The deep cervical ly
|
https://en.wikipedia.org/wiki/Carotid%20triangle
|
The carotid triangle (or superior carotid triangle) is a portion of the anterior triangle of the neck.
Anatomy
Boundaries
It is bounded:
Posteriorly by (the anterior border of) the sternocleidomastoid muscle,
Anteroinferiorly by (the superior belly of) the omohyoid muscle.
Superiorly by (the posterior belly of) the digastric muscle.
Roof
The roof is formed by:
Integument,
Superficial fascia,
Platysma,
Deep fascia.
Floor
The floor is formed by (parts of) the:
Thyrohyoid membrane,
Hyoglossus,
Constrictor pharyngis medius and constrictor pharyngis inferior muscles.
Contents
Arteries
Internal carotid artery
External carotid artery and its branches (all except the posterior auricular artery):
Superior thyroid artery,
Ascending pharyngeal artery,
Lingual artery,
Facial artery,
Occipital artery.
Veins
Internal jugular vein and its tributaries (corresponding to the branches of the external carotid artery):
Superior thyroid vein,
Lingual veins,
Common facial vein (draining into the internal jugular vein)
Ascending pharyngeal vein,
Occipital vein (sometimes).
Nerves
Superficial to the carotid sheath lies the hypoglossal nerve, and ansa cervicalis of the cervical plexus.
The hypoglossal nerve crosses both the internal and external carotids, curving around the origin of the occipital artery.
Within the sheath, between the artery and vein, and behind both, is the vagus nerve; behind the sheath, the sympathetic trunk.
On the lateral side of the vessels, the accessory nerve runs for a short distance before it pierces the Sternocleidomastoideus; and on the medial side of the external carotid, just below the hyoid bone, the internal branch of the superior laryngeal nerve may be seen; and, still more inferiorly, the external branch of the same nerve.
Other
The superior portion of the larynx and inferior portion of the pharynx are also found in the anterior part of this space.
See also
Anterior triangle of the neck
Farabeuf's trian
|
https://en.wikipedia.org/wiki/Muscular%20triangle
|
The inferior carotid triangle (or muscular triangle), is bounded, in front, by the median line of the neck from the hyoid bone to the sternum; behind, by the anterior margin of the sternocleidomastoid; above, by the superior belly of the omohyoid.
It is covered by the integument, superficial fascia, platysma, and deep fascia, ramifying in which are some of the branches of the supraclavicular nerves.
Beneath these superficial structures are the sternohyoid and sternothyroid, which, together with the anterior margin of the sternocleidomastoid, conceal the lower part of the common carotid artery.
This vessel is enclosed within its sheath, together with the internal jugular vein and vagus nerve; the vein lies lateral to the artery on the right side of the neck, but overlaps it below on the left side; the nerve lies between the artery and vein, on a plane posterior to both.
In front of the sheath are a few descending filaments from the ansa cervicalis; behind the sheath are the inferior thyroid artery, the recurrent nerve, and the sympathetic trunk; and on its medial side, the esophagus, the trachea, the thyroid gland, and the lower part of the larynx.
By cutting into the upper part of this space, and slightly displacing the sternocleidomastoid, the common carotid artery may be tied below the omohyoid.
Gallery
See also
Anterior triangle of the neck
|
https://en.wikipedia.org/wiki/VistaPro
|
VistaPro is a 3D scenery generator for the Amiga, Macintosh, MS-DOS, and Microsoft Windows. It was written by John Hinkley as the follow-up to the initial version, Vista. The about box describes it as "a 3-D landscape generator and projector capable of accurately displaying real-world and fractal landscapes." It was published by Virtual Reality Labs and developed by Hypercube Engineering. The latest versions were published and developed by Monkey Byte Development.
Graphics Generation
Vista operates similarly to a ray tracer in that light paths are generated. The user specifies light sources and camera angles. The ground may be colored to create different ground styles. Vista has water, tree and cloud effects, making some images almost photorealistic. The ground itself may either be generated from a random (or user-supplied) number, or it may use DEM landscape files for real-world views, the software having come with a number of maps of Mars and Earth.
Vista can load and save output images in PCX, BMP, JPG and Targa file formats. PCX files can also be imported as elevations and ground colors to allow third-party creation of landscapes in other image editors.
Trees can be placed on landscapes as either 2D or 3D objects. In 2D, the trees always face the camera and are fast to generate. 3D trees are created using fractals and can be given a variable bending of the branches to make them look more complicated.
Releases
For Amiga:
Vista v1.00
Vista v1.20 (1990, reviewed by Amiga Format issue 15, released on cover disk of Amiga Format issue 33)
Vista v1.21 (PAL, 1991)
VistaPro v1.0 (released in 1991)
VistaPro v1.022 PAL (1991-07-09)
VistaPro v2.02 (released in 1992)
VistaPro v3.0 (released in 1993 with AGA support)
VistaLite v3.01 (version for smaller memory Amiga computers)
VistaPro v3.3b
VistaPro v3.04b
VistaPro v3.05
VistaPro v3.10o (unreleased)
For MS-DOS and Microsoft Windows:
VistaPro v1.0 (released in 1992, MS-DOS)
VistaPro v3.0 (released in 1993, MS-DOS)
Vist
|
https://en.wikipedia.org/wiki/Clock%20recovery
|
In serial communication of digital data, clock recovery is the process of extracting timing information from a serial data stream itself, allowing the timing of the data in the stream to be accurately determined without separate clock information. It is widely used in data communications; the similar concept used in analog systems like color television is known as carrier recovery.
Basic concept
Serial data is normally sent as a series of pulses with well-defined timing constraints. This presents a problem for the receiving side; if their own local clock is not precisely synchronized with the transmitter, they may sample the signal at the wrong time and thereby decode the signal incorrectly. This can be addressed with extremely accurate and stable clocks, like atomic clocks, but these are expensive and complex. More common low-cost clock systems, like quartz oscillators, are accurate enough for this task over short periods of time, but over a period of minutes or hours the drift in these systems will make timing too inaccurate for most tasks.
Clock recovery addresses this problem by embedding clock information into the data stream, allowing the transmitter's clock timing to be determined. This normally takes the form of short signals inserted into the data that can be easily seen and then used in a phase-locked loop or similar adjustable oscillator to produce a local clock signal that can be used to time the signal in the periods between the clock signals. The advantage of this approach is that a small drift in the transmitter's clock can be compensated as the receiver will always match it, within limits.
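To make the loop concrete, here is a minimal sketch (an assumption-laden toy, not any specific receiver design): it decodes a bipolar NRZ waveform oversampled at a known nominal rate, and each detected transition nudges the software "local clock", playing the role the phase-locked loop plays in hardware.

import numpy as np

def recover_clock(samples, sps, gain=0.1):
    """Toy clock recovery for a bipolar NRZ waveform with a nominal
    rate of sps samples per bit: sample mid-bit, then use any signal
    transition found in the surrounding window to nudge the sampling
    instant, like a crude first-order phase-locked loop."""
    t = sps / 2.0                      # assume the first bit starts at sample 0
    bits = []
    while t + sps / 2 < len(samples):
        bits.append(1 if samples[int(t)] > 0 else 0)
        window = np.sign(samples[int(t - sps / 2): int(t + sps / 2)])
        crossings = np.flatnonzero(np.diff(window))
        if crossings.size:
            # signed timing error: a transition should sit at the window
            # start (the predicted bit boundary); offsets near sps wrap
            # back to small negative errors
            err = (crossings[0] + sps / 2) % sps - sps / 2
            t += sps + gain * err
        else:
            t += sps                   # free-run when the data has no transition
    return bits

# self-check: a clean, aligned waveform decodes exactly
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 50)
wave = np.repeat(2 * bits - 1, 8).astype(float)
assert recover_clock(wave, sps=8) == list(bits)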
The term is most often used to describe digital data transmission, in which case the entire signal is suitable for clock recovery. For instance, in the case of early 300 bit/s modems, the timing of the signal was recovered from the transitions between the two frequencies used to represent binary 1 and 0. As some data might not have any transitions, a long string
|
https://en.wikipedia.org/wiki/Subclavian%20triangle
|
The subclavian triangle (or supraclavicular triangle, omoclavicular triangle, Ho's triangle), the smaller division of the posterior triangle, is bounded, above, by the inferior belly of the omohyoideus; below, by the clavicle; its base is formed by the posterior border of the sternocleidomastoideus.
Its floor is formed by the first rib with the first digitation of the serratus anterior.
The size of the subclavian triangle varies with the extent of attachment of the clavicular portions of the Sternocleidomastoideus and Trapezius, and also with the height at which the Omohyoideus crosses the neck.
Its height also varies according to the position of the arm, being diminished by raising the limb, on account of the ascent of the clavicle, and increased by drawing the arm downward, when that bone is depressed.
This space is covered by the integument, the superficial and deep fasciæ and the platysma, and crossed by the supraclavicular nerves.
Just above the level of the clavicle, the third portion of the subclavian artery curves lateralward and downward from the lateral margin of the scalenus anterior, across the first rib, to the axilla, and this is the situation most commonly chosen for ligaturing the vessel.
Sometimes this vessel rises as high as 4 cm. above the clavicle; occasionally, it passes in front of the Scalenus anterior, or pierces the fibers of that muscle.
The subclavian vein lies behind the clavicle, and is not usually seen in this space; but in some cases it rises as high as the artery, and has even been seen to pass with that vessel behind the Scalenus anterior.
The brachial plexus of nerves lies above the artery, and in close contact with it. Passing transversely behind the clavicle are the transverse scapular vessels; and traversing its upper angle in the same direction, the transverse cervical artery and vein.
The external jugular vein runs vertically downward behind the posterior border of the Sternocleidomastoideus, to terminate in the subcla
|
https://en.wikipedia.org/wiki/Occipital%20triangle
|
The occipital triangle, the larger division of the posterior triangle, is bounded, in front, by the Sternocleidomastoideus; behind, by the Trapezius; below, by the Omohyoideus.
Its floor is formed from above downward by the Splenius capitis, Levator scapulæ, and the Scalenus medius and posterior.
It is covered by the skin, the superficial and deep fasciæ, and by the Platysma below.
The accessory nerve is directed obliquely across the space from the Sternocleidomastoideus, which it pierces, to the under surface of the Trapezius; below, the supraclavicular nerves and the transverse cervical vessels and the upper part of the brachial plexus cross the space.
The roof of this triangle is formed by the cutaneous nerves of cervical plexus and the external jugular vein and platysma muscle.
A chain of lymph glands is also found running along the posterior border of the Sternocleidomastoideus, from the mastoid process to the root of the neck.
Gallery
See also
Posterior triangle of the neck
|
https://en.wikipedia.org/wiki/Orbiting%20Frog%20Otolith
|
The Orbiting Frog Otolith (OFO) was a NASA space program which sent two bullfrogs into orbit on 9 November 1970 for the study of weightlessness. The name, derived through common use, was a functional description of the biological experiment carried by the satellite. Otolith referred to the frog's inner ear balance mechanism.
The Orbiting Frog Otolith Program was a part of the research program of NASA's Office of Advanced Research and Technology (OART). One of the goals of the OART was to study the vestibular system function in space and on Earth. The experiment was designed to study the adaptability of the otolith to sustained weightlessness, to provide information for human spaceflight. The otolith is a structure in the inner ear that is associated with equilibrium control: acceleration with respect to gravity as its primary sensory input.
The Frog Otolith Experiment (FOE) was developed by Torquato Gualtierotti of the University of Milan, Italy, when he was assigned to the Ames Research Center as a resident Research Associate sponsored by the National Academy of Sciences. Originally planned in 1966 to be included on an early Apollo mission, the experiment was deferred when that mission was canceled. In late 1967 authorization was given to orbit the FOE when a supporting spacecraft could be designed. The project, part of NASA's Human Factor Systems program, was officially designated "OFO" in 1968. After a series of delays, OFO was launched into orbit on 9 November 1970.
After the successful OFO-A mission in 1970, interest in the research continued. A project called Vestibular Function Research was initiated in 1975 to fly a vestibular experiment in an Earth-orbiting spacecraft. This flight project was eventually discontinued, but a number of ground studies were conducted. The research has given rise to several very useful offshoots, including the ground-based Vestibular Research Facility located at ARC.
OFO should not be confused with similar acronyms describing
|
https://en.wikipedia.org/wiki/Biholomorphism
|
In the mathematical theory of functions of one or more complex variables, and also in complex algebraic geometry, a biholomorphism or biholomorphic function is a bijective holomorphic function whose inverse is also holomorphic.
Formal definition
Formally, a biholomorphic function is a function $\phi$ defined on an open subset U of the $n$-dimensional complex space $\mathbb{C}^n$ with values in $\mathbb{C}^n$ which is holomorphic and one-to-one, such that its image is an open set $V$ in $\mathbb{C}^n$ and the inverse $\phi^{-1} \colon V \to U$ is also holomorphic. More generally, U and V can be complex manifolds. As in the case of functions of a single complex variable, a sufficient condition for a holomorphic map to be biholomorphic onto its image is that the map is injective, in which case the inverse is also holomorphic (e.g., see Gunning 1990, Theorem I.11 or Corollary E.10 pg. 57).
If there exists a biholomorphism $\phi \colon U \to V$, we say that U and V are biholomorphically equivalent or that they are biholomorphic.
Riemann mapping theorem and generalizations
If $n = 1$, every simply connected open set other than the whole complex plane is biholomorphic to the unit disc (this is the Riemann mapping theorem). The situation is very different in higher dimensions. For example, open unit balls and open unit polydiscs are not biholomorphically equivalent for $n > 1.$ In fact, there does not exist even a proper holomorphic function from one to the other.
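A standard explicit example in one variable is the Cayley transform, a biholomorphism between the upper half-plane and the unit disc:
$\phi(z) = \frac{z - i}{z + i}, \qquad \phi^{-1}(w) = i\,\frac{1 + w}{1 - w},$
both maps being holomorphic on their respective domains.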
Alternative definitions
In the case of maps f : U → C defined on an open subset U of the complex plane C, some authors (e.g., Freitag 2009, Definition IV.4.1) define a conformal map to be an injective map with nonzero derivative, i.e., f′(z) ≠ 0 for every z in U. According to this definition, a map f : U → C is conformal if and only if f : U → f(U) is biholomorphic. Notice that per the definition of biholomorphisms, nothing is assumed about their derivatives, so this equivalence contains the claim that a homeomorphism that is complex differentiable must actually have nonzero derivative everywhere. Other authors (e.g., Conw
|
https://en.wikipedia.org/wiki/Kharitonov%27s%20theorem
|
Kharitonov's theorem is a result used in control theory to assess the stability of a dynamical system when the physical parameters of the system are not known precisely. When the coefficients of the characteristic polynomial are known, the Routh–Hurwitz stability criterion can be used to check if the system is stable (i.e. if all roots have negative real parts). Kharitonov's theorem can be used in the case where the coefficients are only known to be within specified ranges. It provides a test of stability for a so-called interval polynomial, while Routh–Hurwitz is concerned with an ordinary polynomial.
Definition
An interval polynomial is the family of all polynomials
$p(s) = a_0 + a_1 s + a_2 s^2 + \cdots + a_n s^n$
where each coefficient $a_i$ can take any value in the specified intervals
$a_i \in [l_i, u_i].$
It is also assumed that the leading coefficient cannot be zero: $0 \notin [l_n, u_n]$.
Theorem
An interval polynomial is stable (i.e. all members of the family are stable) if and only if the four so-called Kharitonov polynomials
$K_1(s) = l_0 + l_1 s + u_2 s^2 + u_3 s^3 + l_4 s^4 + l_5 s^5 + u_6 s^6 + \cdots$
$K_2(s) = u_0 + u_1 s + l_2 s^2 + l_3 s^3 + u_4 s^4 + u_5 s^5 + l_6 s^6 + \cdots$
$K_3(s) = l_0 + u_1 s + u_2 s^2 + l_3 s^3 + l_4 s^4 + u_5 s^5 + u_6 s^6 + \cdots$
$K_4(s) = u_0 + l_1 s + l_2 s^2 + u_3 s^3 + u_4 s^4 + l_5 s^5 + l_6 s^6 + \cdots$
are stable.
What is somewhat surprising about Kharitonov's result is that although in principle we are testing an infinite number of polynomials for stability, in fact we need to test only four. This we can do using Routh–Hurwitz or any other method. So it only takes four times more work to be informed about the stability of an interval polynomial than it takes to test one ordinary polynomial for stability.
Kharitonov's theorem is useful in the field of robust control, which seeks to design systems that will work well despite uncertainties in component behavior due to measurement errors, changes in operating conditions, equipment wear and so on.
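The four-polynomial test is easy to mechanise. Below is an illustrative sketch (the interval bounds are invented for the example, and stability is checked numerically via polynomial roots rather than a Routh–Hurwitz table):

import numpy as np

def kharitonov_polys(lo, hi):
    """Build the four Kharitonov polynomials for an interval polynomial
    p(s) = a_0 + a_1 s + ... + a_n s^n with a_i in [lo[i], hi[i]].
    Coefficients are listed from a_0 upward."""
    n = len(lo)
    patterns = {1: "llhh", 2: "hhll", 3: "lhhl", 4: "hllh"}  # repeating bound choice
    return [
        [lo[i] if patterns[k][i % 4] == "l" else hi[i] for i in range(n)]
        for k in (1, 2, 3, 4)
    ]

def is_hurwitz(coeffs_ascending):
    """Stable iff every root has negative real part (numerical check)."""
    roots = np.roots(coeffs_ascending[::-1])   # np.roots wants highest power first
    return bool(np.all(roots.real < 0))

# hypothetical interval polynomial: s^3 + a2 s^2 + a1 s + a0
lo = [1.0, 2.0, 3.0, 1.0]   # lower bounds for a0..a3
hi = [2.0, 3.0, 4.0, 1.0]   # upper bounds for a0..a3
stable = all(is_hurwitz(p) for p in kharitonov_polys(lo, hi))
print("interval polynomial stable:", stable)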
|
https://en.wikipedia.org/wiki/Vapour%20pressure%20of%20water
|
The vapor pressure of water is the pressure exerted by molecules of water vapor in gaseous form (whether pure or in a mixture with other gases such as air). The saturation vapor pressure is the pressure at which water vapor is in thermodynamic equilibrium with its condensed state. At pressures higher than vapor pressure, water would condense, while at lower pressures it would evaporate or sublimate. The saturation vapor pressure of water increases with increasing temperature and can be determined with the Clausius–Clapeyron relation. The boiling point of water is the temperature at which the saturated vapor pressure equals the ambient pressure.
Calculations of the (saturation) vapor pressure of water are commonly used in meteorology. The temperature-vapor pressure relation inversely describes the relation between the boiling point of water and the pressure. This is relevant to both pressure cooking and cooking at high altitudes. An understanding of vapor pressure is also relevant in explaining high altitude breathing and cavitation.
Approximation formulas
There are many published approximations for calculating saturated vapor pressure over water and over ice; those compared below include, in approximate order of increasing accuracy, Eq 1, the Antoine, Magnus, Tetens and Buck equations, and the Goff-Gratch equation.
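As a minimal sketch, here are Python versions of two of the formulations compared in the table below, the Tetens and Buck equations for saturation vapor pressure over liquid water (coefficients as commonly published; input in °C, output in kPa):

    import math

    def tetens(t_c):
        # Tetens approximation over liquid water (kPa, t_c in degrees C).
        return 0.61078 * math.exp(17.27 * t_c / (t_c + 237.3))

    def buck(t_c):
        # Buck (1996) approximation over liquid water (kPa, t_c in degrees C).
        return 0.61121 * math.exp((18.678 - t_c / 234.5) * (t_c / (257.14 + t_c)))

    for t in (0, 20, 35):
        print(f"{t:3d} C  Tetens {tetens(t):.4f} kPa  Buck {buck(t):.4f} kPa")

At 0 °C these give about 0.6108 and 0.6112 kPa respectively, matching the corresponding table entries below.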
Accuracy of different formulations
Here is a comparison of the accuracies of these different explicit formulations, showing saturation vapor pressures for liquid water in kPa, calculated at six temperatures with their percentage error from the table values of Lide (2005):
{| class="wikitable"
|- align="center"
! T (°C) !! P (Lide Table) !! P (Eq 1) !! P (Antoine) !! P (Magnus) !! P (Tetens) !! P (Buck) !! P (Goff-Gratch)
|- align="center"
| 0 ||0.6113||0.6593 (+7.85%)||0.6056 (-0.93%)||0.6109 (-0.06%)||0.6108 (-0.09%)||0.6112 (-0.01%)||0.6089 (-0.40%)
|- align="center"
| 20 ||2.3388||2.3755 (+1.57%) ||2.3296 (-0.39%) ||2.3334 (-0.23%)||2.3382 (+0.05%)||2.3383 (-0.02%)||2.3355 (-0.14%)
|- align="center"
| 35 ||5.6267||5.5696 (-1.01%
|
https://en.wikipedia.org/wiki/Kevin%20Lano
|
Kevin C. Lano (born 1963) is a British computer scientist.
Life and work
Kevin Lano studied at the University of Reading, attaining a first class degree in Mathematics and Computer Science, and the University of Bristol where he completed his doctorate. He was an originator of formal object-oriented techniques (Z++), and developed a combination of UML and formal methods in a number of papers and books. He was one of the founders of the Precise UML group, who influenced the definition of UML 2.0.
Lano published the book Advanced Systems Design with Java, UML and MDA (Butterworth-Heinemann) in 2005. He is also the editor of UML 2 Semantics and Applications, published by Wiley in October 2009, among a number of computer science books.
Lano was formerly a Research Officer at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science). He is a reader at the Department of Informatics at King's College London.
In 2008, Lano and his co-authors Andy Evans, Robert France, and Bernard Rumpe, were awarded the Ten Year Most Influential Paper Award at the MODELS 2008 Conference on Model Driven Engineering Languages and Systems for the 1998 paper "The UML as a Formal Modeling Notation".
Selected publications
Books
Reverse Engineering and Software Maintenance (McGraw-Hill, 1993)
Object-oriented Specification Case Studies (Prentice Hall, 1993)
Formal Object-oriented Development (Springer, 1995)
The B Language and Method: A Guide to Practical Formal Development (Springer, 1996)
Software Design in Java 2 (Palgrave, 2002)
UML 2 Semantics and Applications (Wiley, 2009), editor
Model-Driven Development using UML and Java (Cengage, 2009)
Agile MBD using UML-RSDS (Taylor & Francis, 2016)
Financial Software Engineering (Springer, 2019), with Howard Haughton
|
https://en.wikipedia.org/wiki/Gerontechnology
|
Gerontechnology, also called gerotechnology, is an inter- and multidisciplinary academic and professional field combining gerontology and technology. Sustainability of an aging society depends upon our effectiveness in creating technological environments, including assistive technology and inclusive design, for innovative and independent living and social participation of older adults in any state of health, comfort and safety. In short, gerontechnology concerns matching technological environments to health, housing, mobility, communication, leisure and work of older people. Gerontechnology is most frequently identified as a subset of HealthTech and is more commonly referred to as AgeTech in Europe and the United States. Research outcomes form the basis for designers, builders, engineers, manufacturers, and those in the health professions (nursing, medicine, gerontology, geriatrics, environmental psychology, developmental psychology, etc.), to provide an optimum living environment for the widest range of ages.
Description
Gerontechnology is considered an adjunct to the promotion of human health and well-being. It pertains to both human development and aging with the aim to compress morbidity and to increase vitality and quality of life throughout the lifespan. It creates solutions to extend the working phase in society by maximizing the vital and productive years in the later years of life, which consequently reduces the cost of care.
The overall framework of gerontechnology may be seen as a matrix of domains of human activity: (1) health & self-esteem, housing & activities of daily living, communication & governance, mobility & transport, work & leisure, as well as (2) technology interventions or impact levels (enhancement & satisfaction, prevention & engagement, compensation & assistance, care and care organisation). Underpinning all these elements are generic and applied evidence-based research. Such research supports the development of products and services.
|
https://en.wikipedia.org/wiki/LG%20Display
|
LG Display (Korean: LG 디스플레이) is one of the world's largest manufacturers and suppliers of thin-film transistor liquid crystal display (TFT-LCD) panels, OLEDs and flexible displays. LG Display is headquartered in Seoul, South Korea, and currently operates nine fabrication facilities and seven back-end assembly facilities in Korea, China, Poland and Mexico.
LG Display has manufactured displays used in products such as the iPhone 14 Pro and Sony's OLED TVs.
History
LG Display was originally formed as a joint venture by the Korean electronics company LG Electronics and the Dutch company Philips in 1999 to manufacture active matrix liquid crystal displays (LCDs) and was formerly known as LG.Philips LCD, but Philips sold off all its shares in late 2008. Both companies also had another joint venture, called LG.Philips Displays, dedicated to manufacturing cathode ray tubes, deflection yokes, and related materials such as glass and phosphors.
On 12 December 2007, LG.Philips LCD announced its plan to change its corporate name to LG Display upon receiving approval at the company's annual general meeting of shareholders on 29 February 2008. The company claimed the name change reflected changes following the reduction of Philips' equity stake.
The company has eight manufacturing plants in Gumi and Paju, South Korea. It also has a module assembly plant in Nanjing and Guangzhou in China and Wroclaw in Poland.
LG Display became an independent company in July 2004 when it was concurrently listed on the New York Stock Exchange and the South Korean Stock Exchange.
They are one of the main licensed manufacturers of the more color-accurate IPS panels used by Dell, NEC, ASUS, Apple (including iMacs, iPads, iPhones, iPod Touches) and others, which were developed by Hitachi.
LCD price fixing
In December 2010, the EU fined LG Display €215 million for its part in an LCD price fixing scheme. Other companies were fined for a combined total of €648.9 million, including Chimei Innolux,
|
https://en.wikipedia.org/wiki/Viremia
|
Viremia is a medical condition where viruses enter the bloodstream and hence have access to the rest of the body. It is similar to bacteremia, a condition where bacteria enter the bloodstream. The name comes from combining the word "virus" with the Greek word for "blood" (haima). Primary viremia usually lasts for 4 to 5 days.
Primary versus secondary
Primary viremia refers to the initial spread of virus in the blood from the first site of infection.
Secondary viremia occurs when primary viremia has resulted in infection of additional tissues via bloodstream, in which the virus has replicated and once more entered the circulation.
Usually secondary viremia results in higher viral shedding and viral loads within the bloodstream due to the possibility that the virus is able to reach its natural host cell from the bloodstream and replicate more efficiently than the initial site. An excellent example to profile this distinction is the rabies virus. Usually the virus will replicate briefly within the first site of infection, within the muscle tissues. Viral replication then leads to viremia and the virus spreads to its secondary site of infection, the central nervous system (CNS). Upon infection of the CNS, secondary viremia results and symptoms usually begin. Vaccination at this point is useless, as the spread to the brain is unstoppable. Vaccination must be done before secondary viremia takes place for the individual to avoid brain damage or death.
Active versus passive
Active viremia is caused by the replication of viruses which results in viruses being introduced into the bloodstream. Examples include the measles, in which primary viremia occurs in the epithelial lining of the respiratory tract before replicating and budding out of the cell basal layer (viral shedding), resulting in viruses budding into capillaries and blood vessels.
Passive viremia is the introduction of viruses in the bloodstream without the need of active viral replication. Exampl
|
https://en.wikipedia.org/wiki/Correlates%20of%20immunity
|
Correlates of immunity or correlates of protection to a virus or other infectious pathogen are measurable signs that a person (or other potential host) is immune, in the sense of being protected against becoming infected and/or developing disease.
For many viruses, antibodies and especially neutralizing antibodies serve as a correlate of immunity. Pregnant women, for example, are routinely screened in the UK for antibodies to rubella, a virus that can cause serious congenital abnormalities in their children, in order to confirm their immunity. In contrast, for HIV, the simple presence of antibodies is not a correlate of immunity/protection, since infected individuals develop antibodies without protection against the disease.
The fact that the correlates of immunity/protection remain unclear is a significant barrier to HIV vaccine research. There is evidence that some highly exposed individuals can develop resistance to HIV infection, suggesting that immunity and therefore a vaccine is possible. However, without knowing the correlates of immunity, scientists cannot know exactly what sort of immune response a vaccine would need to stimulate, and the only method of assessing vaccine effectiveness will be through large phase III trials with clinical outcomes (i.e. infection and/or disease, not just laboratory markers).
Multiple COVID-19 studies have used neutralizing antibody levels as predictive markers, validating that higher levels correspond to a lower likelihood of breakthrough infection after vaccination.
See also
Immunological memory
Immunization
Vaccination
Seroconversion
Serostatus
|
https://en.wikipedia.org/wiki/Recursive%20indexing
|
Recursive indexing is an algorithm used to represent large numeric values using members of a relatively small set.
Recursive indexing writes the successive differences of the number after repeatedly extracting the maximum value of the alphabet set from the number, continuing recursively until the difference falls within the range of the set.
Recursive indexing with a 2-letter alphabet is called unary code.
Encoding
To encode a number N, keep subtracting the maximum element of the set (Smax) from N and output Smax for each such subtraction, stopping when the remaining number lies in the half-open range [0, Smax).
Example
Let S = [0 1 2 3 4 … 10] be an 11-element set, and suppose we have to recursively index the value N = 49.
According to this method, subtract 10 from 49 and iterate until the difference is a number in the 0–10 range.
The values are 10 (N = 49 – 10 = 39), 10 (N = 39 – 10 = 29), 10 (N = 29 – 10 = 19), 10 (N = 19 – 10 = 9), 9. The recursively indexed sequence for N = 49 with set S, is 10, 10, 10, 10, 9.
Decoding
Compute the sum of the index values.
Example
Decoding the above example involves 10 + 10 + 10 + 10 + 9 = 49.
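A minimal Python sketch of this encode/decode pair (function names are illustrative; the alphabet is assumed to be the contiguous range 0..Smax, as in the example above):

    def encode(n, s_max):
        # Recursively index n over the set {0, 1, ..., s_max}.
        out = []
        while n >= s_max:      # keep emitting the maximum element
            out.append(s_max)
            n -= s_max
        out.append(n)          # the final value lies in [0, s_max)
        return out

    def decode(indices):
        # Recover the original value by summing the index values.
        return sum(indices)

    assert encode(49, 10) == [10, 10, 10, 10, 9]
    assert decode([10, 10, 10, 10, 9]) == 49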
Uses
This technique is most commonly used in run-length encoding systems to encode longer runs than the alphabet sizes permit.
|
https://en.wikipedia.org/wiki/Smart%20Display
|
In computing, Smart Display (originally codenamed Mira) was a Microsoft initiative to use a portable touchscreen LCD monitor as a thin client for PCs, connecting via Wi-Fi.
Smart Display was announced in early 2002, released in early 2003 and discontinued in December 2003, having never achieved more than negligible market penetration.
Technology
The Smart Display was a battery-powered 10" or 15" LCD monitor with a touchscreen (similar in size and shape to a Tablet PC), connecting to a PC over an 802.11b WiFi network, with input via Transcriber (similar to Graffiti) or a pop-up soft-keyboard for text entry, and built-in speakers. Some models had a docking unit with wired PC, keyboard and mouse connectors.
The display ran Smart Display OS or Microsoft Windows CE for Smart Displays, based on Windows CE and .NET. The remote technology was based on Windows Terminal Server. Smart Display OS 1.0 would only connect to a Windows XP Professional host system, although some reported that any version of Windows could be remote-controlled using NetMeeting.
ViewSonic was the first manufacturer to bring Smart Display to the market, with the airpanel V150 in early 2003. This included a 15" 1024×768 LCD, a 400 MHz Intel XScale processor, 32MB ROM, 64MB RAM and 802.11b wireless, and a USB wireless hub for the host PC.
Problems
Analysts flagged the problems with the Mira initiative very early on, as soon as it reached their notice in early 2002.
In Smart Display OS 1.0, the display would lock the host PC to it while in use. Microsoft variously attributed this to licensing issues (that Windows XP Professional was licensed for one user per running copy) and resource management problems. The requirements of licensing — not to allow the devices to work standalone, not to allow the device to connect to the host PC while the PC's main screen was active, and not to allow multiple Smart Displays to control one PC — were widely derided in the press.
Only one Smart Display could connec
|
https://en.wikipedia.org/wiki/Clock%20drift
|
Clock drift refers to several related phenomena where a clock does not run at exactly the same rate as a reference clock. That is, after some time the clock "drifts apart" or gradually desynchronizes from the other clock. All clocks are subject to drift, causing eventual divergence unless resynchronized. In particular, the drift of crystal-based clocks used in computers requires some synchronization mechanism for any high-speed communication. Computer clock drift can be utilized to build random number generators. These can however be exploited by timing attacks.
In non-atomic clocks
Everyday clocks such as wristwatches have finite precision. Eventually they require correction to remain accurate. The rate of drift depends on the clock's quality, sometimes the stability of the power source, the ambient temperature, and other subtle environmental variables. Thus the same clock can have different drift rates at different occasions.
More advanced clocks and old mechanical clocks often have some kind of speed trimmer where one can adjust the speed of the clock and thus correct for clock drift. For instance, in pendulum clocks the clock drift can be manipulated by slightly changing the length of the pendulum.
A quartz oscillator is less subject to drift due to manufacturing variances than the pendulum in a mechanical clock. Hence most everyday quartz clocks do not have an adjustable drift correction.
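As a rough worked example (the ±20 ppm figure is an assumed rating, typical of consumer quartz movements rather than a quoted specification), a frequency tolerance converts to everyday units as follows:

    ppm = 20e-6                 # assumed frequency tolerance of a quartz clock
    seconds_per_day = 86400
    print(ppm * seconds_per_day)        # ~1.7 seconds/day worst case
    print(ppm * seconds_per_day * 365)  # ~10.5 minutes/year worst case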
Atomic clocks
Atomic clocks are very precise and have nearly no clock drift. Even the Earth's rotation rate has more drift and variation in drift than an atomic clock due to tidal acceleration and other effects. The principle behind the atomic clock has enabled scientists to re-define the SI unit second in terms of exactly 9,192,631,770 oscillations of the caesium-133 atom. The precision of these oscillations allows atomic clocks to drift roughly only one second in a hundred million years; as of 2015, the most accurate atomic clock loses one second every 15 billion years. The In
|
https://en.wikipedia.org/wiki/Nullable%20type
|
Nullable types are a feature of some programming languages which allow a value to be set to the special value NULL instead of the usual possible values of the data type. In statically typed languages, a nullable type is an option type, while in dynamically typed languages (where values have types, but variables do not), equivalent behavior is provided by having a single null value.
NULL is frequently used to represent a missing or invalid value, such as from a function that failed to return or a missing field in a database, as in NULL in SQL. In other words, NULL represents an unknown or undefined value.
Primitive types such as integers and Booleans cannot generally be null, but the corresponding nullable types (nullable integer and nullable Boolean, respectively) can also assume the NULL value. This can be represented in three-valued (ternary) logic as FALSE, NULL, TRUE.
Example
An integer variable may represent integers, but 0 (zero) is a special case because 0 in many programming languages can mean "false". Also this doesn't give us any notion of saying that the variable is empty, a need for which occurs in many circumstances. This need can be achieved with a nullable type. In programming languages like C# 2.0, a nullable integer, for example, can be declared by a question mark (int? x). In programming languages like C# 1.0, nullable types can be defined by an external library as new types (e.g. NullableInteger, NullableBoolean).
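The same idea can be sketched in Python, where Optional[int] plays the role of C#'s int? and None is the null value (this sketch is illustrative, not tied to any particular library):

    from typing import Optional

    def parse_count(text: str) -> Optional[int]:
        # Return the parsed integer, or None when the field is empty or invalid.
        try:
            return int(text)
        except ValueError:
            return None

    count = parse_count("")   # None: "empty" is distinct from 0
    if count is None:
        print("no value supplied")
    else:
        print(count + 1)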
A Boolean variable makes the effect more clear. Its values can be either "true" or "false", while a nullable boolean may also contain a representation for "undecided". However, the interpretation or treatment of a logical operation involving such a variable depends on the language.
Compared with null pointers
In contrast, object pointers can be set to NULL by default in most common languages, meaning that the pointer or reference points to nowhere, that no object is assigned (the variable does not point to any object).
Nullable references were in
|
https://en.wikipedia.org/wiki/Fearsome%20critters
|
In North American folklore, fearsome critters were tall tale animals jokingly said to inhabit the wilderness in or around logging camps, especially in the Great Lakes region. Today, the term may also be applied to similar fabulous beasts.
Origins
Fearsome critters were an integral part of oral tradition in North American logging camps during the turn of the twentieth century, principally as a means to pass time (such as in tall tales) or as a jest for hazing newcomers. In a typical fearsome critter gag, a person would casually remark about a strange noise or sight they encountered in the wild, and another accomplice would join in with a similar anecdote. Meanwhile, an eavesdropper would begin to investigate, as Henry H. Tryon recorded in his book, Fearsome Critters (1939).
Lumberjacks, who regularly traveled between camps, would stop to swap stories, which eventually disseminated these myths across the continent. Many fearsome critters were simply the products of pure exaggeration; however, a number were used either jokingly or seriously as explanations for both unexplained and natural phenomena. For example, the hidebehind served to account for loggers who failed to return to camp, while the treesqueak offered justification for strange noises heard in the woods. A handful, whether intentionally or unknowingly, mirrored descriptions of actual animals. The mangrove killifish, which takes up shelter in decaying branches after leaving the water, exhibits similarities to the upland trout, a legendary fish purported to nest in trees. In addition, the story of the fillyloo, about a mythical crane that flies upside-down, may have been inspired by observations of the wood stork, a bird that has been witnessed briefly flying in this manner. In particular instances more elaborate ruses were created using taxidermy or trick photography.
Attributes
The character of the fearsome critters themselves was usually more comical than frightful. Often the greater emphasis is placed
|
https://en.wikipedia.org/wiki/Yum-Tong%20Siu
|
Yum-Tong Siu (; born May 6, 1943, in Guangzhou, China) is the William Elwood Byerly Professor of Mathematics at Harvard University.
Siu is a prominent figure in the study of functions of several complex variables. His research interests involve the intersection of complex variables, differential geometry, and algebraic geometry. He has resolved various conjectures by applying estimates of the complex Neumann problem and the theory of multiplier ideal sheaves to algebraic geometry.
Education and career
Siu obtained his B.A. in mathematics from the University of Hong Kong in 1963, his M.A. from the University of Minnesota, and his Ph.D. from Princeton University in 1966. Siu completed his doctoral dissertation, titled "Coherent Noether-Lasker decomposition of subsheaves and sheaf cohomology", under the supervision of Robert C. Gunning. Before joining Harvard, he taught at Purdue University, the University of Notre Dame, Yale, and Stanford. In 1982 he joined Harvard as a Professor of Mathematics. He previously served as the Chairman of the Harvard Math Department.
In 2006, Siu published a proof of the finite generation of the pluricanonical ring.
Awards, honors and professional memberships
In 1993, Siu received the Stefan Bergman Prize of the American Mathematical Society. He holds honorary doctorates from the University of Hong Kong, the University of Bochum, Germany, and the University of Macau. He is a Corresponding Member of the Goettingen Academy of Sciences (elected 1993); a Foreign member of the Chinese Academy of Sciences (elected 2004); and a member of the American Academy of Arts & Sciences (elected 1998), the National Academy of Sciences (elected 2002), and Academia Sinica, Taiwan (elected 2004). He has been an invited speaker at the International Congress of Mathematicians in Helsinki (1978), Warsaw (1983) and Beijing (2002).
Currently, Siu is a member of the Scientific Advisory Board of the Clay Mathematics Institute (since 2003); the Advisory Committee
|
https://en.wikipedia.org/wiki/Physical%20vapor%20deposition
|
Physical vapor deposition (PVD), sometimes called physical vapor transport (PVT), describes a variety of vacuum deposition methods which can be used to produce thin films and coatings on substrates including metals, ceramics, glass, and polymers. PVD is characterized by a process in which the material transitions from a condensed phase to a vapor phase and then back to a thin film condensed phase. The most common PVD processes are sputtering and evaporation. PVD is used in the manufacturing of items which require thin films for optical, mechanical, electrical, acoustic or chemical functions. Examples include semiconductor devices such as thin-film solar cells, microelectromechanical devices such as thin film bulk acoustic resonator, aluminized PET film for food packaging and balloons, and titanium nitride coated cutting tools for metalworking. Besides PVD tools for fabrication, special smaller tools used mainly for scientific purposes have been developed.
The source material is unavoidably also deposited on most other surfaces interior to the vacuum chamber, including the fixturing used to hold the parts. This is called overshoot.
Examples
Cathodic arc deposition: a high-power electric arc discharged at the target (source) material blasts away some into highly ionized vapor to be deposited onto the workpiece.
Electron-beam physical vapor deposition: the material to be deposited is heated to a high vapor pressure by electron bombardment in "high" vacuum and is transported by diffusion to be deposited by condensation on the (cooler) workpiece.
Evaporative deposition: the material to be deposited is heated to a high vapor pressure by electrical resistance heating in "high" vacuum.
Close-space sublimation: the material and substrate are placed close to one another and radiatively heated.
Pulsed laser deposition: a high-power laser ablates material from the target into a vapor.
Thermal laser epitaxy: a continuous-wave laser evaporates individual, free-standing
|
https://en.wikipedia.org/wiki/Jacket%20matrix
|
In mathematics, a jacket matrix is a square symmetric matrix A = (aij) of order n if its entries are non-zero and real, complex, or from a finite field, and
AB = BA = In,
where In is the identity matrix, and
B = (1/n) (1/aij)^T,
where T denotes the transpose of the matrix.
In other words, the inverse of a jacket matrix is determined by its element-wise or block-wise inverse. The definition above may also be expressed as: the (i, j) entry of the inverse is (1/n)(1/aji).
The jacket matrix is a generalization of the Hadamard matrix; it is a diagonal block-wise inverse matrix.
Motivation
As the examples below show, already for n = 2 there is a forward matrix whose inverse is obtained from its element-wise reciprocals. That is, there exists an element-wise inverse.
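As a concrete check of this element-wise inverse property, here is a minimal numpy sketch using the 2x2 Hadamard matrix, which is a jacket matrix:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])      # 2x2 Hadamard matrix
    n = A.shape[0]
    B = (1.0 / A).T / n              # element-wise reciprocal, transposed, scaled by 1/n
    print(np.allclose(A @ B, np.eye(n)))  # True
    print(np.allclose(B @ A, np.eye(n)))  # True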
Example 1.
:
or more general
:
Example 2.
For m x m matrices,
denotes an mn x mn block diagonal Jacket matrix.
Example 3.
Euler's formula:
, and .
Therefore,
.
Also,
,.
Finally,
A·B = B·A = I
Example 4.
Consider be 2x2 block matrices of order
.
If and are pxp Jacket matrix, then is a block circulant matrix if and only if , where rt denotes the reciprocal transpose.
Example 5.
Let and , then the matrix is given by
,
⇒
where U, C, A, G denote the amounts of the DNA nucleobases, and the matrix is the block circulant jacket matrix which leads to the principle of antagonism with the Nirenberg genetic code matrix.
|
https://en.wikipedia.org/wiki/Ipomoea%20cairica
|
Ipomoea cairica is a vining, herbaceous, perennial plant with palmate leaves and large, showy white to lavender flowers. A species of morning glory, it has many common names, including mile-a-minute vine, Messina creeper, Cairo morning glory, coast morning glory and railroad creeper. The species name cairica translates to "from Cairo", the city where this species was first collected.
Description
A hairless, slim climber with bulbous roots and a lignescent (woody) base, its leaves are stalked with 2 to 6 cm long petioles. The leaf blade is ovate to circular in outline, 3 to 10 cm long and 6 to 9 cm wide. It is divided into five to seven segments, which are lanceolate, ovate or elliptic, entire and pointed at the tip and base. Pseudostipules (false side-leaves) are often formed.
The lavender-flowered inflorescences are one- to few-flowered cymes. The flower stalks are 12 to 20 mm long; the sepals are 6 to 8 mm long, ovate and mucronate. The corolla is funnel-shaped, 4 to 6 cm long and violet coloured. The stamens and the style do not protrude beyond the corolla. The ovary is hairless. The fruits are spherical capsules approximately 1 cm in diameter, each containing one or two hairy seeds. The vine blooms occasionally throughout the year, but more profusely from spring to summer.
Range
Its exact native range is uncertain, though it is believed to originate from a rather wide area, ranging from Cape Verde to the Arabian Peninsula, including northern Africa, tropical Africa and the Mediterranean. It covers walls, fences or trees, with stems that can measure more than 10 m in length. The altitude at which it has been recorded ranges from 250 to 2250 m.
Invasive species
Because of human dispersal, it occurs today on most continents as an introduced species and is sometimes a noxious weed and an invasive species, such as along the coast of New South Wales, as well as in the United States, where it occurs in Hawaii, California, all
|
https://en.wikipedia.org/wiki/FMOD
|
FMOD is a proprietary sound effects engine and authoring tool for video games and applications developed by Firelight Technologies. It is able to play and mix sounds of diverse formats on many operating systems.
Features
The FMOD sound system is supplied as a programmer's API and authoring tool, similar to a digital audio workstation.
FMOD consists of the following technologies:
FMOD Studio - An audio creation tool for games, designed like a digital audio workstation. Succeeds FMOD Designer.
FMOD Studio run-time API - A programmer API to interface with FMOD Studio.
FMOD Studio low-level API - A programmer API that stands alone, with a simple interface for playing sound files, adding special effects and performing 3D sound.
Legacy products include:
FMOD Ex - The sound playback and mixing engine.
FMOD Designer 2010 - An audio designer tool used for authoring complex sound events and music for playback.
FMOD Event Player - An auditioning tool in conjunction with FMOD Designer 2010.
The FMOD sound system has an advanced plugin architecture that can be used to extend the support of audio formats or to develop new output types, e.g. for streaming.
Licensing
FMOD is available under multiple license schemes:
FMOD Non-Commercial License, which allows software not intended for commercial distribution to use FMOD for free.
FMOD Indie License, a bottom level license for software intended for commercial distribution, with development budgets less than US$500k.
FMOD Basic License, a mid-level license for software intended for commercial distribution, with development budgets between US$500k and US$1.5m.
FMOD Premium License, a top level license for software intended for commercial distribution, with development budgets over US$1.5m.
Support
Platforms
FMOD is written in portable C++, and can thus run on many different PC, mobile and gaming console platforms including:
Microsoft Windows (x86 and x86-64), macOS,
iOS,
Linux (x86 and x86-64),
Android,
BlackBerry,
Wii
|
https://en.wikipedia.org/wiki/Scheuermann%27s%20disease
|
Scheuermann's disease is a self-limiting skeletal disorder of childhood. Scheuermann's disease describes a condition where the vertebrae grow unevenly with respect to the sagittal plane; that is, the posterior angle is often greater than the anterior. This uneven growth results in the signature "wedging" shape of the vertebrae, causing kyphosis. It is named after Danish surgeon Holger Scheuermann.
Signs and symptoms
Scheuermann's disease is considered to be a form of juvenile osteochondrosis of the spine. It is found mostly in teenagers and presents a significantly worse deformity than postural kyphosis. Patients suffering from Scheuermann's kyphosis cannot consciously correct their posture. The apex of their curve, located in the thoracic vertebrae, is quite rigid.
Scheuermann's disease is notorious for causing lower and mid-level back and neck pain, which can be severe and disabling. The sufferer may feel pain at the apex of the curve, which is aggravated by physical activity and by periods of standing or sitting; this can have a significantly detrimental effect on their lives, as their level of activity is curbed by their disability. Sufferers may feel isolated or uneasy amongst their peers if they are children, depending on the level of deformity.
In addition to the pain associated with Scheuermann's disease, many sufferers of the disorder have loss of vertebral height, and depending on where the apex of the curve is, may have a visible 'hunchback' or 'roundback'. It has been reported that curves in the lower thoracic region cause more pain, whereas curves in the upper region present a more visible deformity. Nevertheless, it is typically pain or cosmetic reasons that prompt sufferers to seek help for their condition. In studies, kyphosis is better characterized for the thoracic spine than for the lumbar spine.
The seventh and tenth thoracic vertebrae are most commonly affected. It causes backache and spinal curvature. In very serious cases it may cause in
|
https://en.wikipedia.org/wiki/Neglected%20tropical%20diseases
|
Neglected tropical diseases (NTDs) are a diverse group of tropical infections that are common in low-income populations in developing regions of Africa, Asia, and the Americas. They are caused by a variety of pathogens, such as viruses, bacteria, protozoa, and parasitic worms (helminths). These diseases are contrasted with the "big three" infectious diseases (HIV/AIDS, tuberculosis, and malaria), which generally receive greater treatment and research funding. In sub-Saharan Africa, the effect of neglected tropical diseases as a group is comparable to that of malaria and tuberculosis. NTD co-infection can also make HIV/AIDS and tuberculosis more deadly.
Some treatments for NTDs are relatively inexpensive. For example, treatment for schistosomiasis costs US$0.20 per child per year. Nevertheless, in 2010 it was estimated that control of neglected diseases would require funding of between US$2 billion and $3 billion over the subsequent five to seven years. Some pharmaceutical companies have committed to donating all the drug therapies required, and mass drug administration efforts (for example, mass deworming) have been successful in several countries. While preventive measures are often more accessible in the developed world, they are not universally available in poorer areas.
Within developed countries, neglected tropical diseases affect the very poorest in society. In the United States, there are up to 1.46 million families, including 2.8 million children, living on less than two dollars per day. In developed countries, the burdens of neglected tropical diseases are often overshadowed by other public health issues. However, many of the same issues put populations at risk in developed as well as developing nations. For example, other problems stemming from poverty, such as lack of adequate housing, can expose individuals to the vectors of these diseases.
Twenty neglected tropical diseases are prioritized by the World Health Organization (WHO), though other organiz
|
https://en.wikipedia.org/wiki/Neocognitron
|
The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979. It has been used for Japanese handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks.
The neocognitron was inspired by the model proposed by Hubel & Wiesel in 1959. They found two types of cells in the primary visual cortex, called simple cells and complex cells, and also proposed a cascading model of these two types of cells for use in pattern recognition tasks.
The neocognitron is a natural extension of these cascading models. The neocognitron consists of multiple types of cells, the most important of which are called S-cells and C-cells. The local features are extracted by S-cells, and these features' deformation, such as local shifts, are tolerated by C-cells. Local features in the input are integrated gradually and classified in the higher layers. The idea of local feature integration is found in several other models, such as the Convolutional Neural Network model, the SIFT method, and the HoG method.
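A minimal numpy/scipy sketch of this S-cell/C-cell idea (an illustration of the principle, not Fukushima's exact model): an S-layer correlates the input with a small feature template, and a C-layer max-pools the response so that small shifts of the feature are tolerated:

    import numpy as np
    from scipy.signal import correlate2d

    def s_layer(image, template):
        # S-cells: extract a local feature by 2-D correlation, with rectification.
        return np.maximum(correlate2d(image, template, mode="valid"), 0.0)

    def c_layer(response, pool=2):
        # C-cells: max-pool so the feature may shift locally without changing the output.
        h, w = response.shape
        h, w = h - h % pool, w - w % pool
        r = response[:h, :w].reshape(h // pool, pool, w // pool, pool)
        return r.max(axis=(1, 3))

    image = np.zeros((8, 8)); image[2, 2:5] = 1.0   # a short horizontal bar
    template = np.ones((1, 3))                      # horizontal-bar detector
    print(c_layer(s_layer(image, template)))

Stacking such S/C pairs, with larger receptive fields at each stage, gives the gradual integration of local features described above.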
There are various kinds of neocognitron. For example, some types of neocognitron can detect multiple patterns in the same input by using backward signals to achieve selective attention.
See also
Artificial neural network
Deep learning
Pattern recognition
Receptive field
Self-organizing map
Unsupervised learning
Notes
|
https://en.wikipedia.org/wiki/SEG-Y
|
The SEG-Y (sometimes SEG Y) file format is one of several standards developed by the Society of Exploration Geophysicists (SEG) for storing geophysical data. It is an open standard, and is controlled by the SEG Technical Standards Committee, a non-profit organization.
History
The format was originally developed in 1973 to store single-line seismic reflection digital data on magnetic tapes. The specification was published in 1975.
The format and its name evolved from the SEG "Ex" or Exchange Tape Format. However, since its release, there have been significant advancements in geophysical data acquisition, such as 3-dimensional seismic techniques and high speed, high capacity recording.
The most recent revision of the SEG-Y format was published in 2017, named the rev 2.0 specification. It still features certain legacies of the original format (referred to as rev 0), such as an optional SEG-Y tape label, the main 3200 byte textual EBCDIC character encoded tape header and a 400 byte binary header.
Data structure
(Figure: the byte-stream structure of a SEG-Y file, with rev 1 Extended Textual File Header records.)
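As an illustration, a minimal Python sketch that reads the two leading headers (the file name is hypothetical; the field offsets within the 400-byte binary header follow the commonly published rev 1 layout and are stated here as assumptions; values are big-endian, and the textual header is decoded as EBCDIC code page 037):

    import struct

    with open("line1.segy", "rb") as f:            # hypothetical file name
        textual = f.read(3200).decode("cp037")     # 3200-byte EBCDIC textual header
        binary = f.read(400)                       # 400-byte binary header

    print(textual[:80])                            # first 80-character "card"
    # Assumed offsets (big-endian 16-bit integers):
    sample_interval_us = struct.unpack(">h", binary[16:18])[0]
    samples_per_trace = struct.unpack(">h", binary[20:22])[0]
    data_format_code = struct.unpack(">h", binary[24:26])[0]
    print(sample_interval_us, samples_per_trace, data_format_code)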
Since the first SEG-Y standard was published, many companies dealing with seismic data have produced variants of the SEG-Y standard which have run contrary to the aims of defining a standard for universal interchange, thus generally causing confusion and delay when data received by a company in expected SEG-Y format turns out to be a variant of that format. Initially, many of these derived from the fact that the format was based on the de facto standard of using IBM computers for digital processing where character data was coded in EBCDIC and number data in IBM Floating Point, whereas processing systems in use quickly evolved based on ASCII character and IEEE number representations.
Even before the SEG-Y standard was agreed and published, earlier seismic data format standards published by the SEG such as SEG-A, SEG-B and SEG-C were modified by seismic
|
https://en.wikipedia.org/wiki/Release%20factor
|
A release factor is a protein that allows for the termination of translation by recognizing the termination codon or stop codon in an mRNA sequence. They are named so because they release new peptides from the ribosome.
Background
During translation of mRNA, most codons are recognized by "charged" tRNA molecules, called aminoacyl-tRNAs because they are adhered to specific amino acids corresponding to each tRNA's anticodon. In the standard genetic code, there are three mRNA stop codons: UAG ("amber"), UAA ("ochre"), and UGA ("opal" or "umber"). Although these stop codons are triplets just like ordinary codons, they are not decoded by tRNAs. It was discovered by Mario Capecchi in 1967 that, instead, tRNAs do not ordinarily recognize stop codons at all, and that what he named "release factor" was not a tRNA molecule but a protein. Later, it was demonstrated that different release factors recognize different stop codons.
Classification
There are two classes of release factors. Class 1 release factors recognize stop codons; they bind to the A site of the ribosome in a way mimicking that of tRNA, releasing the new polypeptide as the ribosome disassembles. Class 2 release factors are GTPases that enhance the activity of class 1 release factors; they help the class 1 RF dissociate from the ribosome.
Bacterial release factors include RF1, RF2, and RF3 (or PrfA, PrfB, PrfC in the "peptide release factor" gene nomenclature). RF1 and RF2 are class 1 RFs: RF1 recognizes UAA and UAG while RF2 recognizes UAA and UGA. RF3 is the class 2 release factor. Eukaryotic and archaeal release factors are named analogously, with the prefix "e" for "eukaryotic release factor" (eRF) and "a" for archaeal ones. a/eRF1 can recognize all three stop codons, while eRF3 (archaea use aEF-1α instead) works just like RF3.
The bacterial and archaeo-eukaryotic release factors are believed to have evolved separately. The two groups' class 1 factors do not show sequence or structural homology with each o
|
https://en.wikipedia.org/wiki/Bernays%E2%80%93Sch%C3%B6nfinkel%20class
|
The Bernays–Schönfinkel class (also known as Bernays–Schönfinkel–Ramsey class) of formulas, named after Paul Bernays, Moses Schönfinkel and Frank P. Ramsey, is a fragment of first-order logic formulas where satisfiability is decidable.
It is the set of sentences that, when written in prenex normal form, have an ∃*∀* quantifier prefix (a block of existential quantifiers followed by a block of universal quantifiers) and do not contain any function symbols.
This class of logic formulas is also sometimes referred to as effectively propositional (EPR), since it can be effectively translated into propositional logic formulas by a process of grounding or instantiation.
The satisfiability problem for this class is NEXPTIME-complete.
See also
Prenex normal form
Notes
|
https://en.wikipedia.org/wiki/Wellfleet%20Communications
|
Wellfleet Communications was an Internet router company founded in 1986 by Paul Severino, Bill Seifert, Steven Willis and David Rowe based in Bedford, Massachusetts, and later Billerica, Massachusetts. In an attempt to more effectively compete with Cisco Systems, its chief rival, it merged in October, 1994 with SynOptics Communications of Santa Clara, California to form Bay Networks in a deal worth US$ 2.7B. Bay Networks would in turn be acquired by Nortel in June, 1998 for US$ 9.1B.
Wellfleet was ranked the fastest-growing company in the United States by Fortune Magazine in both 1992 and 1993. Wellfleet sold routers.
Wellfleet also emphasized support of the up-and-coming Internet Protocol. In 1991, Cisco led the global multi-protocol router market with a 51% share, whereas Wellfleet was third with only 9% market share. By 1993, Wellfleet had grown to a 14% market share, second only to Cisco's 50%. Wellfleet concluded the best way to gain strategic positioning over Cisco would be to merge with hub manufacturer SynOptics. By combining these technologies, the joined companies could provide their customers with common product interfaces and network management tools.
The resulting merged company was named Bay Networks because Wellfleet was based in the Boston, Massachusetts area and SynOptics in San Francisco, California, two classic bay cities.
|
https://en.wikipedia.org/wiki/G-factor%20%28physics%29
|
A g-factor (also called g value) is a dimensionless quantity that characterizes the magnetic moment and angular momentum of an atom, a particle or the nucleus. It is essentially a proportionality constant that relates the different observed magnetic moments μ of a particle to their angular momentum quantum numbers and a unit of magnetic moment (to make it dimensionless), usually the Bohr magneton or nuclear magneton. Its value is proportional to the gyromagnetic ratio.
Definition
Dirac particle
The spin magnetic moment of a charged, spin-1/2 particle that does not possess any internal structure (a Dirac particle) is given by
μ = g (e / 2m) S,
where μ is the spin magnetic moment of the particle, g is the g-factor of the particle, e is the elementary charge, m is the mass of the particle, and S is the spin angular momentum of the particle (with magnitude ħ/2 for Dirac particles).
Baryon or nucleus
Protons, neutrons, nuclei and other composite baryonic particles have magnetic moments arising from their spin (both the spin and magnetic moment may be zero, in which case the g-factor is undefined). Conventionally, the associated g-factors are defined using the nuclear magneton, and thus implicitly using the proton's mass rather than the particle's mass as for a Dirac particle. The formula used under this convention is
μ = g (μN / ħ) I,
where μ is the magnetic moment of the nucleon or nucleus resulting from its spin, g is the effective g-factor, I is its spin angular momentum, μN = eħ/(2mp) is the nuclear magneton, e is the elementary charge and mp is the proton rest mass.
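As a numerical illustration of the two conventions, the following sketch computes the Bohr and nuclear magnetons from CODATA constants, and the spin moment of an idealized Dirac particle with g = 2:

    e = 1.602176634e-19      # elementary charge, C
    hbar = 1.054571817e-34   # reduced Planck constant, J s
    m_e = 9.1093837015e-31   # electron mass, kg
    m_p = 1.67262192369e-27  # proton mass, kg

    mu_B = e * hbar / (2 * m_e)   # Bohr magneton, ~9.274e-24 J/T
    mu_N = e * hbar / (2 * m_p)   # nuclear magneton, ~5.051e-27 J/T
    print(mu_B, mu_N)

    # Spin moment of an idealized Dirac particle with g = 2 and |S| = hbar/2:
    print(2.0 * (e / (2 * m_e)) * (hbar / 2))  # equals mu_B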
Calculation
Electron g-factors
There are three magnetic moments associated with an electron: one from its spin angular momentum, one from its orbital angular momentum, and one from its total angular momentum (the quantum-mechanical sum of those two components). Corresponding to these three moments are three different g-factors:
Electron spin g-factor
The best known of these is the electron spin g-factor (more often called simply the
|
https://en.wikipedia.org/wiki/Exoticorum%20libri%20decem
|
Exoticorum libri decem ("Ten books of exotic life forms") is an illustrated zoological and botanical compendium in Latin, published at Leiden in 1605 by Charles de l'Écluse.
On the title page the author's name appears in its well-known Latin form Carolus Clusius. The full title is: Exoticorum libri decem, quibus animalium, plantarum, aromatum, aliorumque peregrinorum fructuum historiae describuntur ("Ten books of exotica: the history and uses of animals, plants, aromatics and other natural products from distant lands").
Clusius was not only an original biologist but also a remarkable linguist. He became well known as a translator and editor of the works of others. Exoticorum libri decem consists partly of his own discoveries, partly of translated and edited versions of earlier publications, always properly acknowledged, and with many new illustrations. Separately identifiable within this compendium can be found Clusius's Latin translations, with his own notes, from:
Garcia de Orta, Colóquios dos simples e drogas he cousas medicinais da Índia (1563)
Nicolás Monardes, Historia medicinal de las cosas que se traen de nuestras Indias Occidentales (1565–1574)
Cristóbal Acosta, Tractado de las drogas y medicinas de las Indias orientales (1578)
There is also material by Prospero Alpini (Prosper Alpinus) with notes by Clusius. As a separately paginated appendix appears Clusius's Latin translation (first published in 1589) of:
Pierre Belon, Observations (1553)
|
https://en.wikipedia.org/wiki/Cinnamyl%20alcohol
|
Cinnamyl alcohol or styron is an organic compound that is found in esterified form in storax, Balsam of Peru, and cinnamon leaves. It forms a white crystalline solid when pure, or a yellow oil when even slightly impure. It can be produced by the hydrolysis of storax.
Cinnamyl alcohol has a distinctive odour described as "sweet, balsam, hyacinth, spicy, green, powdery, cinnamic" and is used in perfumery and as a deodorant.
Cinnamyl alcohol occurs naturally only in small amounts, so its industrial demand is usually fulfilled by chemical synthesis starting from cinnamaldehyde.
Properties
The compound is a solid at room temperature, forming colourless crystals that melt upon gentle heating. As is typical of most higher-molecular weight alcohols, it is sparingly soluble in water at room temperature, but highly soluble in most common organic solvents.
Safety
Cinnamyl alcohol has been found to have a sensitising effect on some people and as a result is the subject of a Restricted Standard issued by IFRA (International Fragrance Association).
Glycosides
Rosarin and rosavin are cinnamyl alcohol glycosides isolated from Rhodiola rosea.
|
https://en.wikipedia.org/wiki/Dmitrii%20Menshov
|
Dmitrii Yevgenyevich Menshov (also spelled Men'shov, Menchoff, Menšov, Menchov; ; 18 April 1892 – 25 November 1988) was a Soviet and Russian mathematician known for his contributions to the theory of trigonometric series.
Biography
Dmitrii Menshov studied languages as a schoolboy, but from the age of 13 he began to show great interest in mathematics and physics. In 1911, he completed high school with a gold medal. After a semester at the Moscow Engineering School, he enrolled at Moscow State University in 1912 and became a student of Nikolai Luzin.
In 1916, Menshov completed his dissertation on the topic of trigonometric series. He became a docent of Moscow State University in 1918. Soon after, he moved to Nizhny Novgorod where he was appointed a professor of the Ivanovsky Pedagogical Institute. After a few years, he returned to Moscow in 1922 and began to teach at Moscow State University.
In 1935, Menshov became a full professor of Moscow State University and was awarded the title of Doctor of Physical and Mathematical Sciences. He gave lectures at Moscow State University and also Moscow State Pedagogical University in numerical analysis, complex functions, and differential equations to undergraduate and graduate students. In this position, he taught and influenced an entire generation of young up-and-coming Russian mathematicians and physicists, including such renowned scientists as his student Sergey Stechkin. He received the Stalin Prize in 1951 and was elected to the position of corresponding member of the Russian Academy of Sciences in 1953.
His construction of a Fourier series with non-zero coefficients which converges to zero almost everywhere gave rise to the theory of Menshov sets.
He proved the Rademacher–Menchov theorem, the Looman–Menchoff theorem, and the Lusin–Menchoff theorem.
Menshov was an Invited Speaker of the ICM in 1928 in Bologna and in 1958 in Edinburgh.
|
https://en.wikipedia.org/wiki/Bilateral%20key%20exchange
|
Bilateral key exchange (BKE) was an encryption scheme utilized by the Society for Worldwide Interbank Financial Telecommunication (SWIFT).
The scheme was retired on January 1, 2009 and has now been replaced by the Relationship Management Application (RMA). All key management is now based on the SWIFT PKI that was implemented in SWIFT phase two.
A bilateral key allowed secure communication across the SWIFT Network. The text of a SWIFT message and the authentication key were used to generate a message authentication code or MAC. The MAC ensured the origin of a message and the authenticity of the message contents. This was normally accomplished by the exchange of various SWIFT messages used specifically for establishing a communicating key pair.
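SWIFT's historical MAC algorithm is not described here; purely to illustrate the concept (a shared key plus the message text producing a verifiable code), a generic HMAC sketch in Python:

    import hashlib
    import hmac

    shared_key = b"hypothetical bilateral key"    # agreed between the two institutions
    message = b"example SWIFT message text"       # body of the message being sent

    mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    print(mac)  # the receiver recomputes this value to verify origin and integrity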
BKE keys were generated either manually inside the SWIFT software, or automatically with the use of a secure card reader (SCR).
Since 1994, the keys used in the card reader and the authentication keys themselves were 1,024 bit RSA.
|
https://en.wikipedia.org/wiki/Yaroslav%20Lopatynskyi
|
Yaroslav Borysovych Lopatynskyi (1906–1981) was a Soviet mathematician. Born in Tbilisi, Lopatinskii acquired wide acclaim for his contributions to the theory of differential equations. He is especially known for his condition of stability for boundary-value problems in elliptic equations and for initial boundary-value problems in evolution PDEs.
See also
Lev Lopatinsky
|
https://en.wikipedia.org/wiki/Hansken
|
Hansken (1630 – Florence, 9 November 1655) was a female Sri Lankan elephant that became famous in early 17th-century Europe. She toured many countries, demonstrating circus tricks, and influenced many artists including Stefano della Bella, Theodoor van Thulden and notably, Rembrandt.
Hansken was born in what was then Ceylon and was brought to Holland in 1637 at the request of Prince Frederick Henry. She was purchased for 20,000 guilders by Cornelis van Groenevelt, who transported her around Europe on tour. Her name is a Dutch diminutive form of the Tamil word aanai, meaning "elephant". Rembrandt saw her in Amsterdam in 1637, and made four sketches of her in chalk.
Hansken toured fairs in the Netherlands and Germany. She appeared in Hamburg in 1638, in Bremen in 1640, in Rotterdam in 1641, in Frankfurt in 1646 and 1647, and in Lüneburg in 1650. She was likely in Leipzig in 1649 and 1651.
In the 17th century, it was believed that elephants had very advanced intellectual abilities. Following Pliny, it was thought that the elephant was the nearest to man in intelligence, and that elephants could understand speech, follow orders, and had a sense of religion and conscience. Pliny even reports that an elephant had learned to write words in the Greek alphabet. Hansken did not live up to these expectations, but she could wave a flag, fire a pistol, strike a drum, hold out her front feet, pinch money from pockets, put on a hat, carry a bucket of water, and pick up coins from the ground.
In July 1651, Hansken travelled to Zürich, Solothurn, Bregenz and St. Gallen, and on to Rome. She visited Florence, where she was drawn by artist Stefano della Bella. On the way back from Rome, the elephant died in the Piazza della Signoria, Florence. Della Bella also drew her corpse after her death on 9 November 1655.
The skeleton of Hansken is still preserved in Florence at Museo della Specola. The skin, which was mounted on a wooden support, is now lost.
See also
Cultural depiction
|
https://en.wikipedia.org/wiki/Linux
|
Linux ( ) is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution (distro), which includes the kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses and recommends the name "GNU/Linux" to emphasize the use and importance of GNU software in many distributions, causing some controversy.
Popular Linux distributions include Debian, Fedora Linux, Arch Linux and Ubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose.
Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Because of the dominance of the Linux-based Android on smartphones, Linux, including Android, has the largest installed base of all general-purpose operating systems. Although Linux is used by only around 2.6 percent of desktop computers, the Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top 1 million web servers' operating systems are Linux), leads other big iron systems such as mainframe computers, and is used on all of the world's 500 fastest supercomputers, having gradually displaced all competitors.
Linux also runs on embedded syst
|
https://en.wikipedia.org/wiki/Visual%20Basic%20%28classic%29
|
The original Visual Basic (also referred to as Classic Visual Basic) is a third-generation, event-driven programming language from Microsoft known for its Component Object Model (COM) programming model; it was first released in 1991 and declared legacy in 2008. Microsoft intended Visual Basic to be relatively easy to learn and use. Visual Basic was derived from BASIC and enables the rapid application development (RAD) of graphical user interface (GUI) applications, access to databases using Data Access Objects, Remote Data Objects, or ActiveX Data Objects, and creation of ActiveX controls and objects.
A programmer can create an application using the components provided by the Visual Basic program itself. Over time the community of programmers developed third-party components. Programs written in Visual Basic can also make use of the Windows API, which requires external functions declarations.
The final release was version 6 in 1998. On April 8, 2008, Microsoft stopped supporting Visual Basic 6.0 IDE. The Microsoft Visual Basic team still maintains compatibility for Visual Basic 6.0 applications through its "It Just Works" program on supported Windows operating systems.
In 2014, some software developers still preferred Visual Basic 6.0 over its successor, Visual Basic .NET. Visual Basic 6.0 was selected as the most dreaded programming language by respondents of Stack Overflow's annual developer survey in 2016, 2017, and 2018.
A dialect of Visual Basic, Visual Basic for Applications (VBA), is used as a macro or scripting language within several Microsoft and ISV applications, including Microsoft Office.
Language features
Like the BASIC programming language, Visual Basic was designed to have an easy learning curve. Programmers can create both simple and complex GUI applications.
Programming in VB is a combination of visually arranging components or controls on a form, specifying attributes and actions for those components, and writing additional lines of code for mor
|
https://en.wikipedia.org/wiki/Voxel-based%20morphometry
|
Voxel-based morphometry is a computational approach to neuroanatomy that measures differences in local concentrations of brain tissue, through a voxel-wise comparison of multiple brain images.
In traditional morphometry, the volume of the whole brain or of its subparts is measured by drawing regions of interest (ROIs) on images from brain scanning and calculating the volume enclosed. However, this is time-consuming and can only provide measures of rather large areas; smaller differences in volume may be overlooked. The value of VBM is that it allows for comprehensive measurement of differences, not just in specific structures, but throughout the entire brain. VBM registers every brain to a template, which removes most of the large differences in brain anatomy between people. Then the brain images are smoothed so that each voxel represents the average of itself and its neighbors. Finally, the image volume is compared across brains at every voxel.
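A toy numpy/scipy sketch of the smoothing and voxel-wise comparison steps (assuming the volumes are already registered to a common template; the array shapes, group sizes and smoothing kernel are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    group_a = rng.normal(size=(10, 16, 16, 16))  # 10 registered volumes per group
    group_b = rng.normal(size=(10, 16, 16, 16))

    # Smooth each volume so that each voxel averages over its neighbourhood.
    group_a = np.stack([gaussian_filter(v, sigma=2) for v in group_a])
    group_b = np.stack([gaussian_filter(v, sigma=2) for v in group_b])

    # Voxel-wise two-sample t-test across the two groups.
    t, p = ttest_ind(group_a, group_b, axis=0)
    print(p.shape, (p < 0.001).sum())  # p-value map and a naive uncorrected count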
However, VBM can be sensitive to various artifacts, which include misalignment of brain structures, misclassification of tissue types, and differences in folding patterns and in cortical thickness. All of these may confound the statistical analysis and either decrease the sensitivity to true volumetric effects or increase the chance of false positives. For the cerebral cortex, it has been shown that volume differences identified with VBM may mostly reflect differences in the surface area of the cortex rather than in cortical thickness.
History
Over the past two decades, hundreds of studies have shed light on the neuroanatomical structural correlates of neurological and psychiatric disorders. Many of these studies were performed using voxel-based morphometry (VBM), a whole-brain technique for characterizing between-group differences in regional volume and tissue concentration from structural magnetic resonance imaging (MRI) scans.
One of the first VBM studies and one that came to attention in mainstream media was a study on the hippocampus bra
|