https://en.wikipedia.org/wiki/Xak%20II%3A%20Rising%20of%20the%20Redmoon
|
Xak II: Rising of the Redmoon is a fantasy role-playing video game developed and published by the Japanese software developer MicroCabin. It is a direct sequel to Xak: The Art of Visual Stage (Xak I). The game was released in Japan only, but owing to an MSX scene that arose in Europe (predominantly in the Netherlands), some of the MSX versions of Xak received fan translations. An enhanced remake was later released for the NEC PC-Engine, together with the first game in the series, as Xak I & II by Telenet Japan's development team Riot.
Setting
Xak II, being a direct sequel to the first game in the series, features the same high fantasy setting as Xak. The gods' division of the world into Xak, the world of men, Oceanity, the world of faeries, and Xexis, the world of demons, as referenced in Xak, is depicted in this game's introduction. In this adventure, the main hero of the Xak series, Latok Kart, explores a vast region surrounding the central village of Banuwa.
Story
In Xak, the protagonist Latok Kart fought and defeated the demon Zemu Badu. One of Badu's minions escaped, a black-robed man known only as Necromancer. Three years later, Necromancer is able to contact one of his allies from the demon world of Xexis: a fearsome demon called Zamu Gospel. Following a prophecy foretold by an ancient and extremely powerful sorcerer by the name of Amadok, the Necromancer and three other demons (referred to as Demonlords) are attempting to complete a dark ritual which will revive Zamu Gospel into the world of Xak.
The player once again controls Latok, now nineteen years of age. A rumour about the whereabouts of Latok's father Dork has surfaced around the village of Banuwa. Latok and his faerie companion Pixie travel to the village to investigate, but soon run into Gospel's minions.
Characters
Latok is the only playable character in the game. The faerie Pixie accompanies him throughout the game and comments on Latok's actions. She is not controllable by the player.
|
https://en.wikipedia.org/wiki/Polar%20overdominance
|
Polar overdominance is a unique form of inheritance originally described in livestock, with relevant examples in humans and mice discovered shortly after. As in other forms of overdominance, the heterozygote displays a phenotype more extreme than either homozygote; the term polar refers to the fact that this differential phenotype is present in only one of the two heterozygote configurations, namely when the recessive allele is inherited in a parent-of-origin fashion. Polar overdominance thus differs from regular overdominance (also known as heterozygote advantage), in which both heterozygote configurations display a phenotype of increased fitness regardless of parent of origin. Studying this type of inheritance could have practical applications in preventative medicine for humans as well as a variety of agricultural applications.
Discovery
The first described occurrence of polar overdominance, in sheep, was shown after finding that a mutant allele, called callipyge (after Venus Callipyge), must be inherited from the father to cause a condition called muscle hypertrophy. Muscle hypertrophy in the offspring is caused by an increase in the size and proportion of muscle fibers, namely the fast-twitch muscle fibers. This increase is generally located in the hindquarters and torso. Muscle hypertrophy manifests itself in the offspring only approximately one month after birth. Polar overdominance shows evidence of an imprinted locus, displayed as a difference between the expression of heterozygote phenotypes in a parent-of-origin fashion. It was discovered that a single-nucleotide polymorphism in the DLK1–DIO3 imprinted gene cluster affects the expression of paternal allele-specific genes and several maternal allele-specific long non-coding RNAs and microRNAs. Ectopic expression of the Delta-like 1 homologue (DLK1) and Retrotransposon-like 1 (RTL1/PEG11) genes, which are paternally expressed proteins in skeletal muscle, is a hallmark of the
|
https://en.wikipedia.org/wiki/Formation%20and%20evolution%20of%20the%20Solar%20System
|
The formation of the Solar System began about 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
This model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace. Its subsequent development has interwoven a variety of scientific disciplines including astronomy, chemistry, geology, physics, and planetary science. Since the dawn of the Space Age in the 1950s and the discovery of exoplanets in the 1990s, the model has been both challenged and refined to account for new observations.
The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are thought to have formed independently and later to have been captured by their planets. Still others, such as Earth's Moon, may be the result of giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the Solar System. Beyond Neptune, many sub-planet sized objects formed. Several thousand trans-Neptunian objects have been observed. Unlike the planets, these trans-Neptunian objects mostly move on eccentric orbits, inclined to the plane of the planets. The positions of the planets might have shifted due to gravitational interactions. Planetary migration may have been responsible for much of the Solar System's early evolution.
In roughly 5 billion years, the Sun will cool and expand outward to many times its current diameter (becoming a red giant), before casting off its outer layers as a planetary nebula and leaving behind a stellar remnant known as a white dwarf. In the distant future, the gravity of p
|
https://en.wikipedia.org/wiki/Slip%20ratio
|
Slip ratio is a means of calculating and expressing the slipping behavior of the wheel of an automobile. It is of fundamental importance in the field of vehicle dynamics, as it makes it possible to understand the relationship between the deformation of the tire and the longitudinal forces (i.e. the forces responsible for forward acceleration and braking) acting upon it. Furthermore, it is essential to the effectiveness of any anti-lock braking system.
When accelerating or braking a vehicle equipped with tires, the observed angular velocity of the tire does not match the velocity expected for pure rolling motion: there appears to be sliding between the outer surface of the rim and the road, in addition to rolling, due to deformation of the part of the tire above the area in contact with the road. When driving on dry pavement, the fraction of slip caused by actual sliding between the road and the tire contact patch is negligible in magnitude, so in practice it does not make slip ratio dependent on speed. Actual sliding is only significant on soft or slippery surfaces, such as snow, mud, or ice; there it produces a constant speed difference under the same road and load conditions regardless of speed, so the fraction of the slip ratio due to this cause is inversely related to the speed of the vehicle.
The difference between the theoretical forward speed, calculated from the angular speed of the rim and the rolling radius, and the actual speed of the vehicle, expressed as a percentage of the latter, is called the slip ratio. This slippage is caused by the forces at the contact patch of the tire, not the other way around, and is thus of fundamental importance in determining the accelerations a vehicle can produce.
There is no universally agreed upon definition of slip ratio. The SAE J670 definition, for tires pointing straight ahead, is

slip ratio = (Ω R_e / V) − 1

where Ω is the angular velocity of the wheel, R_e is the effective radius of the corresponding free-rolling tire (which can be calculated from the revolutions per kilometer), and V is the forward velocity of the vehicle.
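A minimal Python sketch of the SAE J670 formula above (the function and variable names are illustrative, not from any standard library):

def slip_ratio(omega, effective_radius, forward_speed):
    """SAE J670-style slip ratio for a tire pointing straight ahead.
    omega: wheel angular velocity (rad/s)
    effective_radius: free-rolling effective radius (m)
    forward_speed: vehicle forward speed (m/s), must be non-zero
    """
    return (omega * effective_radius) / forward_speed - 1.0

# A free-rolling wheel gives 0; a locked wheel under braking gives -1.
print(slip_ratio(50.0, 0.3, 15.0))  # 0.0  (pure rolling: 50 * 0.3 = 15)
print(slip_ratio(0.0, 0.3, 15.0))   # -1.0 (locked wheel, full braking slip)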
|
https://en.wikipedia.org/wiki/Nexus%20file
|
The extensible NEXUS file format is widely used in bioinformatics. It stores information about taxa, morphological and molecular characters, distances, genetic codes, assumptions, sets, trees, etc. Several popular phylogenetic programs such as PAUP*, MrBayes, Mesquite, MacClade and SplitsTree use this format.
Syntax
A NEXUS file is made up of a fixed header #NEXUS followed by multiple blocks. Each block starts with BEGIN block_name; and ends with END;. The keywords are case-insensitive. Comments are enclosed in square brackets [ ].
There are a few pre-defined block names for common types of data. Examples include:
TAXA block The TAXA block contains information about taxa.
DATA block The DATA block contains the data matrix (e.g. sequence alignment).
TREES block The TREES block contains phylogenetic trees described using the Newick format, e.g. ((A,B),C);
The following example uses the three block types above:
#NEXUS
Begin TAXA;
Dimensions ntax=4;
TaxLabels SpaceDog SpaceCat SpaceOrc SpaceElf;
End;
Begin data;
Dimensions nchar=15;
Format datatype=dna missing=? gap=- matchchar=.;
Matrix
[ When a position is a "matchchar", it means that it is the same as the first entry at the same position. ]
SpaceDog   atgctagctagctcg
SpaceCat   ......??...-.a.
SpaceOrc   ...t.......-.g. [ same as atgttagctag-tgg ]
SpaceElf   ...t.......-.a.
;
End;
BEGIN TREES;
Tree tree1 = (((SpaceDog,SpaceCat),SpaceOrc,SpaceElf));
END;
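To illustrate the syntax rules above, here is a small Python sketch that splits a NEXUS file into named blocks. It assumes non-nested comments and unquoted tokens; it is only a sketch, not a replacement for the full parsers in the programs listed above.

import re

def nexus_blocks(text):
    # Strip bracketed comments (this simple pattern does not handle nesting).
    text = re.sub(r'\[[^\]]*\]', '', text)
    if not text.lstrip().upper().startswith('#NEXUS'):
        raise ValueError('missing #NEXUS header')
    # Keywords are case-insensitive: match "BEGIN name; ... END;".
    pattern = re.compile(r'BEGIN\s+(\w+)\s*;(.*?)\bEND\s*;', re.IGNORECASE | re.DOTALL)
    return {m.group(1).upper(): m.group(2).strip() for m in pattern.finditer(text)}

# blocks = nexus_blocks(open('example.nex').read())
# blocks.keys()  ->  dict_keys(['TAXA', 'DATA', 'TREES'])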
See also
Newick format
NeXML format
phyloXML
PAUP*
|
https://en.wikipedia.org/wiki/Incidents%20at%20SeaWorld%20parks
|
This is a summary of notable incidents that have taken place at various SeaWorld Parks & Entertainment-owned amusement parks, water parks or theme parks. This list is not intended to be a comprehensive list of every such event, but only those that have a significant impact on the parks or park operations, or are otherwise significantly newsworthy.
The term incidents refers to major accidents, injuries, or deaths that occur at a SeaWorld Parks facility. While these incidents were required to be reported to regulatory authorities due to where they occurred, they usually fall into one of the following categories:
Caused by negligence on the part of a guest. This can be refusal to follow specific ride safety instructions, or deliberate intent to violate park rules.
The result of a guest's known, or unknown, health issues.
Negligence on the part of the park, either by ride operator or maintenance.
Act of God or a generic accident (e.g. slipping and falling) that is not a direct result of an action on anybody's part.
Adventure Island Tampa Bay
On September 10, 2011, a 21-year-old lifeguard was killed after being struck by lightning while clearing guests from the Key West Rapids ride tower due to inclement weather. No injuries to the guests were reported. The park subsequently installed a system to warn of incoming weather.
Aquatica
Orlando, Florida location
On October 4, 2010, a 68-year-old man from Manchester, England was found unresponsive on Roa's Rapids. He was taken to Dr. Phillips Hospital, where he was pronounced dead on arrival. Preliminary findings indicated he died of natural causes.
On July 15, 2017, a 58-year-old man from Savannah, Georgia was also found unresponsive on Roa's Rapids. He died the next day. It was later revealed he had a history of health problems.
San Antonio, Texas location
On July 1, 2018, a woman was found unresponsive after riding the Wahalla Wave water slide. She was given CPR by lifeguards before being taken to a nearby Christus Sa
|
https://en.wikipedia.org/wiki/Polycomb-group%20proteins
|
Polycomb-group proteins (PcG proteins) are a family of protein complexes first discovered in fruit flies that can remodel chromatin such that epigenetic silencing of genes takes place. Polycomb-group proteins are well known for silencing Hox genes through modulation of chromatin structure during embryonic development in fruit flies (Drosophila melanogaster). They derive their name from the fact that the first sign of a decrease in PcG function is often a homeotic transformation of posterior legs towards anterior legs, which have a characteristic comb-like set of bristles.
In insects
In Drosophila, the Trithorax-group (trxG) and Polycomb-group (PcG) proteins act antagonistically and interact with chromosomal elements termed Cellular Memory Modules (CMMs). Trithorax-group (trxG) proteins maintain the active state of gene expression, while the Polycomb-group (PcG) proteins counteract this activation with a repressive function that is stable over many cell generations and can only be overcome by germline differentiation processes. PcG silencing is carried out by at least three kinds of multiprotein complex: Polycomb Repressive Complex 1 (PRC1), PRC2 and PhoRC. These complexes work together to carry out their repressive effect. PcG proteins are evolutionarily conserved and exist in at least two separate protein complexes: the PcG repressive complex 1 (PRC1) and the PcG repressive complex 2–4 (PRC2/3/4). PRC2 catalyzes trimethylation of lysine 27 on histone H3 (H3K27me2/3), while PRC1 mono-ubiquitinates histone H2A on lysine 119 (H2AK119Ub1).
In mammals
In mammals, Polycomb-group gene expression is important in many aspects of development, such as homeotic gene regulation, X chromosome inactivation (PcG complexes are recruited to the inactive X by Xist RNA, the master regulator of XCI), and embryonic stem cell self-renewal. The Bmi1 polycomb ring finger protein promotes neural stem cell self-renewal. Murine null mutants in PRC2 genes are embryonic lethals while m
|
https://en.wikipedia.org/wiki/Temporal%20branches%20of%20the%20facial%20nerve
|
The temporal branches of the facial nerve (frontal branch of the facial nerve) cross the zygomatic arch to the temporal region, supplying the auriculares anterior and superior, and joining with the zygomaticotemporal branch of the maxillary nerve and with the auriculotemporal branch of the mandibular nerve.
The more anterior branches supply the frontalis, the orbicularis oculi, and corrugator supercilii, and join the supraorbital and lacrimal branches of the ophthalmic. The temporal branch acts as the efferent limb of the corneal reflex.
Testing the temporal branches of the facial nerve
To test the function of the temporal branches of the facial nerve, a patient is asked to frown and wrinkle their forehead.
Additional images
External links
- "Branches of Facial Nerve (CN VII)"
http://www.dartmouth.edu/~humananatomy/figures/chapter_47/47-5.HTM
|
https://en.wikipedia.org/wiki/Zygomatic%20branches%20of%20the%20facial%20nerve
|
The zygomatic branches of the facial nerve (malar branches) are nerves of the face. They run across the zygomatic bone to the lateral angle of the orbit. Here, they supply the orbicularis oculi muscle, and join with filaments from the lacrimal nerve and the zygomaticofacial branch of the maxillary nerve (CN V2).
Structure
The zygomatic branches of the facial nerve are branches of the facial nerve (CN VII). They run across the zygomatic bone to the lateral angle of the orbit. This is deep to zygomaticus major muscle. They send fibres to orbicularis oculi muscle.
Connections
The zygomatic branches of the facial nerve have many nerve connections. Along their course, there may be connections with the buccal branches of the facial nerve. They join with filaments from the lacrimal nerve and the zygomaticofacial nerve from the maxillary nerve (CN V2). They also join with the inferior palpebral nerve and the superior labial nerve, both from the infraorbital nerve.
Function
The zygomatic branches of the facial nerve supply part of the orbicularis oculi muscle. This is used to close the eyelid.
Clinical significance
Testing
To test the zygomatic branches of the facial nerve, a patient is asked to close their eyes tightly. This uses orbicularis oculi muscle. The zygomatic branches of the facial nerve may be recorded and stimulated with an electrode.
Surgical damage
Rarely, the zygomatic branches of the facial nerve may be damaged during surgery on the temporomandibular joint (TMJ).
Additional images
See also
Zygomatic nerve
Zygomaticus major muscle
Zygomaticus minor muscle
|
https://en.wikipedia.org/wiki/Cervical%20branch%20of%20the%20facial%20nerve
|
The cervical branch of the facial nerve is a nerve in the neck. It is a branch of the facial nerve (VII). It supplies the platysma muscle, among other functions.
Structure
The cervical branch of the facial nerve is a branch of the facial nerve (VII). It runs forward beneath the platysma muscle, and forms a series of arches across the side of the neck over the suprahyoid region. One branch descends to join the cervical cutaneous nerve from the cervical plexus.
Function
The lateral part of the cervical branch of the facial nerve supplies the platysma muscle.
Additional images
|
https://en.wikipedia.org/wiki/Solaris%20Trusted%20Extensions
|
Solaris Trusted Extensions is a set of security extensions incorporated in the Solaris 10 operating system by Sun Microsystems, featuring a mandatory access control model. It succeeds Trusted Solaris, a family of security-evaluated operating systems based on earlier versions of Solaris.
Solaris 10 5/09 is Common Criteria certified at Evaluation Assurance Level EAL4+ against the CAPP, RBACPP, and LSPP protection profiles.
Overview
Certain Trusted Solaris features, such as fine-grained privileges, are now part of the standard Solaris 10 release. Beginning with Solaris 10 11/06, Solaris now includes a component called Solaris Trusted Extensions which gives it the additional features necessary to position it as the successor to Trusted Solaris. Inclusion of these features in the mainstream Solaris release marks a significant change from Trusted Solaris, as it is no longer necessary to use a different Solaris release with a modified kernel for labeled security environments. Solaris Trusted Extensions is an OpenSolaris project.
Trusted Extensions additions and enhancements include:
Accounting
Role-Based Access Control
Auditing
Device Allocation
Mandatory Access Control Labeling
Solaris Trusted Extensions enforce a mandatory access control policy on all aspects of the operating system, including device access, file, networking, print and window management services. This is achieved by adding sensitivity labels to objects, thereby establishing explicit relationships between these objects. Only appropriate (and explicit) authorization allows applications and users read and/or write access to the objects.
The component also provides labeled security features in a desktop environment. Apart from extending support for the Common Desktop Environment from the Trusted Solaris 8 release, it delivers the first labeled environment based on GNOME. Solaris Trusted Extensions facilitate the access of data at multiple classification levels through a single desktop environment.
Sol
|
https://en.wikipedia.org/wiki/Pyramidal%20eminence
|
The pyramidal eminence is a hollow conical projection upon the posterior wall of the tympanic cavity of the middle ear. The stapedius muscle arises in the hollow of the eminence and its tendon exits through its apex.
The pyramidal eminence is situated inferior to the aditus to mastoid antrum, immediately inferior to the oval window (fenestra vestibuli), and anterior to the vertical portion of the facial canal. The apex of the eminence is directed anteriorly toward the oval window.
The cavity in the pyramidal eminence is prolonged inferoposteriorly anterior to the facial canal, with which it communicates by a minute aperture which transmits the nerve to the stapedius from the facial nerve (CN VII).
|
https://en.wikipedia.org/wiki/Nerve%20to%20the%20stapedius
|
The nerve to the stapedius is a branch of the facial nerve (CN VII) which innervates the stapedius muscle. It arises from the CN VII within the facial canal, opposite the pyramidal eminence. It passes through a small canal in this eminence to reach the stapedius muscle.
|
https://en.wikipedia.org/wiki/Limited-memory%20BFGS
|
Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function.
Like the original BFGS, L-BFGS uses an estimate of the inverse Hessian matrix to steer its search through variable space, but where BFGS stores a dense n × n approximation to the inverse Hessian (n being the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse Hessian H_k, L-BFGS maintains a history of the past m updates of the position x and gradient ∇f(x), where generally the history size m can be small (often m < 10). These updates are used to implicitly do operations requiring the H_k-vector product.
Algorithm
The algorithm starts with an initial estimate of the optimal value, x_0, and proceeds iteratively to refine that estimate with a sequence of better estimates x_1, x_2, …. The derivatives of the function are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) of f(x).
L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplication d_k = −H_k g_k is carried out, where d_k is the approximate Newton direction, g_k is the current gradient, and H_k is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion."
We take as given x_k, the position at the k-th iteration
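A minimal Python sketch of the two-loop recursion follows. The names s_list and y_list, holding the stored differences s_i = x_{i+1} − x_i and y_i = ∇f(x_{i+1}) − ∇f(x_i) in oldest-first order, are our own; this is an illustration, not a reference implementation.

import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Return r ≈ H_k · grad using the stored update pairs; the
    quasi-Newton search direction is then d_k = -r."""
    q = np.array(grad, dtype=float)
    rhos = [1.0 / float(np.dot(y, s)) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: walk the history from the newest update to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y
    # Scale by gamma = s^T y / y^T y as the initial inverse Hessian H_k^0.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0  # no history yet: plain steepest descent
    r = gamma * q
    # Second loop: walk the history from the oldest update to the newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r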
|
https://en.wikipedia.org/wiki/Zoological%20Survey%20of%20India
|
The Zoological Survey of India (ZSI), founded on 1 July 1916 by the Ministry of Environment, Forest and Climate Change of the Government of India, is a premier Indian organisation in zoological research and studies, promoting the survey, exploration and research of the fauna of the country.
History
The annals of Zoological Survey of India (ZSI) reflect an eventful beginning for the Survey even before its formal birth and growth. The history of ZSI begins from the days of the Asiatic Society of Bengal founded by Sir William Jones on 15 January 1784. The Asiatic Society of Bengal was the mother institution not only to the Indian Museum (1875) but also to the institutions like the Zoological Survey of India and the Geological Survey of India. ZSI's establishment was in fact a fulfillment of the dream of Sir William Jones, the founder of the Asiatic Society of Bengal, whose vision encompassed the entire range of human knowledge. The Asiatic Society had started collecting zoological and geological specimens since 1796 and set up a museum in 1814. Nathaniel Wallich, the first Superintendent of the "Museum of the Asiatic Society", was in charge of the increasing collections of Geological and Zoological specimens; he had augmented the animal collections to the Zoological Galleries of the Museum.
The genesis of the ZSI was in 1875 with the opening of the Indian Museum. At its inception, the new museum comprised only three sections: the Zoological, the Archaeological and the Geological. The zoological collections of the Asiatic Society of Bengal were formally handed over to the board of trustees of the Indian Museum in 1875.
The Zoological Section of the Museum steadily expanded during the period from 1875 to 1916, growing into the greatest collection of natural history in Asia. By the care and activity of the Curators of the Asiatic Society of Bengal and the Superintendents of the Indian Museum, viz., John McClelland, Edward Blyth, John Anderson, James Wood-Mason, Alfred William
|
https://en.wikipedia.org/wiki/Direct-coupled%20amplifier
|
A direct-coupled amplifier or DC amplifier is a type of amplifier in which the output of one stage of the amplifier is coupled to the input of the next stage in such a way as to permit signals with zero frequency, also referred to as direct current, to pass from input to output. This is an application of the more general direct coupling. It was invented by Harold J. Paz and Francis P. Keiper Jr. in 1955. It displaced the triode vacuum tube amplifier designed by Lee de Forest, and almost all vacuum tube circuit designs have since been replaced with direct-coupled transistor circuit designs. It was the first transistor amplifier design that did not include coupling capacitors. The direct-coupled amplifier allowed analog circuits to be built smaller, by eliminating coupling capacitors, and removed the lower frequency limitation that such capacitors impose.
History
Paz first started his career at Bell Labs as an intern from December 1950 to April 1952, working as an engineering aid. Paz worked on testing several transistor parameters, such as rise time, RC time constant, and alpha coefficient, to determine their effects on a transistor circuit design. He then went on to work at RCA as a summer student engineering intern from June 1953 to September 1953. Paz was assigned to determine the effects of several variables on a transistor's noise factor at various radio frequencies. It was as a result of this research that Paz designed the first transistor-based wireless microphone, called Phantom. RCA took interest in Paz's design and made their subsidiary, the National Broadcasting Company, aware of the new microphone. RCA filed patent US2,810,110 for the microphone on July 16, 1954; it was granted on October 15, 1957. The design was used for the ND-433 wireless microphone that NBC used in 1955.
It was in June 1954 that Paz took an engineering position at Philco and was assigned to the Transistor Product Engineering Group to study the theory of operation of the direct-coupled swit
|
https://en.wikipedia.org/wiki/Multistage%20amplifier
|
A multistage amplifier is an electronic amplifier consisting of two or more single-stage amplifiers connected together. In this context, a single stage is an amplifier containing only a single transistor (sometimes a pair of transistors) or other active device. The most common reason for using multiple stages is to increase the gain of the amplifier in applications where the input signal is very small, for instance in radio receivers. In these applications a single stage has insufficient gain by itself. In some designs it is possible to obtain more desirable values of other parameters such as input resistance and output resistance.
Connection schemes
The simplest, and most common, connection scheme is a cascade connection of identical, or similar, stages forming a cascade amplifier. In a cascade connection, the output port of one stage is connected to the input port of the next. Typically, the individual stages are bipolar junction transistors (BJTs) in a common emitter configuration or field-effect transistors (FETs) in a common source configuration. There are some applications where the common base configuration is preferred. Common base has high voltage gain but no current gain. It is used in UHF television and radio receivers because its low input resistance is easier to match to antennas than common emitter. In amplifiers that have a differential input and are required to output a differential signal the stages must be differential amplifiers such as long-tailed pairs. These stages contain two transistors to deal with the differential signalling.
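As a simple illustration of why cascading raises gain: if inter-stage loading is ignored, the overall gain is the product of the individual stage gains (a sum, when expressed in decibels). A minimal Python sketch; the gain figures are illustrative, not from the text.

import math

def cascade_gain(stage_gains):
    """Overall voltage gain of idealized cascaded stages (no inter-stage loading)."""
    total = 1.0
    for g in stage_gains:
        total *= g
    return total

gains = [20.0, 20.0, 20.0]            # e.g. three stages of gain 20 each
total = cascade_gain(gains)           # 8000 V/V overall
print(total, 20 * math.log10(total))  # 8000.0  ~78 dB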
More complex schemes can be used with different stages having different configurations to create an amplifier whose characteristics exceed those of a single-stage for several different parameters, such as gain, input resistance and output resistance. The final stage can be a common collector configuration to act as a buffer amplifier. Common collector stages have no voltage gain but high current gain a
|
https://en.wikipedia.org/wiki/Levon%20Kemalyan
|
Levon John Kemalyan (24 February 1907, Fresno, California – 2 November 1976, Fresno, California) was a model railroading entrepreneur. He founded Kemtron Corporation, a manufacturer of model railway cars, locomotives, parts (especially for scratchbuilders), and accessories. In 1960 it was the world's largest maker of scale railroad kits, producing one million parts a year and selling them worldwide to enthusiasts as far away as India and Australia.
Companies owned
Kemalyan had ownership stakes in various companies: Fresno Photo-Engraving Company, U.S. Hobbies, Inc., and Kemtron Corporation.
Fresno Photo-Engraving Company was founded in 1903 by A. F. Kemalyan. Levon purchased the company with his brother-in-law in 1929. In 1935, he took over the photo-engraving company himself. Levon sold the firm December 26, 1962, to his brothers-in-law, Thomas N. Vartanian and Jerry Mootafian.
Kemtron Corporation was founded and owned by Kemalyan. Kemalyan started Kemtron in Fresno in the early '50s (possibly even 1948 or 1949), and he provided layout space for the Fresno Model Railroad Club in the early '50s. In 1960, the Kemtron plant had 15 employees. One of Kemtron's product lines, photo-engraved car kits, particularly the flats, often used zinc (or a very high zinc content brass) sheet, as opposed to brass. The "blue" coating was the photo resist that was not cleaned off. Kemtron was initially a sideline of Fresno Photo-Engraving, which explains why common photo-engraving materials were often used. In the mid-1960s Kemtron also produced a line of slot cars and accessories.
Lawrence S. Kazoyan (b. November 7, 1931 – d. April 15, 2000, Palm Beach), a retired aerospace engineer, acquired Kemtron in 1970, and moved it from Fresno to Los Angeles. T. Fredrick Hill and Wayne Lyndon, owners of The Original Whistle Stop Inc., acquired Kemtron in 1978 and moved it to Sacramento. The Precision Scale Company, Inc. acquired Kemtron as a merger in 1986. Former Kemtron employe
|
https://en.wikipedia.org/wiki/Critical%20speed
|
In solid mechanics, in the field of rotordynamics, the critical speed is the theoretical angular velocity that excites the natural frequency of a rotating object, such as a shaft, propeller, leadscrew, or gear. As the speed of rotation approaches the object's natural frequency, the object begins to resonate, which dramatically increases system vibration. The resulting resonance occurs regardless of orientation. When the rotational speed equals the natural frequency of vibration, that speed is referred to as the critical speed.
Critical speed of shafts
All rotating shafts, even in the absence of external load, will deflect during rotation. The unbalanced mass of the rotating object causes deflection that will create resonant vibration at certain speeds, known as the critical speeds. The magnitude of deflection depends upon the following:
Stiffness of the shaft and its support
Total mass of shaft and attached parts
Unbalance of the mass with respect to the axis of rotation
The amount of damping in the system
In general, it is necessary to calculate the critical speed of a rotating shaft, such as a fan shaft, in order to avoid issues with noise and vibration.
Critical speed equation
Like vibrating strings and other elastic structures, shafts and beams can vibrate in different mode shapes, with corresponding natural frequencies. The first vibrational mode corresponds to the lowest natural frequency. Higher modes of vibration correspond to higher natural frequencies. Often when considering rotating shafts, only the first natural frequency is needed.
There are two main methods used to calculate critical speed—the Rayleigh–Ritz method and Dunkerley's method. Both calculate an approximation of the first natural frequency of vibration, which is assumed to be nearly equal to the critical speed of rotation. The Rayleigh–Ritz method is discussed here. For a shaft that is divided into n segments, the first natural frequency for a given beam, in rad/s, can b
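The formula itself is cut off above; a common statement of the Rayleigh energy approximation, for a shaft carrying point masses m_i with static deflections δ_i at those masses, is ω_1 ≈ √(g Σ m_i δ_i / Σ m_i δ_i²). A minimal Python sketch under that assumption (names are illustrative, and this may differ in detail from the article's Rayleigh–Ritz presentation):

import math

def rayleigh_critical_speed(masses, deflections, g=9.81):
    """First critical speed (rad/s) from Rayleigh's energy method.
    masses: point masses on the shaft (kg)
    deflections: static deflections at those masses (m)"""
    num = g * sum(m * d for m, d in zip(masses, deflections))
    den = sum(m * d * d for m, d in zip(masses, deflections))
    return math.sqrt(num / den)

# Single 10 kg rotor deflecting 1 mm statically:
w1 = rayleigh_critical_speed([10.0], [0.001])
print(w1, w1 * 60 / (2 * math.pi))  # ~99 rad/s, ~946 rpm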
|
https://en.wikipedia.org/wiki/Tunnell%27s%20theorem
|
In number theory, Tunnell's theorem gives a partial resolution to the congruent number problem, and under the Birch and Swinnerton-Dyer conjecture, a full resolution.
Congruent number problem
The congruent number problem asks which positive integers can be the area of a right triangle with all three sides rational. Tunnell's theorem relates this to the number of integral solutions of a few fairly simple Diophantine equations.
Theorem
For a given square-free integer n, define

A_n = #{(x, y, z) ∈ Z^3 : n = 2x^2 + y^2 + 32z^2}
B_n = #{(x, y, z) ∈ Z^3 : n = 2x^2 + y^2 + 8z^2}
C_n = #{(x, y, z) ∈ Z^3 : n = 8x^2 + 2y^2 + 64z^2}
D_n = #{(x, y, z) ∈ Z^3 : n = 8x^2 + 2y^2 + 16z^2}
Tunnell's theorem states that supposing n is a congruent number, if n is odd then 2A_n = B_n and if n is even then 2C_n = D_n. Conversely, if the Birch and Swinnerton-Dyer conjecture holds true for elliptic curves of the form y^2 = x^3 − n^2 x, these equalities are sufficient to conclude that n is a congruent number.
History
The theorem is named for Jerrold B. Tunnell, a number theorist at Rutgers University, who proved it in 1983.
Importance
The importance of Tunnell's theorem is that the criterion it gives is testable by a finite calculation. For instance, for a given n, the numbers A_n, B_n, C_n, D_n can be calculated by exhaustively searching through x, y, z in the range −√n to √n.
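A brute-force Python sketch of this finite calculation (the function names are ours):

from math import isqrt

def count_solutions(n, a, b, c):
    """Number of integer triples (x, y, z) with n = a*x^2 + b*y^2 + c*z^2."""
    r = isqrt(n)
    return sum(1
               for x in range(-r, r + 1)
               for y in range(-r, r + 1)
               for z in range(-r, r + 1)
               if a*x*x + b*y*y + c*z*z == n)

def tunnell_criterion(n):
    """Necessary condition for square-free n to be congruent
    (also sufficient if the Birch and Swinnerton-Dyer conjecture holds)."""
    if n % 2:
        return 2 * count_solutions(n, 2, 1, 32) == count_solutions(n, 2, 1, 8)
    return 2 * count_solutions(n, 8, 2, 64) == count_solutions(n, 8, 2, 16)

print(tunnell_criterion(5), tunnell_criterion(6))  # True True: 5 and 6 are congruent
print(tunnell_criterion(1), tunnell_criterion(2))  # False False: 1 and 2 are not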
See also
Birch and Swinnerton-Dyer conjecture
Congruent number
|
https://en.wikipedia.org/wiki/Radium%20dial
|
Radium dials are watch, clock and other instrument dials painted with luminous paint containing radium-226 to produce radioluminescence. Radium dials were produced throughout most of the 20th century before being replaced by safer tritium-based luminous material in the 1970s and finally by non-toxic, non-radioactive strontium aluminate–based photoluminescent material from the middle 1990s.
History
Radium was discovered by Marie and Pierre Curie in 1898 and was soon combined with paint to make luminescent paint, which was applied to clocks, airplane instruments, and the like so that they could be read in the dark.
In 1914, Dr. Sabin Arnold von Sochocky and Dr. George S. Willis founded the Radium Luminous Material Corporation. The company made luminescent paint. The company later changed its name to the United States Radium Corporation. The use of radium to provide luminescence for hands and indices on watches soon followed.
The Ingersoll Watch division of the Waterbury Clock Company, a nationally-known maker of low-cost pocket and wristwatches, was a leading popularizer of the use of radium for watch hands and indices through the introduction of their "Radiolite" watches in 1916. The Radiolite series, made in various sizes and models, became a signature of the Connecticut-based company.
Radium dials were typically painted by young women, who used to "point" their brushes by licking and shaping the bristles prior to painting the fine lines and numbers on the dials. This practice resulted in the ingestion of radium, which caused serious jawbone degeneration, malignancy, and other dental diseases. The disease, radium-induced osteonecrosis, was recognized as an occupational disease in 1925 after a group of radium painters, known as the Radium Girls, from the United States Radium Corporation sued. By 1930, all dial painters had stopped pointing their brushes by mouth. Stopping this practice drastically reduced the amount of radium ingested and, therefore, the incidence o
|
https://en.wikipedia.org/wiki/Magnetospheric%20eternally%20collapsing%20object
|
The magnetospheric eternally collapsing object (MECO) is an alternative model for black holes initially proposed by Indian scientist Abhas Mitra in 1998 and later generalized by American researchers Darryl J. Leiter and Stanley L. Robertson. A proposed observable difference between MECOs and black holes is that a MECO can produce its own intrinsic magnetic field. An uncharged black hole cannot produce its own magnetic field, though its accretion disc can.
Theoretical model
In the theoretical model a MECO begins to form in much the same way as a black hole, with a large amount of matter collapsing inward toward a single point. However, as it becomes smaller and denser, a MECO does not form an event horizon.
As the matter becomes denser and hotter, it glows more brightly. Eventually its interior approaches the Eddington limit. At this point the internal radiation pressure is sufficient to slow the inward collapse almost to a standstill.
In fact, the collapse gets slower and slower, so a singularity could only form in an infinite future. Unlike a black hole, the MECO never fully collapses. Rather, according to the model it slows down and enters an eternal collapse.
Mitra provides a review of the evolution of black hole alternatives including his model of eternal collapse and MECOs.
Eternal collapse
Mitra's paper claiming non-occurrence of event horizons and exact black holes later appeared in Pramana - Journal of Physics. In this paper, Mitra proposes that so-called black holes are eternally collapsing while Schwarzschild black holes have a gravitational mass M = 0. He argued that all proposed black holes are instead quasi-black holes rather than exact black holes and that during the gravitational collapse to a black hole, the entire mass energy and angular momentum of the collapsing objects is radiated away before formation of exact mathematical black holes. Mitra proposes that in his formulation since a mathematical zero-mass black hole requires infinite pro
|
https://en.wikipedia.org/wiki/KBRO-LD
|
KBRO-LD, virtual channel 34 (VHF digital channel 2), is a low-power television station serving Fort Collins, Colorado that is licensed to Lyons. The station is owned by Echonet Corporation, a company majority-owned by Dish Network chairman Charlie Ergen.
History
In the 1980s, the then-K49AY carried a combination of Home Shopping Network and Spanish-language programming, serving Cheyenne, Wyoming. In 2000, it rebroadcast ABC affiliate KMGH (channel 7) in Denver.
K49AY was shut down on October 22, 2002, after the FCC fined the station $10,000 for operating with an improper license following its renewal. The fine was lowered to $4,000 after the station convinced the FCC that a clerical error was responsible.
K49AY reemerged in 2005, carrying Almavision programming.
The station was licensed for digital operation on June 24, 2015, changing the call sign to K16LE-D. On July 17, 2015, the call letters were changed to KBRO-LD. Echonet Corporation had earlier held a construction permit for a digital low power station (K18II-D) in Cheyenne that would have broadcast on UHF channel 18.
Effective April 10, 2019, KBRO-LD was licensed to move its community of license from Cheyenne to Fort Collins.
|
https://en.wikipedia.org/wiki/Neural%20coding
|
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the individual or ensemble neuronal responses, and the relationship among the electrical activity of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is thought that neurons can encode both digital and analog information.
Overview
Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in the presence of external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials in that information about the strength of a stimulus directly correlates with the strength of the neuron's output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates, capable of encoding more states (i.e. higher fidelity) than spiking neurons.
Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time. The lengths of interspike intervals (ISIs) between two successive spikes in a spi
|
https://en.wikipedia.org/wiki/Alkaline%20lysis
|
Alkaline lysis or alkaline extraction is a method used in molecular biology to isolate plasmid DNA from bacteria.
Method
Bacteria containing the plasmid of interest are first cultured, then a sample is centrifuged in order to concentrate cellular material (including DNA) into a pellet at the bottom of the containing vessel. The supernatant is discarded, and the pellet is then re-suspended in an EDTA-containing physiological buffer. The purpose of the EDTA is to chelate divalent metal cations such as Mg2+ and Ca2+, which are required for the function of DNA-degrading enzymes (DNases) and which also stabilise the DNA phosphate backbone and the cell wall. Glucose in the buffer maintains the osmotic pressure of the cell in order to prevent the cell from bursting, Tris in the buffer maintains the pH at 8.0, and RNase removes RNA that would otherwise interfere with subsequent steps.
Separately, a strong alkaline solution consisting of the detergent sodium dodecyl sulfate (SDS) and a strong base such as sodium hydroxide (NaOH) is prepared and then added. The resulting mixture is incubated for a few minutes. During this time, the detergent disrupts cell membranes and allows the alkali to contact and denature both chromosomal and plasmid DNA.
Once SDS has torn apart the cell membrane, the cell contents partially neutralize the NaOH, which is why the pH of the lysate drops from 12.8 to 12.3. If there are not enough bacterial cells, the excess NaOH will generate small DNA fragments. The 0.1 M sodium hydroxide can instead be replaced with 0.5 M L-arginine, which provides a more stable pH.
Finally, potassium acetate is added. This acidifies the solution and allows the renaturing of plasmid DNA, but not chromosomal DNA, which is precipitated out of solution. Another function of the potassium is to cause the precipitation of sodium dodecyl sulfate and thus removal of the detergent. A final centrifugation is carried out, and this time the pellet contains only debr
|
https://en.wikipedia.org/wiki/Diebold%2010xx
|
The Diebold 10xx (or Modular Delivery System, MDS) series is a third and fourth generation family of automated teller machines manufactured by Diebold.
History
Introduced in 1985 as a successor to the TABS 9000 series, the 10xx family of ATMs was re-styled to the "i Series" variant in 1991, the "ix Series" variant in 1994, and finally replaced by the Diebold Opteva series of ATMs in 2003.
The 10xx series of ATMs was also marketed under the InterBold brand, a joint venture between IBM and Diebold; the IBM machines were marketed as the IBM 478x series. Not all of the 10xx series of ATMs were offered by IBM.
Diebold stopped producing the 1000-series ATMs around 2008.
Listing of 10xx Series Models
Members of the 10xx Series included:
MDS Series - Used a De La Rue cash dispensing mechanism
1060 - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser
1062 - Multi-function, indoor lobby unit
1072 - Multi-function, exterior "through-the-wall" unit
i Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism
1060i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser
1061i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser
1062i - Multi-function, indoor lobby unit
1064i - Mono-function, indoor cash dispenser
1070i - Multi-function, exterior "through-the-wall" unit with a longer "top-hat throat"
1072i - Multi-function, exterior "through-the-wall" unit
1073i - Multi-function, exterior "through-the-wall" unit, modified for use while sitting in a car
1074i - Multi-function, exterior unit, designed as a stand-alone unit for use in a drive-up lane.
ix Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism
1062ix - Multi-function, indoor lobby unit
1063ix - Mono-function, indoor cash dispenser with a smaller screen than the 1064ix
1064ix - Mono-function, indoor cash dispenser
1070ix - Multi-function, exterior "through-the-wall" unit
1071
|
https://en.wikipedia.org/wiki/Curtido
|
Curtido () is a type of lightly fermented cabbage relish. It is typical in Salvadoran cuisine and that of other Central American countries, and is usually made with cabbage, onions, carrots, oregano, and sometimes lime juice; it resembles sauerkraut, kimchi, or tart coleslaw. It is commonly served alongside pupusas, the national delicacy.
Fellow Central American country Belize has a similar recipe called "curtido" by its Spanish speakers; however, it is a spicy, fermented relish made with onions, habaneros, and vinegar. It is used to top salbutes, garnaches, and other common dishes in Belizean cuisine.
See also
Encurtido – a pickled vegetable appetizer, side dish and condiment in the Mesoamerican region
|
https://en.wikipedia.org/wiki/Gazetteer%20of%20Australia
|
The Gazetteer of Australia is an index or dictionary of the location and spelling of geographical names across Australia. Geographic names include towns, suburbs and roads, plus geographical features such as hills, rivers, and lakes.
The index is compiled by the Intergovernmental Committee on Surveying and Mapping (ICSM) from determinations made by state, territory, and Australian government agencies.
The authorities that work on geographic names in Australia are as follows:
Australian Capital Territory - National Memorials Committee - National Memorials Ordinance 1928
New South Wales - Geographical Names Board of New South Wales - Geographical Names Act, 1966
Northern Territory - Place Names Committee for the Northern Territory - Place Names Act 1978
Queensland - Department of Natural Resources and Mines manages Queensland place names - Queensland Place Names Act 1988
South Australia - Geographical Names Board of South Australia - Act 101 1969
Tasmania - Nomenclature Board of Tasmania - Survey Co-ordination Act 1944 amendments of 1955 and 1964
Victoria - Place Names Committee - Survey Co-ordination (Place Names) Act 1965, updated to Geographic Place Names Act 1998
Western Australia - Geographic Names Committee - Land Administration Act 1997 (originally the Nomenclature Advisory Committee, appointed in 1936)
As of January 2012, the Gazetteer contained 370,000 Australian place names. These are searchable in an online database hosted by Geoscience Australia, and the entire data set can be downloaded from the Geoscience Australia website.
See also
Committee for Geographical Names in Australasia
Suburbs and localities (Australia)
|
https://en.wikipedia.org/wiki/Particle%20Data%20Group
|
The Particle Data Group (PDG) is an international collaboration of particle physicists that compiles and reanalyzes published results related to the properties of particles and fundamental interactions. It also publishes reviews of theoretical results that are phenomenologically relevant, including those in related fields such as cosmology. The PDG currently publishes the Review of Particle Physics and its pocket version, the Particle Physics Booklet, which are printed biennially as books, and updated annually via the World Wide Web.
In previous years, the PDG has published the Pocket Diary for Physicists, a calendar with the dates of key international conferences and contact information of major high energy physics institutions, which is now discontinued. PDG also further maintains the standard numbering scheme for particles in event generators, in association with the event generator authors.
Review of Particle Physics
The Review of Particle Physics (formerly Review of Particle Properties, Data on Particles and Resonant States, and Data on Elementary Particles and Resonant States) is a voluminous, 1,200+ page reference work which summarizes particle properties and reviews the current status of elementary particle physics, general relativity and big-bang cosmology. Usually singled out for citation analysis, it is currently the most cited article in high energy physics, being cited more than 2,000 times annually in the scientific literature.
The Review is currently divided into 3 sections:
Particle Physics Summary Tables—Brief tables of particles: gauge and Higgs bosons, leptons, quarks, mesons, baryons, and constraints on searches for hypothetical particles and for violations of physical laws.
Reviews, Tables and Plots—Review of fundamental concepts from mathematics and statistics, table of Clebsch-Gordan coefficients, periodic table of elements, table of electronic configuration of the elements, brief table of material properties, review of current status in th
|
https://en.wikipedia.org/wiki/Actinide%20concept
|
In nuclear chemistry, the actinide concept (also known as actinide hypothesis) proposed that the actinides form a second inner transition series homologous to the lanthanides. Its origins stem from observation of lanthanide-like properties in transuranic elements in contrast to the distinct complex chemistry of previously known actinides. Glenn Theodore Seaborg, one of the researchers who synthesized transuranic elements, proposed the actinide concept in 1944 as an explanation for observed deviations and a hypothesis to guide future experiments. It was accepted shortly thereafter, resulting in the placement of a new actinide series comprising elements 89 (actinium) to 103 (lawrencium) below the lanthanides in Dmitri Mendeleev's periodic table of the elements.
Origin
In the late 1930s, the first four actinides (actinium, thorium, protactinium, and uranium) were known. They were believed to form a fourth series of transition metals, characterized by the filling of 6d orbitals, in which thorium, protactinium, and uranium were respective homologs of hafnium, tantalum, and tungsten. This view was widely accepted as chemical investigations of these elements revealed various high oxidation states and characteristics that closely resembled the 5d transition metals. Nevertheless, research into quantum theory by Niels Bohr and subsequent publications proposed that these elements should constitute a 5f series analogous to the lanthanides, with calculations that the first 5f electron should appear in the range from atomic number 90 (thorium) to 99 (einsteinium). Inconsistencies between theoretical models and known chemical properties thus made it difficult to place these elements in the periodic table.
The first appearance of the actinide concept may have been in a 32-column periodic table constructed by Alfred Werner in 1905. Upon determining the arrangement of the lanthanides in the periodic table, he placed thorium as a heavier homolog of cerium, and left spaces for hypot
|
https://en.wikipedia.org/wiki/Softmax%20function
|
The softmax function, also known as softargmax or normalized exponential function, converts a vector of real numbers into a probability distribution of possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
Definition
The softmax function takes as input a vector of real numbers, and normalizes it into a probability distribution consisting of probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one, and might not sum to 1; but after applying softmax, each component will be in the interval (0, 1), and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities.
The standard (unit) softmax function σ : R^K → (0, 1)^K is defined, for a vector z = (z_1, …, z_K), by the formula

σ(z)_i = exp(z_i) / Σ_{j=1}^{K} exp(z_j),   for i = 1, …, K.
In words, it applies the standard exponential function to each element of the input vector and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vector is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of (1, 2, 8) is approximately (0.001, 0.002, 0.997), which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8).
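A minimal NumPy sketch reproducing this example (subtracting the maximum before exponentiating is a standard overflow guard and leaves the result unchanged):

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift by max(z) for numerical stability
    return e / e.sum()

print(np.round(softmax(np.array([1.0, 2.0, 8.0])), 3))
# [0.001 0.002 0.997] -- nearly all weight on the maximal element, 8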
In general, instead of e a different base b > 0 can be used. If 0 < b < 1, smaller input components will result in larger output probabilities, and decreasing the value of b will create probability distributions that are more concentrated around the positions of the smallest input values. Conversely, as ab
|
https://en.wikipedia.org/wiki/Scott%E2%80%93Potter%20set%20theory
|
An approach to the foundations of mathematics that is of relatively recent origin, Scott–Potter set theory is a collection of nested axiomatic set theories set out by the philosopher Michael Potter, building on earlier work by the mathematician Dana Scott and the philosopher George Boolos.
Potter (1990, 2004) clarified and simplified the approach of Scott (1974), and showed how the resulting axiomatic set theory can do what is expected of such theory, namely grounding the cardinal and ordinal numbers, Peano arithmetic and the other usual number systems, and the theory of relations.
ZU etc.
Preliminaries
This section and the next follow Part I of Potter (2004) closely. The background logic is first-order logic with identity. The ontology includes urelements as well as sets, which makes it clear that there can be sets of entities defined by first-order theories not based on sets. The urelements are not essential in that other mathematical structures can be defined as sets, and it is permissible for the set of urelements to be empty.
Some terminology peculiar to Potter's set theory:
ι is a definite description operator and binds a variable. (In Potter's notation the iota symbol is inverted.)
The predicate U holds for all urelements (non-collections).
ιxΦ(x) exists iff (∃!x)Φ(x). (Potter uses Φ and other upper-case Greek letters to represent formulas.)
{x : Φ(x)} is an abbreviation for ιy(not U(y) and (∀x)(x ∈ y ⇔ Φ(x))).
a is a collection if {x : x∈a} exists. (All sets are collections, but not all collections are sets.)
The accumulation of a, acc(a), is the set {x : x is an urelement or ∃b∈a (x∈b or x⊂b)}.
If ∀v∈V(v = acc(V∩v)) then V is a history.
A level is the accumulation of a history.
An initial level has no other levels as members.
A limit level is a level that is neither the initial level nor the level above any other level.
A set is a subcollection of some level.
The birthday of set a, denoted V(a), is the lowest level V such that a⊂V.
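As a worked illustration of these definitions (our example, not Potter's), suppose the set of urelements is empty. Then the initial level is V0 = acc(∅) = ∅. The collection {V0} is a history, since acc({V0} ∩ V0) = acc(∅) = V0, so its accumulation V1 = acc({V0}) = {x : x ∈ V0 or x ⊂ V0} = {∅} is a level. Similarly V2 = acc({V0, V1}) = {∅, {∅}}. These coincide with the first stages of the familiar cumulative hierarchy, and, for instance, the birthday of ∅ is V(∅) = V0, the lowest level of which ∅ is a subcollection.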
Axio
|
https://en.wikipedia.org/wiki/Cem%20Kaner
|
Cem Kaner is a professor of software engineering at Florida Institute of Technology, and the Director of Florida Tech's Center for Software Testing Education & Research (CSTER) since 2004. He is perhaps best known outside academia as an advocate of software usability and software testing.
Prior to his professorship, Kaner worked in the software industry beginning in 1983 in Silicon Valley "as a tester, programmer, tech writer, software development manager, product development director, and independent software development consultant." In 1988, he and his co-authors Jack Falk and Hung Quoc Nguyen published what became, at the time, "the best selling book on software testing," Testing Computer Software. He has also worked as a user interface designer.
In 2004 he cofounded the non-profit Association for Software Testing.
Education
Kaner received a Bachelor's Degree from Brock University in 1974, having focused on mathematics and philosophy. He went on to receive a Ph.D. in experimental psychology from McMaster University in 1984, with a dissertation in the area of psychophysics (the measurement of perceptual experiences). He later attended Golden Gate University Law School, with a primary interest in the law of software quality, graduating with a J.D. in 1994.
Consumer and Software Quality Advocacy
Kaner worked as a part-time volunteer for the Santa Clara, California Department of Consumer Affairs, investigating and mediating consumer complaints. In the 1990s, he got trial experience working as a full-time volunteer Deputy District Attorney, and later counselled independent consultants, technical book writers, and independent test labs on contract and intellectual property issues as an attorney. He also did legislative work as a consumer protection advocate, including participation in the drafting of the Uniform Computer Information Transactions Act (as an advocate for customers and small software development firms), and the Uniform Electronic Transactions Act
|
https://en.wikipedia.org/wiki/Paternal%20mtDNA%20transmission
|
In genetics, paternal mtDNA transmission and paternal mtDNA inheritance refer to the incidence of mitochondrial DNA (mtDNA) being passed from a father to his offspring. Paternal mtDNA inheritance is observed in a small proportion of species; in general, mtDNA is passed unchanged from a mother to her offspring, making it an example of non-Mendelian inheritance. In contrast, mtDNA transmission from both parents occurs regularly in certain bivalves.
In animals
Paternal mtDNA inheritance in animals varies. For example, in Mytilidae mussels, paternal mtDNA "is transmitted through the sperm and establishes itself only in the male gonad." In testing 172 sheep, "The Mitochondrial DNA from three lambs in two half-sib families were found to show paternal inheritance." An instance of paternal leakage was reported in a study on chickens. There is also evidence that paternal leakage is an integral part of the mitochondrial inheritance of Drosophila simulans.
In humans
In human mitochondrial genetics, there is debate over whether or not paternal mtDNA transmission is possible. Many studies hold that paternal mtDNA is never transmitted to offspring. This thought is central to mtDNA genealogical DNA testing and to the theory of mitochondrial Eve. The fact that mitochondrial DNA is maternally inherited enables researchers to trace maternal lineage far back in time. Y chromosomal DNA, paternally inherited, is used in an analogous way to trace the agnate lineage.
In sexual reproduction, paternal mitochondria found in the sperm are actively decomposed, thus preventing "paternal leakage". Mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization (IVF) techniques, particularly intracytoplasmic sperm injection (ICSI) of a sperm into an oocyte, may interfere with thi
|
https://en.wikipedia.org/wiki/Directory%20information%20tree
|
A directory information tree (DIT) is data represented in a hierarchical tree-like structure consisting of the Distinguished Names (DNs) of directory service entries.
Both the X.500 protocols and the Lightweight Directory Access Protocol (LDAP) use directory information trees as their fundamental data structure.
Typically, an X.500 or LDAP deployment for a single organization will have a directory information tree that consists of two parts (illustrated by the sketch after this list):
a top level name structure for the name of the organization itself
a representation of the data model structure within the organization
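As a minimal illustration (ours; the names in the Distinguished Name are invented), a single entry's DN encodes a path through such a tree, read from the rightmost component down:

    # Hypothetical example DN for one entry in a DIT.
    dn = "cn=Jane Doe,ou=Engineering,o=Example Corp,c=US"

    # Splitting on ',' yields the Relative Distinguished Names (RDNs);
    # reversing them walks from the top of the tree down to the entry.
    # (A real parser must handle escaped commas; this is illustrative only.)
    for depth, rdn in enumerate(reversed(dn.split(","))):
        print("  " * depth + rdn)
    # c=US
    #   o=Example Corp
    #     ou=Engineering
    #       cn=Jane Doe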
Top level naming
The top level of a directory information tree frequently represents political and geographic divisions.
The original assumption of X.500 was that all directory servers would be interconnected to form a single, global namespace. The entries at the top level of the tree corresponded to countries, identified by their ISO 3166 two letter country code. The entries subordinate to a country's entry would correspond to states or provinces, and national organizations. The naming system for a particular country was determined by that country's national standards body or telecommunications provider.
A limitation of the original directory information tree structure was the assumption that applications searching for an entry in a particular organization would navigate the directory tree by first browsing to the particular country where that organization was based, then to the region where that organization was based, then locate the entry for the organization itself, and then search within that organization for the entry in question. The desire to support searching more broadly for an individual person when all the particulars of that person's location or organization were not known led to experiments in directory deployment and interconnection, such as the Common Indexing Protocol.
Today, most LDAP deployments, and in particular Active Directory deployments, are not interconnec
|
https://en.wikipedia.org/wiki/Immittance
|
Immittance is a term used within electrical engineering and acoustics, specifically bioacoustics and the inner ear, to describe the combined measure of electrical or acoustic admittance and electrical or acoustic impedance. Immittance was initially coined by H. W. Bode in 1945, and was first used to describe the electrical admittance or impedance of either a nodal or a mesh network. Bode also suggested the name "adpedence", however the current name was more widely adopted. In bioacoustics, immittance is typically used to help define the characteristics of noise reverberation within the middle ear and assist with differential diagnosis of middle-ear disease. Immittance is typically a complex number which can represent either or both the impedance and the admittance (ratio of voltage to current or vice versa in electrical circuits, or volume velocity to sound pressure or vice versa in acoustical systems) of a system.
Immittance does not have an associated unit because it applies to both impedance, which is measured in ohms (Ω) or acoustic ohms, and admittance, which is commonly measured in siemens (S) and historically has also been measured in mhos (℧), the reciprocal of ohms.
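A small sketch (ours, with assumed component values) of how a single complex number carries both views:

    # Impedance Z and admittance Y = 1/Z are reciprocal complex quantities;
    # "immittance" refers to either representation.
    R, X = 50.0, 30.0            # resistance and reactance in ohms (assumed)
    Z = complex(R, X)            # impedance, ohms
    Y = 1 / Z                    # admittance, siemens
    print(Z, Y)                  # conductance = Y.real, susceptance = Y.imag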
Notable usage
Bioacoustics
In audiology, tympanometry is sometimes referred to as immittance testing. Tympanometry is especially effective when both the impedance and admittance of the inner ear are accounted for. Immittance allows for the analysis of both, and therefore is crucial to multiple-component, multiple-frequency tympanometry. Clinically, few cases require the use of this technique for accurate diagnosis; but for the fewer than 20% of cases which do require it, the technique is a necessity. Multiple-component, multiple-frequency tympanometry is invaluable for the differential diagnosis of fixation of the lateral ossicular chain from fixation of the stapes, profound mixed hearing losses, clinical otosclerosis from disruption of the ossicular chain, hypermobility of the incudostapedial
|
https://en.wikipedia.org/wiki/Position-effect%20variegation
|
Position-effect variegation (PEV) is a variegation caused by the silencing of a gene in some cells through its abnormal juxtaposition with heterochromatin via rearrangement or transposition. It is also associated with changes in chromatin conformation.
Overview
The classical example is the Drosophila w[m4] (read: white-mottled-4) rearrangement. In this mutation, an inversion on the X chromosome placed the white gene next to pericentric heterochromatin, or a sequence of repeats that becomes heterochromatic. Normally, the white gene is expressed in every cell of the adult Drosophila eye, resulting in a red-eye phenotype. In the w[m4] mutant, the eye color was variegated (red-white mosaic colored): the white gene was expressed in some cells of the eye and not in others. The mutation was first described by Hermann Muller in 1930. PEV is a heterochromatin-induced gene inactivation. Similar gene silencing phenomena have also been observed in S. cerevisiae and S. pombe.
Typically, the barrier DNA sequences prevent the heterochromatic region from spreading into the euchromatin but they are no longer present in the flies that inherit certain chromosomal rearrangements.
Etymology
PEV is a position effect because the change in position of a gene from its original position to somewhere near a heterochromatic region has an effect on its expression. The effect is the variegation in a particular phenotype i.e., the appearance of irregular patches of different colour(s), due to the expression of the original wild-type gene in some cells of the tissue but not in others, as seen in the eye of mutated Drosophila melanogaster.
However, it is possible that the effect of the silenced gene is not phenotypically visible in some cases. PEV was observed first in Drosophila because it was one of the first organisms on which X-ray irradiation was used as a mutation inducer. X-rays can cause chromosomal rearrangements that can result in PEV.
Mechanisms
Among a number of mod
|
https://en.wikipedia.org/wiki/Press%20cake
|
A press cake or oil cake is the solids remaining after pressing something to extract the liquids. Their most common use is in animal feed.
Some foods whose processing creates press cakes are olives for olive oil (pomace), peanuts for peanut oil, coconut flesh for coconut cream and milk (sapal), grapes for wine (pomace), apples for cider (pomace), mustard cake, and soybeans for soy milk (used to make tofu) (this is called soy pulp) or oil. Other common press cakes come from flax seed (linseed), cottonseed, and sunflower seeds. However, some specific kinds may be toxic, and are rather used as fertilizer, for example cottonseed contains a toxic pigment, gossypol, that must be removed before processing.
Culinary use
In Nepalese cuisine the oil cake of the Persian walnut is used for culinary purposes, and it is also applied to the forehead to treat headaches. In some regions it is used as boiler fuel, for which it is quite suitable, as a means of reducing energy costs.
Military use
In 1942 the Porton Down biology department outsourced the production of 5,273,400 linseed press cakes to the Olympia Oil and Cake Company in Blackburn Meadows; these were then infected with Bacillus anthracis (the bacterium that causes anthrax) and used in the biological warfare program Operation Vegetarian.
|
https://en.wikipedia.org/wiki/Cyanovirin-N
|
Cyanovirin-N (CV-N) is a protein produced by the cyanobacterium Nostoc ellipsosporum that displays virucidal activity against several viruses, including human immunodeficiency virus (HIV), against which it has strong neutralizing properties. The virucidal activity of CV-N is mediated through specific high-affinity interactions with the viral surface envelope glycoproteins gp120 and gp41, as well as with high-mannose oligosaccharides found on the HIV envelope. In addition, CV-N is active against rhinoviruses, human parainfluenza virus, respiratory syncytial virus, and enteric viruses. The virucidal activity of CV-N against influenza virus is directed towards viral haemagglutinin.
The blue-green alga Nostoc ellipsosporum naturally contains the protein cyanovirin-N. The National Cancer Institute (NCI) in the United States carried out the initial isolation and characterisation of this protein in 1999. The use of cyanovirin-N as an antiviral drug, particularly against HIV, has since been the subject of investigation. Its ability to bind to the HIV-encapsulating glycoprotein gp120 has been demonstrated in several studies, which has led to the development of Cyanovirin-N-based therapies and preventatives.
Structure
Cyanovirin-N is an elongated, mostly beta-sheet protein that displays internal two-fold pseudosymmetry. The two sequence repeats (residues 1-50 and 51-101) share 32% sequence identity and superimpose with a root-mean-square deviation of 1.3 Å between equivalent atoms. The overall fold depends on a number of interactions between the two repeats, so they do not form separate domains. CV-N has a complex fold composed of a duplication of a tandem repeat of two homologous motifs, each comprising a three-stranded beta-sheet and beta-hairpins.
|
https://en.wikipedia.org/wiki/Xak%3A%20The%20Tower%20of%20Gazzel
|
Xak Precious Package: The Tower of Gazzel is a fantasy role-playing video game developed and published by the Japanese software developer MicroCabin. The game is a direct sequel to Xak: The Art of Visual Stage and Xak II: The Rising of the Red Moon. While technically being the third installment of the series, The Tower of Gazzel is a sidestory taking place between the events of Xak II and Xak III. The game was released in Japan only.
Story
After Latok Kart defeated Zamu Gospel during the events portrayed in Xak II, he and his friends are intrigued by rumours of a demonic tower and a man looking like Latok roaming its neighbourhood. The appearance of a false Latok and the kidnapping of Rune Greed's family are ploys to lure the two descendants of Duel into the tower, laid by the villains Al Acrila, Gill Berzes and a demon called Zegraya. Using Latok and Rune, they plan to resurrect the ancient demon Gazzel, a being of unimaginable power said to be capable of destroying an entire mountain in a single attack.
Gameplay
The player controls Latok, looking onto the game world in bird's-eye view. Latok can swing his sword, optionally firing magical shots from its tip at the expense of magic points, and jump short distances. The player can choose to take along one of a party of four characters on his exploration of the tower. Each of these so-called 'support members' subtly changes Latok's statistics, in addition to triggering different events within the game.
The entirety of the Tower is a large labyrinth spanning six floors, each with an elemental theme: darkness for the basement and respectively earth, fire, water, wind and heaven for the first through fifth floors. The game is one large puzzle with the goal of reaching the bottom floor and defeating Zegraya and Gazzel there. Many of the puzzles revolve around the fact the floors are heavily interconnected. On the floor of fire for example, there is a large wall of flames that Latok cannot pass through in any way. On the water
|
https://en.wikipedia.org/wiki/Single-cable%20distribution
|
Single-cable distribution is a satellite TV technology that enables the delivery of broadcast programming to multiple users over a single coaxial cable, and eliminates the numerous cables required to support consumer electronics devices such as twin-tuner digital video recorders (DVRs) and high-end receivers.
Without single-cable distribution, providing full-spectrum access for multiple receivers, or receivers with multiple tuners, in a single-family home has required a separate coaxial cable feeding each tuner from the antenna equipment (either multiple LNBs, a multi-output LNB or a multiswitch distribution system) because of the large bandwidth requirement of the signals.
Single-cable distribution technology enables one coaxial cable from the antenna equipment to multiple tuners, to provide independent tuning across the whole range of satellite reception for each tuner.
A European industry standard for distributing satellite signals over a single coaxial cable - CENELEC EN50494 - was defined in 2007 and developed by a consortium led by SES.
Single-cable distribution technology can be found in commercial equipment with the Unicable trademark from FTA Communications Technologies. Unicable uses an integrated software and hardware solution that allows Unicable-certified DVRs and receivers to multiplex selected programming when using Unicable LNB or multiswitching products.
The Unicable Interoperability Platform is open to companies designing and/or marketing satellite and other broadcast-related products. The platform is designed to facilitate the acceptance of Unicable-certified solutions in the consumer TV broadcast market.
How it works
Each satellite receiver in the installation has a dedicated user band of a bandwidth approximately the same as a transponder. The receiver requests a particular transponder frequency via a DiSEqC-compliant command. A mixer in the dish-end equipment (an LNB or distribution unit) converts the received signal to the correct user
|
https://en.wikipedia.org/wiki/Algorithmic%20Number%20Theory%20Symposium
|
Algorithmic Number Theory Symposium (ANTS) is a biennial academic conference, first held at Cornell University in 1994, constituting an international forum for the presentation of new research in computational number theory. The symposia are devoted to algorithmic aspects of number theory, including elementary number theory, algebraic number theory, analytic number theory, geometry of numbers, arithmetic geometry, finite fields, and cryptography.
Selfridge Prize
In honour of the many contributions of John Selfridge to mathematics, the Number Theory Foundation has established a prize to be awarded to those individuals who have authored the best paper accepted for presentation at ANTS. The prize, called the Selfridge Prize, is awarded every two years in an even numbered year. The prize winner(s) receive a cash award and a sculpture.
The prize winners and their papers selected by the ANTS Program Committee are:
2006 – ANTS VII – Werner Bley and Robert Boltje – Computation of locally free class groups.
2008 – ANTS VIII – Juliana Belding, Reinier Bröker, Andreas Enge and Kristin Lauter – Computing Hilbert class polynomials.
2010 – ANTS IX – John Voight – Computing automorphic forms on Shimura curves over fields with arbitrary class number.
2012 – ANTS X – Andrew Sutherland – On the evaluation of modular polynomials.
2014 – ANTS XI – Tom Fisher – Minimal models for 6-coverings of elliptic curves.
2016 – ANTS XII – Jan Steffen Müller and Michael Stoll – Computing canonical heights on elliptic curves in quasi-linear time.
2018 – ANTS XIII – Michael Musty, Sam Schiavone, Jeroen Sijsling and John Voight – A database of Belyĭ maps.
2020 – ANTS XIV – Jonathan Love and Dan Boneh – Supersingular curves with small non-integer endomorphisms.
2022 – ANTS XV – Harald Helfgott and Lola Thompson – Summing μ(n): a faster elementary algorithm.
Proceedings
Prior to ANTS X, the refereed Proceedings of ANTS were published in the Springer Lecture Notes in Computer Science (LNCS). The proc
|
https://en.wikipedia.org/wiki/Micro%20heat%20exchanger
|
Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramic.
Microchannel heat exchangers can be used for many applications including:
high-performance aircraft gas turbine engines
heat pumps
Microprocessor and microchip cooling
air conditioning
Background
Investigation of microscale thermal devices is motivated by the single phase internal flow correlation for convective heat transfer:
Nu = h Dh / kf, i.e. h = Nu kf / Dh, where h is the heat transfer coefficient, Nu is the Nusselt number, kf is the thermal conductivity of the fluid and Dh is the hydraulic diameter of the channel or duct. In internal laminar flows, the Nusselt number becomes a constant. This is a result which can be arrived at analytically: Nu = 3.66 for the case of a constant wall temperature, and Nu = 4.36 for the case of constant heat flux for round tubes. The last value is increased to 140/17 ≈ 8.23 for flat parallel plates. As the Reynolds number is proportional to hydraulic diameter, fluid flow in channels of small hydraulic diameter will predominantly be laminar in character. This correlation therefore indicates that the heat transfer coefficient increases as channel diameter decreases. Should the hydraulic diameter in forced convection be on the order of tens or hundreds of micrometres, an extremely high heat transfer coefficient should result.
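A rough numerical illustration of this scaling (ours, assuming fully developed laminar flow of water at constant wall temperature, so Nu = 3.66):

    Nu = 3.66     # Nusselt number: laminar, round tube, constant wall temperature
    k_f = 0.6     # thermal conductivity of water, W/(m K), approximate
    for D_h in (1e-2, 1e-3, 1e-4):       # hydraulic diameter, m
        h = Nu * k_f / D_h               # heat transfer coefficient, W/(m^2 K)
        print(f"D_h = {D_h:.0e} m  ->  h = {h:,.0f} W/(m^2 K)")
    # Shrinking the channel from 1 cm to 100 um raises h a hundredfold.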
This hypothesis was initially investigated by Tuckerman and Pease. Their positive results led to further research ranging from classical investigations of single channel heat transfer to more applied investigations in parallel micro-channel and micro scale plate fin heat exchangers. Recent work in the field has focused on the potential of two-phase flows at the micro-scale.
Classification of mic
|
https://en.wikipedia.org/wiki/ITU%20model%20for%20indoor%20attenuation
|
The ITU indoor propagation model, also known as ITU model for indoor attenuation, is a radio propagation model that estimates the path loss inside a room or a closed area inside a building delimited by walls of any form. Suitable for appliances designed for indoor use, this model approximates the total path loss an indoor link may experience.
Applicable to/under conditions
This model is applicable only to indoor environments. Typically, such appliances use the lower microwave bands around 2.4 GHz. However, the model applies to a much wider range.
Coverage
Frequency: 900 MHz to 5.2 GHz
Floors: 1 to 3
Mathematical formulations
The model
The ITU indoor path loss model is formally expressed as:
L = 20 log₁₀(f) + N log₁₀(d) + Pf(n) − 28
where,
L = the total path loss. Unit: decibel (dB).
f = Frequency of transmission. Unit: megahertz (MHz).
d = Distance. Unit: meter (m).
N = The distance power loss coefficient.
n = Number of floors between the transmitter and receiver.
Pf(n) = the floor loss penetration factor.
Calculation of distance power loss coefficient
The distance power loss coefficient, N, is the quantity that expresses the loss of signal power with distance. This coefficient is an empirical one. Some values are provided in Table 1.
Calculation of floor penetration loss factor
The floor penetration loss factor is an empirical constant dependent on the number of floors the waves need to penetrate. Some values are tabulated in Table 2.
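A minimal sketch of the formula above (ours; the coefficient values in the example call are illustrative stand-ins, not the tabulated ones):

    import math

    def itu_indoor_loss(f_mhz, d_m, N, Pf):
        # L = 20 log10(f) + N log10(d) + Pf(n) - 28, in dB
        return 20 * math.log10(f_mhz) + N * math.log10(d_m) + Pf - 28

    # e.g. a 2.4 GHz link over 20 m with assumed N = 30 and Pf = 15 dB
    print(round(itu_indoor_loss(2400, 20, N=30, Pf=15), 1), "dB")  # ~93.6 dB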
See also
Log-distance path loss model
Radio propagation model
Young model
|
https://en.wikipedia.org/wiki/Log-distance%20path%20loss%20model
|
The log-distance path loss model is a radio propagation model that predicts the path loss a signal encounters inside a building or densely populated areas over distance.
Mathematical formulation
The model
Log-distance path loss model is formally expressed as:
PL = PTx,dBm − PRx,dBm = PL₀ + 10 γ log₁₀(d/d₀) + Xg
where
PL is the total path loss in decibels (dB).
PTx,dBm = 10 log₁₀(PTx / 1 mW) is the transmitted power in dBm, where PTx is the transmitted power in watts.
PRx,dBm = 10 log₁₀(PRx / 1 mW) is the received power in dBm, where PRx is the received power in watts.
PL₀ is the path loss in decibels (dB) at the reference distance d₀, calculated using the Friis free-space path loss model.
d is the length of the path.
d₀ is the reference distance, usually 1 km (or 1 mile) for a large cell and 1 m to 10 m for a microcell.
γ is the path loss exponent.
Xg is a normal (or Gaussian) random variable with zero mean, reflecting the attenuation (in decibels) caused by flat fading. In the case of no fading, this variable is 0. In the case of only shadow fading or slow fading, this random variable may have Gaussian distribution with standard deviation σ in decibels, resulting in a log-normal distribution of the received power in watts. In the case of only fast fading caused by multipath propagation, the corresponding fluctuation of the signal envelope in volts may be modelled as a random variable with Rayleigh distribution or Ricean distribution (and thus the corresponding gain in watts may be modelled as a random variable with exponential distribution).
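A minimal sketch of the logarithmic model (ours; the parameter values in the example call are assumed):

    import math, random

    def log_distance_path_loss(d, d0, PL0_db, gamma, sigma_db=0.0):
        # PL = PL(d0) + 10*gamma*log10(d/d0) + X_g, with X_g ~ N(0, sigma^2) in dB
        X_g = random.gauss(0.0, sigma_db) if sigma_db > 0 else 0.0
        return PL0_db + 10 * gamma * math.log10(d / d0) + X_g

    # e.g. indoor link: 40 dB loss at the 1 m reference distance,
    # path loss exponent 3, 6 dB log-normal shadowing (illustrative values)
    print(round(log_distance_path_loss(25, 1, 40, 3, 6), 1), "dB")  # ~82 dB plus shadowing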
Corresponding non-logarithmic model
This corresponds to the following non-logarithmic gain model:
PRx / PTx = c₀ F / d^γ
where
c₀ is the average multiplicative gain at the reference distance d₀ from the transmitter. This gain depends on factors such as carrier frequency, antenna heights and antenna gain, for example due to directional antennas; and
F is a stochastic process that reflects flat fading. In case of only slow fading (shadowing), it may have log-normal distribution with parameter σ dB. In case of only fast fading due to multipath propagation, its amplitud
|
https://en.wikipedia.org/wiki/Unfigured%20bass
|
Unfigured bass, less commonly known as under-figured bass, is a kind of musical notation used during the Baroque music era in Western Classical music (ca. 1600–1750) in which a basso continuo performer playing a chordal instrument (e.g., harpsichord, organ, or lute) improvises a chordal accompaniment from a notated bass line which lacks the guidance of figures indicating which harmonies should be played above the bass note (see figured bass). Figured bass parts have numbers or accidentals above the bass line which indicate which intervals above the bass should be played in the chord. However, not all basso continuo parts from the Baroque period were figured.
History
From the earliest days of thoroughbass, composers and copyists have been chastised for providing bass parts without any figures to guide performers. Despite perennial complaints, however, unfigured basses persisted right through the eighteenth century, though it is speculated that unfigured basses would not have existed if it were not for the suggestion of harmonies in bass lines of the time.
Performance
In the early baroque period published parts were as likely to be unfigured as figured, leading to unusual clashes of harmony on a first reading. In an effort to perform a piece the first time without such harmonic clashes, various methods were devised and used to anticipate the harmonic structure and progression of a piece. Among these are:
Specific chords might be placed over a given solmization syllable, or an easily identified note, such as a sharped note.
Specific chords might be applied to various patterns of bass intervals.
Model bass lines with chords might be learned by rote to be used whenever applicable.
Or specific chords might be placed over particular scale degrees.
Unfigured bass notes in an otherwise figured part
In a figured bass part, not all bass notes were necessarily figured. By convention, bass notes of root-position chords (5/3) were often left unfigured. Exceptions to this rule,
|
https://en.wikipedia.org/wiki/Higher%20residuosity%20problem
|
In cryptography, most public key cryptosystems are founded on problems that are believed to be intractable. The higher residuosity problem (also called the nth-residuosity problem) is one such problem. This problem is easier to solve than integer factorization, so the assumption that this problem is hard to solve is stronger than the assumption that integer factorization is hard.
Mathematical background
If n is an integer, then the integers modulo n form a ring ℤ/nℤ. If n = pq where p and q are primes, then the Chinese remainder theorem tells us that
ℤ/nℤ ≅ ℤ/pℤ × ℤ/qℤ
The units of any ring form a group, and the group of units in ℤ/nℤ is traditionally denoted (ℤ/nℤ)*.
From the isomorphism above, we have
(ℤ/nℤ)* ≅ (ℤ/pℤ)* × (ℤ/qℤ)*
as an isomorphism of groups. Since p and q were assumed to be prime, the groups (ℤ/pℤ)* and (ℤ/qℤ)* are cyclic of orders p−1 and q−1 respectively. If d is a divisor of p−1, then the set of dth powers in (ℤ/pℤ)* forms a subgroup of index d. If gcd(d, q−1) = 1, then every element in (ℤ/qℤ)* is a dth power, so the set of dth powers in (ℤ/nℤ)* is also a subgroup of index d. In general, if gcd(d, q−1) = g, then there are (q−1)/g dth powers in (ℤ/qℤ)*, so the set of dth powers in (ℤ/nℤ)* has index dg.
This is most commonly seen when d = 2, when we are considering the subgroup of quadratic residues: it is well known that exactly one quarter of the elements in (ℤ/nℤ)* are quadratic residues (when n is the product of exactly two primes, as it is here).
The important point is that for any divisor d of p−1 (or q−1) the set of dth powers forms a subgroup of (ℤ/nℤ)*.
Problem statement
Given an integer n = pq where p and q are unknown, an integer d such that d divides p-1, and an integer x < n, it is infeasible to determine whether x is a dth power (equivalently dth residue) modulo n.
Notice that if p and q are known it is easy to determine whether x is a dth residue modulo n, because x will be a dth residue modulo p if and only if
x^((p−1)/d) ≡ 1 (mod p)
When d=2, this is called the quadratic residuosity problem.
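The easy direction with known factors can be checked directly; a toy sketch (ours, with parameters far too small for cryptography):

    def is_dth_residue_mod_p(x, d, p):
        # Generalized Euler criterion: for prime p with d | p-1,
        # x is a d-th power mod p iff x^((p-1)/d) == 1 (mod p).
        assert (p - 1) % d == 0
        return pow(x, (p - 1) // d, p) == 1

    p, d = 31, 5                                    # 5 divides p - 1 = 30
    residues = {pow(y, d, p) for y in range(1, p)}  # the actual 5th powers
    assert all(is_dth_residue_mod_p(x, d, p) == (x in residues)
               for x in range(1, p))
    print(sorted(residues))   # six elements: a subgroup of index 5 in (Z/31Z)*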
Applications
The semantic security of the Benaloh cryptosystem and the
|
https://en.wikipedia.org/wiki/Hata%20model
|
The Hata model is a radio propagation model for predicting the path loss of cellular transmissions in exterior environments, valid for microwave frequencies from 150 to 1500 MHz. It is an empirical formulation based on the data from the Okumura model, and is thus also commonly referred to as the Okumura–Hata model. The model incorporates the graphical information from Okumura model and develops it further to realize the effects of diffraction, reflection and scattering caused by city structures. Additionally, the Hata Model applies corrections for applications in suburban and rural environments.
Model description
Though based on the Okumura model, the Hata model does not provide coverage to the whole range of frequencies covered by Okumura model. Hata model does not go beyond 1500 MHz while Okumura provides support for up to 1920 MHz. The model is suited for both point-to-point and broadcast communications, and covers mobile station antenna heights of 1–10 m, base station antenna heights of 30–200 m, and link distances from 1–10 km.
Urban environments
The Hata model for urban environments is the basic formulation since it was based on Okumura's measurements made in the built-up areas of Tokyo. It is formulated as follows:
LU = 69.55 + 26.16 log₁₀(f) − 13.82 log₁₀(hB) − CH + (44.9 − 6.55 log₁₀(hB)) log₁₀(d)
For small or medium-sized cities,
CH = 0.8 + (1.1 log₁₀(f) − 0.7) hM − 1.56 log₁₀(f)
and for large cities,
CH = 8.29 (log₁₀(1.54 hM))² − 1.1, if 150 ≤ f ≤ 200
CH = 3.2 (log₁₀(11.75 hM))² − 4.97, if 200 < f ≤ 1500
where
LU = Path loss in urban areas. Unit: decibel (dB)
hB = Height of base station antenna. Unit: meter (m)
hM = Height of mobile station antenna. Unit: meter (m)
f = Frequency of transmission. Unit: Megahertz (MHz)
CH = Antenna height correction factor
d = Distance between the base and mobile stations. Unit: kilometer (km).
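A minimal sketch of the urban formula with the small/medium-city correction factor (ours; the example inputs are arbitrary):

    import math

    def hata_urban_loss(f, h_B, h_M, d):
        # L_U in dB; f in MHz, antenna heights in m, distance d in km
        C_H = 0.8 + (1.1 * math.log10(f) - 0.7) * h_M - 1.56 * math.log10(f)
        return (69.55 + 26.16 * math.log10(f) - 13.82 * math.log10(h_B)
                - C_H + (44.9 - 6.55 * math.log10(h_B)) * math.log10(d))

    # e.g. 900 MHz, 50 m base station, 1.5 m mobile, 5 km separation
    print(round(hata_urban_loss(900, 50, 1.5, 5), 1), "dB")  # ~147 dB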
Suburban environments
The Hata model for suburban environments is applicable to transmissions just outside cities and in rural areas where man-made structures exist but are not as high and dense as in cities. To be more precise, this model is suitable where buildings exist but the mobile station does not have a significant variation of its height.
|
https://en.wikipedia.org/wiki/National%20Emerging%20Infectious%20Diseases%20Laboratories
|
The National Emerging Infectious Diseases Laboratories (NEIDL), is a biosciences facility of Boston University located on Albany street, within the clinical and biopharma hub of the South End neighborhood of Boston, Massachusetts.
The lab is part of a national network of secure facilities that study infectious diseases, whether naturally occurring or introduced through bioterrorism. The labs include a BSL-4 laboratory, in which the most dangerous and deadly human diseases are studied.
The NEIDL's current director is Dr. Nancy Sullivan, ScD, who previously served at the National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH).
History
On February 2, 2006, Boston Medical Center received regulatory approval from the federal government to fund construction of a biosafety laboratory on its medical campus in the South End, Boston.
There has been strong community opposition to the planned building, and BSL-2 level research did not begin until 2012 due to court injunctions.
In early 2014, BSL-4 research was still being opposed by community groups including the Union Park Neighborhood Association and Boston City Councilor Charles Yancey who was conducting hearings on its safety and recommending a citywide ban on BSL-4 research.
The NEIDL was given final approval for BSL-4 research by the Boston Public Health Commission on December 6, 2017, with the support of Boston Mayor Marty Walsh. Every project at the lab will also need individual BPHC review and approval.
Current research
As a result of the COVID-19 pandemic, the NEIDL paused research outside of SARS-CoV-2 diagnostics and countermeasures.
See also
Galveston National Laboratory (GNL)
National Institute of Allergy and Infectious Diseases (NIAID)
Rocky Mountain Laboratories (RML)
|
https://en.wikipedia.org/wiki/Computational%20hardness%20assumption
|
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where efficiently typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information theoretic security; the one-time pad is a common example. However, information theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure assuming that any adversaries are computationally limited, as all adversaries are in practice.
Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP.
Comparing hardness assumptions
Computer scientists have different ways of assessing which hardness assumptions are more reliable.
Strength of hardness assumptions
We say that assumption A is stronger than assumption B when A implies B (and the converse is false or not known).
In other words, even if assumption A were false, assumption B may still be true, and cryptographic protocols based on assumption B may still be safe to use.
Thus when devising cryptographic protocols, one hopes to be able to prove security using the weakest possible assumptions.
Average-case vs. worst-case assumptions
An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assu
|
https://en.wikipedia.org/wiki/Aircraft%20Meteorological%20Data%20Relay
|
Aircraft Meteorological Data Relay (AMDAR) is a program initiated by the World Meteorological Organization.
AMDAR is used to collect meteorological data worldwide by using commercial aircraft.
Data is collected by the aircraft navigation systems and the onboard standard temperature and static pressure probes.
The data is then preprocessed before being downlinked to the ground, either via VHF communication (ACARS) or via satellite link (ASDAR).
A detailed description is given in the AMDAR Reference Manual (WMO-No. 958), available from the World Meteorological Organization, Geneva, Switzerland.
Usage
AMDAR transmissions are most commonly used in forecast models as a supplement to radiosonde data, to aid in the plotting of upper-air data between the standard radiosonde soundings at 00Z and 12Z.
|
https://en.wikipedia.org/wiki/Sable%20Chief
|
Sable Chief was a Newfoundland dog that served as the mascot of the Royal Newfoundland Regiment during World War I. He was presented officially on 1 Oct 1914, before troops left St. John's on the SS Florizel, by James R. Stick of the Royal Stores, Ltd, father of Leonard Stick, the first man to enlist in the regiment. Prior to his official presentation, the mascot was photographed at the Pleasantville training camp in Sept 1914 with Prime Minister Edward Patrick Morris (1859–1935), Governor Sir Walter Edward Davidson (1859–1923), Capt. William Hodgson Franklin (the first Commanding Officer of the Newfoundland Regiment), Capt. J. W. March, Capt. Cluny Macpherson (1879-1966) (Principal Medical Officer, 1st Newfoundland Regiment and inventor of the gas mask), and other dignitaries.
Sable Chief became well known for his immense size and dignified demeanor. At 3 years of age he weighed 150 lbs. During ceremonial events, he would march at the front of the band, keeping in step throughout, and he would stand at attention during the playing of the Newfoundland National Anthem. He was regarded among the troops as a general morale booster, and visited wounded soldiers. Sable Chief was outfitted with a collection box at events to raise donations for the British Red Cross Prisoners of War Fund.
Sable Chief was a pure-bred Newfoundland dog, and by WW1 this original strain of the breed had become rare in the colony. He was the same lineage as Bouncer, the Newfoundland dog accompanied by a dog-cart, presented by the children of Newfoundland to the Duke & Duchess of Cornwall and York (later George V and Queen Mary), during their visit to the colony in 1901.
The organizers of Newfoundland Week (22-29 Sept 1917) arranged for the Regimental Band to travel from Scotland to visit London to represent the colony, especially to represent the 1st Battalion, which at that time was engaged in the Battle of Passchendaele in West Flanders. The band performed at least two concerts a day ac
|
https://en.wikipedia.org/wiki/Modal%20%CE%BC-calculus
|
In theoretical computer science, the modal μ-calculus (Lμ, sometimes just μ-calculus, although this can have a more general meaning) is an extension of propositional modal logic (with many modalities) by adding the least fixed point operator μ and the greatest fixed point operator ν, thus a fixed-point logic.
The (propositional, modal) μ-calculus originates with Dana Scott and Jaco de Bakker, and was further developed by Dexter Kozen into the version most used nowadays. It is used to describe properties of labelled transition systems and for verifying these properties. Many temporal logics can be encoded in the μ-calculus, including CTL* and its widely used fragments—linear temporal logic and computational tree logic.
An algebraic view is to see it as an algebra of monotonic functions over a complete lattice, with operators consisting of functional composition plus the least and greatest fixed point operators; from this viewpoint, the modal μ-calculus is over the lattice of a power set algebra. The game semantics of μ-calculus is related to two-player games with perfect information, particularly infinite parity games.
Syntax
Let P (propositions) and A (actions) be two finite sets of symbols, and let Var be a countably infinite set of variables. The set of formulas of (propositional, modal) μ-calculus is defined as follows:
each proposition and each variable is a formula;
if φ and ψ are formulas, then φ ∧ ψ is a formula;
if φ is a formula, then ¬φ is a formula;
if φ is a formula and a is an action, then [a]φ is a formula; (pronounced either: box a φ or after a necessarily φ)
if φ is a formula and Z a variable, then νZ.φ is a formula, provided that every free occurrence of Z in φ occurs positively, i.e. within the scope of an even number of negations.
(The notions of free and bound variables are as usual, where ν is the only binding operator.)
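For instance (our illustration, using only the connectives defined so far), the greatest fixed point

    \nu Z.\,(\varphi \wedge [a]Z)

asserts that φ holds in the current state and continues to hold after every sequence of a-transitions, i.e. it is an invariant along a-paths; its dual least fixed point, obtained via the abbreviations below, expresses reachability.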
Given the above definitions, we can enrich the syntax with:
φ ∨ ψ, meaning ¬(¬φ ∧ ¬ψ);
⟨a⟩φ (pronounced either: diamond a φ or after a possibly φ), meaning ¬[a]¬φ;
μZ.φ, mea
|
https://en.wikipedia.org/wiki/ASME%20Y14.41
|
ASME Y14.41 is a standard published by the American Society of Mechanical Engineers (ASME) which establishes requirements and reference documents applicable to the preparation and revision of digital product definition data (also known as model-based definition), which pertains to CAD software and those who use CAD software to create the product definition within the 3D model. ASME issued the first version of this industrial standard on Aug 15, 2003 as ASME Y14.41-2003. It was immediately adopted by several industrial organizations, as well as the Department of Defense (DOD). The latest revision of ASME Y14.41 was issued on Jan 23, 2019 as ASME Y14.41-2019.
History and purpose
ASME Y14.41 was born of the need to utilize Computer-aided design (CAD) data as a manufacturing and/or inspection source. In the late 1980s and early 1990s, solid modeling and CAD were becoming important tools for engineers looking to create and define increasingly complex geometry. For example, ergonomic and aerodynamic contoured surfaces were extremely difficult to define on engineering drawings. In response to requests from various segments of the industry, a new subcommittee began development of the standard in 1998.
Since various companies in industries including aerospace, automotive, agricultural, and heavy equipment had already begun utilizing the CAD data for industrial purposes without a standard, several definitions needed to be established as the universal interpretation. The standard was written to be independent of any specific CAD software implementation.
ASME Y14.41 served as the basis for the international standard ISO 16792:2006 Technical product documentation — Digital product definition data practices. Both standards focus on the presentation of Geometric dimensioning and tolerancing (GD&T) together with the geometry of the product.
The material in the 2012 revision was reorganized to locate all of the information on a topic together in the text. This allows the reader to fi
|
https://en.wikipedia.org/wiki/Cryptozoa
|
Cryptozoa is the collective name for small animals that live in darkness and under conditions of high relative humidity, as in the wet soil underneath rocks, decomposing tree bark, etc. Examples include pseudoscorpions, slugs, centipedes and earwigs. The cryptozoic habitat shields its inhabitants from fluctuations of temperature and humidity, which makes the considerable range of quite different species it contains all the more remarkable. Moreover, the cryptozoa are notable for including many varieties of organisms that often remain unnamed.
The term "cryptozoic fauna" was originally coined by Arthur Dendy.
Habitat
Sometimes referred to as the cryptozoic niche, the habitat allowing for cryptozoic life is characterized by a shielding of exterior light sources, with stable and cool temperature and high humidity. Forest humus and leaf litter can provide the necessary conditions for cryptozoic life in part because of the shielding from surrounding trees. Nonetheless, temperate woodlands are not the only ground for such a habitat: the tropics and the desert are often suitable for cryptozoa, such as scorpions or Solifugae.
Examples
Examples of the cryptozoa include land-planarians, amphipods, pill-woodlice, centipedes, pill-millipedes, thysanurans, false-scorpions and oribatid mites.
|
https://en.wikipedia.org/wiki/Peyton%20Young
|
Hobart Peyton Young (born March 9, 1945) is an American game theorist and economist known for his contributions to evolutionary game theory and its application to the study of institutional and technological change, as well as the theory of learning in games. He is currently centennial professor at the London School of Economics, James Meade Professor of Economics Emeritus at the University of Oxford, professorial fellow at Nuffield College Oxford, and research principal at the Office of Financial Research at the U.S. Department of the Treasury.
Peyton Young was named a fellow of the Econometric Society in 1995, a fellow of the British Academy in 2007, and a fellow of the American Academy of Arts and Sciences in 2018. He served as president of the Game Theory Society from 2006–08. He has published widely on learning in games, the evolution of social norms and institutions, cooperative game theory, bargaining and negotiation, taxation and cost allocation, political representation, voting procedures, and distributive justice.
Education and career
In 1966, he graduated cum laude in general studies from Harvard University. He completed a PhD in Mathematics at the University of Michigan in 1970, where he graduated with the Sumner B. Myers thesis prize for his work in combinatorial mathematics.
His first academic post was at the graduate school of the City University of New York as assistant professor and then associate professor, from 1971 to 1976. From 1976 to 1982, Young was research scholar and deputy chairman of the Systems and Decision Sciences Division at the International Institute for Applied Systems Analysis (IIASA), Austria. He was then appointed professor of Economics and Public Policy in the School of Public Affairs at the University of Maryland, College Park, from 1992 to 1994. Young was Scott & Barbara Black Professor of Economics at the Johns Hopkins University from 1994, until moving to Oxford as James Meade Professor of Economics in 2007. In 2004 he was a Fulbright Di
|
https://en.wikipedia.org/wiki/Pollination%20syndrome
|
Pollination syndromes are suites of flower traits that have evolved in response to natural selection imposed by different pollen vectors, which can be abiotic (wind and water) or biotic, such as birds, bees, flies, and so forth through a process called pollinator-mediated selection. These traits include flower shape, size, colour, odour, reward type and amount, nectar composition, timing of flowering, etc. For example, tubular red flowers with copious nectar often attract birds; foul smelling flowers attract carrion flies or beetles, etc.
The "classical" pollination syndromes were first studied in the 19th century by the Italian botanist Federico Delpino. Although they are useful in understanding of plant-pollinator interactions, sometimes the pollinator of a plant species cannot be accurately predicted from the pollination syndrome alone, and caution must be exerted in making assumptions.
The naturalist Charles Darwin surmised that the flower of the orchid Angraecum sesquipedale was pollinated by a then undiscovered moth with a proboscis whose length was unprecedented at the time. His prediction had gone unverified until 21 years after his death, when the moth was discovered and his conjecture vindicated. The story of its postulated pollinator has come to be seen as one of the celebrated predictions of the theory of evolution.
Abiotic
These do not attract animal pollinators. Nevertheless, they often have suites of shared traits.
Wind pollination (anemophily)
Flowers may be small and inconspicuous, as well as green and not showy. They produce enormous numbers of relatively small pollen grains (hence wind-pollinated plants may be allergens, but seldom are animal-pollinated plants allergenic). Their stigmas may be large and feathery to catch the pollen grains. Insects may visit them to collect pollen; in some cases, these are ineffective pollinators and exert little natural selection on the flowers, but there are also examples of ambophilous flowers which are bo
|
https://en.wikipedia.org/wiki/Poisson%E2%80%93Boltzmann%20equation
|
The Poisson–Boltzmann equation is a useful equation in many settings, whether it be to understand physiological interfaces, polymer science, electron interactions in a semiconductor, or more. It aims to describe the distribution of the electric potential in solution in the direction normal to a charged surface. This distribution is important to determine how the electrostatic interactions will affect the molecules in solution. The Poisson–Boltzmann equation is derived via mean-field assumptions.
From the Poisson–Boltzmann equation many other equations have been derived with a number of different assumptions.
Origins
Background and derivation
The Poisson–Boltzmann equation describes a model proposed independently by Louis Georges Gouy and David Leonard Chapman in 1910 and 1913, respectively. In the Gouy-Chapman model, a charged solid comes into contact with an ionic solution, creating a layer of surface charges and counter-ions or double layer. Due to thermal motion of ions, the layer of counter-ions is a diffuse layer and is more extended than a single molecular layer, as previously proposed by Hermann Helmholtz in the Helmholtz model. The Stern Layer model goes a step further and takes into account the finite ion size.
The Gouy–Chapman model explains the capacitance-like qualities of the electric double layer. A simple planar case with a negatively charged surface can be seen in the figure below. As expected, the concentration of counter-ions is higher near the surface than in the bulk solution.
The Poisson–Boltzmann equation describes the electrochemical potential of ions in the diffuse layer. The three-dimensional potential distribution can be described by the Poisson equation
∇²ψ = −ρe / (ε ε₀)
where
ρe is the local electric charge density in C/m³,
ε is the dielectric constant (relative permittivity) of the solvent,
ε₀ is the permittivity of free space,
ψ is the electric potential.
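As a back-of-the-envelope illustration (ours; the physical constants are standard, the electrolyte parameters assumed), linearizing the Poisson–Boltzmann equation for a symmetric 1:1 electrolyte yields the Debye screening length, the characteristic thickness of the diffuse layer:

    import math

    e    = 1.602176634e-19    # elementary charge, C
    k_B  = 1.380649e-23       # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    N_A  = 6.02214076e23      # Avogadro constant, 1/mol

    def debye_length(c_molar, eps_r=78.5, T=298.0):
        # kappa^-1 = sqrt(eps_r*eps0*k_B*T / (2*n*e^2)) for a 1:1 salt
        n = c_molar * 1000 * N_A          # number density of each ion, 1/m^3
        return math.sqrt(eps_r * eps0 * k_B * T / (2 * n * e * e))

    print(f"{debye_length(0.15) * 1e9:.2f} nm")   # ~0.8 nm in 150 mM saline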
The freedom of movement of ions in solution can be accounted for by Boltzmann statistics. The Bolt
|
https://en.wikipedia.org/wiki/Nonthermal%20plasma
|
A nonthermal plasma, cold plasma or non-equilibrium plasma is a plasma which is not in thermodynamic equilibrium, because the electron temperature is much hotter than the temperature of heavy species (ions and neutrals). As only electrons are thermalized, their Maxwell-Boltzmann velocity distribution is very different from the ion velocity distribution. When one of the velocities of a species does not follow a Maxwell-Boltzmann distribution, the plasma is said to be non-Maxwellian.
A common kind of nonthermal plasma is the mercury-vapor gas within a fluorescent lamp, where the "electron gas" reaches a temperature of around 20,000 K while the rest of the gas, ions and neutral atoms, stays barely above room temperature, so the bulb can even be touched with hands while operating.
Applications
Food industry
In the context of food processing, a nonthermal plasma (NTP) or cold plasma is specifically an antimicrobial treatment being investigated for application to fruits, vegetables and meat products with fragile surfaces.
These foods are either not adequately sanitized or are otherwise unsuitable for treatment with chemicals, heat or other conventional food processing tools. While the applications of nonthermal plasma were initially focused on microbiological disinfection, newer applications such as enzyme inactivation, biomolecule oxidation, protein modification, prodrug activation, and pesticide dissipation are being actively researched.
Nonthermal plasma also sees increasing use in the sterilization of teeth and hands, in hand dryers as well as in self-decontaminating filters.
The term cold plasma has recently been used as a convenient descriptor to distinguish one-atmosphere, near-room-temperature plasma discharges from other plasmas operating at hundreds or thousands of degrees above ambient. Within the context of food processing, the term "cold" can potentially engender misleading images of refrigeration requirements as a part of the plasma treatment. However, in
|
https://en.wikipedia.org/wiki/3%2BShare
|
3+Share, also known simply as 3+ or 3 Plus, was a pioneering file and print sharing product from 3Com. Introduced in the early 1980s, 3+Share was competitive with Novell's NetWare in the network server business throughout the 1980s. It was replaced by the joint Microsoft-3Com LAN Manager in 1990, but 3Com exited the server market in 1991.
History
In 1984, Microsoft announced MS-Net, a framework for building multitasking network servers that ran on top of single-tasking MS-DOS. MS-Net implemented only the basic services for file and print sharing, and left out the actual networking protocol stack in favor of a virtual system in the form of IBM's NetBIOS. Vendors, like 3Com, licensed the MS-Net system and then added device drivers and other parts of the protocol stack to implement a complete server system.
In the case of 3+Share, 3Com based their networking solution on the seminal Xerox Network Systems (XNS), which 3Com's CEO Robert Metcalfe had helped design. XNS provided the networking protocol as well as connections to the underlying Ethernet hardware it ran on, which Metcalfe had also helped design. They also modified MS-Net's servers to produce what they called EtherShare and EtherPrint protocols, which could be accessed with any MS-DOS computer that had the MS-Net client software installed.
Internally, 3+Share had a network stack, file and print server modules, disk caching, user handling and more, all running simultaneously inside the DOS memory space. Because it was not limited by the PC memory map, 3+Share could support a megabyte or so of flat memory, breaking the x86 PC's 640 kByte barrier. This was a large amount of RAM for the time.
They later added the XNS Name Service, which mapped network addresses to human-readable names and allowed users to look up devices by looking for strings like "3rd floor printer". Name services were part of the IBM-created NetBIOS in MS-Net, but Microsoft had left those commands out of the server, and it was common for 3
|
https://en.wikipedia.org/wiki/List%20of%20educational%20software
|
This is a list of educational software: computer software whose primary purpose is teaching or self-learning.
Educational software by subject
Anatomy
3D Indiana
Bodyworks Voyager – Mission in Anatomy
Primal Pictures
Visible Human Project
Chemistry
Aqion - simulates water chemistry
Children's software
Bobo Explores Light
ClueFinders titles
Delta Drawing
Edmark
Fun School titles
GCompris - free software (GPL)
Gold Series
JumpStart titles
Kiwaka
KidPix
Lola Panda
Museum Madness
Ozzie series
Reader Rabbit titles
Tux Paint - free software (GPL)
Zoombinis titles
Computer science
JFLAP - Java Formal language and Automata Package
Cryptography
CrypTool - illustrates cryptographic and cryptanalytic concepts
Dictionaries and reference
Britannica
Encarta
Encyclopædia Britannica Ultimate Reference Suite
Geography and Astronomy
Cartopedia: The Ultimate World Reference Atlas
Celestia
Google Earth - (proprietary license)
Gravit - a free (GPL) Newtonian gravity simulator
KGeography
KStars
NASA World Wind - free software (NASA open source)
Stellarium
Swamp Gas Visits the United States of America - a game that teaches geography to children
Where is Carmen Sandiego? game series
WorldWide Telescope - a freeware from Microsoft
Health
TeachAids
History
Encyclopedia Encarta Timeline
Euratlas
Back in Time (iPad)
Balance of Power
Lemonade Stand
Number Munchers
Odell Lake
Spellevator
Windfall: The Oil Crisis Game
Word Munchers
Literacy
Accelerated Reader
AutoTutor
Compu-Read
DISTAR
Managed learning environments
ATutor (GPL)
Blackboard Inc.
Chamilo
Claroline
eCollege
eFront (CPAL)
Fle3 (GPL)
GCompris (GPL)
Google Classroom
ILIAS (GPL)
Kannu
LON-CAPA - free software (GPL)
Moodle - free software (GPL)
OLAT - free software
Renaissance Place
Sakai Project - free software
WebAssign
Mathematics
Accelerated Math
Cantor (software)
Compu-Math: Fractions
DrGeo
Geogebra
The Geometer's Sketchpad
Maple
M
|
https://en.wikipedia.org/wiki/Bony%20labyrinth
|
The bony labyrinth (also osseous labyrinth or otic capsule) is the rigid, bony outer wall of the inner ear in the temporal bone. It consists of three parts: the vestibule, semicircular canals, and cochlea. These are cavities hollowed out of the substance of the bone, and lined by periosteum. They contain a clear fluid, the perilymph, in which the membranous labyrinth is situated.
A fracture classification system in which temporal bone fractures detected by computed tomography are delineated based on disruption of the otic capsule has been found to be predictive for complications of temporal bone trauma such as facial nerve injury, sensorineural deafness and cerebrospinal fluid otorrhea. On radiographic images, the otic capsule is the densest portion of the temporal bone.
In otospongiosis, a leading cause of adult-onset hearing loss, the otic capsule is exclusively affected. This area normally undergoes no remodeling in adult life and is extremely dense. With otospongiosis, the normally dense enchondral bone is replaced by haversian bone, a spongy and vascular matrix that results in sensorineural hearing loss due to compromise of the conductive capacity of the inner ear ossicles. This results in hypodensity on CT, with the portion first affected usually being the fissula ante fenestram.
The bony labyrinth is studied in paleoanthropology as it is a good indicator for distinguishing Neanderthals and Modern humans.
|
https://en.wikipedia.org/wiki/Radial%20styloid%20process
|
The radial styloid process is a projection of bone on the lateral surface of the distal radius bone.
Structure
The radial styloid process is found on the lateral surface of the distal radius bone. It extends obliquely downward into a strong, conical projection. The tendon of the brachioradialis attaches at its base. The radial collateral ligament of the wrist attaches at its apex. The lateral surface is marked by a flat groove for the tendons of the abductor pollicis longus and extensor pollicis brevis.
Clinical significance
Breakage of the radius at the radial styloid is known as a Chauffeur's fracture; it is typically caused by compression of the scaphoid bone of the hand against the styloid.
De Quervain syndrome causes pain over the styloid process of the radius. This is due to the passage of the inflamed extensor pollicis brevis tendon and abductor pollicis longus tendon around it.
The styloid process of the radius is a useful landmark during arthroscopic resection of the scaphoid bone.
A prominent styloid process of the radius makes applying a wrist splint more difficult.
|
https://en.wikipedia.org/wiki/Ulnar%20styloid%20process
|
The styloid process of the ulna is a bony prominence found at distal end of the ulna in the forearm.
Structure
The styloid process of the ulna projects from the medial and back part of the ulna. It descends a little lower than the head. The head is separated from the styloid process by a depression for the attachment of the apex of the triangular articular disk, and behind, by a shallow groove for the tendon of the extensor carpi ulnaris muscle.
The styloid process of the ulna varies in length between 2 mm and 6 mm.
Function
The rounded end of the styloid process of the ulna connects to the ulnar collateral ligament of the wrist. The radioulnar ligaments also attach to the base of the styloid process of the ulna.
Clinical significance
Fractures of the styloid process of the ulna seldom require treatment when they occur in association with a distal radius fracture. The major exception is when the joint between these bones, the distal radioulnar joint (or DRUJ), is unstable. When the DRUJ is unstable, the ulnar styloid may require independent treatment.
An excessively long styloid process of the ulna can cause painful contact with the triquetral bone in the wrist, known as ulnar styloid impaction syndrome. Radiology is used to diagnose it. Conservative management involves injection of triamcinolone, while surgery involves shortening of the styloid process of the ulna via resection.
The position of the styloid process of the ulna in relation to the wrist must be considered when applying a wrist splint. This is important in preventing pressure ischaemia.
|
https://en.wikipedia.org/wiki/Aditus%20to%20mastoid%20antrum
|
The aditus to mastoid antrum (otomastoid foramen), is a large, irregular opening upon the posterior wall of the tympanic cavity by which the mastoid antrum (situated posteriorly) communicates with the epitympanic recess of the tympanic cavity (situated anteriorly). The walls of the antrum are lined by mucosa which is continuous with that lining the mastoid cells and tympanic cavity.
The medial wall of the aditus features a ridge created by the underlying facial canal, and a bulge created by the underlying ampulla of the lateral semicircular canal. The short limb of incus is lodged in a shallow fossa upon the posterior wall of the tympanic cavity just inferior to the aditus. The pyramidal eminence is situated inferior to the aditus.
See also
Aditus
|
https://en.wikipedia.org/wiki/Lyman-alpha%20blob
|
In astronomy, a Lyman-alpha blob (LAB) is a huge concentration of gas emitting the Lyman-alpha emission line. LABs are some of the largest known individual objects in the Universe. Some of these gaseous structures are more than 400,000 light years across. So far they have only been found in the high-redshift universe because of the ultraviolet nature of the Lyman-alpha emission line. Since Earth's atmosphere is very effective at filtering out UV photons, the Lyman-alpha photons must be redshifted in order to be transmitted through the atmosphere.
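As a rough back-of-the-envelope illustration (assuming an atmospheric ultraviolet cutoff near 350 nm, a round figure chosen here rather than a value from this article), the observed wavelength of a line at redshift $z$ is

$$\lambda_{\text{obs}} = (1 + z)\,\lambda_{\text{rest}}, \qquad \lambda_{\text{rest}} \approx 121.6\ \text{nm for Lyman-alpha},$$

so ground-based detection requires roughly $z \gtrsim 350 / 121.6 - 1 \approx 1.9$, which is why LABs are observed only at high redshift.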
The most famous Lyman-alpha blobs were discovered in 2000 by Steidel et al. Matsuda et al., using the Subaru Telescope of the National Astronomical Observatory of Japan, extended the search for LABs and found over 30 new LABs in the original field of Steidel et al., although they were all smaller than the originals. These LABs form a structure which is more than 200 million light-years in extent. It is currently unknown whether LABs trace overdensities of galaxies in the high-redshift universe (as high-redshift radio galaxies, which also have extended Lyman-alpha halos, do, for example), which mechanism produces the Lyman-alpha emission line, or how the LABs are connected to the surrounding galaxies. Lyman-alpha blobs may hold valuable clues to determine how galaxies are formed.
The most massive Lyman-alpha blobs have been discovered by Tristan Friedrich et al. (2021), Steidel et al. (2000), Francis et al. (2001), Matsuda et al. (2004), Dey et al. (2005), Nilsson et al. (2006), and Smith & Jarvis et al. (2007).
Examples
Himiko
LAB-1
EQ J221734.0+001701, the SSA22 Protocluster
Ton 618, hyperluminous quasar powering a Lyman-alpha blob; also possesses one of the most massive black holes known.
See also
Damped Lyman-alpha system
Galaxy filament
Green bean galaxy
Lyman-alpha forest
Lyman-alpha emitter
Lyman break galaxy
Newfound Blob (disambiguation)
Notes
Astronomical spectroscopy
Intergalactic media
Lar
|
https://en.wikipedia.org/wiki/Antitragicus
|
The antitragicus is an intrinsic muscle of the outer ear.
In human anatomy, the antitragicus arises from the outer part of the antitragus, and is inserted into the cauda helicis (or tail of the helix) and antihelix.
The function of the muscle is to adjust the shape of the ear by pulling the antitragus and cauda helicis towards each other. While the muscle modifies the auricular shape only minimally in the majority of individuals, this action could increase the opening into the external acoustic meatus in some.
The antitragicus is developmentally derived from the second pharyngeal arch.
Additional images
See also
Intrinsic muscles of external ear
|
https://en.wikipedia.org/wiki/Tragicus
|
The tragicus (muscle of tragus or Valsalva muscle) is an intrinsic muscle of the outer ear.
It is a short, flattened vertical band on the lateral surface of the tragus.
While the muscle modifies the auricular shape only minimally in the majority of individuals, this action could increase the opening of the external acoustic meatus in some.
Additional images
See also
Intrinsic muscles of external ear
|
https://en.wikipedia.org/wiki/Oblique%20muscle%20of%20auricle
|
The oblique muscle of auricle (oblique auricular muscle or Tod muscle) is an intrinsic muscle of the outer ear.
The oblique muscle of auricle is placed on the cranial surface of the pinna. It consists of a few fibers extending from the upper and back part of the concha to the convexity immediately above it.
See also
Intrinsic muscles of external ear
|
https://en.wikipedia.org/wiki/Transverse%20muscle%20of%20auricle
|
The transverse muscle of auricle (transverse auricular muscle, transversus auriculae, transversus auricularis or transverse muscle of pinna) is an intrinsic muscle of the outer ear.
The muscle is located on the cranial surface of the pinna. It consists of scattered fibers, partly tendinous and partly muscular, extending from the eminentia conchae to the prominence corresponding with the scapha.
While the muscle modifies the auricular shape only minimally in the majority of individuals, it could help flatten the cranial profile of the auricular cartilage.
The transverse muscle is developmentally derived from the second pharyngeal arch.
Additional images
See also
Intrinsic muscles of external ear
|
https://en.wikipedia.org/wiki/Helicis%20minor
|
The helicis minor (musculus helicis minor or smaller muscle of helix) is a small skeletal muscle and an intrinsic muscle of the outer ear. The muscle runs obliquely and covers the helical crus, the part of the helix located just above the tragus.
The helicis minor originates from the base of the helical crus, runs obliquely, and inserts at the anterior aspect of the helical crus where it curves upward above the tragus.
The function of the muscle is to assist in adjusting the shape of the anterior margin of the ear cartilage. While this is a potential action in some individuals, in the majority of individuals the muscle modifies auricular shape to a minimal degree.
The helicis minor is developmentally derived from the second pharyngeal arch. It seems that only in primates are the helicis major and helicis minor two distinct muscles.
Additional images
See also
Intrinsic muscles of external ear
Helicis major
|
https://en.wikipedia.org/wiki/Helicis%20major
|
The helicis major (or large muscle of helix) is an intrinsic muscle of the outer ear.
In human anatomy, it takes the form of a narrow vertical band situated upon the anterior margin of the helix, at the point where the helix becomes transverse.
It arises below, from the spina helicis, and is inserted into the anterior border of the helix, just where it is about to curve backward.
The function of the muscle is to adjust the shape of the ear by depressing the anterior margin of the ear cartilage. While the muscle modifies the auricular shape only minimally in the majority of individuals, this action could increase the opening into the external acoustic meatus in some.
The helicis major is developmentally derived from the second pharyngeal arch. It seems that only in primates are the helicis minor and helicis major two distinct muscles.
Additional images
See also
Intrinsic muscles of external ear
Helicis minor
|
https://en.wikipedia.org/wiki/Spina%20helicis
|
At the front part of the auricula, where the helix bends upward, is a small projection of cartilage, called the spina helicis.
|
https://en.wikipedia.org/wiki/Cauda%20helicis
|
In the lower part of the helix the cartilage is prolonged downward as a tail-like process, the cauda helicis; this is separated from the antihelix by a fissure, the fissura antitragohelicina.
|
https://en.wikipedia.org/wiki/James%20Ax
|
James Burton Ax (10 January 1937 – 11 June 2006) was an American mathematician who made groundbreaking contributions in algebra and number theory using model theory. He shared, with Simon B. Kochen, the seventh Frank Nelson Cole Prize in Number Theory, which was awarded for a series of three joint papers on Diophantine problems.
Education and career
James Ax graduated from Stuyvesant High School in New York City and then the Brooklyn Polytechnic University. He earned his Ph.D. from the University of California, Berkeley in 1961 under the direction of Gerhard Hochschild, with a dissertation on The Intersection of Norm Groups. After a year at Stanford University, he joined the mathematics faculty at Cornell University. He spent the academic year 1965–1966 at Harvard University on a Guggenheim Fellowship. In 1969, he moved from Cornell to the mathematics department at Stony Brook University and remained on the faculty until 1977, when he retired from his academic career. In 1970 he was an Invited Speaker at the ICM in Nice with the talk Transcendence and differential algebraic geometry. In the 1970s, he worked on the fundamentals of physics, including an axiomatization of space-time and the group theoretical properties of the axioms of quantum mechanics.
In the 1980s, he and Berkeley classmate Jim Simons founded a quantitative finance firm, Axcom Trading Advisors, which was later acquired by Renaissance Technologies and renamed the Medallion Fund. The latter fund was named after the Cole Prize won by James Ax and the Veblen Prize won by James Simons.
In the early 1990s, Ax retired from his financial career and went to San Diego, California, where he studied further on the foundations of quantum mechanics and also attended, at the University of California, San Diego, courses on playwriting and screenwriting. (In 2005 he completed a thriller screenplay entitled Bots.)
The Ax Library in the Department of Mathematics at the University of California, San Diego houses his m
|
https://en.wikipedia.org/wiki/Lacrimal%20papilla
|
The lacrimal papilla is the small rise in the bottom (inferior) and top (superior) eyelid just before it ends at the corner of the eye closest to the nose. At the medial edge of it is the lacrimal punctum, a small hole that lets tears drain into the inside of the nose through the lacrimal canaliculi.
In medical terms, the lacrimal papilla is a small conical elevation on the margin of each eyelid at the basal angles of the lacrimal lake. Its apex is pierced by a small orifice, the lacrimal punctum, the commencement of the lacrimal canaliculi.
It is otherwise known commonly as simply the 'tear duct'.
See also
Papilla (disambiguation)
|
https://en.wikipedia.org/wiki/NRC%20Herzberg%20Astronomy%20and%20Astrophysics%20Research%20Centre
|
The NRC Herzberg Astronomy and Astrophysics Research Centre (NRC Herzberg, HAA) is the leading Canadian centre for astronomy and astrophysics. It is based in Victoria, British Columbia. The current director-general, as of 2021, is Luc Simard.
History
Named for the Nobel laureate Gerhard Herzberg, it was formed in 1975 as part of the National Research Council of Canada in Ottawa, Ontario. The NRC-HIA headquarters were moved to Victoria, British Columbia in 1995 to the site of the Dominion Astrophysical Observatory. In 2012, the organization was restructured and renamed NRC Herzberg Astronomy and Astrophysics.
Facilities
NRC-HAA also operates the Dominion Radio Astrophysical Observatory outside Penticton, British Columbia, and the Canadian Astronomy Data Centre, Canada's national astronomy data centre, and manages Canadian involvement in the Canada-France-Hawaii Telescope, the Gemini Observatory, the Atacama Large Millimeter Array, the Square Kilometre Array, and the Thirty Meter Telescope.
The institute is also involved in the development and construction of instruments and telescopes.
Members of NRC-HAA are currently involved in The Next Generation Virgo Cluster Survey.
Members of NRC-HAA are currently involved with Pan-Andromeda Archaeological Survey.
Plaskett Fellowship
The Plaskett Fellowship is named after John Stanley Plaskett and is awarded to an outstanding, recent doctoral graduate in astrophysics or a closely related discipline. Fellows conduct independent research in a stimulating, collegial environment at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada. Expertise in observational astrophysics is the norm, but some theoreticians were also among this distinguished group of astronomers.
Covington Fellowship
The Covington Fellowship is named after Arthur Covington and is awarded to an outstanding, recent doctoral graduate in astrophysics or a closely related discipline. Fellows conduct independent researc
|
https://en.wikipedia.org/wiki/Vinum%20volume%20manager
|
Vinum is a logical volume manager, also called software RAID, allowing implementations of the RAID-0, RAID-1 and RAID-5 models, both individually and in combination. The original Vinum was part of the base distribution of the FreeBSD operating system from version 3.0, of NetBSD between 2003-10-10 and 2006-02-25, and of descendants of FreeBSD, including DragonFly BSD; in more recent versions of FreeBSD, it has been replaced by gvinum, which was first introduced around FreeBSD 6. Vinum source code is maintained in the FreeBSD and DragonFly source trees. Vinum supports RAID levels 0, 1, 5, and JBOD. Vinum was inspired by Veritas Volume Manager.
Vinum is invoked as gvinum (GEOM Vinum) on FreeBSD version 5.4 and up.
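A rough sketch of what a two-disk gvinum mirror configuration might look like (illustrative only: the drive names and device paths are hypothetical, and the exact syntax should be checked against the FreeBSD Handbook, which documents loading such a file with gvinum create):

drive d1 device /dev/da1s1a
drive d2 device /dev/da2s1a
volume mirror
  plex org concat
    sd length 512m drive d1
  plex org concat
    sd length 512m drive d2

Each plex here is a complete copy of the volume's data, so keeping two concatenated plexes in one volume is what realizes RAID-1.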
In modern FreeBSD, it may be considered to be a legacy volume manager; modern alternatives being GEOM and ZFS.
In NetBSD, it has been removed before NetBSD 4.0 due to lack of interest and maintenance; RAIDframe was cited as providing similar functionality.
In DragonFly BSD, DragonFly's own HAMMER filesystem already implements network mirroring; a utility could be used to configure another software RAID implementation, which originally appeared with FreeBSD 6.0 but was deprecated with FreeBSD 9 and removed before FreeBSD 10.0; and a NetBSD port of Red Hat's lvm2 is also available in the base system of DragonFly, all in addition to vinum.
Software RAID vs. Hardware RAID
The distribution of data across multiple disks can be managed by either dedicated hardware or by software. Additionally, there are hybrid RAIDs that are partly software- and partly hardware-based solutions.
With a software implementation, the operating system manages the disks of the array through the normal drive controller (ATA, SATA, SCSI, Fibre Channel, etc.). With present CPU speeds, software RAID can be faster than hardware RAID.
A hardware implementation of RAID requires at a minimum a special-purpose RAID controller. On a desktop system, this may be a P
|
https://en.wikipedia.org/wiki/Orbital%20fascia
|
The orbital fascia forms the periosteum of the orbit.
It is loosely connected to the bones and can be readily separated from them.
Behind, it is united with the dura mater by processes which pass through the optic foramen and superior orbital fissure, and with the sheath of the optic nerve.
In front, it is connected with the periosteum at the margin of the orbit, and sends off a process which assists in forming the orbital septum.
From it two processes are given off; one to enclose the lacrimal gland, the other to hold the pulley of the Obliquus superior in position.
Anatomy
The orbital fascia consists of three parts:
Periorbita
Considered the periosteum of the bones forming the orbit, it is continuous with the dura mater through the superior orbital fissure. It also forms the lacrimal sac.
Bulbar fascia
Also known as Tenon's capsule.
It encapsulates the eyeball, forming a narrow space, called the Episcleral space, between the fascia and eyeball. This allows for the movement of the eyeball, while providing a socket that continues posteriorly with the optic nerve and its dural covering. Anteriorly it is attached to the corneoscleral junction.
Orbital septum
The framework that binds the orbital fat pad into the orbit, it also binds the palpebra to the bony orbit.
Other contents of the orbital cavity
Eyeball
Lacrimal gland
Extraocular muscles
Orbital adipose tissue
Optic nerve
Oculomotor nerve branches
Trochlear nerve branches
Ophthalmic nerve branches
Abducent nerve branches
Ciliary ganglion
Ophthalmic artery
Ophthalmic veins
|
https://en.wikipedia.org/wiki/Orbitalis%20muscle
|
The orbitalis muscle is a vestigial or rudimentary nonstriated muscle (smooth muscle) that crosses from the infraorbital groove and sphenomaxillary fissure and is intimately united with the periosteum of the orbit. It was described by Heinrich Müller and is often called Müller's muscle. It lies at the back of the orbit as a thin layer of smooth muscle that bridges the inferior orbital fissure. It is supplied by sympathetic nerves, and its function is unknown.
Function
The muscle forms an important part of the lateral orbital wall in some animals and can act to change the wall's volume in lower mammals, while in humans it is not known to have any significant function, but its contraction may possibly produce a slight forward protrusion of the eyeball. Several sources have suggested a role in the autonomic regulation of the vascular system due to the pattern of innervation of the orbitalis.
Pathology
Horner's syndrome causes paralysis of the structures of the eye and orbit that receive sympathetic innervation. The signs of Horner's syndrome are ptosis, miosis, anhidrosis, and (apparent) enophthalmos. While some attribute the enophthalmos of Horner's syndrome to paralysis of the orbitalis muscle, this is inaccurate. Enophthalmos in Horner's syndrome is an illusion created by the subtle ptosis of the upper eyelid caused by paralysis of the superior tarsal muscle.
Sinking in of the eye (true enophthalmos) is possibly caused by paralysis of the smooth (orbitalis) muscle in the floor of the orbit.
Eponym
While the orbitalis muscle is also known as Müller's muscle, the use of this term should be discouraged to avoid confusion with the superior tarsal muscle and the circular fibres of the ciliary muscle.
|
https://en.wikipedia.org/wiki/Greater%20palatine%20nerve
|
The greater palatine nerve is a branch of the pterygopalatine ganglion. This nerve is also referred to as the anterior palatine nerve, due to its location anterior to the lesser palatine nerve. It carries both general sensory fibres from the maxillary nerve, and parasympathetic fibers from the nerve of the pterygoid canal. It may be anaesthetised for procedures of the mouth and maxillary (upper) teeth.
Structure
The greater palatine nerve is a branch of the pterygopalatine ganglion. It descends through the greater palatine canal, moving anteriorly and inferiorly. Here, it is accompanied by the descending palatine artery. It emerges upon the hard palate through the greater palatine foramen. It then passes forward in a groove in the hard palate, nearly as far as the incisor teeth.
While in the pterygopalatine canal, it gives off lateral posterior inferior nasal branches, which enter the nasal cavity through openings in the palatine bone, and ramify over the inferior nasal concha and middle and inferior meatuses. At its exit from the canal, a palatine branch is distributed to both surfaces of the soft palate.
Function
The greater palatine nerve carries both general sensory fibres from the maxillary nerve, and parasympathetic fibers from the nerve of the pterygoid canal. It supplies the gums, the mucous membrane and glands of the hard palate, and communicates in front with the terminal filaments of the nasopalatine nerve.
Clinical significance
The greater palatine nerve may be anaesthetised to perform dental procedures on the maxillary (upper) teeth, and sometimes for cleft lip and cleft palate surgery.
|
https://en.wikipedia.org/wiki/Cryptomorphism
|
In mathematics, two objects, especially systems of axioms or semantics for them, are called cryptomorphic if they are equivalent but not obviously equivalent. In particular, two definitions or axiomatizations of the same object are "cryptomorphic" if it is not obvious that they define the same object. Examples of cryptomorphic definitions abound in matroid theory and others can be found elsewhere, e.g., in group theory the definition of a group by a single operation of division, which is not obviously equivalent to the usual three "operations" of identity element, inverse, and multiplication.
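For instance (a standard exercise, written out here for illustration), if division is defined from the usual operations by $x / y := x y^{-1}$, then the usual operations can be recovered from division alone:

$$e = x / x, \qquad y^{-1} = (x / x) / y, \qquad x y = x / \big( (x / x) / y \big),$$

so the two axiomatizations describe the same objects even though their equivalence is not visible at a glance.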
This word is a play on the many morphisms in mathematics, but "cryptomorphism" is only very distantly related to "isomorphism", "homomorphism", or "morphism". The equivalence in a cryptomorphism, if it is not actual identity, may be informal, or may be formalized in terms of a bijection or an equivalence of categories between the mathematical objects defined by the two cryptomorphic axiom systems.
Etymology
The word was coined by Garrett Birkhoff before 1967, for use in the third edition of his book Lattice Theory. Birkhoff did not give it a formal definition, though others working in the field have made some attempts since.
Use in matroid theory
Its informal sense was popularized (and greatly expanded in scope) by Gian-Carlo Rota in the context of matroid theory: there are dozens of equivalent axiomatic approaches to matroids, but two different systems of axioms often look very different.
In his 1997 book Indiscrete Thoughts, Rota describes the situation as follows:
Though there are many cryptomorphic concepts in mathematics outside of matroid theory and universal algebra, the word has not caught on among mathematicians generally. It is, however, in fairly wide use among researchers in matroid theory.
See also
Combinatorial class, an equivalence among combinatorial enumeration problems hinting at the existence of a cryptomorphism
|
https://en.wikipedia.org/wiki/Bimorph
|
A bimorph is a cantilever used for actuation or sensing which consists of two active layers. It can also have a passive layer between the two active layers. In contrast, a piezoelectric unimorph has only one active (i.e. piezoelectric) layer and one passive (i.e. non-piezoelectric) layer.
Piezoelectric bimorph
The term bimorph is most commonly used with piezoelectric bimorphs. In actuator applications, one active layer contracts and the other expands if voltage is applied, thus the bimorph bends. In sensing applications, bending the bimorph produces voltage which can for example be used to measure displacement or acceleration. This mode can also be used for energy harvesting.
Bimetal bimorph
A bimetal could be regarded as a thermally activated bimorph. The first theory about the bending of thermally activated bimorphs was given by Stoney. Newer developments also enabled electrostatically activated bimorphs for the use in microelectromechanical systems.
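Stoney's result is usually quoted today in the following form (a standard modern statement, included here for illustration rather than taken from this article): a thin film of thickness $t_f$ carrying a uniform stress $\sigma_f$ on a substrate of thickness $t_s$, Young's modulus $E_s$, and Poisson's ratio $\nu_s$ bends the pair to a curvature $\kappa$ satisfying

$$\sigma_f = \frac{E_s\, t_s^{2}}{6\,(1 - \nu_s)\, t_f}\,\kappa,$$

which is why curvature measurements are a common way to infer stress in film-on-substrate bimorph structures.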
See also
Shape-memory alloy
|
https://en.wikipedia.org/wiki/Water%20column
|
The (oceanic) water column is a concept used in oceanography to describe the physical (temperature, salinity, light penetration) and chemical (pH, dissolved oxygen, nutrient salts) characteristics of seawater at different depths for a defined geographical point. Generally, vertical profiles of temperature, salinity, and chemical parameters are made at a defined point along the water column. The water column is the largest, yet one of the most under-explored, habitats on the planet; it is explored to better understand the ocean as a whole, including the huge biomass that lives there and its importance to the global carbon and other biogeochemical cycles. Studying the water column also provides understanding of the links between living organisms and environmental parameters, large-scale water circulation and the transfer of matter between water masses.
Water columns are used chiefly for environmental studies evaluating the stratification or mixing of thermal or chemically stratified layers in a lake, stream or ocean. Some of the common parameters analyzed in the water column are pH, turbidity, temperature, hydrostatic pressure, salinity, total dissolved solids, various pesticides, pathogens and a wide variety of chemicals and biota.
Descriptively, the deep sea water column is divided into five parts—pelagic zones (from Greek πέλαγος (pélagos), 'open sea')—from the surface to below the floor.
The term water column is also commonly used in scuba diving to describe the vertical space through which divers ascend and descend.
See also
Hydrological transport model
|
https://en.wikipedia.org/wiki/Unimorph
|
A unimorph or monomorph is a cantilever that consists of one active layer and one inactive layer. In the case where the active layer is piezoelectric, deformation in that layer may be induced by the application of an electric field. This deformation induces a bending displacement in the cantilever. The inactive layer may be fabricated from a non-piezoelectric material. An expanded URL for the referenced paper is located here: https://people.eecs.berkeley.edu/~ronf/PAPERS/sitti-icra01.pdf
See also
Bimorph
|
https://en.wikipedia.org/wiki/Down%20syndrome%20research
|
Research of Down syndrome-related genes is based on studying the genes located on chromosome 21. In general, this leads to an overexpression of the genes. Understanding the genes involved may help to target medical treatment to individuals with Down syndrome. It is estimated that chromosome 21 contains 200 to 250 genes. Recent research has identified a region of the chromosome that contains the main genes responsible for the pathogenesis of Down syndrome, located proximal to 21q22.3. The search for major genes involved in Down syndrome characteristics is normally in the region 21q21–21q22.3.
Genes
Some suspected genes involved in features of Down syndrome are given in the Table 1:
General research
Research by Arron et al. shows that some of the phenotypes associated with Down syndrome can be related to the dysregulation of transcription factors (596), and in particular, NFAT. NFAT is controlled in part by two proteins, DSCR1 and DYRK1A; these genes are located on chromosome 21 (Epstein 582). In people with Down syndrome, these proteins have 1.5 times greater concentration than normal (Arron et al. 597). The elevated levels of DSCR1 and DYRK1A keep NFAT primarily located in the cytoplasm rather than in the nucleus, preventing NFATc from activating the transcription of target genes and thus the production of certain proteins (Epstein 583).
This dysregulation was discovered by testing in transgenic mice that had segments of their chromosomes duplicated to simulate a human chromosome 21 trisomy (Arron et al. 597). A test involving grip strength showed that the genetically modified mice had a significantly weaker grip, much like the characteristically poor muscle tone of an individual with Down syndrome (Arron et al. 596). The mice squeezed a probe with a paw and displayed a 0.2 newton weaker grip (Arron et al. 596). Down syndrome is also characterized by increased socialization. When modified and unmodified mice were observed for social interaction, the modifie
|
https://en.wikipedia.org/wiki/Dynamic%20light%20scattering
|
Dynamic light scattering (DLS) is a technique in physics that can be used to determine the size distribution profile of small particles in suspension or polymers in solution. In the scope of DLS, temporal fluctuations are usually analyzed using the intensity or photon auto-correlation function (also known as photon correlation spectroscopy - PCS or quasi-elastic light scattering - QELS). In the time domain analysis, the autocorrelation function (ACF) usually decays starting from zero delay time, and faster dynamics due to smaller particles lead to faster decorrelation of scattered intensity trace. It has been shown that the intensity ACF is the Fourier transform of the power spectrum, and therefore the DLS measurements can be equally well performed in the spectral domain. DLS can also be used to probe the behavior of complex fluids such as concentrated polymer solutions.
Setup
A monochromatic light source, usually a laser, is shot through a polarizer and into a sample. The scattered light then goes through a second polarizer where it is collected by a photomultiplier and the resulting image is projected onto a screen. This is known as a speckle pattern (Figure 1).
All of the molecules in the solution are being hit with the light and all of the molecules diffract the light in all directions. The diffracted light from all of the molecules can either interfere constructively (light regions) or destructively (dark regions). This process is repeated at short time intervals and the resulting set of speckle patterns is analyzed by an autocorrelator that compares the intensity of light at each spot over time.
The polarizers can be set up in two geometrical configurations. One is a vertical/vertical (VV) geometry, where the second polarizer allows light through that is in the same direction as the primary polarizer. In vertical/horizontal (VH) geometry the second polarizer allows light that is not in the same direction as the incident light.
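A minimal numerical sketch of the standard single-exponential analysis (illustrative only; the decay rate and sample parameters below are assumptions, not values from this article): for monodisperse spheres the intensity autocorrelation follows $g_2(\tau) = 1 + \beta e^{-2 \Gamma \tau}$ with $\Gamma = D q^2$, and the Stokes-Einstein relation converts the diffusion coefficient $D$ into a hydrodynamic radius.

import numpy as np

# Illustrative parameters (assumptions for this sketch, not instrument values)
wavelength = 633e-9        # laser wavelength in vacuum [m]
n = 1.33                   # refractive index of the solvent (water)
theta = np.deg2rad(90.0)   # scattering angle
T = 298.15                 # temperature [K]
eta = 0.89e-3              # solvent viscosity [Pa*s]
k_B = 1.380649e-23         # Boltzmann constant [J/K]

# Magnitude of the scattering vector: q = (4*pi*n/lambda) * sin(theta/2)
q = 4.0 * np.pi * n / wavelength * np.sin(theta / 2.0)

# Suppose fitting g2(tau) = 1 + beta*exp(-2*Gamma*tau) to the measured
# autocorrelation function gave this decay rate (hypothetical fit result):
Gamma = 4.0e3              # [1/s]

# Decay rate -> diffusion coefficient -> hydrodynamic radius (Stokes-Einstein)
D = Gamma / q**2                         # [m^2/s]
r_h = k_B * T / (6.0 * np.pi * eta * D)  # [m]

print(f"q = {q:.3e} 1/m, D = {D:.3e} m^2/s, r_h = {r_h * 1e9:.1f} nm")

Real instruments fit the measured correlation function rather than assuming a known decay rate, and polydisperse samples require cumulant or inverse-Laplace methods instead of a single exponential.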
Description
When light hits s
|
https://en.wikipedia.org/wiki/Triangle%20K
|
Triangle K is a kosher certification agency under the leadership of Rabbi Aryeh R. Ralbag. It was founded by his late father, Rabbi Yehosef Ralbag. The hechsher is a letter K enclosed in an equilateral triangle.
Supervision and certification
They supervise a number of major brands, including Del Monte, Hebrew National, Ocean Spray, Sunsweet, Sunny Delight, SunChips and Wonder Bread.
Minute Maid products used to be supervised by Triangle K. Since 2013, the Orthodox Union has been providing kosher certification for Minute Maid products instead.
Many Orthodox Jews eat only glatt kosher. Triangle K continues to certify foods as kosher that are not glatt kosher. As a result, some Orthodox Jews will not eat food that is certified by Triangle K.
K Meshulash
The name K Meshulash is sometimes used in addition to the trademarked Triangle K name. Meshulash means triangle, triple, or tripled in Hebrew.
See also
Kosher foods
Kashrut
|
https://en.wikipedia.org/wiki/BasicX
|
BasicX is a free programming language designed specifically for NetMedia's BX-24 microcontroller and based on the BASIC programming language. It is used in the design of robotics projects such as the Robodyssey Systems Mouse robot.
Further reading
Odom, Chris D. BasicX and Robotics. Robodyssey Systems LLC,
External links
NetMedia Home Page
BasicX Free Downloads
Sample Code
, programmed in BasicX
Videos, Sample Code, and Tutorials from the author of BasicX and Robotics
BASIC compilers
Embedded systems
|
https://en.wikipedia.org/wiki/Prefuse
|
Prefuse is a Java-based toolkit for building interactive information visualization applications. It supports a rich set of features for data modeling, visualization and interaction. It provides optimized data structures for tables, graphs, and trees, a host of layout and visual encoding techniques, and support for animation, dynamic queries, integrated search, and database connectivity.
Prefuse uses the Java 2D graphics library, and is easily integrated into Swing applications or Java applets. Prefuse is licensed under the terms of a BSD license, and can be used freely for commercial and non-commercial purposes.
Overview
Prefuse is a Java-based extensible software framework for creating interactive information visualization applications. It can be used to build standalone applications, visual components and Java applets. Prefuse intends to simplify the processes of visualizing, handling and mapping of data, as well as user interaction.
Some of Prefuse's features include:
Table, graph, and tree data structures supporting arbitrary data attributes, data indexing, and selection queries, all with an efficient memory footprint.
Components for layout, color, size, and shape encodings, distortion techniques and more.
A library of controls for common interactive, direct-manipulation operations.
Animation support through a general activity scheduling mechanism.
View transformations supporting panning and zooming, including both geometric and semantic zooming.
Dynamic queries for interactive filtering of data.
Integrated text search using a number of available search engines.
A physical force simulation engine for dynamic layout and animation (see also force-directed graph drawing)
Flexibility for multiple views, including "overview+detail" and "small multiples" displays.
A built in, SQL-like expression language for writing queries to prefuse data structures and creating derived data fields.
Support for issuing queries to SQL databases and mapping query results
|
https://en.wikipedia.org/wiki/Control-Lyapunov%20function
|
In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function $V(x)$ to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or (more restrictively) asymptotically stable. Lyapunov stability means that if the system starts in a state $x \ne 0$ in some domain $D$, then the state will remain in $D$ for all time. For asymptotic stability, the state is also required to converge to $x = 0$. A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is, whether for any state $x$ there exists a control $u(x, t)$ such that the system can be brought to the zero state asymptotically by applying the control $u$.
The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.
Definition
Consider an autonomous dynamical system with inputs
$$\dot{x} = f(x, u),$$
where $x \in \mathbb{R}^n$ is the state vector and $u \in \mathbb{R}^m$ is the control vector. Suppose our goal is to drive the system to an equilibrium $x_* \in \mathbb{R}^n$ from every initial state in some domain $D \subset \mathbb{R}^n$. Without loss of generality, suppose the equilibrium is at $x_* = 0$ (for an equilibrium $x_* \ne 0$, it can be translated to the origin by a change of variables).
Definition. A control-Lyapunov function (CLF) is a function $V : D \to \mathbb{R}$ that is continuously differentiable, positive-definite (that is, $V(x)$ is positive for all $x \in D$ except at $x = 0$ where it is zero), and such that for all $x \ne 0$ there exists $u \in \mathbb{R}^m$ such that
$$\langle \nabla V(x), f(x, u) \rangle < 0,$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathbb{R}^n$.
The last condition is the key condition; in words it says that for each state x we can find a control u that will reduce the "energy" V. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by Artstein's theorem.
Some results apply only to control-affine systems, i.e., control systems in the following form:
$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\, u_i,$$
where $f(x) \in \mathbb{R}^n$ and $g_i(x) \in \mathbb{R}^n$ for $i = 1, \dots, m$.
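As an illustrative sketch (the scalar system $\dot{x} = x + u$ and the CLF $V(x) = x^2/2$ are choices made here, not taken from the article), Sontag's universal formula turns a CLF for a control-affine system into an explicit stabilizing feedback: with $a = L_f V$ and $b = L_g V$, choosing $u = -(a + \sqrt{a^2 + b^4})/b$ for $b \ne 0$ gives $\dot{V} = -\sqrt{a^2 + b^4} < 0$ away from the origin.

import numpy as np

# Hypothetical scalar control-affine system dx/dt = f(x) + g(x)*u with
# candidate CLF V(x) = x**2 / 2 (illustrative choices, not from the article).
f = lambda x: x        # unstable drift
g = lambda x: 1.0      # constant input gain
dV = lambda x: x       # gradient of V

def sontag_feedback(x):
    """Sontag's universal formula for a single-input control-affine system."""
    a = dV(x) * f(x)   # Lie derivative L_f V
    b = dV(x) * g(x)   # Lie derivative L_g V
    if abs(b) < 1e-12: # the formula applies for b != 0; use u = 0 otherwise
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Forward-Euler simulation from x(0) = 2; the state should decay toward 0,
# since along trajectories dV/dt = -sqrt(a**2 + b**4) < 0 away from the origin.
x, dt = 2.0, 1e-3
for _ in range(10000):
    x += dt * (f(x) + g(x) * sontag_feedback(x))
print(f"x after 10 s: {x:.6f}")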
Theorems
E. D. Sontag
|
https://en.wikipedia.org/wiki/TNF%20inhibitor
|
A TNF inhibitor is a pharmaceutical drug that suppresses the physiologic response to tumor necrosis factor (TNF), which is part of the inflammatory response. TNF is involved in autoimmune and immune-mediated disorders such as rheumatoid arthritis, ankylosing spondylitis, inflammatory bowel disease, psoriasis, hidradenitis suppurativa and refractory asthma, so TNF inhibitors may be used in their treatment. The important side effects of TNF inhibitors include lymphomas, infections (especially reactivation of latent tuberculosis), congestive heart failure, demyelinating disease, a lupus-like syndrome, induction of auto-antibodies, injection site reactions, and systemic side effects.
The global market for TNF inhibitors in 2008 was $13.5 billion and $22 billion in 2009.
Examples
Inhibition of TNF effects can be achieved with a monoclonal antibody such as infliximab, adalimumab, certolizumab pegol, and golimumab, or with a circulating receptor fusion protein such as etanercept.
Thalidomide and its derivatives lenalidomide and pomalidomide are also active against TNF.
While most clinically useful TNF inhibitors are monoclonal antibodies, some are simple molecules such as xanthine derivatives (e.g. pentoxifylline) and bupropion.
Several 5-HT2A agonist hallucinogens including (R)-DOI, TCB-2, LSD and LA-SS-Az have unexpectedly also been found to act as potent inhibitors of TNF, with DOI being the most active, showing TNF inhibition in the picomolar range, an order of magnitude more potent than its action as a hallucinogen.
Medical uses
Rheumatoid arthritis
The role of TNF as a key player in the development of rheumatoid arthritis was originally demonstrated by Kollias and colleagues in proof of principle studies in transgenic animal models.
TNF levels have been shown to be raised in both the synovial fluid and synovium of patients with rheumatoid arthritis. This leads to local inflammation through the signalling of synovial cells to produce metalloproteinases and co
|
https://en.wikipedia.org/wiki/Cannibalism
|
Cannibalism is the act of consuming another individual of the same species as food. Cannibalism is a common ecological interaction in the animal kingdom and has been recorded in more than 1,500 species. Human cannibalism is well documented, both in ancient and in recent times.
The rate of cannibalism increases in nutritionally poor environments as individuals turn to members of their own species as an additional food source. Cannibalism regulates population numbers, whereby resources such as food, shelter and territory become more readily available with the decrease of potential competition. Although it may benefit the individual, it has been shown that the presence of cannibalism decreases the expected survival rate of the whole population and increases the risk of consuming a relative. Other negative effects may include the increased risk of pathogen transmission as the encounter rate of hosts increases. Cannibalism, however, does not—as once believed—occur only as a result of extreme food shortage or of artificial/unnatural conditions, but may also occur under natural conditions in a variety of species.
Cannibalism is prevalent in aquatic ecosystems, in which up to approximately 90% of the organisms engage in cannibalistic activity at some point in their life-cycle. Cannibalism is not restricted to carnivorous species: it also occurs in herbivores and in detritivores. Sexual cannibalism normally involves the consumption of the male by the female individual before, during or after copulation. Other forms of cannibalism include size-structured cannibalism and intrauterine cannibalism.
Behavioral, physiological and morphological adaptations have evolved to decrease the rate of cannibalism in individual species.
Benefits
In environments where food availability is constrained, individuals can receive extra nutrition and energy if they use members of their own species, also known as conspecifics, as an additional food source. This would, in turn, increase the surv
|
https://en.wikipedia.org/wiki/Lessepsian%20migration
|
The Lessepsian migration (also called Erythrean invasion) is the migration of marine species across the Suez Canal, usually from the Red Sea to the Mediterranean Sea, and more rarely in the opposite direction. When the canal was completed in 1869, fish, crustaceans, mollusks, and other marine animals and plants were exposed to an artificial passage between the two naturally separate bodies of water, and cross-contamination was made possible between formerly isolated ecosystems. The phenomenon is still occurring today. It is named after Ferdinand de Lesseps, the French diplomat in charge of the canal's construction.
The migration of invasive species through the Suez Canal from the Indo-Pacific region has been facilitated by many factors, both abiotic and anthropogenic, and presents significant implications for the ecological health and economic stability of the contaminated areas; of particular concern is the fisheries industry in the Eastern Mediterranean. Despite these threats, the phenomenon has allowed scientists to study an invasive event on a large scale in a short period of time, which usually takes hundreds of years in natural conditions.
In a wider context, the term Lessepsian migration is also used to describe any animal migration facilitated by man-made structures, i.e. one which would not have occurred had it not been for the presence of an artificial structure.
Background
The opening of the Suez Canal created the first saltwater passage between the Mediterranean Sea and the Red Sea. Constructed in 1869 to provide a more direct trade route from Europe to India and the Far East, the canal is long, with a depth of and a width varying between .
Because the surface of the Red Sea is slightly higher in elevation than the Eastern Mediterranean, the canal serves as a tidal strait by which Red Sea water pours into the Mediterranean. The Bitter Lakes, which are natural hypersaline lakes that form part of the canal, blocked the migration of Red Sea species i
|
https://en.wikipedia.org/wiki/Majewski%27s%20polydactyly%20syndrome
|
Majewski's polydactyly syndrome, also known as polydactyly with neonatal chondrodystrophy type I, short rib-polydactyly syndrome type II, and short rib-polydactyly syndrome, is a lethal form of neonatal dwarfism characterized by osteochondrodysplasia (skeletal abnormalities in the development of bone and cartilage) with a narrow thorax, polysyndactyly, disproportionately short tibiae, thorax dysplasia, hypoplastic lungs and respiratory insufficiency. Associated anomalies include a protruding abdomen, brachydactyly, peculiar facies, a hypoplastic epiglottis, cardiovascular defects, renal cysts, and genital anomalies. Death occurs before or at birth.
The disease is inherited in an autosomal recessive pattern.
It was characterized in 1971.
|
https://en.wikipedia.org/wiki/Flat%20bone
|
Flat bones are bones whose principal function is either extensive protection or the provision of broad surfaces for muscular attachment. These bones are expanded into broad, flat plates, as in the cranium (skull), the ilium (pelvis), sternum and the rib cage.
The flat bones are: the occipital, parietal, frontal, nasal, lacrimal, vomer, sternum, ribs, and scapulae.
These bones are composed of two thin layers of compact bone enclosing between them a variable quantity of cancellous bone, which is the location of red bone marrow. In an adult, most red blood cells are formed in flat bones. In the cranial bones, the layers of compact tissue are familiarly known as the tables of the skull; the outer one is thick and tough; the inner is thin, dense, and brittle, and hence is termed the vitreous (glass-like) table. The intervening cancellous tissue is called the diploë, and this, in the nasal region of the skull, becomes absorbed so as to leave spaces filled with air (the paranasal sinuses) between the two tables.
Ossification in flat bones
Ossification begins with the formation of layers of undifferentiated connective tissue in the area where the flat bone is to form. In a baby, these spots are known as fontanelles. The fontanelles contain connective tissue stem cells, which differentiate into osteoblasts, which secrete calcium phosphate into a matrix of canals. The osteoblasts form a ring between the membranes and begin to expand outwards; as they expand they create a bony matrix.
This hardened matrix forms the body of the bone. Since flat bones are usually thinner than the long bones, they only have red bone marrow, rather than both red and yellow bone marrow (yellow bone marrow being made up of mostly fat). The bone marrow fills the space in the ring of osteoblasts, and eventually fills the bony matrix.
After the bone is completely ossified, the osteoblasts retract their calcium phosphate secreting tendrils, leaving tiny canals in the bony matrix, known as canaliculi. These
|
https://en.wikipedia.org/wiki/Ossification%20center
|
An ossification center is a point where ossification of the cartilage begins. The first step in ossification is that the cartilage cells at this point enlarge and arrange themselves in rows.
The matrix in which they are imbedded increases in quantity, so that the cells become further separated from each other.
A deposit of calcareous material now takes place in this matrix, between the rows of cells, so that they become separated from each other by longitudinal columns of calcified matrix, presenting a granular and opaque appearance.
Here and there the matrix between two cells of the same row also becomes calcified, and transverse bars of calcified substance stretch across from one calcareous column to another.
Thus, there are longitudinal groups of the cartilage cells enclosed in oblong cavities, the walls of which are formed of calcified matrix which cuts off all nutrition from the cells; the cells, in consequence, atrophy, leaving spaces called the primary areolæ.
Types of ossification centers
There are two types of ossification centers – primary and secondary.
A primary ossification center is the first area of a bone to start ossifying. It usually appears during prenatal development in the central part of each developing bone. In long bones the primary centers occur in the diaphysis/shaft and in irregular bones the primary centers occur usually in the body of the bone. Most bones have only one primary center (e.g. all long bones except clavicle) but some irregular bones such as the os coxa (hip) and vertebrae have multiple primary centers.
A secondary ossification center is the area of ossification that appears after the primary ossification center has already appeared – most of which appear during the postnatal and adolescent years. Most bones have more than one secondary ossification center. In long bones, the secondary centers appear in the epiphyses.
|
https://en.wikipedia.org/wiki/Occipital%20condyles
|
The occipital condyles are undersurface protuberances of the occipital bone in vertebrates, which function in articulation with the superior facets of the atlas vertebra.
The condyles are oval or reniform (kidney-shaped) in shape, and their anterior extremities, directed forward and medialward, are closer together than their posterior, and encroach on the basilar portion of the bone; the posterior extremities extend back to the level of the middle of the foramen magnum.
The articular surfaces of the condyles are convex from before backward and from side to side, and look downward and lateralward.
To their margins are attached the capsules of the atlanto-occipital joints, and on the medial side of each is a rough impression or tubercle for the alar ligament.
At the base of either condyle the bone is tunnelled by a short canal, the hypoglossal canal.
Clinical significance
Fracture of an occipital condyle may occur in isolation, or as part of a more extended basilar skull fracture. Isolated condyle fracture is a type of craniocervical injury. The classification of Anderson and Montesano distinguishes three types of occipital condyle fracture:
Type I: Isolated impaction fracture of the occipital condyle, due to compression by the atlas or dens. This injury is usually stable; significant displacement of fragments is rare.
Type II: Occipital basilar skull fracture extending into the condyle, resulting from direct trauma. The craniocervical junction usually stays stable, but neurologic injury may occur from the blow to the head.
Type III: Isolated avulsion of the condyle with displacement towards the alar ligament, due to forced rotation / lateral bending. This injury tends to be unstable and may co-occur with atlanto-occipital subluxation or dislocation. Neurological injury may occur and range from minor to instantly fatal.
Minimally displaced fractures are treated conservatively. Surgery may become necessary if there is significant compression of the brainstem,
|
https://en.wikipedia.org/wiki/Interspinous%20ligament
|
The interspinous ligaments (interspinal ligaments) are thin, membranous ligaments that connect adjoining spinous processes of the vertebra in the spine. They take the form of relatively weak sheets of fibrous tissue and are well developed only in the lumbar region.
They extend from the root to the apex of each spinous process. They meet the ligamenta flava anteriorly, and blend with the supraspinous ligament posteriorly at the apexes of the spinous processes. The function of the interspinous ligaments is to limit ventral flexion of the spine and sliding movement of the vertebrae.
The ligaments are narrow and elongated in the thoracic region. They are broader, thicker, and quadrilateral in form in the lumbar region. They are only slightly developed in the neck; in the neck, they are often considered part of the nuchal ligament.
|
https://en.wikipedia.org/wiki/Intramolecular%20force
|
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds (ionic, covalent, and metallic), distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of the constituent atoms, namely electronegativity. The bond types differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and a nonmetal, such as sodium and chlorine in NaCl. Sodium gives an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
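A common empirical estimate (Pauling's formula, quoted here for illustration rather than taken from this article) relates the fractional ionic character of a bond to the electronegativity difference $\Delta\chi$ of the bonded atoms:

$$\text{fractional ionic character} \approx 1 - e^{-(\Delta\chi)^{2}/4}.$$

For NaCl, $\Delta\chi \approx 3.16 - 0.93 = 2.23$, giving $1 - e^{-1.24} \approx 0.71$, i.e. a bond that is roughly 70% ionic.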
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
|