https://en.wikipedia.org/wiki/POWER2
|
The POWER2, originally named RIOS2, is a processor designed by IBM that implemented the POWER instruction set architecture. The POWER2 was the successor of the POWER1, debuting in September 1993 within IBM's RS/6000 systems. When introduced, the POWER2 was the fastest microprocessor, surpassing the Alpha 21064; it lost that lead when the Alpha 21064A was introduced later in 1993. IBM claimed a performance of 73.3 SPECint92 and 134.6 SPECfp92 for the 62.5 MHz POWER2.
The open source GCC compiler removed support for POWER1 (RIOS) and POWER2 (RIOS2) in the 4.5 release.
Description
Improvements over the POWER1 included enhancements to the POWER instruction set architecture (consisting of new user and system instructions and other system-related features), higher clock rates (55 to 71.5 MHz), an extra fixed point unit and floating point unit, a larger 32 KB instruction cache, and a larger 128 or 256 KB data cache. The POWER2 was a multi-chip design consisting of six or eight semi-custom integrated circuits, depending on the amount of data cache (the 256 KB configuration required eight chips). The partitioning of the design was identical to that of the POWER1: an instruction cache unit chip, a fixed-point unit chip, a floating-point unit chip, a storage control unit chip, and two or four data cache unit chips.
The eight-chip configuration contains a total of 23 million transistors and a total die area of 1,215 mm². The chips are manufactured by IBM in its 0.72 μm CMOS process, which features a 0.45 μm effective channel length; and one layer of polysilicon and four layers of metal interconnect. The chips are packaged in a ceramic multi-chip module (MCM) that measures 64 mm by 64 mm.
POWER2+
An improved version of the POWER2 optimized for transaction processing was introduced in May 1994 as the POWER2+. Transaction processing workloads benefited from the addition of a L2 cache with capacities of 512 KB, 1 MB and 2 MB. This cache was implement
|
https://en.wikipedia.org/wiki/American%20Society%20for%20Cell%20Biology
|
The American Society for Cell Biology (ASCB) is a professional society that was founded in 1960.
History
On 6 April 1959 the United States National Academy of Sciences passed a resolution for the establishment of a "national society of cell biology to act as a national representative to the International Federation for Cell Biology".
The ASCB was first organized at an ad hoc meeting in the office of Keith R. Porter at Rockefeller University on May 28, 1960. In the 1940s, Porter was one of the first scientists in the world to use the then-revolutionary technique of electron microscopy (EM) to reveal the internal structure of cells. Other early ASCB leaders—George Palade, Don Fawcett, Hewson Swift, Arthur Solomon, and Hans Ris—also were EM pioneers. All early ASCB leaders were concerned that existing scientific societies and existing biology journals were not receptive to this emerging field that studied the cell as the fundamental unit of all life.
The ASCB was legally incorporated in New York State on July 31, 1961. A call for membership went out, enlisting ASCB's first 480 members. The first ASCB Annual Meeting was held November 2–4, 1961, in Chicago, where 844 attendees gathered for three days of lectures, slides, and movies showing cellular structure. The results of a mail ballot were read out and Fawcett was declared ASCB's first president.
The ASCB did not remain an EM society. New technologies and new discoveries in molecular biology, genetics, biochemistry, and light microscopy quickly widened the field. Cell biology has continued to expand ever since, extending its impact on clinical medicine and pharmacology while drawing on new technologies in bioengineering, high-resolution imaging, massive data handling, and genomic sequencing.
By 1963, the membership consisted of 9,000 scientists.
In 2008 it was reported that ASCB had 11,000 members worldwide. Today, 25% of ASCB members work outside the United States. Annual meetings now draw upwards of 5,000 peo
|
https://en.wikipedia.org/wiki/J.%20M.%20R.%20Parrondo
|
Juan Manuel Rodríguez Parrondo (born 9 January 1964) is a Spanish physicist. He is best known for devising Parrondo's paradox and for his contributions to the thermodynamics of information.
Biography
Juan Parrondo received his bachelor's degree in 1987 and defended his Ph.D. at the Complutense University of Madrid in 1992. He took up a permanent position at UCM in 1996. In the same year he devised the well-known Parrondo's paradox, according to which two losing strategies can combine to produce a winning one. Since then, the paradox has found wide use in biology and finance. He has also done extensive research in information theory, mostly treating information as a thermodynamic concept which, as a result of ergodicity breaking, changes the entropy of the system.
Works by Juan M.R. Parrondo
"Noise-Induced Non-equilibrium Phase Transition" C. Van den Broeck, J. M. R. Parrondo and R. Toral, Physical Review Letters, vol. 73 p. 3395 (1994)
Notes
Further reading
"Game theory: Losing strategies can win by Parrondo's paradox" G. P. Harmer and D. Abbott, Nature vol. 402, p. 864 (1999)
External links
Home page of Juan M. R. Parrondo
|
https://en.wikipedia.org/wiki/Relational%20quantum%20mechanics
|
Relational quantum mechanics (RQM) is an interpretation of quantum mechanics which treats the state of a quantum system as being observer-dependent, that is, the state is the relation between the observer and the system. This interpretation was first delineated by Carlo Rovelli in a 1994 preprint, and has since been expanded upon by a number of theorists. It is inspired by the key idea behind special relativity, that the details of an observation depend on the reference frame of the observer, and uses some ideas from Wheeler on quantum information.
The physical content of the theory is concerned not with objects themselves, but with the relations between them. As Rovelli puts it:
"Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".
The essential idea behind RQM is that different observers may give different accurate accounts of the same system. For example, to one observer, a system is in a single, "collapsed" eigenstate. To a second observer, the same system is in a superposition of two or more states and the first observer is in a correlated superposition of two or more states. RQM argues that this is a complete picture of the world because the notion of "state" is always relative to some observer. There is no privileged, "real" account.
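As a minimal worked illustration of this relativity of state assignments (the notation here is generic Dirac notation, not taken from Rovelli's papers): relative to an observer O, a system S may start in the superposition
|ψ⟩ = α|↑⟩ + β|↓⟩,
and after O measures S, the state relative to O is a single "collapsed" eigenstate, say |↑⟩. Relative to a second observer P, who has not yet interacted with S or O, the measurement is just a physical interaction that correlates the two, so relative to P the joint system is in the superposition
|Ψ⟩ = α|↑⟩|O reads ↑⟩ + β|↓⟩|O reads ↓⟩.
On the relational reading, both descriptions are correct; each is a state relative to a different observer.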
The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system.
The terms "observer" and "observed" apply to any arbitrary system, microscopic or macroscopic. The classical limit is a consequence of aggregate systems of very highly correlated subsystems.
A "measurement event" is thus described as an ordinary physical interaction where two systems become correlated to some degree with respect to each other.
The proponents of the relational interpretation argue that this approach resolves some of the traditional interpreta
|
https://en.wikipedia.org/wiki/Sacrococcygeal%20membrane
|
The sacrococcygeal membrane is a tough fibrous membrane about 10 mm long which extends from the inferior tip of the sacrum to the body of the coccyx in humans. It covers the inferior limit of the epidural space and is analogous to the ligamentum flavum found at other levels in the spine.
It can be found at the apex of an equilateral triangle whose base is formed by the dimples overlying the sacro-iliac joints. The cornua of the sacrum may be palpated with a finger; the sacrococcygeal membrane lies between and inferior to these.
|
https://en.wikipedia.org/wiki/Posterior%20ligament%20of%20elbow
|
The posterior ligament is thin and membranous, and consists of transverse and oblique fibers.
Above, it is attached to the humerus immediately behind the capitulum and close to the medial margin of the trochlea, to the margins of the olecranon fossa, and to the back of the lateral epicondyle some little distance from the trochlea.
Below, it is fixed to the upper and lateral margins of the olecranon, to the posterior part of the annular ligament, and to the ulna behind the radial notch.
The transverse fibers form a strong band which bridges across the olecranon fossa; under cover of this band a pouch of synovial membrane and a pad of fat project into the upper part of the fossa when the joint is extended.
In the fat are a few scattered fibrous bundles, which pass from the deep surface of the transverse band to the upper part of the fossa.
This ligament is in relation, behind, with the tendon of the Triceps brachii and the Anconæus.
|
https://en.wikipedia.org/wiki/Anterior%20ligament%20of%20elbow
|
The anterior ligament of the elbow is a broad and thin fibrous layer covering the anterior surface of the joint.
It is attached to the front of the medial epicondyle and to the front of the humerus immediately above the coronoid and radial fossae below, to the anterior surface of the coronoid process of the ulna and to the annular ligament, being continuous on either side with the collateral ligaments.
Its superficial fibers pass obliquely from the medial epicondyle of the humerus to the annular ligament.
The middle fibers, vertical in direction, pass from the upper part of the coronoid fossa and become partly blended with the preceding, but are inserted mainly into the anterior surface of the coronoid process.
The deep or transverse set intersects these at right angles. This ligament is in relation, in front, with the brachialis muscle, except at its most lateral part.
|
https://en.wikipedia.org/wiki/Capitulum%20of%20the%20humerus
|
In human anatomy of the arm, the capitulum of the humerus is a smooth, rounded eminence on the lateral portion of the distal articular surface of the humerus. It articulates with the cup-shaped depression on the head of the radius, and is limited to the front and lower part of the bone.
In non-human tetrapods, the name capitellum is generally used, with "capitulum" limited to the anteroventral articular facet of the rib (in archosauromorphs).
Lepidosauromorpha
Lepidosaurs show a distinct capitellum and trochlea on the centre of the ventral (anterior in upright taxa) surface of the humerus at the distal end.
Archosauromorpha
In non-avian archosaurs, including crocodiles, the capitellum and the trochlea are no longer bordered by distinct ect- and entepicondyles respectively, and the distal humerus consists of two gently expanded condyles, one lateral and one medial, separated by a shallow groove and a supinator process. Romer (1976) homologizes the capitellum in archosauromorphs with the groove separating the medial and lateral condyles.
In birds, where forelimb anatomy has an adaptation for flight, its functional if not ontogenetic equivalent is the dorsal condyle of the humerus.
Additional images
|
https://en.wikipedia.org/wiki/Ethnic%20bioweapon
|
An ethnic bioweapon (or a biogenetic weapon) is a hypothetical type of bioweapon which could preferentially target people of specific ethnicities or people with specific genotypes.
History
One of the first modern fictional discussions of ethnic weapons is in Robert A. Heinlein's 1942 novel Sixth Column (republished as The Day After Tomorrow), in which a race-specific radiation weapon is used against a so-called "Pan-Asian" invader.
Genetic weapons
In 1997, U.S. Secretary of Defense William Cohen referred to the concept of an ethnic bioweapon as a possible risk. In 1998 some biological weapon experts considered such a "genetic weapon" plausible, and believed the former Soviet Union had undertaken some research on the influence of various substances on human genes.
In its 2000 policy paper Rebuilding America's Defenses, the think tank Project for the New American Century (PNAC) described ethnic bioweapons as a "politically useful tool" that the US could have an incentive to develop and use.
The possibility of a "genetic bomb" is presented in Vincent Sarich's and Frank Miele's book, Race: The Reality of Human Differences, published in 2004. These authors view such weapons as technically feasible but not very likely to be used. (page 248 of paperback edition.)
In 2004, The Guardian reported that the British Medical Association (BMA) considered bioweapons designed to target certain ethnic groups as a possibility, and highlighted the problem that advances in science for such things as "treatment of Alzheimer's and other debilitating diseases could also be used for malign purposes".
In 2005, the official view of the International Committee of the Red Cross was "The potential to target a particular ethnic group with a biological agent is probably not far off. These scenarios are not the product of the ICRC's imagination but have either occurred or been identified by countless independent and governmental experts."
In May 2007, it was reported that the Russian government banned a
|
https://en.wikipedia.org/wiki/Singular%20control
|
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows.
The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control u, i.e., is of the form H = φ(x, λ, t) u + ⋯, and the control is restricted to lie between an upper and a lower bound: a ≤ u(t) ≤ b. To minimize H, we need to make u as big or as small as possible, depending on the sign of φ(x, λ, t); specifically, u(t) = b where φ(x, λ, t) < 0, u(t) = a where φ(x, λ, t) > 0, and u(t) is undetermined where φ(x, λ, t) = 0.
If φ is positive at some times, negative at others, and is only zero instantaneously, then the solution is straightforward and is a bang-bang control that switches from b to a at the times when φ switches from negative to positive.
The case when φ remains at zero for a finite length of time t₁ ≤ t ≤ t₂ is called the singular control case. Between t₁ and t₂ the maximization of the Hamiltonian with respect to u gives us no useful information and the solution in that time interval is going to have to be found from other considerations. (One approach would be to repeatedly differentiate ∂H/∂u with respect to time until the control u again explicitly appears, though this is not guaranteed to happen eventually. One can then set that expression to zero and solve for u. This amounts to saying that between t₁ and t₂ the control is determined by the requirement that the singularity condition continues to hold.) The resulting so-called singular arc, if it is optimal, will satisfy the Kelley condition:
(−1)^k ∂/∂u [ d^{2k}/dt^{2k} (∂H/∂u) ] ≥ 0,  k = 0, 1, 2, …
Others refer to this condition as the generalized Legendre–Clebsch condition.
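As a brief worked example of how a singular arc is found in practice (a standard textbook-style illustration, not taken from the article): consider minimizing J = (1/2)∫ x(t)² dt subject to ẋ = u with −1 ≤ u ≤ 1. The Hamiltonian is H = (1/2)x² + λu, so ∂H/∂u = λ and the costate equation is λ̇ = −∂H/∂x = −x. On a singular arc λ ≡ 0 over an interval, so differentiating once gives λ̇ = −x = 0, and differentiating again gives −ẋ = −u = 0; the control appears explicitly only after two time-derivatives, and the singular control is u ≡ 0 along the arc x ≡ 0. The Kelley (generalized Legendre–Clebsch) condition with k = 1 reads (−1)·∂/∂u[d²/dt²(∂H/∂u)] = (−1)·(−1) = 1 ≥ 0, so this singular arc is not ruled out.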
The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.
|
https://en.wikipedia.org/wiki/Power%20iteration
|
In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix A, the algorithm will produce a number λ, which is the greatest (in absolute value) eigenvalue of A, and a nonzero vector v, which is a corresponding eigenvector of λ, that is, Av = λv.
The algorithm is also known as the Von Mises iteration.
Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix by a vector, so it is effective for a very large sparse matrix with appropriate implementation.
The method
The power iteration algorithm starts with a vector b₀, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation
b_{k+1} = A b_k / ‖A b_k‖.
So, at every iteration, the vector b_k is multiplied by the matrix A and normalized.
If we assume A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector b₀ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence (b_k) converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence (b_k) does not necessarily converge. In this sequence,
b_k = e^{iφ_k} v₁/‖v₁‖ + r_k,
where v₁ is an eigenvector associated with the dominant eigenvalue, and ‖r_k‖ → 0. The presence of the term e^{iφ_k} implies that (b_k) does not converge unless e^{iφ_k} = 1. Under the two assumptions listed above, the sequence (μ_k) defined by
μ_k = (b_k* A b_k) / (b_k* b_k)
converges to the dominant eigenvalue (μ_k is the Rayleigh quotient of b_k).
One may compute this with the following algorithm (shown in Python with NumPy):
#!/usr/bin/env python3
import numpy as np

def power_iteration(A, num_iterations: int):
    # Ideally choose a random vector
    # to decrease the chance that our vector
    # is orthogonal to the eigenvector
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_iterations):
        # calculate the matrix-by-vector product A b_k
        b_k1 = np.dot(A, b_k)

        # re-normalize the vector
        b_k = b_k1 / np.linalg.norm(b_k1)

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 10)
|
https://en.wikipedia.org/wiki/World%20Agroforestry%20Centre
|
World Agroforestry (a brand name used by the International Centre for Research in Agroforestry, ICRAF) is an international institute headquartered in Nairobi, Kenya, and founded in 1978 as "International Council for Research in Agroforestry". The centre specializes in the sustainable management, protection and regulation of tropical rainforest and natural reserves. It is one of 15 agricultural research centres which make up the global network known as the CGIAR.
Description
The centre conducts research in agroforestry, in partnership with national agricultural research systems with a view to developing more sustainable and productive land use. The focus of its research is countries/regions in the developing world, particularly in the tropics of Central and South America, Southeast Asia, South Asia, West Africa, Eastern Africa and parts of central Africa.
In 2002, the centre acquired the World Agroforestry Centre brand name, although International Centre for Research in Agroforestry remains its legal name and it continues to use the acronym ICRAF.
In 2017, ICRAF released a study at the UN Climate Change Conference that centers on agroforestry and carbon emissions from deforestation.
See also
Forest Day
Aster Gebrekirstos
|
https://en.wikipedia.org/wiki/Open-source%20video%20game
|
An open-source video game, or simply an open-source game, is a video game whose source code is open-source. They are often freely distributable and sometimes cross-platform compatible.
Definition and differentiation
Not all open-source games are free software; some open-source games contain proprietary non-free content. Open-source games that are free software and contain exclusively free content conform to the DFSG, free culture, and open content, and are sometimes called free games. Many Linux distributions require, for inclusion, that the game content be freely redistributable; freeware or commercial-restriction clauses are prohibited.
Background
In general, open-source games are developed by relatively small groups of people in their free time, with profit not being the main focus. Many open-source games are volunteer-run projects, and as such, developers of free games are often hobbyists and enthusiasts. The consequence of this is that open-source games often take longer to mature, are less common and often lack the production value of commercial titles. In the 1990s, a challenge in building high-quality content for games was the limited availability or excessive price of tools such as 3D modellers or toolsets for level design.
In recent years, this changed and the availability of open-source tools like Blender, game engines and libraries drove open source and independent video gaming. FLOSS game engines, like the Godot game engine, as well as libraries, like SDL, are increasingly common in the development of games, including proprietary ones. Given that game art is not considered software, there is debate about the philosophical or ethical obstacles to selling a game whose art is proprietary but whose entire source code is free software.
Some of the open-source game projects are based on formerly proprietary games, whose source code was released as open-source software, while the game content (such as graphics, audio and levels) may or may not be under a free license. Examples
|
https://en.wikipedia.org/wiki/Spiegelman%27s%20Monster
|
Spiegelman's Monster is an RNA chain of only 218 nucleotides that can be replicated by the RNA replication enzyme RNA-dependent RNA polymerase, also called RNA replicase. It is named after its creator, Sol Spiegelman, of the University of Illinois at Urbana-Champaign, who first described it in 1965.
Description
Spiegelman introduced RNA from a simple bacteriophage Qβ (Qβ) into a solution which contained Qβ's RNA replicase, some free nucleotides, and some salts. In this environment, the RNA started to be replicated. After a while, Spiegelman took some RNA and moved it to another tube with fresh solution. This process was repeated.
Shorter RNA chains were able to be replicated faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. This short RNA sequence replicated very quickly in these unnatural circumstances.
Further work
M. Sumper and R. Luce of Manfred Eigen's laboratory replicated the experiment, except without adding RNA, only RNA bases and Qβ replicase. They found that under the right conditions the Qβ replicase can spontaneously generate RNA which evolves into a form similar to Spiegelman's Monster.
Eigen built on Spiegelman's work and produced a similar system further degraded to just 48 or 54 nucleotides—the minimum required for the binding of the replication enzyme, this time a combination of HIV-1 reverse transcriptase and T7 RNA polymerase.
See also
Abiogenesis
RNA world hypothesis
PAH world hypothesis
Viroid
|
https://en.wikipedia.org/wiki/Automaton%20clock
|
An automaton clock or automata clock is a type of striking clock featuring automatons. Clocks like these were built from the 1st century BC through to Victorian times in Europe. A cuckoo clock is a simple form of this type of clock.
The first known mention is of those created by the Roman engineer Vitruvius, describing early alarm clocks working with gongs or trumpets. Later automatons usually perform on the hour, half-hour or quarter-hour, usually to strike bells. Common figures in older clocks include Death (as a reference to human mortality), Old Father Time, saints and angels. In the Regency and Victorian eras, common figures also included royalty, famous composers or industrialists.
More recently constructed automaton clocks are widespread in Japan, where they are known as karakuri-dokei. Notable examples of such clocks include the Ni-Tele Really Big Clock, designed by Hayao Miyazaki to be affixed on the Nippon Television headquarters in Tokyo, touted to be the largest animated clock in the world. In the United Kingdom, Kit Williams produced a series of large automaton clocks for a handful of British shopping centres, featuring frogs, ducks and fish. Seiko and Rhythm Clock are known for their battery-powered musical clocks, which frequently feature flashing lights, automatons and other moving parts designed to attract attention while in motion.
|
https://en.wikipedia.org/wiki/International%20Conference%20on%20Information%20Processing%20in%20Sensor%20Networks
|
IPSN, the IEEE/ACM International Conference on Information Processing in Sensor Networks, is an academic conference on sensor networks with its main focus on information processing aspects of sensor networks. IPSN draws upon many disciplines including signal and image processing, information and coding theory, networking and protocols, distributed algorithms, wireless communications, machine learning, embedded systems design, and databases and information management.
IPSN Events
IPSN started in 2001, and the following is a list of IPSN events from 2001 to 2014:
13th IPSN 2014, Berlin, Germany, April 15–17, 2014
12th IPSN 2013, Philadelphia, PA, USA, April 8–11, 2013
11th IPSN 2012, Beijing, China, April 16–19, 2012
10th IPSN 2011, Chicago, IL, USA, April 12–14, 2011
9th IPSN 2010, Stockholm, Sweden, April 12–16, 2010
8th IPSN 2009, San Francisco, California, USA, April 13–16, 2009
7th IPSN 2008, (Washington U.) St. Louis, Missouri, USA, April 22–24, 2008
6th IPSN 2007, (MIT) Cambridge, MA, USA, April 25–27, 2007
5th IPSN 2006, (Vanderbilt) Nashville, Tennessee, USA, April 19–21, 2006
4th IPSN 2005, (UCLA) Los Angeles, CA, USA, April 25–27, 2005
3rd IPSN 2004, (UC Berkeley) Berkeley, CA, USA, April 26–27, 2004
2nd IPSN 2003, (Xerox PARC) Palo Alto, CA, USA, April 22–23, 2003
CSP Workshop 2001, (Xerox PARC) Palo Alto, CA (see history subsection for name explanation)
Ranking
Although there is no official ranking of academic conferences on wireless sensor networks,
IPSN is widely regarded by researchers as one of the two (along with SenSys) most prestigious conferences focusing on sensor network research. SenSys focuses more on system issues, while IPSN focuses on algorithmic and theoretical considerations. The acceptance rate for 2006 was 15.2% for oral presentations and 25% overall (25 papers and 17 poster presentations accepted out of 165 submissions).
History
IPSN started off as a workshop at Xerox Palo Alto Research Center in 2001, and it was initially called Colla
|
https://en.wikipedia.org/wiki/Kernel%20principal%20component%20analysis
|
In the field of multivariate statistics, kernel principal component analysis (kernel PCA)
is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.
Background: Linear PCA
Recall that conventional PCA operates on zero-centered data; that is,
(1/N) Σ_{i=1}^{N} x_i = 0,
where x_i is one of the N multivariate observations.
It operates by diagonalizing the covariance matrix
C = (1/N) Σ_{i=1}^{N} x_i x_iᵀ;
in other words, it gives an eigendecomposition of the covariance matrix:
λ v = C v,
which can be rewritten as
λ x_iᵀ v = x_iᵀ C v,  for all i = 1, …, N.
(See also: Covariance matrix as a linear operator)
Introduction of the Kernel to PCA
To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in d < N dimensions, they can almost always be linearly separated in d ≥ N dimensions. That is, given N points x_i, if we map them to an N-dimensional space with
Φ(x_i), where Φ : R^d → R^N,
it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this Φ creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA.
Instead, in kernel PCA, a non-trivial, arbitrary function Φ is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional Φ's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the Φ-space, which we will call the 'feature space', we can create the N-by-N kernel
K = k(x, y) = (Φ(x), Φ(y)) = Φ(x)ᵀ Φ(y),
which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in the Φ-space (see Kernel trick). The N-elements in each column of K represent the dot product of one point of the tr
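The excerpt breaks off above. As a rough, self-contained sketch of the kernel trick it describes (the RBF kernel, the feature-space centering step, and the function name below are standard choices assumed here, not taken from the source), kernel PCA projections can be computed directly from the N-by-N kernel matrix:

import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    # N-by-N kernel (Gram) matrix K[i, j] = k(x_i, x_j) using an RBF kernel,
    # so the feature map Phi is never evaluated explicitly
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Center the (implicit) feature-space data by centering K
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centered kernel matrix, largest eigenvalues first
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Projections of the training points onto the leading feature-space components
    return eigvecs[:, :n_components] * np.sqrt(np.maximum(eigvals[:n_components], 0))

In practice a library implementation (for example scikit-learn's KernelPCA) would normally be used; the sketch above is only meant to show the structure of the computation.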
|
https://en.wikipedia.org/wiki/Time-evolving%20block%20decimation
|
The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.
Introduction
Considering the inherent difficulties of simulating general quantum many-body systems, namely the exponential increase in the number of parameters with the size of the system and, correspondingly, the high computational costs, one solution is to look for numerical methods that deal with special cases, where one can profit from the physics of the system. The raw approach, directly dealing with all the parameters used to fully characterize a quantum many-body system, is seriously impeded by the exponential buildup, with system size, of the number of variables needed for simulation, which leads, in the best cases, to unreasonably long computation times and extended use of memory. To get around this problem a number of methods have been developed and put into practice over time, one of the most successful being the quantum Monte Carlo method (QMC). The density matrix renormalization group (DMRG) method, alongside QMC, is also a very reliable method, with an expanding community of users and an increasing number of applications to physical systems.
When the first quantum computer is plugged in and functioning, the perspectives for the field of computational physics will look rather promising, but until that day one has to restrict oneself to the mundane tools offered by classical computers. While experimental physicists are putting a lot of effort in
|
https://en.wikipedia.org/wiki/Corvus%20Systems
|
Corvus Systems was a computer technology company that offered, at various points in its history, computer hardware, software, and complete PC systems.
History
Corvus was founded by Michael D'Addio and Mark Hahn in 1979. This San Jose, Silicon Valley company was a pioneer of the early personal computer era, producing some of the first hard disk drive, data backup, and networking devices for personal computers, commonly for the Apple II series. The combination of disk storage, backup, and networking was very popular in primary and secondary education. A classroom would have a single drive and backup unit with a full classroom of Apple II computers networked together. Students would log in each time they used a computer and access their work via the Corvus Omninet network, which also supported email.
They went public in 1981 and were traded on the NASDAQ exchange. In 1985 Corvus acquired a company named Onyx & IMI. IMI (International Memories Incorporated) manufactured the hard disks used by Corvus.
The New York Times followed their financial fortunes. They were a modest success in the stock market during their first few years as a public company. The company's founders left Corvus in 1985 as the remaining board of directors made the decision to enter the PC clone market. D'Addio and Hahn went on to found Videonics in 1986, the same year Corvus discontinued hardware
manufacturing.
In 1987, Corvus filed for Chapter 11. That same year two top executives left. Its demise was partially caused by Ethernet establishing itself over Omninet as the local area network standard for PCs, and partially by the decision to become a PC clone company in a crowded and unprofitable market space.
Disk drives and backup
The company modified the Apple II's DOS operating system to enable the use of Corvus's 10 MB Winchester-technology hard disk drives. Apple DOS was normally limited to the use of 140 KB floppy disks. The Corvus disks not only increased the size of available storage but were also considerably faster than f
|
https://en.wikipedia.org/wiki/Elliptic%20unit
|
In mathematics, elliptic units are certain units of abelian extensions of imaginary quadratic fields constructed using singular values of modular functions, or division values of elliptic functions. They were introduced by Gilles Robert in 1973, and were used by John Coates and Andrew Wiles in their work on the Birch and Swinnerton-Dyer conjecture. Elliptic units are an analogue for imaginary quadratic fields of cyclotomic units. They form an example of an Euler system.
Definition
A system of elliptic units may be constructed for an elliptic curve E with complex multiplication by the ring of integers R of an imaginary quadratic field F. For simplicity we assume that F has class number one. Let a be an ideal of R with generator α. For a Weierstrass model of E, define
where P is a point on E, Δ is the discriminant, and x is the X-coordinate on the Weierstrass model. The function Θ is independent of the choice of model, and is defined over the field of definition of E.
Properties
Let b be an ideal of R coprime to a and Q an R-generator of the b-torsion. Then Θa(Q) is defined over the ray class field K(b), and if b is not a prime power then Θa(Q) is a global unit: if b is a power of a prime p then Θa(Q) is a unit away from p.
The function Θa satisfies a distribution relation for b = (β) coprime to a:
See also
Modular unit
|
https://en.wikipedia.org/wiki/Genetics%20of%20Down%20syndrome
|
Down syndrome is a chromosomal abnormality characterized by the presence of an extra copy of genetic material on chromosome 21, either in whole (trisomy 21) or in part (such as due to translocations). The effects of the extra copy vary greatly from individual to individual, depending on the extent of the extra copy, genetic background, environmental factors, and random chance. Down syndrome can occur in all human populations, and analogous effects have been found in other species, such as chimpanzees and mice. In 2005, researchers were able to create transgenic mice with most of human chromosome 21 (in addition to their normal chromosomes).
A typical human karyotype is shown here. Every chromosome has two copies. In the bottom right, there are chromosomal differences between males (XY) and females (XX), which do not concern us. A typical human karyotype is designated as 46,XX or 46,XY, indicating 46 chromosomes with an XX arrangement for females and 46 chromosomes with an XY arrangement for males. For this article, we will use females for the karyotype designation (46,XX).
Trisomy 21
Trisomy 21 (47,XY,+21) is caused by a meiotic nondisjunction event. A typical gamete (either egg or sperm) has one copy of each chromosome (23 total). When it is combined with a gamete from the other parent during conception, the child has 46 chromosomes. However, with nondisjunction, a gamete is produced with an extra copy of chromosome 21 (the gamete has 24 chromosomes). When combined with a typical gamete from the other parent, the child now has 47 chromosomes, with three copies of chromosome 21. The trisomy 21 karyotype figure shows the chromosomal arrangement, with the prominent extra chromosome 21.
Trisomy 21 is the cause of approximately 95% of observed Down syndrome, with 88% coming from nondisjunction in the maternal gamete and 8% coming from nondisjunction in the paternal gamete. Mitotic nondisjunction after conception would lead to mosaicism, and is discussed later.
|
https://en.wikipedia.org/wiki/E.B.%20Wilson%20Medal
|
The E.B. Wilson Medal is the American Society for Cell Biology's highest honor for science and is presented at the Annual Meeting of the Society for significant and far-reaching contributions to cell biology over the course of a career. It is named after Edmund Beecher Wilson.
Medalists
Source : ASCB
See also
List of medicine awards
|
https://en.wikipedia.org/wiki/Keith%20R.%20Porter%20Lecture
|
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
|
https://en.wikipedia.org/wiki/WICB%20Junior%20and%20Senior%20Awards
|
The Women In Cell Biology Committee of the American Society for Cell Biology (ASCB) recognizes outstanding achievements by women in cell biology by presenting three (previously only two) Career Recognition Awards at the ASCB Annual Meeting. The Junior Award is given to a woman in an early stage of her career (generally seven or eight years in an independent position) who has made exceptional scientific contributions to cell biology and exhibits the potential for continuing a high level of scientific endeavor while fostering the career development of young scientists. The Mid-Career Award (introduced in 2012) is given to a woman at the mid-career level who has made exceptional scientific contributions to cell biology and/or has effectively translated cell biology across disciplines, and who exemplifies a high level of scientific endeavor and leadership. The Senior Award is given to a woman or man in a later career stage (generally full professor or equivalent) whose outstanding scientific achievements are coupled with a long-standing record of support for women in science and with mentorship of both men and women in scientific careers.
Senior awardees
Source: WICB
2020 Erika Holzbaur
2019 Rong Li
2018 Eva Nogales
2017 Harvey Lodish
2016 Susan Gerbi
2015 Angelika Amon
2014 Sandra L. Schmid
2013 Lucille Shapiro
2012 Marianne Bronner
2011 Susan Rae Wente
2010 Zena Werb
2009 Janet Rossant
2008 Fiona Watt
2007 Frances Brodsky
2006 Joseph Gall
2005 Elizabeth Blackburn
2004 Susan Lindquist
2003 Philip Stahl
2002 Natasha Raikhel
2001 Joan Brugge
2000 Shirley Tilghman
1999 Ursula Goodenough
1998 Christine Guthrie
1997 Elaine Fuchs
1996 Sarah C. R. Elgin
1995 Virginia Zakian
1994 Ann Hubbard
1993 Mina Bissell
1992 Helen Blau
1991 Hynda Kleinman
1990 Dorthea Wilson and Rosemary Simpson
1989 Dorothy Bainton
1988 No Awardees selected
1987 Dorothy M. Skinner
1986 Mary Clutter
Mid-Career awardees
Source: WICB
2020 Daniela Nicastro and Anne E. Carpenter
2019 Coleen T. Murp
|
https://en.wikipedia.org/wiki/Early%20Career%20Life%20Scientist%20Award
|
The ASCB Early Career Life Scientist Award is awarded by the American Society for Cell Biology to an outstanding scientist who earned his doctorate no more than 12 years earlier and who has served as an independent investigator for no more than seven years. The winner speaks at the ASCB Annual Meeting and receives a monetary prize.
Awardees
Source: American Society for Cell Biology
2020 James Olzmann
2019 Cigall Kadoch
2018 Sergiu Pasca
2017 Meng Wang
2016 Bo Huang and Valentina Greco
2015 Vladimir Denic
2014 Manuel Thery
2013 Douglas B. Weibel
2012 Iain Cheeseman
2012 Gia Voeltz
2011 Maxence V. Nachury
2010 Anna Kashina
2009 Martin W. Hetzer
2008 Arshad B. Desai
2007 Abby Dernburg
2006 Karsten Weis
2005 Eva Nogales
2004 No award this year
2003 Frank Gertler
2002 Kathleen Collins and Benjamin Cravatt
2001 Daphne Preuss
2000 Erin O'Shea
1999 Raymond Deshaies
See also
List of biology awards
|
https://en.wikipedia.org/wiki/Claire%20Voisin
|
Claire Voisin (born 4 March 1962) is a French mathematician known for her work in algebraic geometry. She is a member of the French Academy of Sciences and holds the chair of algebraic geometry at the Collège de France.
Work
She is noted for her work in algebraic geometry particularly as it pertains to variations of Hodge structures and mirror symmetry, and has written several books on Hodge theory. In 2002, Voisin proved that the generalization of the Hodge conjecture for compact Kähler varieties is false. The Hodge conjecture is one of the seven Clay Mathematics Institute Millennium Prize Problems which were selected in 2000, each having a prize of one million US dollars.
Voisin won the European Mathematical Society Prize in 1992 and the Servant Prize awarded by the Academy of Sciences in 1996. She received the Sophie Germain Prize in 2003 and the Clay Research Award in 2008 for her disproof of the Kodaira conjecture on deformations of compact Kähler manifolds. In 2007, she was awarded the Ruth Lyttle Satter Prize in Mathematics for, in addition to her work on the Kodaira conjecture, solving the generic case of Green's conjecture on the syzygies of the canonical embedding of an algebraic curve. This case of Green's conjecture had
received considerable attention from algebraic geometers for over two decades prior to its resolution by Voisin (the full conjecture for arbitrary curves is still partially open).
She was an invited speaker at the 1994 International Congress of Mathematicians in Zürich in the section 'Algebraic Geometry', and she was also invited as a plenary speaker at the 2010 International Congress of Mathematicians in Hyderabad.
In 2014, she was elected to the Academia Europaea.
She served on the Mathematical Sciences jury of the Infosys Prize from 2017 to 2019.
In 2009 she became a member of the German Academy of Sciences Leopoldina. In May 2016, she was elected as a foreign associate of the National Academy of Sciences. Also in 2016, she b
|
https://en.wikipedia.org/wiki/De%20Bruijn%20torus
|
In combinatorial mathematics, a De Bruijn torus, named after Dutch mathematician Nicolaas Govert de Bruijn, is an array of symbols from an alphabet (often just 0 and 1) that contains every possible matrix of given dimensions exactly once. It is a torus because the edges are considered wraparound for the purpose of finding matrices. Its name comes from the De Bruijn sequence, which can be considered a special case where n = 1 (one dimension).
One of the main open questions regarding De Bruijn tori is whether a De Bruijn torus for a particular alphabet size can be constructed for a given m and n. It is known that these always exist when n = 1, since then we simply get the De Bruijn sequences, which always exist. It is also known that "square" tori exist whenever m = n and n is even (for odd n the resulting tori cannot be square).
The smallest possible binary "square" de Bruijn torus, denoted as the (4,4;2,2)_2 de Bruijn torus (or simply as B2) and shown unrolled below, contains all 2×2 binary matrices.
B2
Apart from "translation", "inversion" (exchanging 0s and 1s) and "rotation" (by 90 degrees), no other de Bruijn tori are possible – this can be shown by complete inspection of all 216 binary matrices (or subset fulfilling constrains such as equal numbers of 0s and 1s).
The torus can be unrolled by repeating n−1 rows and columns. All n×n submatrices taken without wraparound then form the complete set. For B2, unrolling the 4×4 torus

1 0 1 1
1 0 0 0
0 0 0 1
1 1 0 1

by repeating its first row and first column gives the 5×5 array

1 0 1 1 1
1 0 0 0 1
0 0 0 1 0
1 1 0 1 1
1 0 1 1 1

whose sixteen 2×2 submatrices taken without wraparound (for example, the all-ones block in the bottom-right corner) together form the complete set of 2×2 binary matrices.
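As a small sanity check of this property (a sketch added here, not part of the article; the array is the example torus given above), one can enumerate all 2×2 submatrices with wraparound and confirm that each of the sixteen possibilities occurs exactly once:

import numpy as np

# The 4x4 binary de Bruijn torus from the example above
B = np.array([[1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 1]])

# Collect every 2x2 submatrix, treating the edges as wraparound (torus)
seen = []
for i in range(4):
    for j in range(4):
        sub = tuple(B[(i + di) % 4, (j + dj) % 4]
                    for di in (0, 1) for dj in (0, 1))
        seen.append(sub)

# 16 positions yield 16 distinct submatrices, i.e. all 2x2 binary matrices
print(len(set(seen)) == 16)  # True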
|
https://en.wikipedia.org/wiki/Ravi%20Vakil
|
Ravi D. Vakil (born February 22, 1970) is a Canadian-American mathematician working in algebraic geometry.
Education and career
Vakil attended high school at Martingrove Collegiate Institute in Etobicoke, Ontario, where he won several mathematical contests and olympiads. After earning a BSc and MSc from the University of Toronto in 1992, he completed a PhD in mathematics at Harvard University in 1997 under Joe Harris. He has since been an instructor at both Princeton University and MIT. Since the fall of 2001, he has taught at Stanford University, becoming a full professor in 2007.
Contributions
Vakil is an algebraic geometer and his research work spans over enumerative geometry, topology, Gromov–Witten theory, and classical algebraic geometry. He has solved several old problems in Schubert calculus. Among other results, he proved that all Schubert problems are enumerative over the real numbers, a result that resolves an issue mathematicians have worked on for at least two decades.
Awards and honors
Vakil has received many awards, including an NSF CAREER Fellowship, a Sloan Research Fellowship, an American Mathematical Society Centennial Fellowship, a G. de B. Robinson prize for the best paper published (2000) in the Canadian Journal of Mathematics and the Canadian Mathematical Bulletin, the André-Aisenstadt Prize from the Centre de Recherches Mathématiques at the Université de Montréal (2005), and the Chauvenet Prize (2014).
In 2012 he became a fellow of the American Mathematical Society.
Mathematics contests
He was a member of the Canadian team in three International Mathematical Olympiads, winning silver, gold (perfect score), and gold in 1986, 1987, and 1988 respectively. He was also the fourth person to be a four-time Putnam Fellow in the history of the contest. Also, he has been the coordinator of weekly Putnam preparation seminars at Stanford.
|
https://en.wikipedia.org/wiki/United%20Cannery%2C%20Agricultural%2C%20Packing%2C%20and%20Allied%20Workers%20of%20America
|
The United Cannery, Agricultural, Packing, and Allied Workers of America (UCAPAWA) was a labor union formed in 1937 and incorporated large numbers of Mexican, black, Asian, and Anglo food processing workers under its banner. The founders envisioned a national decentralized labor organization with power flowing from the bottom up. Although it was short-lived, the UCAPAWA influenced the lives of many workers and had a major impact for both women and minority workers in the union.
UCAPAWA changed its name to Food, Tobacco, Agricultural, and Allied Workers (FTA) in 1944.
History
The United Cannery, Agricultural, Packing, and Allied Workers of America (or UCAPAWA) was an organization formed after the American Federation of Labor (AFL) ignored several delegate members' pleas for better working conditions for farm and food processing workers. At its head stood an intense and energetic organizer named Donald Henderson, a young economics instructor at Columbia University and a member of the Communist Party. Henderson, who was also one of the founders of the People’s Congress, noted the importance this union placed on publicizing the conditions of black and Mexican American workers and organizing them as a way to improve their social and economic situation. Henderson declared that the “International Office was sufficiently concerned with the conditions facing . . . the Negro people and the Mexican and Spanish American peoples.” Henderson observed that both minority groups were deprived of civil rights, exploited to the point of starvation, kept in decayed housing, denied educational opportunities, and, in Henderson’s view, “blocked from their own cultural development.” As president of the union, Henderson eventually established it as the agricultural arm of the Congress of Industrial Organizations (CIO) in 1937, after the union had been abandoned by the AFL.
Unable to persuade the AFL to charter an international union of agricultural workers and increasingly drawn to the Congre
|
https://en.wikipedia.org/wiki/Food%2C%20Tobacco%2C%20Agricultural%2C%20and%20Allied%20Workers
|
The United Cannery, Agricultural, Packing, and Allied Workers of America union (UCAPAWA) changed its name to Food, Tobacco, Agricultural, and Allied Workers (FTA) in 1944.
History
The FTA sought to further organize cannery units and realized the best way to do this would be through organizing women and immigrant workers; in 1945 it started finding success toward these ends. The FTA started to experience problems when the International Brotherhood of Teamsters (IBT) began interfering in its organizing efforts. The IBT was affiliated with the American Federation of Labor (AFL), and the FTA was affiliated with its rival, the radical Congress of Industrial Organizations (CIO). The IBT was more conservative with regard to women and immigrant workers and did not have much interest in integrating them into the union; it was far more concerned with making sweetheart deals and collecting union dues. This willingness to maintain the status quo made the IBT a favorite among California processors and growers, which meant that it signed more contracts with processors and growers than the FTA did, ultimately undermining the more radical FTA union. The Taft-Hartley Act of 1947 further damaged the FTA. This act stated that American labor unions could not have communist ties, and the FTA had many. Because of this act, many of the organizers either left the union or were deported. It was at this time that the FTA as a whole was expelled from the CIO. This put the IBT at the center of the California cannery industry, where it remained for the next two decades.
Shortly after expulsion from the CIO, FTA absorbed the International Fishermen and Allied Workers of America. It then merged with the United Office and Professional Workers of America and the Distributive Workers Union (formed by locals that had just left the Retail, Wholesale and Department Store Union) to create the Distributive, Processing, and Office Workers of America (DPOWA). Internal disputes and polit
|
https://en.wikipedia.org/wiki/Soft%20photon
|
In particle physics, soft photons are photons having photon energies much smaller than the energies of the particles participating in a particular scattering process, and they are not energetic enough to be detected. Such photons can be emitted from (or absorbed by) the external (incoming and outgoing) lines of charged particles of the Feynman diagram for the process. Even though soft photons are not detected, the possibility of their emission must be taken into account in the calculation of the scattering amplitude.
Taking the soft photons into account multiplies the rate of a given process by a factor which approaches zero. However, there exists also an infrared-divergent contribution to the scattering rate related to virtual soft photons that are emitted from one of the external lines and absorbed in another (or in the same line). The two factors cancel each other, leaving a finite correction which depends on the sensitivity with which photons can be detected in the experiment.
Weinberg's soft photon theorem simplifies the calculation of the contribution of soft photons to the scattering of particles.
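As a rough sketch of what the theorem states (the notation below is standard but assumed here, not taken from the excerpt): in the limit where the photon momentum q → 0, the amplitude for a process with one extra soft external photon of polarization ε factorizes as
M_{n+1} ≈ M_n · Σ_i η_i e_i (p_i·ε)/(p_i·q),
where the sum runs over the external charged particles with charges e_i and momenta p_i, and η_i is +1 for outgoing and −1 for incoming lines. The divergence of this factor as q → 0 is the real-emission counterpart of the virtual-photon divergence described above, and the two cancel in measurable rates.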
|
https://en.wikipedia.org/wiki/PLOS%20Computational%20Biology
|
PLOS Computational Biology is a monthly peer-reviewed open access scientific journal covering computational biology. It was established in 2005 by the Public Library of Science in association with the International Society for Computational Biology (ISCB) in the same format as the previously established PLOS Biology and PLOS Medicine. The founding editor-in-chief was Philip Bourne and the current ones are Feilim Mac Gabhann and Jason Papin.
Format
The journal publishes both original research and review articles. All articles are open access and licensed under the Creative Commons Attribution License.
Since its inception, the journal has published the Ten Simple Rules series of practical guides, which has subsequently become one of the journal's most read article series.
The Ten Simple Rules series then led to the Quick Tips collection, whose articles contain recommendations on computational practices and methods, such as dimensionality reduction.
In 2012, it launched the Topic Page review format, which dual-publishes peer-reviewed articles both in the journal and on Wikipedia. It was the first publication of its kind to publish in this way.
See also
PLOS
PLOS Biology
BMC Bioinformatics
|
https://en.wikipedia.org/wiki/BTeV%20experiment
|
The BTeV experiment — for B meson TeV (teraelectronvolt) — was an experiment in high-energy particle physics designed to challenge the Standard Model explanation of CP violation, mixing, and rare decays of bottom and charm quark states. The Standard Model has been the baseline particle physics theory for several decades, and BTeV aimed to find out what lies beyond the Standard Model. In doing so, the BTeV results could have helped shed light on phenomena associated with the early universe, such as why the universe is made up of matter and not antimatter.
The BTeV Collaboration was a group of about 170 physicists drawn from more than 30 universities and physics institutes from Belarus, Canada, China, Italy, Russia, and the United States of America. The BTeV experiment was designed to utilize the Tevatron proton-antiproton collider at the Fermi National Accelerator Laboratory, located in the far west suburbs of Chicago, Illinois in the USA. The experiment was scheduled to start in 2006, followed by commissioning in 2008, and data-taking in 2009.
The BTeV Project was terminated by the United States Department of Energy on February 7, 2005, after being removed from the President's Budget for the 2006 fiscal year.
External links
BTeV website
Record of BTeV experiment on INSPIRE-HEP
|
https://en.wikipedia.org/wiki/Star%20domain
|
In geometry, a set S in the Euclidean space R^n is called a star domain (or star-convex set, star-shaped set or radially convex set) if there exists an s₀ in S such that for all s in S the line segment from s₀ to s lies in S. This definition is immediately generalizable to any real, or complex, vector space.
Intuitively, if one thinks of S as a region surrounded by a wall, S is a star domain if one can find a vantage point s₀ in S from which any point s in S is within line-of-sight. A similar, but distinct, concept is that of a radial set.
Definition
Given two points x and y in a vector space X (such as Euclidean space R^n), the convex hull of {x, y} is called the closed interval with endpoints x and y, and it is denoted by
[x, y] := {t x + (1 − t) y : 0 ≤ t ≤ 1} = x + (y − x)[0, 1],
where z[0, 1] := {t z : 0 ≤ t ≤ 1} for every vector z.
A subset A of a vector space X is said to be star-shaped at a point x₀ in A if for every x in A the closed interval [x₀, x] is contained in A.
A set A is star-shaped, and is called a star domain, if there exists some point x₀ in A such that A is star-shaped at x₀.
A set that is star-shaped at the origin is sometimes called a star set. Such sets are closely related to Minkowski functionals.
Examples
Any line or plane in R^n is a star domain.
A line or a plane with a single point removed is not a star domain.
If A is a set in R^n, the set obtained by connecting all points in A to the origin is a star domain.
Any non-empty convex set is a star domain. A set is convex if and only if it is a star domain with respect to any point in that set.
A cross-shaped figure is a star domain but is not convex.
A star-shaped polygon is a star domain whose boundary is a sequence of connected line segments.
Properties
The closure of a star domain is a star domain, but the interior of a star domain is not necessarily a star domain.
Every star domain is a contractible set, via a straight-line homotopy. In particular, any star domain is a simply connected set.
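As a short sketch of the standard argument behind the preceding statement (notation assumed, not from the source): if S is star-shaped at s₀, then H(s, t) = (1 − t)s + t·s₀ defines a straight-line homotopy H : S × [0, 1] → S, because for each s the points H(s, t) lie on the segment from s to s₀, which is contained in S; H(·, 0) is the identity and H(·, 1) is the constant map at s₀, so S is contractible.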
Every star domain, and only a star domain, can be "shrunken into itself"; that is, for every dilation ratio r < 1, the star domain can be dilated by the ratio r (about its star center) such that the dilated star domain is contained in the original star domain.
The union and intersection of
|
https://en.wikipedia.org/wiki/Sandoricum%20koetjape
|
Sandoricum koetjape, the santol, sentul or cotton fruit, is a tropical fruit native to maritime Southeast Asia (Malesia).
Origin and distribution
The santol is native to the Malesian floristic region, but has been introduced to Indochina, Sri Lanka, India, northern Australia, Mauritius, and Seychelles. It is commonly cultivated throughout these regions and the fruits are seasonally abundant in the local and international markets.
Botanical description
There are two varieties of santol fruit, previously considered two different species, the yellow variety and the red. The difference is in the color that the older leaves turn before falling. The red appears to be more common and the reddish leaves mixed with the green ones add to the distinction and attractiveness of the tree. The fruits often have the size, shape and slightly fuzzy texture of peaches, with a reddish tinge. Both types have a skin that may range from a thin peel to a thicker rind, according to the variety. It is often edible and in some cultivars may contain a milky juice. The central pulp near the seeds may be sweet or sour and contains inedible brown seeds. In some varieties the outer rind is thicker and is the main edible portion, with a mild peachy taste combined with some of the taste and the pulpy texture of apples. In others the outer rind is thinner and harder and the inner whitish pulp around the seeds is eaten. This may be rather sour in many cultivars, which has reduced the general acceptance of the tree. Most improved varieties have increased thickness of the edible outer rind, which can be eaten with a spoon leaving just the outer skin, and should increase the acceptance of the santol worldwide.
The fruit grows on a fast-growing tree that may reach 150 feet in height. It bears ribbed leaves and pink or yellow-green flowers about 1 centimeter long.
Uses
Culinary
The ripe fruits are harvested by climbing the tree and plucking by hand, alternatively a long stick with a forked end may be used t
|
https://en.wikipedia.org/wiki/Ho%E2%80%93Kaufman%E2%80%93Mcalister%20syndrome
|
Ho–Kaufman–Mcalister syndrome is a rare congenital malformation syndrome where infants are born with a cleft palate, micrognathia, Wormian bones, congenital heart disease, dislocated hips, bowed fibulae, preaxial polydactyly of the feet, abnormal skin patterns, and most prominently, missing tibia. The etiology is unknown. Ho–Kaufman–Mcalister syndrome is named after Chen-Kung Ho, R.L. Kaufman, and W.H. Mcalister who first described the syndrome in 1975 at Washington University in St. Louis. It is considered a rare disease by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH).
|
https://en.wikipedia.org/wiki/Cohn%27s%20irreducibility%20criterion
|
Arthur Cohn's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in $\mathbb{Z}[x]$—that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients.
The criterion is often stated as follows:
If a prime number $p$ is expressed in base 10 as $p = a_m 10^m + a_{m-1} 10^{m-1} + \cdots + a_1 10 + a_0$ (where $0 \leq a_i \leq 9$) then the polynomial
$$f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0$$
is irreducible in $\mathbb{Z}[x]$.
The theorem can be generalized to other bases as follows:
Assume that $b \geq 2$ is a natural number and $f(x) = a_k x^k + a_{k-1} x^{k-1} + \cdots + a_1 x + a_0$ is a polynomial such that $0 \leq a_i \leq b - 1$. If $f(b)$ is a prime number then $f(x)$ is irreducible in $\mathbb{Z}[x]$.
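As a rough illustration of the criterion (the specific prime 12347 and the helper function are illustrative choices, not from the source), one can build the digit polynomial of a prime and confirm irreducibility with SymPy:

```python
from sympy import Poly, isprime
from sympy.abc import x

def cohn_polynomial(p, base=10):
    """Build the polynomial whose coefficients are the base-`base` digits of p."""
    assert isprime(p)
    digits = []
    while p:
        p, d = divmod(p, base)
        digits.append(d)               # digits[i] is the coefficient of x**i
    return Poly(list(reversed(digits)), x)

# 12347 is prime, so by Cohn's criterion x**4 + 2*x**3 + 3*x**2 + 4*x + 7
# should be irreducible over the integers.
f = cohn_polynomial(12347)
print(f, f.is_irreducible)
```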
The base 10 version of the theorem is attributed to Cohn by Pólya and Szegő in one of their books while the generalization to any base b is due to Brillhart, Filaseta, and Odlyzko.
In 2002, Ram Murty gave a simplified proof as well as some history of the theorem in a paper that is available online.
A further generalization of the theorem, allowing coefficients larger than single digits, was given by Filaseta and Gross. In particular, let $f(x)$ be a polynomial with non-negative integer coefficients such that $f(10)$ is prime. If all coefficients are at most 49598666989151226098104244512918, then $f(x)$ is irreducible over $\mathbb{Z}[x]$. Moreover, they proved that this bound is also sharp. In other words, coefficients larger than 49598666989151226098104244512918 do not guarantee irreducibility. The method of Filaseta and Gross was also generalized to provide similar sharp bounds for some other bases by Cole, Dunn, and Filaseta.
The converse of this criterion is that, if p is an irreducible polynomial with integer coefficients that have greatest common divisor 1, then there exists a base such that the coefficients of p form the representation of a prime number in that base; this is the Bunyakovsky conjecture and its truth or falsity remains an open question.
Historical notes
Pólya and Szegő gave their own generalization, but it has many side conditions (on the locations of the roots, for instance), so it lacks the elegance of Brillhart's, Filaseta's, and Odlyzko's generalization.
It is clear from c
|
https://en.wikipedia.org/wiki/Gallic%20rooster
|
The Gallic rooster () is a national symbol of France as a nation, as opposed to Marianne representing France as a state and its values: the Republic. The rooster is also the symbol of the Wallonia region and the French Community of Belgium.
France
During the times of Ancient Rome, Suetonius, in The Twelve Caesars, noted that, in Latin, rooster (gallus) and an inhabitant of Gaul (Gallus) were homonyms. However, the association of the Gallic rooster as a national symbol is apocryphal, as the rooster was neither regarded as a national personification nor as a sacred animal by the Gauls in their mythology, and because there was no "Gallic nation" at the time, but a loose confederation of Gallic nations instead. But a closer review within that religious scheme indicates that "Mercury" was often portrayed with the cockerel, a sacred animal among the Continental Celts. Julius Caesar, in De Bello Gallico, identified some gods worshipped in Gaul by using the names of their nearest Roman god rather than their Gaulish name, with Caesar saying "Mercury" was the god most revered in Gaul. The Irish god Lug, identified as samildánach, led to the widespread identification of Caesar's Mercury as Lugus, and thus also to the sacred cockerel, the Gallic rooster, as an emblem of France.
Its association with France dates back to the Middle Ages and is due to the play on words in Latin between Gallus, meaning an inhabitant of Gaul, and gallus, meaning rooster, or cockerel. Its use by the enemies of France dates to this period: originally a pun to make fun of the French, the association between the rooster and the Gauls/French was later developed by the kings of France because of the strong Christian symbolism that the rooster represents: prior to being arrested, Jesus predicted that Peter would deny him three times before the rooster crowed on the following morning. At the rooster's crowing, Peter remembered Jesus's words. Its crowing at the dawning of each new morning made it a symbol of the daily victory of light o
|
https://en.wikipedia.org/wiki/Sign-value%20notation
|
A sign-value notation represents numbers using a sequence of numerals which each represent a distinct quantity, regardless of their position in the sequence. Sign-value notations are typically additive, subtractive, or multiplicative depending on their conventions for grouping signs together to collectively represent numbers.
Although the absolute value of each sign is independent of its position, the value of the sequence as a whole may depend on the order of the signs, as with numeral systems which combine additive and subtractive notation, such as Roman numerals. There is no need for zero in sign-value notation.
Additive notation
Additive notation represents numbers by a series of numerals that added together equal the value of the number represented, much as tally marks are added together to represent a larger number. To represent multiples of the sign value, the same sign is simply repeated. In Roman numerals, for example, X means ten and L means fifty, so LXXX means eighty (50 + 10 + 10 + 10).
Although signs may be written in a conventional order the value of each sign does not depend on its place in the sequence, and changing the order does not affect the total value of the sequence in an additive system. Frequently used large numbers are often expressed using unique symbols to avoid excessive repetition. Aztec numerals, for example, use a tally of dots for numbers less than twenty alongside unique symbols for powers of twenty, including 400 and 8,000.
Subtractive notation
Subtractive notation represents numbers by a series of numerals in which signs representing smaller values are typically subtracted from those representing larger values to equal the value of the number represented. In Roman numerals, for example, I means one and X means ten, so IX means nine (10 − 1). The consistent use of the subtractive system with Roman numerals was not standardised until after the widespread adoption of the printing press in Europe.
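A minimal sketch of how additive and subtractive sign-value conventions combine, using Roman numerals (the function and test values are illustrative, not from the article):

```python
# Each sign carries a fixed value regardless of position; a sign placed
# before a larger one is subtracted, otherwise it is added.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value          # subtractive use
        else:
            total += value          # additive use
    return total

print(roman_to_int("LXXX"))     # 80, purely additive
print(roman_to_int("IX"))       # 9, subtractive
print(roman_to_int("MCMXCIV"))  # 1994, mixed
```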
History
Sign-value notation was the
|
https://en.wikipedia.org/wiki/Expansion%20of%20the%20universe
|
The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion; the universe does not expand "into" anything and does not require space to exist "outside" it. To any observer in the universe, it appears that all but the nearest galaxies (which are bound to each other by gravity) recede at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation only applies with respect to local reference frames and does not limit the recession rates of cosmologically distant objects.
Cosmic expansion is a key feature of Big Bang cosmology. It can be modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric (FLRW), where it corresponds to an increase in the scale of the spatial part of the universe's spacetime metric tensor (which governs the size and geometry of spacetime). Within this framework, the separation of objects over time is associated with the expansion of space itself. However, this is not a generally covariant description but rather only a choice of coordinates. Contrary to common misconception, it is equally valid to adopt a description in which space does not expand and objects simply move apart while under the influence of their mutual gravity. Although cosmic expansion is often framed as a consequence of general relativity, it is also predicted by Newtonian gravity.
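A toy illustration of distance-proportional recession (a Hubble-law calculation; the value H0 ≈ 70 km/s/Mpc and the sample distances are assumptions chosen for illustration, not taken from this article):

```python
# Recession speed grows linearly with distance; beyond roughly c/H0 the
# nominal speed exceeds c, which is allowed because the speed-of-light limit
# applies only to local reference frames.
H0 = 70.0            # assumed Hubble constant, km/s per megaparsec
c = 299_792.458      # speed of light, km/s

for d_mpc in (10, 100, 1000, 4300):          # distances in megaparsecs
    v = H0 * d_mpc                           # recession speed in km/s
    print(f"d = {d_mpc:>5} Mpc  ->  v = {v:>9.0f} km/s  ({v / c:.2f} c)")
```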
According to inflation theory, during the inflationary epoch about 10^−32 of a second after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least 10^78 (an expansion of distance by a factor of at least 10^26 in each of the three dimensions). This would be equivalent to expanding an object 1 nanometer (10^−9 m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about 10^17 m or 62 trillion miles) long. Cosmic expansion subsequently
|
https://en.wikipedia.org/wiki/Palmitoylcarnitine
|
Palmitoylcarnitine is an ester derivative of carnitine involved in the metabolism of fatty acids. To produce energy in the form of ATP, fatty acids undergo a process known as β-oxidation, which feeds acetyl-CoA into the tricarboxylic acid (TCA) cycle. β-oxidation occurs primarily within mitochondria; however, the mitochondrial membrane prevents the entry of long-chain fatty acids (>C10), so the conversion of fatty acids such as palmitic acid is key. Palmitic acid is brought to the cell and, once inside the cytoplasm, is first converted to palmitoyl-CoA. Palmitoyl-CoA has the ability to freely pass the outer mitochondrial membrane, but the inner membrane is impermeable to the acyl-CoA and thioester forms of various long-chain fatty acids such as palmitic acid. The palmitoyl-CoA is then enzymatically transformed into palmitoylcarnitine via the carnitine O-palmitoyltransferase family. The palmitoylcarnitine is then actively transported across the inner membrane of the mitochondria via the carnitine-acylcarnitine translocase. Once across the inner mitochondrial membrane, the same carnitine O-palmitoyltransferase family is then responsible for transforming the palmitoylcarnitine back to the palmitoyl-CoA form.
Structure
Palmitoylcarnitine contains the saturated fatty acid known as palmitic acid (C16:0) which is bound to the β-hydroxy group of the carnitine. The core carnitine structure, consisting of butanoate with a quaternary ammonium attached to C4 and hydroxy group at C3, is a common molecular backbone for the transfer of multiple long chain fatty acids in the TCA cycle.
Function
Energy Generation
Palmitoylcarnitine is one molecule in a family of ester derivatives of carnitine that are utilized in the TCA cycle to generate energy. The beta oxidation of palmitate yields 7 NADH, 7 FADH2, and 8 acetyl-CoA molecules. Each acetyl-CoA in turn generates 3 NADH, 1 FADH2, and 1 GTP in the Krebs cycle. Each NADH generates 2.5 ATP in the ETC and each FADH2 generates 1.5 ATP. This totals to 108 ATP, but 2 AT
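A quick arithmetic check of this bookkeeping (the 2 ATP subtracted for the initial activation of palmitate is an assumption completing the truncated sentence above):

```python
# ATP yield for complete oxidation of one palmitoyl-CoA (16 carbons),
# using the per-molecule yields quoted above.
NADH_from_beta_ox, FADH2_from_beta_ox, acetyl_coa = 7, 7, 8
NADH_per_acetyl, FADH2_per_acetyl, GTP_per_acetyl = 3, 1, 1
ATP_per_NADH, ATP_per_FADH2 = 2.5, 1.5

total_NADH = NADH_from_beta_ox + acetyl_coa * NADH_per_acetyl     # 7 + 24 = 31
total_FADH2 = FADH2_from_beta_ox + acetyl_coa * FADH2_per_acetyl  # 7 + 8  = 15
gross = (total_NADH * ATP_per_NADH + total_FADH2 * ATP_per_FADH2
         + acetyl_coa * GTP_per_acetyl)                           # 77.5 + 22.5 + 8 = 108
print(gross, gross - 2)   # 108.0 and 106.0 net, assuming a 2 ATP activation cost
```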
|
https://en.wikipedia.org/wiki/Recursive%20function
|
Recursive function may refer to:
Recursive function (programming), a function which references itself
General recursive function, a computable partial function from natural numbers to natural numbers
Primitive recursive function, a function which can be computed with loops of bounded length
Another name for computable function
See also
Recurrence relation, an equation which defines a sequence from initial values
Recursion theory, the study of computability
Recursion
|
https://en.wikipedia.org/wiki/Xybots
|
Xybots is a 1987 third-person shooter arcade game by Atari Games. In Xybots, up to two players control "Major Rock Hardy" and "Captain Ace Gunn", who must travel through a 3D maze and fight against a series of robots known as the Xybots whose mission is to destroy all mankind. The game features a split screen display showing the gameplay on the bottom half of the screen and information on player status and the current level on the top half. Designed by Ed Logg, it was originally conceived as a sequel to his previous title, Gauntlet. The game was well received, with reviewers lauding the game's various features, particularly the cooperative multiplayer aspect. Despite this, it was met with limited financial success, which has been attributed to its unique control scheme that involves rotating the joystick to turn the player character.
Xybots was ported to various personal computers and the Atari Lynx handheld. Versions for the Nintendo Entertainment System and Sega Genesis/Mega Drive were announced, but never released. Emulated versions of the arcade game were later released as part of various compilations, starting with Midway Arcade Treasures 2 in 2004.
Gameplay
One or two players navigate through corridors as either Rock Hardy or Ace Gunn, battling enemy Xybots with a laser gun, seeking cover from enemy fire behind various objects and attempting to reach the level's exit. In certain levels, players face off against a large boss Xybot. Players move using the joystick, which also rotates to turn the player character. The lower half of the screen shows the gameplay area for both players while the upper half is split between the map for the current level and the status display for each player. The display shows the player's remaining energy, which can be replenished by collecting energy pods within the levels. Energy can also be purchased at shops between levels, using coins dropped by defeated Xybots. The player can also purchase power-ups at these shops, including e
|
https://en.wikipedia.org/wiki/Hamiltonian%20%28control%20theory%29
|
The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
Problem statement and definition of the Hamiltonian
Consider a dynamical system of first-order differential equations
$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t)$$
where $\mathbf{x}(t)$ denotes a vector of state variables, and $\mathbf{u}(t)$ a vector of control variables. Once initial conditions $\mathbf{x}(t_0) = \mathbf{x}_0$ and controls $\mathbf{u}(t)$ are specified, a solution to the differential equations, called a trajectory $\mathbf{x}(t)$, can be found. The problem of optimal control is to choose $\mathbf{u}(t)$ (from some set $\mathcal{U}$) so that $\mathbf{x}(t)$ maximizes or minimizes a certain objective function between an initial time $t_0$ and a terminal time $t_1$ (where $t_1$ may be infinity). Specifically, the goal is to optimize over a performance index $I(\mathbf{x}(t), \mathbf{u}(t), t)$ defined at each point in time,
$$\max_{\mathbf{u}(t)} J = \int_{t_0}^{t_1} I(\mathbf{x}(t), \mathbf{u}(t), t)\,dt, \quad \text{with} \quad \mathbf{x}(t_0) = \mathbf{x}_0,\ \mathbf{u}(t) \in \mathcal{U},$$
subject to the above equations of motion of the state variables. The solution method involves defining an ancillary function known as the control Hamiltonian
$$H(\mathbf{x}(t), \mathbf{u}(t), \boldsymbol{\lambda}(t), t) = I(\mathbf{x}(t), \mathbf{u}(t), t) + \boldsymbol{\lambda}^{\mathsf{T}}(t)\, \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t)$$
which combines the objective function and the state equations much like a Lagrangian in a static optimization problem, only that the multipliers $\boldsymbol{\lambda}(t)$—referred to as costate variables—are functions of time rather than constants.
The goal is to find an optimal control policy function $\mathbf{u}^*(t)$ and, with it, an optimal trajectory of the state variable $\mathbf{x}^*(t)$, which by Pontryagin's maximum principle are the arguments that maximize the Hamiltonian,
$$H(\mathbf{x}^*(t), \mathbf{u}^*(t), \boldsymbol{\lambda}(t), t) \geq H(\mathbf{x}(t), \mathbf{u}(t), \boldsymbol{\lambda}(t), t)$$
for all $\mathbf{u}(t) \in \mathcal{U}.$
The first-order necessary conditions for a maximum are given by
$$\frac{\partial H}{\partial \mathbf{u}} = 0,$$
which is the maximum principle,
$$\frac{\partial H}{\partial \boldsymbol{\lambda}} = \dot{\mathbf{x}},$$
which generates the state transition function $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t)$, and
$$\frac{\partial H}{\partial \mathbf{x}} = -\dot{\boldsymbol{\lambda}},$$
which generates the costate equations $\dot{\boldsymbol{\lambda}} = -\frac{\partial H}{\partial \mathbf{x}}.$
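As a minimal illustration (not from the source), consider the scalar problem of maximizing $J = \int_0^{t_1} -(x^2 + u^2)\,dt$ subject to $\dot{x} = u$. The control Hamiltonian and the three first-order conditions become
$$H(x, u, \lambda, t) = -(x^2 + u^2) + \lambda u, \qquad \frac{\partial H}{\partial u} = -2u + \lambda = 0 \;\Rightarrow\; u^* = \tfrac{1}{2}\lambda, \qquad \dot{x} = \frac{\partial H}{\partial \lambda} = u, \qquad \dot{\lambda} = -\frac{\partial H}{\partial x} = 2x,$$
so the candidate optimal policy feeds back half of the costate as the control, and the state and costate evolve as a coupled linear system.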
|
https://en.wikipedia.org/wiki/Uncertainty%20quantification
|
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.
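A minimal Monte Carlo forward-propagation sketch in Python (illustrative only; the toy falling-object model and the assumed uncertainty in the free-fall acceleration are not from the article) shows the basic recipe of such a computer experiment: sample the uncertain input, run the model, and summarize the output distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(g, t=1.0):
    """Toy computer model: distance fallen after t seconds, d = 0.5*g*t**2."""
    return 0.5 * g * t**2

# Suppose the local free-fall acceleration is only known as 9.81 +/- 0.05 m/s^2
# (assumed values for illustration).
g_samples = rng.normal(9.81, 0.05, size=100_000)
d_samples = model(g_samples)

print(f"mean = {d_samples.mean():.4f} m, std = {d_samples.std():.4f} m")
```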
Sources
Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:
Parameter uncertainty: This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization.
Parametric variability: This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance.
Structural uncertainty: Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true sys
|
https://en.wikipedia.org/wiki/Solute%20carrier%20family
|
The solute carrier (SLC) group of membrane transport proteins include over 400 members organized into 66 families. Most members of the SLC group are located in the cell membrane. The SLC gene nomenclature system was originally proposed by the HUGO Gene Nomenclature Committee (HGNC) and is the basis for the official HGNC names of the genes that encode these transporters. A more general transmembrane transporter classification can be found in TCDB database.
Solutes that are transported by the various SLC group members are extremely diverse and include both charged and uncharged organic molecules as well as inorganic ions and the gas ammonia.
As is typical of integral membrane proteins, SLCs contain a number of hydrophobic transmembrane alpha helices connected to each other by hydrophilic intra- and extra-cellular loops. Depending on the SLC, these transporters are functional as either monomers or obligate homo- or hetero-oligomers. Many SLC families are members of the major facilitator superfamily.
Scope
By convention of the nomenclature system, members within an individual SLC family have greater than 20–25% sequence identity to each other. In contrast, the homology between SLC families is very low to non-existent. Hence, the criterion for inclusion of a family in the SLC group is not evolutionary relatedness to other SLC families but rather function (i.e., being an integral membrane protein that transports a solute).
The SLC group include examples of transport proteins that are:
facilitative transporters (allow solutes to flow downhill with their electrochemical gradients)
secondary active transporters (allow solutes to flow uphill against their electrochemical gradient by coupling to transport of a second solute that flows downhill with its gradient such that the overall free energy change is still favorable)
The SLC series does not include members of transport protein families that have previously been classified by other widely accepted nomenclature systems
|
https://en.wikipedia.org/wiki/Secalin
|
Secalin is a prolamin glycoprotein found in the grain rye, Secale cereale.
Secalin is one of the forms of gluten proteins that people with coeliac disease cannot tolerate, and thus rye should be avoided by people with this disease. It is generally recommended that such people follow a gluten free diet.
|
https://en.wikipedia.org/wiki/Calmodulin-binding%20proteins
|
Calmodulin-binding proteins are, as their name implies, proteins which bind calmodulin. Calmodulin can bind to a variety of proteins through a two-step binding mechanism, namely "conformational and mutually induced fit", where typically two domains of calmodulin wrap around an emerging helical calmodulin binding domain from the target protein.
Examples include:
Gap-43 protein (presynaptic)
Neurogranin (postsynaptic)
Caldesmon
Ca2+ Activation
A variety of different ions, including calcium (Ca2+), play a vital role in the regulation of cellular functions. Calmodulin, a calcium-binding protein that mediates Ca2+ signaling, is involved in all types of cellular mechanisms, including metabolism, synaptic plasticity, nerve growth, and smooth muscle contraction. Calmodulin allows a number of proteins to aid in the progression of these pathways through their interactions with CaM in its Ca2+-free or Ca2+-bound state. Each protein has its own unique affinity for calmodulin, which can be modulated by Ca2+ concentration to allow the binding to, or release from, calmodulin that determines its ability to carry out its cellular function. Proteins that are activated upon binding to the Ca2+-bound state include myosin light-chain kinase, phosphatase, and Ca2+/calmodulin-dependent protein kinase II. Proteins such as neurogranin, which plays a vital role in postsynaptic function, can instead bind to calmodulin in either its Ca2+-free or Ca2+-bound state via their IQ calmodulin-binding motifs. Since these interactions are exceptionally specific, they can be regulated through post-translational modifications by enzymes such as kinases and phosphatases to affect their cellular functions. In the case of neurogranin, its synaptic function can be inhibited by PKC-mediated phosphorylation of its IQ calmodulin-binding motif, which impedes its interaction with calmodulin.
Cellular functions can be indirectly regulated by calmodulin, as it acts as a mediator for enzymes that requ
|
https://en.wikipedia.org/wiki/Overconvergent%20modular%20form
|
In mathematics, overconvergent modular forms are special p-adic modular forms that are elements of certain p-adic Banach spaces (usually infinite dimensional)
containing classical spaces of modular forms as subspaces. They were introduced by Nicholas M. Katz in 1972.
|
https://en.wikipedia.org/wiki/Thyroxine-binding%20proteins
|
A thyroxine-binding protein is any of several transport proteins that bind thyroid hormone and carry it around the bloodstream. Examples include:
Thyroxine-binding globulin
Transthyretin
Serum albumin
External links
Human proteins
Blood proteins
|
https://en.wikipedia.org/wiki/Logical%20machine
|
A logical machine or logical abacus is a tool containing a set of parts that uses energy to perform formal logic operations through the use of truth tables. Early logical machines were mechanical devices that performed basic operations in Boolean logic. The principal examples of such machines are those of William Stanley Jevons (logic piano), John Venn, and Allan Marquand.
Contemporary logical machines are computer-based electronic programs that perform proof assistance with theorems in mathematical logic. In the 21st century, these proof assistant programs have given birth to a new field of study called mathematical knowledge management.
Origins
The earliest logical machines were mechanical constructs built in the late 19th century. William Stanley Jevons invented the first logical machine in 1869, the logic piano. In 1883, Allan Marquand invented a new logical machine that performed the same operations as Jevons' logic piano but with improvements in design simplification, portability, and input-output controls.
A logical abacus is constructed to show all the possible combinations of a set of logical terms with their negatives, and, further, the way in which these combinations are affected by the addition of attributes or other limiting words, i.e., to simplify mechanically the solution of logical problems. These instruments are all more or less elaborate developments of the "logical slate", on which were written in vertical columns all the combinations of symbols or letters which could be made logically out of a definite number of terms. These were compared with any given premises, and those which were incompatible were crossed off. In the abacus the combinations are inscribed each on a single slip of wood or similar substance, which is moved by a key; incompatible combinations can thus be mechanically removed at will, in accordance with any given series of premises.
See also
Allan Marquand
William Stanley Jevons
Logics for computability
|
https://en.wikipedia.org/wiki/Complement%20component%209
|
Complement component 9 (C9) is a MACPF protein involved in the complement system, which is part of the innate immune system. Once activated, about 12–18 molecules of C9 polymerize to form pores in target cell membranes, causing lysis and cell death. C9 is one member of the complement membrane attack complex (MAC), which also includes complement components C5b, C6, C7 and C8. The formation of the MAC occurs through three distinct pathways: the classical, alternative, and lectin pathways. Pore formation by C9 is an important way that bacterial cells are killed during an infection, and the target cell is often covered in multiple MACs. The main clinical impact of a deficiency in C9 is an increased susceptibility to infection with the gram-negative bacterium Neisseria meningitidis.
Structure
In fish, the C9 gene includes 11 exons and 10 introns. In fish, the liver is the site where the majority of complement components are produced and expressed, but C9 can also be found in other tissues. It is a single-chain glycoprotein with a four-domain structure arranged in a globular bundle.
Pore formation
MAC formation starts with the assembly of a tetrameric complex with the complement components C6, C7, C8, and C5b. The final step of MAC formation on target cell surfaces involves the polymerization of C9 molecules bound to C5b8, forming C5b-9. C9 molecules allow cylindrical, asymmetrical transmembrane pores to form. The overall complex belongs to the MAC/perforin-like (MACPF)/CDC superfamily. Pore formation involves binding of the C9 molecules to the target membrane, membrane molecules forming a pre-pore shape, and conformational change in TMH1, the first transmembrane region, and TMH2, the second transmembrane region. The formation of pores leads to the killing of foreign pathogens and infected host cells.
Relation to aging process
C9 was found to be the most strongly under expressed serum protein in men who achieved longevity, compared to men who did not.
|
https://en.wikipedia.org/wiki/Maximum%20weight%20matching
|
In computer science and graph theory, the maximum weight matching problem is the problem of finding, in a weighted graph, a matching in which the sum of weights is maximized.
A special case of it is the assignment problem, in which the input is restricted to be a bipartite graph, and the matching is constrained to have cardinality equal to that of the smaller of the two partitions. Another special case is the problem of finding a maximum cardinality matching on an unweighted graph: this corresponds to the case where all edge weights are the same.
Algorithms
There is a polynomial-time algorithm to find a maximum matching or a maximum weight matching in a graph that is not bipartite; it is due to Jack Edmonds, is called the paths, trees, and flowers method or simply Edmonds' algorithm, and uses bidirected edges. A generalization of the same technique can also be used to find maximum independent sets in claw-free graphs.
More elaborate algorithms exist and are reviewed by Duan and Pettie (see Table III). Their work proposes an approximation algorithm for the maximum weight matching problem, which runs in linear time for any fixed error bound.
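For a quick hands-on illustration (the graph below is an arbitrary example, not from the article), NetworkX exposes a blossom-based implementation:

```python
import networkx as nx

# Build a small weighted, non-bipartite graph.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 3), ("b", "c", 4), ("c", "d", 3), ("a", "d", 2), ("b", "d", 1),
])

matching = nx.max_weight_matching(G)         # set of matched pairs
weight = sum(G[u][v]["weight"] for u, v in matching)
print(matching, weight)                      # e.g. {('a', 'b'), ('c', 'd')} with weight 6
```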
|
https://en.wikipedia.org/wiki/One%20woodland%20terminal%20model
|
The ITU terrestrial model for one terminal in woodland is a radio propagation model belonging to the class of foliage models. This model is a successor of the early ITU model.
Applicable to/under conditions
Applicable to the scenario where one terminal of a link is inside foliage and the other end is free.
Coverage
Frequency: below 5 GHz
Depth of foliage: unspecified
Mathematical formulation
The mathematical formulation of the model is:
$$A_v = A_m \left(1 - e^{- d \gamma / A_m}\right)$$
Where,
Av = Attenuation due to vegetation. Unit: decibel (dB)
Am = Maximum attenuation for one terminal caused by a certain foliage. Unit: decibel (dB)
d = Depth of foliage along the path. Unit: meter (m)
γ = Specific attenuation for very short vegetative paths. Unit: decibel/meter (dB/m)
Points to note
The value of γ is dependent on frequency and is an empirical constant.
The model assumes that exactly one of the terminals is located inside some forest or plantation and the term depth applies to the distance from the terminal inside the plantation to the end of plantation along the link.
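A small numerical sketch of the model, based on the formula as reconstructed above (the sample values of γ and Am are illustrative, not taken from the recommendation):

```python
import math

def woodland_attenuation(d, gamma, a_m):
    """One-terminal-in-woodland vegetation loss, Av = Am * (1 - exp(-d*gamma/Am)).

    d     : depth of foliage along the path, in metres
    gamma : specific attenuation for very short vegetative paths, in dB/m
    a_m   : maximum attenuation for one terminal in that foliage, in dB
    """
    return a_m * (1.0 - math.exp(-d * gamma / a_m))

# Illustrative values only: gamma = 0.5 dB/m, Am = 30 dB, several foliage depths.
for d in (5, 20, 100, 500):
    print(f"d = {d:>3} m  ->  Av = {woodland_attenuation(d, 0.5, 30.0):5.1f} dB")
# The loss grows with depth and saturates at Am for very deep foliage.
```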
See also
Early ITU model
Weissberger's model
Single vegetative obstruction model
Further reading
Introduction to RF propagation, John S. Seybold, 2005, John Wiley and Sons.
Radio frequency propagation model
|
https://en.wikipedia.org/wiki/Earcon
|
An earcon is a brief, distinctive sound that represents a specific event or conveys other information. Earcons are a common feature of computer operating systems and applications, ranging from a simple beep to indicate an error, to the customizable sound schemes of modern operating systems that indicate startup, shutdown, and other events.
The name is a pun on the more familiar term icon in computer interfaces. Icon sounds like "eye-con" and is visual, which inspired D.A. Sumikawa to coin "earcon" as the auditory equivalent in a 1985 article, 'Guidelines for the integration of audio cues into computer user interfaces.'
The term is most commonly applied to sound cues in a computer interface, but examples of the concept occur in broadcast media such as radio and television:
The alert signal that indicates a message from the Emergency Broadcast System
The signature three-tone melody that identifies NBC in radio and television broadcasts
Earcons are generally synthesized tones or sound patterns. The similar term auditory icon refers to recorded everyday sounds that serve the same purpose.
Use in assistive technologies
Assistive technologies for computing devices—such as screen readers including ChromeOS's ChromeVox, Android's TalkBack and Apple's VoiceOver—use earcons as a convenient and fast means of conveying to blind or visually impaired users contextual information about the interface they are navigating. Earcons in screen readers largely serve as auditory cues to inform the user that they have selected a particular type of interface element, such as a button, hyperlink or text input field. They can also provide context about the current document or mode, such as whether a web page is loading.
Earcons provide an enhancement to screen reader usage due to their brevity and subtleness, which is an improvement over using much longer spoken cues to provide context: using a short, distinctive beep when an interface's button is selected can be much faster and ther
|
https://en.wikipedia.org/wiki/Ain%20al-Yaqeen
|
Ain al Yaqeen (Heart of the Matter in English) is an Arabic news magazine published weekly, focusing on political topics.
Profile
Ain al Yaqeen also has an English edition. It is published online. The magazine is seen as a government publication or as a semi-official weekly political magazine.
Contents
After it was revealed that a member of the royal family had indirectly funded one of the hijackers in the September 11 attacks, Prince Nayef in an article published in the English edition of the weekly on 29 November 2002 claimed that the Jews were behind the attacks.
See also
List of magazines in Saudi Arabia
|
https://en.wikipedia.org/wiki/Cyclotomic%20unit
|
In mathematics, a cyclotomic unit (or circular unit) is a unit of an algebraic number field which is the product of numbers of the form (ζ^a − 1) for ζ an nth root of unity and 0 < a < n.
Properties
The cyclotomic units form a subgroup of finite index in the group of units of a cyclotomic field. The index of this subgroup of real cyclotomic units (those cyclotomic units in the maximal real subfield) within the full real unit group is equal to the class number of the maximal real subfield of the cyclotomic field.
If n is the power of a prime, then 1 − ζ is not a unit; however, the numbers (1 − ζ^a)/(1 − ζ) for gcd(a, n) = 1, together with ±ζ, generate the group of cyclotomic units.
If n is a composite number having two or more distinct prime factors, then 1 − ζ is a unit. The subgroup of cyclotomic units generated by (1 − ζ^a)/(1 − ζ) with gcd(a, n) = 1 is not of finite index in general.
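As a concrete check (an illustrative computation, not from the article), for n = 5 the number (1 − ζ²)/(1 − ζ) = 1 + ζ can be verified to be a unit with SymPy, since its monic minimal polynomial over the integers has constant term ±1:

```python
from sympy import exp, I, pi, minimal_polynomial
from sympy.abc import x

# zeta is a primitive 5th root of unity (n = 5, a prime power).
zeta = exp(2 * pi * I / 5)
u = (1 - zeta**2) / (1 - zeta)

p = minimal_polynomial(u, x)
print(p)              # x**4 - 3*x**3 + 4*x**2 - 2*x + 1
print(p.coeff(x, 0))  # 1  ->  the norm of u is +/-1, so u is a unit
```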
The cyclotomic units satisfy distribution relations. Let be a rational number prime to and let denote . Then for we have
Using these distribution relations and the symmetry relation a basis Bn of the cyclotomic units can be constructed with the property that for .
See also
Elliptic unit
Modular unit
Notes
|
https://en.wikipedia.org/wiki/Japanese%20rice%20fish
|
The Japanese rice fish (Oryzias latipes), also known as the medaka, is a member of genus Oryzias (ricefish), the only genus in the subfamily Oryziinae. This small (up to about ) native of Japan is a denizen of rice paddies, marshes, ponds, slow-moving streams and tide pools. It is euryhaline, occurring in both brackish and freshwater. It became popular as an aquarium fish because of its hardiness and pleasant coloration: its coloration varies from creamy-white to yellowish in the wild to white, creamy-yellow, or orange in aquarium-bred individuals. Bright yellow, red or green transgenic populations, similar to GloFish, have also been developed, but are banned from sale in the EU. The medaka has been a popular pet since the 17th century in Japan. After fertilization, the female carries her eggs attached anterior to the anal fin for a period before depositing them on plants or similar things.
Ecology
Medaka live in small ponds, shallow rivers, and rice fields. They can survive in a wide range of water temperatures (), but they prefer a water temperature of . Since they eat juvenile mosquitoes and small plankton, they are known as a beneficial organism for humans. They produce 10–20 eggs per birth, and they can produce eggs every day in laboratory conditions. They are seasonal breeding animals and usually lay eggs between spring and summer. They prefer to lay eggs around water grass and often prefer living in rice fields. The egg usually requires 4–10 days to hatch. They have an advanced renal function, which enables them to live in saltwater and brackish water. The average life span of this species in the wild is estimated to be 2 years, though in laboratory conditions they can survive 3–5 years. They live in schools, and they can recognize the faces of other individual medaka.
Taxonomy and range
As originally defined, O. latipes was native to much of east and mainland southeast Asia, but in recent decades most of these populations have been split off as separate
|
https://en.wikipedia.org/wiki/The%20Shadow%20%281994%20film%29
|
The Shadow is a 1994 American superhero film from Universal Pictures, produced by Martin Bregman, Willi Bear, and Michael Scott Bregman, and directed by Russell Mulcahy. It stars Alec Baldwin, supported by John Lone, Penelope Ann Miller, Peter Boyle, Ian McKellen, Jonathan Winters, and Tim Curry. The film is based on the pulp fiction character of the same name created in 1931 by Walter B. Gibson.
The film was released to theaters on July 1, 1994, received mixed reviews, and was a commercial failure.
Plot
Following the First World War, Lamont Cranston sets himself up as a drug kingpin and warlord in Tibet. The Tulku, a holy man who exhibits otherworldly powers, abducts Cranston and offers him a chance to become a force for good. Cranston initially refuses and is attacked by the Tulku's Phurba, a mystical flying dagger. Ultimately, Cranston becomes the Tulku's student and learns how to hypnotize others and bend their perceptions so that he becomes invisible, save for his shadow.
Cranston returns to New York City seven years later and resumes his former life as a wealthy playboy, while secretly operating as The Shadow—a vigilante who terrorizes the city's underworld. He recruits some of those he saves from criminals to act as his agents, providing him with information and specialist knowledge. His identity is largely unknown, especially to his Uncle Wainwright, who happens to be the Police Commissioner of New York and whom he has to regularly hypnotize in order to keep the police from interfering with him. Cranston's secret identity is endangered upon meeting Margo Lane, a socialite who is also telepathic.
Shiwan Khan, a powerful rogue protégé of the Tulku, arrives in New York inside Genghis Khan's sarcophagus. As Khan's last descendant, Shiwan plans to fulfill his ancestor's ambitions of world domination. He proposes an alliance to Cranston, who refuses. After acquiring a rare coin from Khan, Cranston learns that it is made of bronzium, a metal that could be used for
|
https://en.wikipedia.org/wiki/Bondi%20k-calculus
|
Bondi k-calculus is a method of teaching special relativity popularised by Sir Hermann Bondi, that has been used in university-level physics classes (e.g. at the University of Oxford), and in some relativity textbooks.
The usefulness of the k-calculus is its simplicity. Many introductions to relativity begin with the concept of velocity and a derivation of the Lorentz transformation. Other concepts such as time dilation, length contraction, the relativity of simultaneity, the resolution of the twins paradox and the relativistic Doppler effect are then derived from the Lorentz transformation, all as functions of velocity.
Bondi, in his book Relativity and Common Sense, first published in 1964 and based on articles published in The Illustrated London News in 1962, reverses the order of presentation. He begins with what he calls "a fundamental ratio" denoted by the letter k (which turns out to be the radial Doppler factor). From this he explains the twins paradox, and the relativity of simultaneity, time dilation, and length contraction, all in terms of k. It is not until later in the exposition that he provides a link between velocity and the fundamental ratio k. The Lorentz transformation appears towards the end of the book.
History
The k-calculus method had previously been used by E. A. Milne in 1935. Milne used a different letter to denote a constant Doppler factor, but also considered a more general case involving non-inertial motion (and therefore a varying Doppler factor). Bondi used the letter k instead, simplified the presentation (for constant k only), and introduced the name "k-calculus".
Bondi's k-factor
Consider two inertial observers, Alice and Bob, moving directly away from each other at constant relative velocity. Alice sends a flash of blue light towards Bob once every seconds, as measured by her own clock. Because Alice and Bob are separated by a distance, there is a delay between Alice sending a flash and Bob receiving a flash. Furthermore, the sepa
|
https://en.wikipedia.org/wiki/Koide%20formula
|
The Koide formula is an unexplained empirical equation discovered by Yoshio Koide in 1981. In its original form, it is not fully empirical but a set of guesses for a model for masses of quarks and leptons, as well as CKM angles. From this model it survives the observation about the masses of the three charged leptons; later authors have extended the relation to neutrinos, quarks, and other families of particles.
Formula
The Koide formula is
$$Q = \frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\right)^2}$$
where $m_e$, $m_\mu$, and $m_\tau$ are the measured masses of the electron, muon, and tau, respectively. This gives $Q \approx 0.666661$.
No matter what masses are chosen to stand in place of the electron, muon, and tau, $\tfrac{1}{3} \leq Q < 1.$ The upper bound follows from the fact that the square roots are necessarily positive, and the lower bound follows from the Cauchy–Bunyakovsky–Schwarz inequality. The experimentally determined value, $Q \approx 0.666661$, lies at the center of the mathematically allowed range. But note that removing the requirement of positive roots it is possible to fit an extra tuple in the quark sector (the one with strange, charm and bottom).
The mystery is in the physical value. Not only is the result peculiar, in that three ostensibly arbitrary numbers give a simple fraction, but also in that in the case of electron, muon, and tau, $Q$ is exactly halfway between the two extremes of all possible combinations: $\tfrac{1}{3}$ (if the three masses were equal) and $1$ (if one mass dominates). Importantly, the relation holds regardless of which unit is used to express the masses.
Robert Foot also interpreted the Koide formula as a geometrical relation, in which the value $\tfrac{1}{3Q}$ is the squared cosine of the angle between the vector $\left(\sqrt{m_e}, \sqrt{m_\mu}, \sqrt{m_\tau}\right)$ and the vector $(1, 1, 1)$ (see dot product). That angle is almost exactly 45 degrees: $\theta \approx 45.00^\circ.$
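A quick numerical check of both statements, using rounded charged-lepton masses (values in MeV/c², quoted only to illustrative precision):

```python
import math

m_e, m_mu, m_tau = 0.510999, 105.6584, 1776.86   # rounded lepton masses, MeV/c^2

# Koide ratio: close to 2/3.
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Q = {Q:.6f}  (2/3 = {2/3:.6f})")

# Foot's geometric reading: angle between (sqrt(m_e), sqrt(m_mu), sqrt(m_tau))
# and (1, 1, 1) is very close to 45 degrees.
sqrts = [math.sqrt(m) for m in (m_e, m_mu, m_tau)]
cos_theta = sum(sqrts) / (math.sqrt(m_e + m_mu + m_tau) * math.sqrt(3))
print(f"theta = {math.degrees(math.acos(cos_theta)):.3f} degrees")
```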
When the formula is assumed to hold exactly ($Q = \tfrac{2}{3}$), it may be used to predict the tau mass from the (more precisely known) electron and muon masses; that prediction is $m_\tau \approx 1776.97$ MeV/$c^2$. Please note that solving the Koide formula can also
|
https://en.wikipedia.org/wiki/Quantum%20relative%20entropy
|
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy.
Motivation
For simplicity, it will be assumed that all objects in the article are finite-dimensional.
We first discuss the classical case. Suppose the probabilities of a finite sequence of events is given by the probability distribution P = {p1...pn}, but somehow we mistakenly assumed it to be Q = {q1...qn}. For instance, we can mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the j-th event, or equivalently, the amount of information provided after observing the j-th event, is
$$-\log q_j.$$
The (assumed) average uncertainty of all possible events is then
$$-\sum_j p_j \log q_j.$$
On the other hand, the Shannon entropy of the probability distribution p, defined by
$$-\sum_j p_j \log p_j,$$
is the real amount of uncertainty before observation. Therefore the difference between these two quantities
$$-\sum_j p_j \log q_j - \left(-\sum_j p_j \log p_j\right)$$
is a measure of the distinguishability of the two probability distributions p and q. This is precisely the classical relative entropy, or Kullback–Leibler divergence:
$$D_{\mathrm{KL}}(P \| Q) = \sum_j p_j \log \frac{p_j}{q_j}.$$
Note
In the definitions above, the convention that 0·log 0 = 0 is assumed, since $\lim_{x \to 0} x \log x = 0$. Intuitively, one would expect that an event of zero probability contributes nothing towards entropy.
The relative entropy is not a metric. For example, it is not symmetric. The uncertainty discrepancy in mistaking a fair coin to be unfair is not the same as the opposite situation.
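A small numerical illustration of this asymmetry with the coin example (the particular bias 0.7 is an arbitrary choice):

```python
import numpy as np

p = np.array([0.7, 0.3])   # true (unfair) coin
q = np.array([0.5, 0.5])   # assumed (fair) coin

kl_pq = np.sum(p * np.log(p / q))   # D(p || q): mistaking the unfair coin for a fair one
kl_qp = np.sum(q * np.log(q / p))   # D(q || p): the opposite mistake
print(kl_pq, kl_qp)                 # ~0.0823 vs ~0.0872 nats: not symmetric
```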
Definition
As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to density matrices. Let ρ be a density matrix. The von Neumann entropy of ρ, which is the quantum mechanical analog of the Shannon entropy, is given by
$$S(\rho) = -\operatorname{Tr} \rho \log \rho.$$
For two density matrices ρ and σ, the quantum relative entropy of ρ with respect to σ is defined by
$$S(\rho \,\|\, \sigma) = -\operatorname{Tr} \rho \log \sigma - S(\rho) = \operatorname{Tr} \rho \left(\log \rho - \log \sigma\right).$$
We see that, when the states are classically related, i
|
https://en.wikipedia.org/wiki/Crazy%20Ray
|
Wilford Jones (January 22, 1931 – March 17, 2007), better known as Crazy Ray, was the unofficial mascot of the Dallas Cowboys. By some accounts, he was also the team's original mascot, who attended almost every home game since the team's inception.
History
He started selling pennants at games in 1962 and quickly endeared himself to the Cowboys fans with his western outfits, magic tricks, trademark whistle, and galloping along with a hobby horse.
He was never officially employed by the Cowboys, but was given a Special Parking Pass and All-access for home games. He was also known as the "Whistling Vendor" at Dallas Tornado soccer games, Texas Rangers baseball games, and at the Dallas Black Hawks minor-league professional ice hockey team at State Fair Coliseum. He could be seen at the State Fair of Texas and various concerts entertaining the public.
Crazy Ray also had a special friendship with rival Zema Williams (Chief Zee), the Washington Redskins' unofficial mascot. In some photographs, Crazy Ray and Chief Zee were seen pretending to fight with each other during games.
Ray died on March 17, 2007, from heart disease and diabetes, aged 76, in Dallas. He missed only three games in 46 seasons.
Honors
Crazy Ray has a place in the Visa Hall of Fans Exhibit at the Pro Football Hall of Fame as he was selected as the fan choice for the Dallas Cowboys.
See also
The Barrel Man
Chief Zee
Fireman Ed
Hogettes
License Plate Guy
|
https://en.wikipedia.org/wiki/Ussing%20chamber
|
An Ussing chamber is an apparatus for measuring epithelial membrane properties. It can detect and quantify transport and barrier functions of living tissue. The Ussing chamber was invented by the Danish zoologist and physiologist Hans Henriksen Ussing in 1946.
The technique is used to measure the short-circuit current as an indicator of net ion transport taking place across an epithelium. Ussing chambers are used to measure ion transport in native tissue, such as gut mucosa, and in a monolayer of cells grown on permeable supports.
Function
The Ussing chamber provides a system to measure the transport of ions, nutrients, and drugs across various epithelial tissues, (although can generate false-negative results for lipophilic substances). It consists of two halves separated by the epithelia (sheet of mucosa or monolayer of epithelial cells grown on permeable supports). Epithelia are polar in nature, i.e., they have an apical or mucosal side and a basolateral or serosal side. An Ussing chamber can isolate the apical side from the basolateral side. The two half chambers are filled with equal amounts of symmetrical Ringer solution to remove chemical, mechanical or electrical driving forces. Ion transport takes place across any epithelium. Transport may be in either direction. Ion transport produces a potential difference (voltage difference) across the epithelium. The voltage is measured using two voltage electrodes placed near the tissue/epithelium. This voltage is cancelled out by injecting current, using two other current electrodes placed away from the epithelium. This short-circuit current (Isc) is the measure of net ion transport.
Ussing chambers facilitate the measurement of epithelial ion transport. The voltage resulting from this ion transport is easy to measure accurately. The epithelium pumps ions from one side to the other, and the ions leak back through so-called tight junctions that are situated between the epithelial cells. To measure ion transport, an external
|
https://en.wikipedia.org/wiki/MBC%20Paper%20of%20the%20Year
|
Chosen by Molecular Biology of the Cell Associate Editors, the MBoC Paper of the Year is awarded to the first author of the paper judged to be the best of the year in the field of molecular biology, from June to May.
Awardees
Source: MBoC
See also
List of biology awards
|
https://en.wikipedia.org/wiki/Molecular%20Biology%20of%20the%20Cell
|
Molecular Biology of the Cell is a biweekly peer-reviewed scientific journal published by the American Society for Cell Biology. It covers research on the molecular basis of cell structure and function. According to the Journal Citation Reports, the journal has a 2012 impact factor of 4.803. It was originally established as Cell Regulation in 1989.
The Editor-in-Chief is Matthew Welch (University of California, Berkeley). Previous Editors-in-Chief include: Erkki Ruoslahti (of Cell Regulation) and David Botstein and Keith Yamamoto (of MBoC) and their successors Sandra Schmid and David Drubin.
|
https://en.wikipedia.org/wiki/Merton%20Bernfield%20Memorial%20Award
|
The Merton Bernfield Memorial Award, formerly known as the Member Memorial Award for Graduate Students and Postdoctoral Fellows, was established in memory of deceased colleagues with donations from members of the American Society for Cell Biology. The winner is selected on merit and is invited to speak in a Minisymposium at the ASCB Annual Meeting. The winner also receives financial support.
Awardees
Source:
2019 Veena Padmanaban
2018 Kelsie Eichel
2017 Lawrence Kazak
2016 Kara McKinley
2015 Shigeki Watanabe
2014 Prasanna Satpute-Krishnan
2013 Panteleimon Rompolas
2012 Ting Chen and Gabriel Lander
2011 Dylan Tyler Burnette
2010 Hua Jin
2009 Chad G. Pearson
2008 Kenneth Campellone
2007 Ethan Garner
2006 Lloyd Trotman
2005 Stephanie Gupton
2004 Chun Han
2003 Erik Dent
2002 Christina Hull
2001 Sarah South and James Wohlschlegel
See also
List of biology awards
|
https://en.wikipedia.org/wiki/ASCB%20Public%20Service%20Award
|
The American Society for Cell Biology's highest honor for Public Service, the ASCB Public Service Award is for outstanding national leadership in support of biomedical research. The awardees are selected by the ASCB Public Policy Committee.
Awardees
Source: ASCB
2022 George Langford
2021 Raynard Kington and Donna Ginther
2020 Anthony Fauci
2019 James F. Deatherage
2018 Senator Roy Blunt (R-Mo) and Representative Tom Cole (R-OK)
2016 Senator Richard Durbin
2014 Rush Holt Jr.
2013 Jeremy Berg
2012 Keith Yamamoto
2010 Tom Pollard
2009 Larry Goldstein
2008 Maxine Singer
2007 Representative Michael N. Castle (R-DE)
2006 Barbara Forrest and Ken Miller
2005 Senator Arlen Specter (R-PA)
2004 Elizabeth Blackburn
2003 Paul Berg
2002 Matthew Meselson
2001 Christopher Reeve
2000 Donna Shalala, US Health & Human Services Secretary
1999 Harold Varmus
1998 J. Michael Bishop
1997 Representative George Gekas (R-PA)
1996 Marc Kirschner
1995 Representative John Porter (R-IL)
1994 Senator Tom Harkin (D-IA)
See also
List of biomedical science awards
External links
ASCB Public Service Award
|
https://en.wikipedia.org/wiki/Energy%20content%20of%20biofuel
|
The energy content of biofuel is the chemical energy contained in a given biofuel, measured per unit mass of that fuel, as specific energy, or per unit of volume of the fuel, as energy density.
A biofuel is a fuel produced from recently living organisms. Biofuels include bioethanol, an alcohol made by fermentation—often used as a gasoline additive, and biodiesel, which is usually used as a diesel additive. Specific energy is energy per unit mass, which is used to describe the chemical energy content of a fuel, expressed in SI units as joule per kilogram (J/kg) or equivalent units. Energy density is the amount of chemical energy per unit volume of the fuel, expressed in SI units as joule per litre (J/L) or equivalent units.
Energy and CO2 output of common biofuels
The table below includes entries for popular substances already used for their energy, or being discussed for such use.
The second column shows specific energy, the energy content in megajoules per unit of mass in kilograms, useful in understanding the energy that can be extracted from the fuel.
The third column in the table lists energy density, the energy content per liter of volume, which is useful for understanding the space needed for storing the fuel.
The final two columns deal with the carbon footprint of the fuel. The fourth column contains the proportion of CO2 released when the fuel is converted for energy, with respect to its starting mass, and the fifth column lists the energy produced per kilogram of CO2 produced. As a guideline, a higher number in this column is better for the environment. But these numbers do not account for other greenhouse gases released during burning, production, storage, or shipping. For example, methane may have hidden environmental costs that are not reflected in the table.
Notes
Yields of common crops associated with biofuels production
Notes
See also
Eichhornia crassipes#Bioenergy
Syngas
Conversion of units
Energy density
Heat of combustion
|
https://en.wikipedia.org/wiki/Joseph%20Nechvatal
|
Joseph Nechvatal (born January 15, 1951) is an American post-conceptual digital artist and art theoretician who creates computer-assisted paintings and computer animations, often using custom-created computer viruses.
Life and work
Joseph Nechvatal was born in Chicago. He studied fine art and philosophy at Southern Illinois University Carbondale, Cornell University and Columbia University. He earned a Doctor of Philosophy in Philosophy of Art and Technology at the Planetary Collegium at University of Wales, Newport and has taught art theory and art history at the School of Visual Arts. He has had many solo exhibitions, including one in Berlin
His work in the early 1980s chiefly consisted of postminimalist gray graphite drawings that were often photomechanically enlarged. Beginning in 1979 he became associated with the artist group Colab, organized the Public Arts International/Free Speech series, and helped establish the non-profit group ABC No Rio. In 1983 he co-founded the avant-garde electronic art music audio project Tellus Audio Cassette Magazine. In 1984, Nechvatal began work on an opera called XS: The Opera Opus (1984-6) with the no wave musical composer Rhys Chatham.
He began using computers and robotics to make post-conceptual paintings in 1986 and later, in his signature work, began to employ self-created computer viruses. From 1991 to 1993, he was artist-in-residence at the Louis Pasteur Atelier in Arbois, France and at the Saline Royale/Ledoux Foundation's computer lab. There he worked on The Computer Virus Project, his first artistic experiment with computer viruses and computer virus animation. He exhibited computer-robotic paintings at Documenta 8 in 1987.
In 2002 he extended his experimentation into viral artificial life through a collaboration with the programmer Stephane Sikora of music2eye in a work called the Computer Virus Project II.
Nechvatal has also created a noise music work called viral symphOny, a collaborative sound symphony
|
https://en.wikipedia.org/wiki/Neurturin
|
Neurturin (NRTN) is a protein that is encoded in humans by the NRTN gene. Neurturin belongs to the glial cell line-derived neurotrophic factor (GDNF) family of neurotrophic factors, which regulate the survival and function of neurons. Neurturin’s role as a growth factor places it in the transforming growth factor beta (TGF-beta) subfamily along with its homologs persephin, artemin, and GDNF. It shares a 42% similarity in amino acid sequence with mature GDNF. It is also considered a trophic factor and critical in the development and growth of neurons in the brain. Neurotrophic factors like neurturin have been tested in several clinical trial settings for the potential treatment of neurodegenerative diseases, specifically Parkinson's disease.
Function
Neurturin is encoded for by the NRTN gene located on chromosome 19 in humans and has been shown to promote potent effects on survival and function of developing and mature midbrain dopaminergic neurons (DA) in vitro. In vivo the direct administration of neurturin into substantia nigra of mice models also shows mature DA neuron protection. In addition, neurturin has also been shown to support the survival of several other neurons including sympathetic and sensory neurons of the dorsal root ganglia. Knockout mice have shown that neurturin does not appear essential for survival. However, evidence shows retarded growth of enteric, sensory and parasympathetic neurons in mice upon the removal of neurturin receptors.
Mechanism of activation
Neurturin signaling is mediated by the activation of a multi-component receptor system including the ret tyrosine kinase (RET), a cell-surface bound GDNF family receptor-α (GFRα) protein, and a glycosyl phosphatidylinositol (GPI)-linked protein. Neurturin preferentially binds to the GFRα2 co-receptor. Upon assembly of the complex, specific tyrosine residues are phosphorylated within two molecules of RET that are brought together to initiate signal transduction and the MAP kinase signa
|
https://en.wikipedia.org/wiki/Kelvin%20transform
|
The Kelvin transform is a device used in classical potential theory to extend the concept of a harmonic function, by allowing the definition of a function which is 'harmonic at infinity'. This technique is also used in the study of subharmonic and superharmonic functions.
In order to define the Kelvin transform f* of a function f, it is necessary to first consider the concept of inversion in a sphere in Rn as follows.
It is possible to use inversion in any sphere, but the ideas are clearest when considering a sphere with centre at the origin.
Given a fixed sphere S(0,R) with centre 0 and radius R, the inversion of a point x in Rn is defined as follows.
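In one standard notation, writing x* for the image of x, the inversion is
x* = (R² / |x|²) x,  for x ≠ 0.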
A useful effect of this inversion is that the origin 0 is the image of ∞, and ∞ is the image of 0. Under this inversion, spheres are transformed into spheres, and the exterior of a sphere is transformed to the interior, and vice versa.
The Kelvin transform of a function is then defined by:
If D is an open subset of Rn which does not contain 0, then for any function f defined on D, the Kelvin transform f* of f with respect to the sphere S(0,R) is defined as follows.
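With D* denoting the image of D under the inversion above, a common way of writing the definition is
f*(x) = (R / |x|)^(n−2) f(R² x / |x|²),  for x in D*.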
One of the important properties of the Kelvin transform, and the main reason behind its creation, is the following result:
Let D be an open subset in Rn which does not contain the origin 0. Then a function u is harmonic, subharmonic or superharmonic in D if and only if the Kelvin transform u* with respect to the sphere S(0,R) is harmonic, subharmonic or superharmonic in D*.
This follows from a formula relating the Laplacian of the Kelvin transform u* to the Laplacian of u.
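In the notation above, one common form of the identity is
Δu*(x) = (R / |x|)^(n+2) (Δu)(R² x / |x|²),  for x in D*,
so the sign of Δu* at a point of D* matches the sign of Δu at the corresponding point of D.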
See also
William Thomson, 1st Baron Kelvin
Inversive geometry
Spherical wave transformation
|
https://en.wikipedia.org/wiki/Tisserand%27s%20parameter
|
Tisserand's parameter (or Tisserand's invariant) is a value calculated from several orbital elements (semi-major axis, orbital eccentricity and inclination) of a relatively small object and a larger "perturbing body". It is used to distinguish different kinds of orbits. The term is named after French astronomer Félix Tisserand, and applies to restricted three-body problems in which the three objects all differ greatly in mass.
Definition
For a small body with semi-major axis , orbital eccentricity , and orbital inclination , relative to the orbit of a perturbing larger body with semimajor axis , the parameter is defined as follows:
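In its standard form, with the inclination i measured relative to the orbital plane of the perturbing body, the parameter reads
TP = aP/a + 2 cos(i) √[(a/aP)(1 − e²)].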
The quasi-conservation of Tisserand's parameter is a consequence of Tisserand's relation.
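As a worked example, the definition above translates directly into code; the orbital elements below are made up purely for illustration, and a result between 2 and 3 is typical of a Jupiter-family comet.

#include <math.h>
#include <stdio.h>

/* Tisserand's parameter of a small body with respect to a perturber of
   semi-major axis a_p: a and a_p in the same units, e dimensionless,
   inclination i in radians relative to the perturber's orbital plane. */
static double tisserand(double a_p, double a, double e, double i)
{
    return a_p / a + 2.0 * cos(i) * sqrt(a / a_p * (1.0 - e * e));
}

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double a_jup = 5.2;                    /* Jupiter, in AU */
    /* Hypothetical orbital elements, not a real catalogued object. */
    double t_j = tisserand(a_jup, 3.5, 0.6, 15.0 * pi / 180.0);
    printf("T_J = %.3f\n", t_j);                 /* prints roughly 2.75 */
    return 0;
}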
Applications
TJ, Tisserand's parameter with respect to Jupiter as perturbing body, is frequently used to distinguish asteroids (typically TJ > 3) from Jupiter-family comets (typically 2 < TJ < 3).
The minor planet group of damocloids is defined by a Jupiter Tisserand's parameter of 2 or less (TJ ≤ 2).
In what is known as Tisserand's criterion, the roughly constant value of the parameter before and after an interaction (encounter) is used to determine whether or not an observed orbiting body is the same as one previously observed.
The quasi-conservation of Tisserand's parameter constrains the orbits attainable using gravity assist for outer Solar System exploration.
TN, Tisserand's parameter with respect to Neptune, has been suggested to distinguish near-scattered (affected by Neptune) from extended-scattered trans-Neptunian objects (not affected by Neptune; e.g. 90377 Sedna).
Tisserand's parameter could be used to infer the presence of an intermediate-mass black hole at the center of the Milky Way using the motions of orbiting stars.
Related notions
The parameter is derived from one of the so-called Delaunay standard variables, used to study the perturbed Hamiltonian in a three-body system. Ignoring higher-order perturbation terms, the following value is conserved:
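In units chosen here for simplicity, with the perturber's semi-major axis and the primary's gravitational parameter both set to 1, the conserved combination can be written as
−1/(2a) − √[a(1 − e²)] cos i,
which is −T/2, with T the Tisserand parameter expressed in the same units.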
Consequ
|
https://en.wikipedia.org/wiki/Elmer%20McCollum
|
Elmer Verner McCollum (March 3, 1879 – November 15, 1967) was an American biochemist known for his work on the influence of diet on health. McCollum is also remembered for starting the first rat colony in the United States to be used for nutrition research. His reputation has suffered from posthumous controversy. Time magazine called him Dr. Vitamin. His rule was, "Eat what you want after you have eaten what you should."
Living at a time when vitamins were unknown, he asked and tried to answer the questions, "How many dietary essentials are there, and what are they?" He and Marguerite Davis discovered the first vitamin, named A, in 1913. McCollum also helped to discover vitamin B and vitamin D and worked out the effect of trace elements in the diet.
As a worker in Wisconsin and later at Johns Hopkins, McCollum acted partly at the request of the dairy industry. When he said that milk was "the greatest of all protective foods", milk consumption in the United States doubled between 1918 and 1928. McCollum also promoted leafy greens, which had no industry advocates.
McCollum wrote in his 1918 textbook that lacto vegetarianism is, "when the diet is properly planned, the most highly satisfactory plan which can be adopted in the nutrition of man".
Family and education
McCollum's ancestors immigrated to the United States from Scotland in 1763. McCollum was born in 1879 to Cornelius Armstrong McCollum and Martha Catherine Kidwell McCollum on a farm from Redfield, Kansas, usually reported to have been Fort Scott, Kansas, which was away. His parents had little education but became relatively well-off by local standards.
He spent his first seventeen years on this farm and attended a one-room school. He had one brother, Burton, and three sisters. At some point he had surgery for a detached retina, which the doctors were unable to "glue back again". His father suffered from tuberculosis. His mother, who had only two winters of schooling but was devoted to her children's educat
|
https://en.wikipedia.org/wiki/Pittsburgh%20Supercomputing%20Center
|
The Pittsburgh Supercomputing Center (PSC) is a high performance computing and networking center founded in 1986 and one of the original five NSF Supercomputing Centers. PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania, United States.
In addition to providing a family of Big Data-optimized supercomputers with unique shared memory architectures, PSC features the National Institutes of Health-sponsored National Resource for Biomedical Supercomputing, an Advanced Networking Group that conducts research on network performance and analysis, and a STEM education and outreach program supporting K-20 education. In 2012, PSC established a new Public Health Applications Group that will apply supercomputing resources to problems in preventing, monitoring and responding to epidemics and other public health needs.
Mission
The Pittsburgh Supercomputing Center provides university, government, and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data-handling and analysis available nationwide for unclassified research. As a resource provider in the Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation's network of integrated advanced digital resources, PSC works with its XSEDE partners to harness the full range of information technologies to enable discovery in U.S. science and engineering.
Partnerships
PSC is a leading partner in XSEDE. PSC-scientific co-director Ralph Roskies is a co-principal investigator of XSEDE and co-leads its Extended Collaborative Support Services. Other PSC staff lead XSEDE efforts in Networking, Incident Response, Systems & Software Engineering, Outreach, Allocations Coordination, and Novel & Innovative Projects. This NSF-funded program provides U.S. academic researchers with support for and access to leadership-class computing infrastructure and research.
The Natio
|
https://en.wikipedia.org/wiki/Causes%20of%20gender%20incongruence
|
Gender incongruence is the state of having a gender identity that does not correspond to one's sex assigned at birth. This is experienced by people who identify as transgender or transsexual, and often results in gender dysphoria. The causes of gender incongruence have been studied for decades.
Transgender brain studies, especially those on trans women attracted to women (gynephilic), and those on trans men attracted to men (androphilic), are limited, as they include only a small number of tested individuals. Studies conducted on twins suggest that there are likely genetic causes of gender incongruence, although the precise genes involved are not known or fully understood.
Biological factors
Genetics
A 2008 study compared the genes of 112 trans women who were mostly already undergoing hormone treatment, with 258 cisgender male controls. Trans women were more likely than cisgender males to have a longer version of a receptor gene (longer repetitions of the gene) for the sex hormone androgen, which reduced its effectiveness at binding testosterone. The androgen receptor (NR3C4) is activated by the binding of testosterone or dihydrotestosterone, where it plays a critical role in the forming of primary and secondary male sex characteristics. The research weakly suggests reduced androgen and androgen signaling contributes to trans women's identity. The authors say that a decrease in testosterone levels in the brain during development might prevent complete masculinization of trans women's brains, thereby causing a more feminized brain and a female gender identity.
A variant genotype for the CYP17 gene, which acts on the sex hormones pregnenolone and progesterone, has been found to be linked to transsexuality in trans men but not in trans women. Most notably, transmasculine subjects not only had the variant genotype more frequently, but had an allele distribution equivalent to cisgender male controls, unlike the cisgender female controls. The paper concluded that the
|
https://en.wikipedia.org/wiki/Apparent%20temperature
|
Apparent temperature, also known as "feels like", is the temperature equivalent perceived by humans, caused by the combined effects of air temperature, relative humidity and wind speed. The measure is most commonly applied to the perceived outdoor temperature. Apparent temperature was invented by Robert Steadman who published a paper about it in 1984. However, it also applies to indoor temperatures, especially saunas, and when houses and workplaces are not sufficiently heated or cooled.
The heat index and humidex measure the effect of humidity on the perception of temperatures above . In humid conditions, the air feels much hotter, because less perspiration evaporates from the skin.
The wind chill factor measures the effect of wind speed on cooling of the human body below . As airflow increases over the skin, more heat will be removed. Standard models and conditions are used.
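For example, one widely used formula (the wind chill index adopted in North America in 2001, quoted here with the air temperature Ta in °C and the wind speed v in km/h) is
Twc = 13.12 + 0.6215 Ta − 11.37 v^0.16 + 0.3965 Ta v^0.16.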
The wet-bulb globe temperature (WBGT) combines the effects of radiation (typically sunlight), humidity, temperature and wind speed on the perception of temperature. It is not often used, since its measurement requires the use of a globe thermometer exposed to the sun, which is not included in standard meteorological equipment used in official weather conditions reporting (nor are, in most cases, any other explicit means of measuring solar radiation; temperature measurement takes place entirely in a shade box to avoid direct solar effects). It also does not have an explicit relationship with the perceived temperature a person feels; when used for practical purposes, the WBGT is linked to a category system to estimate the threat of heat-related illness.
Since there is no direct measurement of solar radiation in U.S. observation systems, and solar radiation can add up to to the apparent temperature, commercial weather companies have attempted to develop their own proprietary apparent temperature systems, including The Weather Company's "FeelsLike" and AccuWeather's "RealFeel". These systems,
|
https://en.wikipedia.org/wiki/The%20Sleuth%20Kit
|
The Sleuth Kit (TSK) is a library and collection of Unix- and Windows-based utilities for extracting data from disk drives and other storage so as to facilitate the forensic analysis of computer systems. It forms the foundation for Autopsy, a better known tool that is essentially a graphical user interface to the command line utilities bundled with The Sleuth Kit.
The collection is open source and protected by the GPL, the CPL and the IPL. The software is under active development and is supported by a team of developers. The initial development was done by Brian Carrier, who based it on The Coroner's Toolkit. It is the official successor platform to The Coroner's Toolkit.
The Sleuth Kit is capable of parsing NTFS, FAT/ExFAT, UFS 1/2, Ext2, Ext3, Ext4, HFS, ISO 9660 and YAFFS2 file systems either separately or within disk images stored in raw (dd), Expert Witness or AFF formats. The Sleuth Kit can be used to examine most Microsoft Windows, most Apple Macintosh OSX, many Linux and some other UNIX computers.
The Sleuth Kit can be used via the included command line tools, or as a library embedded within a separate digital forensic tool such as Autopsy or log2timeline/plaso.
Tools
Some of the tools included in The Sleuth Kit include:
ils lists all metadata entries, such as an Inode.
blkls displays data blocks within a file system (formerly called dls).
fls lists allocated and unallocated file names within a file system.
fsstat displays file system statistical information about an image or storage medium.
ffind searches for file names that point to a specified metadata entry.
mactime creates a timeline of all files based upon their MAC times.
disk_stat (currently Linux-only) discovers the existence of a Host Protected Area.
Applications
The Sleuth Kit can be used
for forensic investigations, its main purpose
for understanding what data is stored on a disk drive, even if the operating system has removed all metadata.
for recovering deleted image files
summarizing all deleted f
|
https://en.wikipedia.org/wiki/Tag%20SNP
|
A tag SNP is a representative single nucleotide polymorphism (SNP) in a region of the genome with high linkage disequilibrium that represents a group of SNPs called a haplotype. It is possible to identify genetic variation and association to phenotypes without genotyping every SNP in a chromosomal region. This reduces the expense and time of mapping genome areas associated with disease, since it eliminates the need to study every individual SNP. Tag SNPs are useful in whole-genome SNP association studies in which hundreds of thousands of SNPs across the entire genome are genotyped.
Introduction
Linkage Disequilibrium
Two loci are said to be in linkage equilibrium (LE) if they are inherited independently. If the alleles at those loci are non-randomly inherited, then we say that they are in linkage disequilibrium (LD). LD is most commonly caused by physical linkage of genes. When two genes are inherited on the same chromosome, they can be in high LD, depending on their distance and the likelihood of recombination between the loci. However, LD can also be observed due to functional interactions, where even genes from different chromosomes can jointly confer an evolutionarily selected phenotype or can affect the viability of potential offspring.
In families LD is highest because of the lowest numbers of recombination events (fewest meiosis events). This is especially true between inbred lines. In populations LD exists because of selection, physical closeness of the genes that causes low recombination rates or due to recent crossing or migration. On a population level, processes that influence linkage disequilibrium include genetic linkage, epistatic natural selection, rate of recombination, mutation, genetic drift, random mating, genetic hitchhiking and gene flow.
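In the usual notation for two biallelic loci, with pA and pB the allele frequencies and pAB the frequency of the AB haplotype, the strength of LD is commonly quantified as
D = pAB − pA pB and r² = D² / [pA(1 − pA) pB(1 − pB)],
with r² = 1 corresponding to perfect LD, the case in which one SNP fully tags the other.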
When a group of SNPs are inherited together because of high LD there tends to be redundant information. The selection of a tag SNP as a representative of these groups reduces the amount of redundan
|
https://en.wikipedia.org/wiki/History%20of%20string%20theory
|
The history of string theory spans several decades of intense research including two superstring revolutions. Through the combined efforts of many researchers, string theory has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, cosmology, and pure mathematics.
1943–1959: S-matrix theory
String theory represents an outgrowth of S-matrix theory, a research program begun by Werner Heisenberg in 1943 following John Archibald Wheeler's 1937 introduction of the S-matrix. Many prominent theorists picked up and advocated S-matrix theory, starting in the late 1950s and throughout the 1960s. The field became marginalized and discarded in the mid 1970s and disappeared in the 1980s. Physicists neglected it because some of its mathematical methods were alien, and because quantum chromodynamics supplanted it as an experimentally better-qualified approach to the strong interactions.
The theory presented a radical rethinking of the foundations of physical laws. By the 1940s it had become clear that the proton and the neutron were not pointlike particles like the electron. Their magnetic moment differed greatly from that of a pointlike spin-½ charged particle, too much to attribute the difference to a small perturbation. Their interactions were so strong that they scattered like a small sphere, not like a point. Heisenberg proposed that the strongly interacting particles were in fact extended objects, and because there are difficulties of principle with extended relativistic particles, he proposed that the notion of a space-time point broke down at nuclear scales.
Without space and time, it becomes difficult to formulate a physical theory. Heisenberg proposed a solution to this problem: focusing on the observable quantities—those things measurable by experiments. An experiment only sees a microscopic quantity if it can be transferred by a series of events to the classical devices that surround the experimental c
|
https://en.wikipedia.org/wiki/R-factor%20%28crystallography%29
|
In crystallography, the R-factor (sometimes called residual factor or reliability factor or the R-value or RWork) is a measure of the agreement between the crystallographic model and the experimental X-ray diffraction data. In other words, it is a measure of how well the refined structure predicts the observed data. The value is also sometimes called the discrepancy index, as it mathematically describes the difference between the experimental observations and the ideal calculated values. It is defined by the following equation:
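In the customary notation, with Fobs and Fcalc the observed and calculated structure factor amplitudes, the definition reads
R = Σ | |Fobs| − |Fcalc| | / Σ |Fobs|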
where F is the so-called structure factor and the sum extends over all the reflections of X-rays measured and their calculated counterparts respectively. The structure factor is closely related to the intensity of the reflection it describes:
Ihkl ∝ |Fhkl|².
The minimum possible value is zero, indicating perfect agreement between experimental observations and the structure factors predicted from the model. There is no theoretical maximum, but in practice, values are considerably less than one even for poor models, provided the model includes a suitable scale factor. Random experimental errors in the data contribute to R even for a perfect model, and these have more leverage when the data are weak or few, such as for a low-resolution data set. Model inadequacies such as incorrect or missing parts and unmodeled disorder are the other main contributors to R, making it useful to assess the progress and final result of a crystallographic model refinement. For large molecules, the R-factor usually ranges between 0.6 (when computed for a random model and against an experimental data set) and 0.2 (for example for a well refined macro-molecular model at a resolution of 2.5 Ångström). Small molecules (up to ca. 1000 atoms) usually form better-ordered crystals than large molecules, and thus it is possible to attain lower R-factors. In the Cambridge Structural Database of small-molecule structures, more than 95% of the 500,000+ crystals have an R-factor lower tha
|
https://en.wikipedia.org/wiki/RSS%20Advisory%20Board
|
The RSS Advisory Board is a group founded in July 2003 that publishes the RSS 0.9, RSS 0.91 and RSS 2.0 specifications and helps developers create RSS applications.
Dave Winer, the lead author of several RSS specifications and a longtime evangelist of syndication, created the board to maintain the RSS 2.0 specification in cooperation with Harvard's Berkman Center.
In January 2006, RSS Advisory Board chairman Rogers Cadenhead announced that eight new members had joined the group, continuing the development of the RSS format and resolving ambiguities in the RSS 2.0 specification. Netscape developer Christopher Finke joined the board in March 2007, the company's first involvement in RSS since the publication of RSS 0.91.
In June 2007, the board revised its version of the specification to confirm that namespaces may extend core elements with namespace attributes, as Microsoft has done in Internet Explorer 7. In its view, a difference of interpretation left publishers unsure of whether this was permitted or forbidden.
In January 2008, Netscape announced that the RSS 0.9 and RSS 0.91 specifications, document type definitions and related documentation that it had published since their creation in 1999 were moving to the board.
Yahoo transferred the Media RSS specification to the board in December 2009.
Current members
Rogers Cadenhead
Sterling Camden
Simone Carletti
James Holderness
Jenny Levine
Eric Lunt
Randy Charles Morin
Ryan Parman
Paul Querna
Jake Savin
Jason Shellen
|
https://en.wikipedia.org/wiki/Helimagnetism
|
Helimagnetism is a form of magnetic ordering where spins of neighbouring magnetic moments arrange themselves in a spiral or helical pattern, with a characteristic turn angle of somewhere between 0 and 180 degrees. It results from the competition between ferromagnetic and antiferromagnetic exchange interactions. It is possible to view ferromagnetism and antiferromagnetism as helimagnetic structures with characteristic turn angles of 0 and 180 degrees respectively. Helimagnetic order breaks spatial inversion symmetry, as it can be either left-handed or right-handed in nature.
Strictly speaking, helimagnets have no permanent magnetic moment, and as such are sometimes considered a complicated type of antiferromagnet. This distinguishes helimagnets from conical magnets (e.g. holmium below 20 K), which have spiral modulation in addition to a permanent magnetic moment.
Helimagnetism was first proposed in 1959, as an explanation of the magnetic structure of manganese dioxide. Initially applied to neutron diffraction, it has since been observed more directly by Lorentz electron microscopy. Some helimagnetic structures are reported to be stable up to room temperature. Like how ordinary ferromagnets have domain walls that separate individual magnetic domains, helimagnets have their own classes of domain walls which are characterized by topological charge.
Many helimagnets have a chiral cubic structure, such as the FeSi (B20) crystal structure type. In these materials, the combination of ferromagnetic exchange and the Dzyaloshinskii–Moriya interaction leads to helices with relatively long periods. Since the crystal structure is noncentrosymmetric even in the paramagnetic state, the magnetic transition to a helimagnetic state does not break inversion symmetry, and the direction of the spiral is locked to the crystal structure.
On the other hand, helimagnetism in other materials can also be based on frustrated magnetism or the RKKY interaction. The result is that centrosymmet
|
https://en.wikipedia.org/wiki/Mictomagnetism
|
Mictomagnetism is a spin system in which various exchange interactions are mixed. It is observed in several kinds of alloys, including Cu-Mn, Fe-Al and Ni-Mn alloys. Cooled in zero magnetic field, these materials have low remanence and coercivity. Cooled in a magnetic field, they have much larger remanence and the hysteresis loop is shifted in the direction opposite to the field (an effect similar to exchange bias).
|
https://en.wikipedia.org/wiki/AOAC%20International
|
AOAC International is a 501(c) non-profit scientific association with headquarters in Rockville, Maryland. It was founded in 1884 as the Association of Official Agricultural Chemists (AOAC) and became AOAC International in 1991. It publishes standardized chemical analysis methods designed to increase confidence in the results of chemical and microbiological analyses. Government agencies and civil organizations often require that laboratories use official AOAC methods. AOAC is headquartered in Rockville, Maryland, and has approximately 3,000 members based in over 90 countries.
History
AOAC International, informally the AOAC, was founded September 8, 1884, as the Association of Official Agricultural Chemists, by the United States Department of Agriculture (USDA), to establish uniform chemical analysis methods for analyzing fertilizers. In 1927, sponsorship was moved to the newly formed Food, Drug and Insecticide organization, which became the Food and Drug Administration (FDA) in 1930.
From its initial scope of analyzing fertilizer, the organization expanded the contents of its methods book to cover dairy products, pesticides, microbiological contamination and animal feeds, among others. In 1965, due to its increasing area of focus for analytical work, the name was changed to the Association of Official Analytical Chemists. The name was changed again to the Association of Analytical Communities to reflect the growing international involvement, and then in 1991 it became AOAC International, with AOAC no longer having any legal meaning. Control of the organization remained with the FDA until 1979 when it became completely independent, although it still has close links to both the FDA and the USDA.
Full membership was limited to government analytical chemists until 1987 when membership was extended to industrial scientists. As well as government agencies, members, volunteers and partners now also include people from academia, other international organizations, private
|
https://en.wikipedia.org/wiki/Setcontext
|
setcontext is one of a family of C library functions (the others being getcontext, makecontext and swapcontext) used for context control. The setcontext family allows the implementation in C of advanced control flow patterns such as iterators, fibers, and coroutines. They may be viewed as an advanced version of setjmp/longjmp; whereas the latter allows only a single non-local jump up the stack, setcontext allows the creation of multiple cooperative threads of control, each with its own stack.
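A minimal sketch of this style of cooperative control transfer is shown below; the names co_body and co_stack and the 16 KiB stack size are illustrative choices rather than anything required by the interface.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;   /* the two cooperating contexts */
static char co_stack[16384];          /* stack for the second context */

static void co_body(void)
{
    puts("in coroutine");
    swapcontext(&co_ctx, &main_ctx);  /* yield back to main */
    puts("coroutine resumed");
    /* returning here continues in uc_link, i.e. main_ctx */
}

int main(void)
{
    getcontext(&co_ctx);              /* initialize from the current context */
    co_ctx.uc_stack.ss_sp   = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link          = &main_ctx;
    makecontext(&co_ctx, co_body, 0); /* run co_body on the new stack */

    swapcontext(&main_ctx, &co_ctx);  /* switch to the coroutine */
    puts("back in main");
    swapcontext(&main_ctx, &co_ctx);  /* resume it once more */
    puts("done");
    return 0;
}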
Specification
setcontext was specified in POSIX.1-2001 and the Single Unix Specification, version 2, but not all Unix-like operating systems provide them. POSIX.1-2004 obsoleted these functions, and in POSIX.1-2008 they were removed, with POSIX Threads indicated as a possible replacement. Citing IEEE Std 1003.1, 2004 Edition: With the incorporation of the ISO/IEC 9899:1999 standard into this specification it was found that the ISO C standard (Subclause 6.11.6) specifies that the use of function declarators with empty parentheses is an obsolescent feature. Therefore, using the function prototype:
void makecontext(ucontext_t *ucp, void (*func)(), int argc, ...);
is making use of an obsolescent feature of the ISO C standard. Therefore, a strictly conforming POSIX application cannot use this form. Therefore, use of getcontext(), makecontext(), and swapcontext() is marked obsolescent.
There is no way in the ISO C standard to specify a non-obsolescent function prototype indicating that a function will be called with an arbitrary number (including zero) of arguments of arbitrary types (including integers, pointers to data, pointers to functions, and composite types).
Definitions
The functions and associated types are defined in the ucontext.h system header file. This includes the ucontext_t type, with which all four functions operate:
typedef struct {
ucontext_t *uc_link;
sigset_t uc_sigmask;
stack_t uc_stack;
mcontext_t uc_mcontext;
...
} ucontext_t;
uc_link poin
|
https://en.wikipedia.org/wiki/TuVox
|
TuVox is a company that produces VXML-based telephone speech-recognition applications to replace DTMF touch-tone systems for their clients.
History
TuVox was founded in 2001 by Steven S. Pollock and Ashok Khosla, formerly of Apple Computer Corporation and Claris Corporation. Since then, TuVox has grown to over 150 employees and has US offices in Cupertino, California and Boca Raton, Florida as well as international offices in London, Vancouver and Sydney. In 2005, TuVox acquired the customers and hosting facilities of Net-By-Tel. In 2007, the company raised $20m for its speech recognition and phone menu software.
On July 22, 2010, West Interactive — a subsidiary of West Corporation — announced its acquisition of TuVox.
Customers
TuVox clients include 1-800-Flowers.com, AMC Entertainment, American Airlines, British Airways, M&T Bank, Canon Inc., Gateway, Inc., Motorola, Progress Energy Inc., Telecom New Zealand, Time, Inc., BECU, Virgin America and USAA.
|
https://en.wikipedia.org/wiki/Stable%20isotope%20labeling%20by%20amino%20acids%20in%20cell%20culture
|
Stable Isotope Labeling by/with Amino acids in Cell culture (SILAC) is a technique based on mass spectrometry that detects differences in protein abundance among samples using non-radioactive isotopic labeling. It is a popular method for quantitative proteomics.
Procedure
Two populations of cells are cultivated in cell culture. One of the cell populations is fed with growth medium containing normal amino acids. In contrast, the second population is fed with growth medium containing amino acids labeled with stable (non-radioactive) heavy isotopes. For example, the medium can contain arginine labeled with six carbon-13 atoms (13C) instead of the normal carbon-12 (12C). When the cells are growing in this medium, they incorporate the heavy arginine into all of their proteins. Thereafter, all peptides containing a single arginine are 6 Da heavier than their normal counterparts. Alternatively, uniform labeling with 13C or 15N can be used. Proteins from both cell populations are combined and analyzed together by mass spectrometry. Pairs of chemically identical peptides of different stable-isotope composition can be differentiated in a mass spectrometer owing to their mass difference. The ratio of peak intensities in the mass spectrum for such peptide pairs reflects the abundance ratio for the two proteins.
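The 6 Da figure follows directly from the isotope masses (rounded values used here): Δm = 6 × (13.00335 − 12.00000) Da ≈ 6.02 Da per 13C6-arginine residue incorporated.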
Applications
A SILAC approach involving incorporation of tyrosine labeled with nine carbon-13 atoms (13C) instead of the normal carbon-12 (12C) has been utilized to study tyrosine kinase substrates in signaling pathways. SILAC has emerged as a very powerful method to study cell signaling, post translation modifications such as phosphorylation, protein–protein interaction and regulation of gene expression. In addition, SILAC has become an important method in secretomics, the global study of secreted proteins and secretory pathways. It can be used to distinguish between proteins secreted by cells in culture and serum contaminants. Standardized protocols of SILAC for
|
https://en.wikipedia.org/wiki/Nuphar%20advena
|
Nuphar advena (spatterdock or cow lily or yellow pond-lily) is a species of Nuphar native throughout the eastern United States and in some parts of Canada, such as Nova Scotia. It is similar to the Eurasian species N. lutea, and is treated as a subspecies of it by some botanists, though differing significantly in genetics.
It is locally naturalized in Britain.
Uses
Spatterdock was long used in traditional medicine, with the root applied to the skin and/or both the root and seeds eaten for a variety of conditions. The seeds are edible, and can be ground into flour. The root is edible too, but can prove to be incredibly bitter in some plants.
|
https://en.wikipedia.org/wiki/Pro-simplicial%20set
|
In mathematics, a pro-simplicial set is an inverse system of simplicial sets.
A pro-simplicial set is called pro-finite if each term of the inverse system of simplicial sets has finite homotopy groups.
Pro-simplicial sets show up in shape theory, in the study of localization and completion in homotopy theory, and in the study of homotopy properties of schemes (e.g. étale homotopy theory).
|
https://en.wikipedia.org/wiki/Salicin
|
Salicin is an alcoholic β-glucoside. Salicin is produced in (and named after) willow (Salix) bark. It is a biosynthetic precursor to salicylaldehyde.
Medicinal aspects
Salicin is found in the bark of and leaves of willows, poplars and various other plants. Derivatives are found in castoreum. Salicin from meadowsweet was used in the synthesis of aspirin (acetylsalicylic acid), in 1899 by scientists at Bayer. Salicin tastes bitter like quinine.
Salicin may cause an allergic skin reaction (skin sensitization; category 1).
Mild side effects are standard, with rare occurrences of nausea, vomiting, rash, dizziness and breathing problems. Overdose from high quantities of salicin can be toxic, damaging kidneys, causing stomach ulcers, diarrhea, bleeding or digestive discomfort. Some people may be allergic or sensitive to salicylates and suffer reactions similar to those produced by aspirin. People should not take salicin if they have asthma, diabetes, gout, gastritis, hemophilia, stomach ulcers; also contraindicated are children under 16, and pregnant and breastfeeding women.
|
https://en.wikipedia.org/wiki/Interstellar%20%28film%29
|
Interstellar is a 2014 epic science fiction film co-written, directed, and produced by Christopher Nolan. It stars Matthew McConaughey, Anne Hathaway, Jessica Chastain, Bill Irwin, Ellen Burstyn, Matt Damon, and Michael Caine. Set in a dystopian future where humanity is embroiled in a catastrophic blight and famine, the film follows a group of astronauts who travel through a wormhole near Saturn in search of a new home for humankind.
Brothers Christopher and Jonathan Nolan wrote the screenplay, which had its origins in a script Jonathan developed in 2007 and was originally set to be directed by Steven Spielberg. Kip Thorne, a Caltech theoretical physicist and 2017 Nobel laureate in Physics, was an executive producer, acted as a scientific consultant, and wrote a tie-in book, The Science of Interstellar. Cinematographer Hoyte van Hoytema shot it on 35 mm movie film in the Panavision anamorphic format and IMAX 70 mm. Principal photography began in late 2013 and took place in Alberta, Iceland, and Los Angeles. Interstellar uses extensive practical and miniature effects, and the company Double Negative created additional digital effects.
Interstellar premiered in Los Angeles on October 26, 2014. In the United States, it was first released on film stock, expanding to venues using digital projectors. The film received generally positive reviews from critics and grossed over $677 million worldwide ($715 million after subsequent re-releases), making it the tenth-highest-grossing film of 2014. It has been praised by astronomers for its scientific accuracy and portrayal of theoretical astrophysics. Interstellar was nominated for five awards at the 87th Academy Awards, winning Best Visual Effects, and received numerous other accolades.
Plot
In 2067, humanity is facing extinction following a global famine caused by ecocide. As a result, ex-NASA pilot, Joseph Cooper, is forced to work as a farmer, with his son Tom, daughter Murph and father-in-law Donald.
After odd occurren
|
https://en.wikipedia.org/wiki/VMware%20Workstation%20Player
|
VMware Workstation Player, formerly VMware Player, is a virtualization software package for x64 computers running Microsoft Windows or Linux, supplied free of charge by VMware, Inc. VMware Player can run existing virtual appliances and create its own virtual machines (which require that an operating system be installed to be functional). It uses the same virtualization core as VMware Workstation, a similar program with more features, which is not free of charge. VMware Player is available for personal non-commercial use, or for distribution or other use by written agreement. VMware, Inc. does not formally support Player, but there is an active community website for discussing and resolving issues, as well as a knowledge base.
The free VMware Player was distinct from VMware Workstation until Player v7, Workstation v11. In 2015 the two packages were combined as VMware Workstation 12, with a free for non-commercial use Player version which, on purchase of a license code, either became the higher-specification VMware Workstation Pro, or allowed commercial use of Player.
Features
VMware claimed in 2011 that the Player offered better graphics, faster performance, and tighter integration for running Windows XP under Windows Vista or Windows 7 than Microsoft's Windows XP Mode running on Windows Virtual PC, which is free of charge for all purposes.
Versions earlier than 3 of VMware Player were unable to create virtual machines (VMs), which had to be created by an application with the capability, or created manually by statements stored in a text file with extension ".vmx"; later versions can create VMs. The features of Workstation not available in Player are "developer-centric features such as Teams, multiple Snapshots and Clones, and Virtual Rights Management features for end-point security", and support by VMware. Player allows a complete virtual machine to be copied at any time by copying a directory; while not a fully featured snapshot facility, this allows a copy of
|
https://en.wikipedia.org/wiki/Bioreactor%20landfill
|
Landfills are the primary method of waste disposal in many parts of the world, including the United States and Canada. Bioreactor landfills are expected to reduce the amount of leachate and the costs associated with managing it, to increase the rate of production of methane (natural gas) for commercial purposes, and to reduce the amount of land required for landfills. In bioreactor landfills, oxygen and moisture levels are monitored and manipulated to increase the rate of decomposition by microbial activity.
Traditional landfills and associated problems
Landfills are the oldest known method of waste disposal. Waste is buried in large dug out pits (unless naturally occurring locations are available) and covered. Bacteria and archaea decompose the waste over several decades producing several by-products of importance, including methane gas (natural gas), leachate, and volatile organic compounds (such as hydrogen sulfide (H2S), N2O2, etc.).
Methane gas, a strong greenhouse gas, can build up inside the landfill leading to an explosion unless released from the cell. Leachate consists of fluid metabolic products of decomposition and contains various types of toxins and dissolved metallic ions. If leachate escapes into the ground water, it can cause health problems in both animals and plants. The volatile organic compounds (VOCs) are associated with causing smog and acid rain. With the increasing amount of waste produced, appropriate places to safely store it have become difficult to find.
Working of a bioreactor landfill
There are three types of bioreactor: aerobic, anaerobic and a hybrid (using both aerobic and anaerobic method). All three mechanisms involve the reintroduction of collected leachate supplemented with water to maintain moisture levels in the landfill. The micro-organisms responsible for decomposition are thus stimulated to decompose at an increased rate with an attempt to minimise harmful emissions.
In aerobic bioreactors air is pumped into the landfill using either ve
|
https://en.wikipedia.org/wiki/Equal-cost%20multi-path%20routing
|
Equal-cost multi-path routing (ECMP) is a routing strategy where packet forwarding to a single destination can occur over multiple best paths with equal routing priority. Multi-path routing can be used in conjunction with most routing protocols because it is a per-hop local decision made independently at each router. It can substantially increase bandwidth by load-balancing traffic over multiple paths; however, there may be significant problems in deploying it in practice.
History
Load balancing by per-packet multipath routing was generally disfavored due to the impact of rapidly changing latency, packet reordering and maximum transmission unit (MTU) differences within a network flow, which could disrupt the operation of many Internet protocols, most notably TCP and path MTU discovery. RFC 2992 analyzed one particular multipath routing strategy involving the assignment of flows through hashing flow-related data in the packet header. This solution is designed to avoid these problems by sending all packets from any particular network flow through the same path while balancing multiple flows over multiple paths in general.
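A minimal sketch of the general idea is given below, using a toy FNV-1a hash over the flow's 5-tuple; it illustrates flow-to-path pinning in general rather than the specific hash-threshold scheme analyzed in RFC 2992.

#include <stddef.h>
#include <stdint.h>

/* Illustrative flow key: the classic 5-tuple. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* One FNV-1a pass over a buffer. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = (const uint8_t *)data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Hash the 5-tuple field by field (avoiding struct padding) and reduce
   modulo the number of equal-cost next hops.  Every packet of a given
   flow hashes to the same value, so the whole flow follows one path and
   reordering within the flow is avoided. */
static unsigned pick_next_hop(const struct flow_key *k, unsigned n_paths)
{
    uint32_t h = 2166136261u;            /* FNV offset basis */
    h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
    h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
    h = fnv1a(h, &k->src_port, sizeof k->src_port);
    h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
    h = fnv1a(h, &k->protocol, sizeof k->protocol);
    return h % n_paths;
}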
See also
Link aggregation
Shortest Path Bridging – establishes multiple forward and reverse paths on Ethernet networks.
Source routing
TRILL – enables per-flow pair-wise load splitting without configuration and user intervention.
|
https://en.wikipedia.org/wiki/Quantum%20mutual%20information
|
In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of a quantum state. It is the quantum mechanical analog of Shannon mutual information.
Motivation
For simplicity, it will be assumed that all objects in the article are finite-dimensional.
The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables p(x, y), the two marginal distributions are
The classical mutual information I(X:Y) is defined by
where S(q) denotes the Shannon entropy of the probability distribution q.
One can calculate directly
So the mutual information is
The logarithm is taken in base 2 to obtain the mutual information in bits. This is precisely the relative entropy between p(x, y) and p(x)p(y). In other words, if we assume the two variables x and y to be uncorrelated, mutual information is the discrepancy in uncertainty resulting from this (possibly erroneous) assumption.
It follows from the property of relative entropy that I(X:Y) ≥ 0 and equality holds if and only if p(x, y) = p(x)p(y).
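In symbols, with p1 and p2 denoting the two marginals referred to above:
p1(x) = Σ_y p(x, y), p2(y) = Σ_x p(x, y),
I(X:Y) = S(p1) + S(p2) − S(p) = Σ_{x,y} p(x, y) log2 [ p(x, y) / (p1(x) p2(y)) ].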
Definition
The quantum mechanical counterparts of classical probability distributions are density matrices.
Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then the tensor product of the spaces for the two parts.
Let ρAB be a density matrix acting on states in HAB. The von Neumann entropy of a density matrix, S(ρ), is the quantum mechanical analogue of the Shannon entropy.
For a probability distribution p(x,y), the marginal distributions are obtained by integrating away the variables x or y. The corresponding operation for density matrices is the partial trace. So one can assign to ρ a state on the subsystem A as follows.
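In the standard notation for the partial trace, this assignment reads
ρA = TrB ρAB,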
where TrB is partial trace with respect to system B. This
|
https://en.wikipedia.org/wiki/History%20of%20biotechnology
|
Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services. From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects failed to progress due to varying issues including public resistance, a changing economic scene, and shifts in political power.
Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and the intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology. By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s, biotechnology grew into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.
The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. W
|
https://en.wikipedia.org/wiki/Digital%20photo%20frame
|
A digital photo frame (also called a digital media frame) is a picture frame that displays digital photos without the need of a computer or printer. The introduction of digital photo frames predates tablet computers, which can serve the same purpose in some situations; however, digital photo frames are generally designed specifically for the stationary, aesthetic display of photographs and therefore usually provide a nicer-looking frame and a power system designed for continuous use.
Digital photo frames come in a variety of different shapes and sizes with a range of features. Some may even play videos as well as display photographs. Owners can choose a digital photo frame that utilizes a WiFi connection or not, comes with cloud storage, and/or USB and SD card hub.
Features
Digital photo frames range in size from tiny keychain-sized units to large wall-mounted frames spanning several feet. The most common sizes range from to . Some digital photo frames can only display JPEG pictures. Most digital photo frames display the photos as a slideshow and usually with an adjustable time interval. They may also be able to send photos to a printer, or have hybrid features. Examples are the Sony S-Frame F800, that has an integrated printer on its back, or the Epson PictureMate Show.
Digital photo frames typically allow the display of pictures directly from a camera's memory card, and may provide internal memory storage. Some allow users to upload pictures to the frame's memory via a USB connection, or wirelessly via Bluetooth technology. Others include support for wireless (802.11) connections or use cellular technology to transfer and share files. Some frames allow photos to be shared from a frame to another.
Certain frames provide specific application support such as loading images over the Internet from RSS feeds, photo sharing sites such as Flickr, Picasa and from e-mail.
Built-in speakers are common for playing video content with sound, and many frames also feature
|
https://en.wikipedia.org/wiki/Regular%20homotopy
|
In the mathematical field of topology, a regular homotopy refers to a special kind of homotopy between immersions of one manifold in another. The homotopy must be a 1-parameter family of immersions.
Similar to homotopy classes, one defines two immersions to be in the same regular homotopy class if there exists a regular homotopy between them. Regular homotopy for immersions is similar to isotopy of embeddings: they are both restricted types of homotopies. Stated another way, two continuous functions are homotopic if they represent points in the same path-components of the mapping space , given the compact-open topology. The space of immersions is the subspace of consisting of immersions, denoted by . Two immersions are regularly homotopic if they represent points in the same path-component of .
Examples
Any two knots in 3-space are equivalent by regular homotopy, though not by isotopy.
The Whitney–Graustein theorem classifies the regular homotopy classes of a circle into the plane; two immersions are regularly homotopic if and only if they have the same turning number – equivalently, total curvature; equivalently, if and only if their Gauss maps have the same degree/winding number.
Stephen Smale classified the regular homotopy classes of a k-sphere immersed in – they are classified by homotopy groups of Stiefel manifolds, which is a generalization of the Gauss map, with here k partial derivatives not vanishing. More precisely, the set of regular homotopy classes of embeddings of sphere in is in one-to-one correspondence with elements of group . In case we have . Since is path connected, and and due to Bott periodicity theorem we have and since then we have . Therefore all immersions of spheres and in euclidean spaces of one more dimension are regular homotopic. In particular, spheres embedded in admit eversion if . A corollary of his work is that there is only one regular homotopy class of a 2-sphere immersed in . In particular, this means t
|
https://en.wikipedia.org/wiki/Pierre%20Robin%20sequence
|
Pierre Robin sequence (; abbreviated PRS) is a congenital defect observed in humans which is characterized by facial abnormalities. The three main features are micrognathia (abnormally small mandible), which causes glossoptosis (downwardly displaced or retracted tongue), which in turn causes breathing problems due to obstruction of the upper airway. A wide, U-shaped cleft palate is commonly also present. PRS is not merely a syndrome, but rather it is a sequence—a series of specific developmental malformations which can be attributed to a single cause.
Signs and symptoms
PRS is characterized by an unusually small mandible, posterior displacement or retraction of the tongue, and upper airway obstruction. Cleft palate (incomplete closure of the roof of the mouth) is present in the majority of patients. Hearing loss and speech difficulty are often associated with PRS.
Causes
Mechanical basis
The physical craniofacial deformities of PRS may be the result of a mechanical problem in which intrauterine growth of certain facial structures is restricted, or mandibular positioning is altered. One theory for the etiology of PRS is that, early in the first trimester of gestation, some mechanical factor causes the neck to be abnormally flexed such that the tip of the mandible becomes compressed against the sternoclavicular joint. This compression of the chin interferes with development of the body of the mandible, resulting in micrognathia. The concave space formed by the body of the hypoplastic mandible is too small to accommodate the tongue, which continues to grow unimpeded. With nowhere else to go, the base of the tongue is downwardly displaced, which causes the tip of the tongue to be interposed between the left and right palatal shelves. This in turn may result in failure of the left and right palatal shelves to fuse in the midline to form the hard palate. This condition manifests as a cleft palate. Later in gestation (at around 12 to 14 weeks), extension of the neck of
|