https://en.wikipedia.org/wiki/AI%20winter
|
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). Roger Schank and Marvin Minsky—two leading AI researchers who had experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. Three years later, the billion-dollar AI industry began to collapse.
There were two major winters, approximately 1974–1980 and 1987–2000, and several smaller episodes, including the following:
1966: failure of machine translation
1969: criticism of perceptrons (early, single-layer artificial neural networks)
1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
1973–74: DARPA's cutbacks to academic AI research in general
1987: collapse of the LISP machine market
1988: cancellation of new spending on AI by the Strategic Computing Initiative
1990s: many expert systems were abandoned
1990s: end of the Fifth Generation computer project's original goals
Enthusiasm and optimism about AI have generally increased since its low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporat
|
https://en.wikipedia.org/wiki/VLSI%20Project
|
The VLSI Project was a DARPA program initiated by Robert Kahn in 1978 that provided research funding to a wide variety of university-based teams in an effort to improve the state of the art in microprocessor design, then known as Very Large Scale Integration (VLSI).
The VLSI Project is one of the most influential research projects in modern computer history. Its offspring include Berkeley Software Distribution (BSD) Unix, the reduced instruction set computer (RISC) processor concept, many computer-aided design (CAD) tools still in use today, 32-bit graphics workstations, fabless manufacturing and design houses, and its own semiconductor fabrication plant (fab), MOSIS, starting in 1981. A similar DARPA project partnering with industry, VHSIC, had little or no impact.
The VLSI Project was central in promoting the Mead and Conway revolution throughout industry.
Project
New design rules
In 1975, Carver Mead, Tom Everhart and Ivan Sutherland of Caltech wrote a report for ARPA on the topic of microelectronics. Over the previous few years, Mead had coined the term "Moore's law" to describe Gordon Moore's 1965 prediction for the growth rate of complexity, and in 1974, Robert Dennard of IBM noted that the scale shrinking that formed the basis of Moore's law also affected the performance of the systems. These combined effects implied a massive increase in computing power was about to be unleashed on the industry. The report, published in 1976, suggested that ARPA fund development across a number of fields in order to deal with the complexity that was about to appear due to these "very-large-scale integrated circuits".
Later that year, Sutherland wrote a letter to his brother Bert who was at that time working at Xerox PARC. He suggested a joint effort between PARC and Caltech to begin studying these issues. Bert agreed to form a team, inviting Lynn Conway and Doug Fairbairn to join. Conway had previously worked at IBM on a supercomputer project known as ACS-1. After consid
|
https://en.wikipedia.org/wiki/Stanford%20MIPS
|
MIPS, an acronym for Microprocessor without Interlocked Pipeline Stages, was a research project conducted by John L. Hennessy at Stanford University between 1981 and 1984. MIPS investigated a type of instruction set architecture (ISA) now called reduced instruction set computer (RISC), its implementation as a microprocessor with very large scale integration (VLSI) semiconductor technology, and the effective exploitation of RISC architectures with optimizing compilers. MIPS, together with the IBM 801 and Berkeley RISC, was one of the three research projects that pioneered and popularized RISC technology in the mid-1980s. In recognition of the impact MIPS made on computing, Hennessy was awarded the IEEE John von Neumann Medal in 2000 by the Institute of Electrical and Electronics Engineers (IEEE) (shared with David A. Patterson), the Eckert–Mauchly Award in 2001 by the Association for Computing Machinery, the Seymour Cray Computer Engineering Award in 2001 by the IEEE Computer Society, and, again with David Patterson, the Turing Award in 2017 by the ACM.
The project was initiated in 1981 in response to reports of similar projects at IBM (the 801) and the University of California, Berkeley (the RISC). MIPS was conducted by Hennessy and his graduate students until its conclusion in 1984. Hennessy founded MIPS Computer Systems in the same year to commercialize the technology developed by the project. In 1985, MIPS Computer Systems announced a new ISA, also called MIPS, and its first implementation, the R2000 microprocessor. The commercial MIPS ISA and its implementations went on to be widely used, appearing in embedded computers, personal computers, workstations, servers, and supercomputers. As of May 2017, the commercial MIPS ISA is owned by Imagination Technologies and is used mainly in embedded computers. In the late 1980s, a follow-up project called MIPS-X was conducted by Hennessy at Stanford.
The MIPS ISA was based on a 32-bit word. It supported 32-bit addressing,
|
https://en.wikipedia.org/wiki/Louis%20Alan%20Hazeltine
|
Louis Alan Hazeltine (August 7, 1886 – May 24, 1964) was an engineer and physicist, the inventor of the Neutrodyne circuit, and the Hazeltine-Fremodyne Superregenerative circuit. He was the founder of the Hazeltine Corporation.
Biography
Louis Alan Hazeltine was born in Morristown, New Jersey, in 1886 and attended the Stevens Institute of Technology in Hoboken, New Jersey, majoring in electrical engineering. He graduated in 1906 and accepted a job with the General Electric Corporation.
Hazeltine returned to Stevens to teach, eventually becoming chair of the electrical engineering department in 1917.
The following year he became a consultant for the United States Navy. He eventually parlayed the Navy job into a position as an advisor to the U.S. government on radio broadcasting regulation and, later, a position on the National Defense Research Committee during World War II.
Hazeltine was president of the Institute of Radio Engineers in 1936.
|
https://en.wikipedia.org/wiki/EMedicine
|
eMedicine is an online clinical medical knowledge base founded in 1996 by doctors Scott Plantz and Jonathan Adler, and computer engineer Jeffrey Berezin. The eMedicine website consists of approximately 6,800 medical topic review articles, each of which is associated with a clinical subspecialty "textbook". The knowledge base includes over 25,000 clinical multimedia files.
Each article is authored by board-certified specialists in the subspecialty to which the article belongs and undergoes three levels of physician peer review, plus review by a Doctor of Pharmacy. The article's authors are identified with their current faculty appointments. Each article is updated yearly, or more frequently as changes in practice occur, and the date is published on the article. eMedicine.com was sold to WebMD in January 2006 and is now available as Medscape Reference.
History
Plantz, Adler and Berezin developed the concept for eMedicine.com in 1996 and deployed the initial site via Boston Medical Publishing, Inc., a corporation in which Plantz and Adler were principals. A Group Publishing System 1 (GPS 1) was developed that allowed large numbers of contributors to collaborate simultaneously. The system was first used to create a knowledge base in emergency medicine, with 600 contributing MDs creating over 630 chapters in just over a year. In 1997 eMedicine.com, Inc. was legally spun off from Boston Medical Publishing. eMedicine attracted angel-level investment from Tenet Healthcare in 1999 and a significant VC investment in 2000 (Omnicom Group, HIG Capital).
Several years were spent creating the tables of contents, recruiting expert physicians and in the creation of the additional 6,100+ medical and surgical articles. The majority of operations were based out of the Omaha, Nebraska, office.
In the early 2000s Plantz and Lorenzo also spearheaded an alliance with the University of Nebraska Medical Center to accredit eMedicine content for physician, nursing, and pharmacy con
|
https://en.wikipedia.org/wiki/Layer%20four%20traceroute
|
Layer Four Traceroute (LFT) is a fast, multi-protocol traceroute engine that also implements numerous other features, including AS number lookups through regional Internet registries and other reliable sources, Loose Source Routing, and firewall and load balancer detection. LFT is best known for its use by network security practitioners to trace a route to a destination host through many configurations of packet filters and firewalls, and to detect network connectivity, performance, or latency problems.
How it works
LFT sends various TCP SYN and FIN probes (differing from Van Jacobson's UDP-based method) or UDP probes, using the IP time-to-live (TTL) field, and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host. LFT also listens for various TCP, UDP, and ICMP messages along the way to assist network managers in ascertaining per-protocol heuristic routing information, and can optionally retrieve various information about the networks it traverses. The operation of layer four traceroute is described in detail in several prominent security books.
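The TTL mechanism itself can be sketched compactly. The following is a minimal illustrative C program, not LFT's actual implementation: it uses the classic UDP probe method rather than LFT's TCP SYN/FIN probes, omits matching replies to the probes that caused them, skips most error handling, and requires root privileges for the raw ICMP socket.

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <ipv4-addr>\n", argv[0]); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(33434);                 /* traditional traceroute port */
    inet_pton(AF_INET, argv[1], &dst.sin_addr);

    int send_fd = socket(AF_INET, SOCK_DGRAM, 0);          /* UDP probes */
    int recv_fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP); /* ICMP replies (needs root) */

    struct timeval tv = { 3, 0 };                /* 3-second reply timeout */
    setsockopt(recv_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    for (int ttl = 1; ttl <= 30; ttl++) {
        /* Each probe's TTL is one larger, so each successive gateway
           decrements it to zero and answers with ICMP TIME_EXCEEDED. */
        setsockopt(send_fd, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl);
        sendto(send_fd, "probe", 5, 0, (struct sockaddr *)&dst, sizeof dst);

        char buf[512];
        struct sockaddr_in from;
        socklen_t len = sizeof from;
        if (recvfrom(recv_fd, buf, sizeof buf, 0,
                     (struct sockaddr *)&from, &len) < 0) {
            printf("%2d  *\n", ttl);             /* timeout: silent hop */
            continue;
        }
        char addr[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &from.sin_addr, addr, sizeof addr);
        printf("%2d  %s\n", ttl, addr);          /* gateway that answered */
        if (from.sin_addr.s_addr == dst.sin_addr.s_addr)
            break;                               /* destination reached */
    }
    close(send_fd);
    close(recv_fd);
    return 0;
}

A real tool must also parse the returned ICMP packet to confirm it quotes the probe that was sent; LFT additionally varies the probe protocol so that packet filters which drop one protocol can still be traversed with another.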
Origins
The lft command first appeared in 1998 as fft. Renamed as a result of confusion with fast Fourier transforms, lft stands for layer four traceroute. Results are often referred to as a layer four trace.
See also
Prefix WhoIs
External links
Layer Four Traceroute Project
|
https://en.wikipedia.org/wiki/1mdc
|
1mdc was a digital gold currency (DGC) that existed from 2001 to 2007 in which users traded digital currency backed by reserves of e-gold, rather than physical bullion reserves.
The website appeared to switch between various offshore hosting locations, and used software designed by Interesting Software Ltd, an Anguilla company.
As of April 27, 2007, a US court order had forced e-gold to liquidate a large number of e-gold accounts totalling some 10 to 20 million US dollars' worth of gold; a small part of this seizure was 1mdc's accounts and assets. If the court order were reversed, a user's e-gold grams remaining in 1mdc would "unbail" normally to the user's e-gold account. Although 1mdc itself had no connection to the US and most of its users were outside the US, e-gold was ultimately owned and operated from the US, so 1mdc users were bound by the decisions of US courts and authorities regarding the disposition of e-gold.
Features
As with any digital gold currency, one used 1mdc to keep assets away from fiat currencies and avoid inflationary risks associated with them. To open an account, 1mdc required the user to have a functioning e-mail address, an e-gold account, a password, initials and a PIN.
1mdc charged 0.05 gold grams per spend for accounts receiving 100 or more spends (totalling over 500 grams) in any given calendar month. There were no spend fees for accounts receiving 99 or fewer spends in a calendar month, and no storage fees on any account. This was in sharp contrast to e-gold, which charged a storage fee of 1% per annum. Coupled with the quick and easy transfer of funds between e-gold and 1mdc accounts, 1mdc was attractive to persons with large amounts of e-gold, whose balances gradually shrank due to e-gold's storage fees. 1mdc also offered virtually fee-free exchange from Pecunix gold to 1mdc, and a 5% fee to exchange fro
|
https://en.wikipedia.org/wiki/Robert%20Lawson%20Vaught
|
Robert Lawson Vaught (April 4, 1926 – April 2, 2002) was a mathematical logician and one of the founders of model theory.
Life
Vaught was a musical prodigy in his youth, in his case playing the piano. He began his university studies at Pomona College at age 16. When World War II broke out, he enlisted in the US Navy, which assigned him to the University of California's V-12 program. He graduated in 1945 with an AB in physics.
In 1946, he began a Ph.D. in mathematics at Berkeley. He initially worked under the supervision of the topologist John L. Kelley, writing on C*-algebras. In 1950, in response to McCarthyite pressures, Berkeley required all staff to sign a loyalty oath. Kelley declined and moved his career to Tulane University for three years. Vaught then began afresh under the supervision of Alfred Tarski, completing in 1954 a thesis on mathematical logic, titled Topics in the Theory of Arithmetical Classes and Boolean Algebras. After spending four years at the University of Washington, Vaught returned to Berkeley in 1958, where he remained until his 1991 retirement.
In 1957, Vaught married Marilyn Maca; they had two children.
Work
Vaught's work is primarily focused on model theory. In 1957, he and Tarski introduced elementary submodels and the Tarski–Vaught test characterizing them. In 1962, he and Michael D. Morley pioneered the concept of a saturated structure. His investigations on countable models of first-order theories led him to the Vaught conjecture stating that the number of countable models of a complete first-order theory (in a countable language) is always either finite, or countably infinite, or equinumerous with the real numbers. Vaught's "Never 2" theorem states that a complete first-order theory cannot have exactly two nonisomorphic countable models.
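In standard notation (added here for concreteness; the symbols are not in the excerpt), writing $I(T, \aleph_0)$ for the number of countable models of a complete first-order theory $T$ up to isomorphism, the conjecture and the "Never 2" theorem read:
$$I(T, \aleph_0) \in \{1, 2, 3, \ldots\} \cup \{\aleph_0, 2^{\aleph_0}\}, \qquad I(T, \aleph_0) \neq 2.$$
Under the continuum hypothesis the conjecture holds trivially, since every cardinality at most $2^{\aleph_0}$ is then either countable or equal to $2^{\aleph_0}$; the substance of the conjecture is that no intermediate value can occur even without CH.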
He considered his best work to be his paper "Invariant sets in topology and logic", introducing the Vaught transform. He is known for the Tarski–Vaught test for elementary substructures, the
|
https://en.wikipedia.org/wiki/Sergey%20Nikolsky
|
Sergey Mikhailovich Nikolsky (30 April 1905 – 9 November 2012) was a Soviet and Russian mathematician.
Biography
Nikolsky was born in Talitsa, which was at that time located in Kamyshlovsky Uyezd of the Russian Empire. He had been an Academician since 28 November 1972 and won many scientific awards. At the age of 92 he was still actively lecturing at the Moscow Institute of Physics and Technology (MIPT), and in 2005, at the age of 100, he was still working at MIPT and giving talks at scientific conferences. He died in Moscow in November 2012 at the age of 107.
Scientific activities
Nikolsky made fundamental contributions to functional analysis, approximation of functions, quadrature formulas, imbedded function spaces and their applications to variational solutions of partial differential equations. He created a large scientific school on the theory of functions and its applications. He authored over 100 scientific publications, including 3 monographs, 2 college textbooks and 7 school textbooks.
Selected publications
Approximation of functions of several variables and imbedding theorems, Springer Verlag 1975 (Russian original, Nauka, Moscow 1969)
Treatise on the shift operator, Springer Verlag 1986 (Russian original, Moscow 1980)
Operators, functions and systems. An easy reading. Volume 1: Hardy, Hankel and Toeplitz, American Mathematical Society 2002
with Valentin Petrovich Ilyin and Oleg Besov: Integral representation of functions and embedding theorems, 2 vols., Wiley 1978, 1979
as editor: Theory and applications of differentiable functions of several variables, American Mathematical Society 1967
Quadrature formulae, Delhi, Hindustan Publ. Corp. 1964
Курс математического анализа (Course in mathematical analysis, Russian), 2 vols., Nauka 1975
Differential equations, multiple integrals, series, theory of functions of a complex variable (with a coauthor), Mir Publ., Moscow 1983
|
https://en.wikipedia.org/wiki/Line%20of%20Property
|
The Line of Property is the name commonly given to the line dividing Indian lands from those of the Thirteen Colonies, which was established in the 1768 Treaty of Fort Stanwix between British officials and the Iroquois tribes. In western Pennsylvania, it is referred to as the Purchase Line.
Treaty description of the line
As written, with original spellings and place names (modern names in parentheses):
"Beginning at the Mouth of the Cherokee or Hogohee River" (Tennessee River) "where it empties into the River Ohio" (at Paducah, Kentucky)
"& running from thence upwards along the South side of said River to Kittanning, which is above Fort Pitt" (Pittsburgh, Pennsylvania)
"from thence a direct Line to the nearest Fork of the west branch of Susquehanna
"thence through the Allegany Mountains along the south side of the said West Branch until it comes opposite to the mouth of a creek called Tiadaghton" (Pine Creek just west of Jersey Shore, Pennsylvania)
"thence across the West Branch along the South Side of that Creek"
"and along the North Side of Burnetts Hills to a Creek called Awandae" (Towanda Creek)
"thence down the same to the East Branch of Sasquehanna" (at Towanda, Pennsylvania)
"& across the same and up the East side of that River to Oswegy" (Owego, New York)
"from thence East to Delawar River" (Delaware River)
"and up that River to opposite where Tianaderha" (Unadilla River) "falls into Sasquehanna" (Susquehanna River)
"thence to Tianaderha" (New Berlin, New York?) "and up the West side of the West branch" (Beaver Creek) "to the head thereof"
"& thence by a direct Line to Canada Creek, where it empties into the Wood Creek at the West of the Carrying Place beyond Fort Stanwix" (Rome, New York).
Actual line
The starting point is the confluence of the Tennessee River with the Ohio River at Paducah, Kentucky. The line follows the south bank of the Ohio almost the entire length of Kentucky and all of West Virginia to Pittsburgh where the Allegheny
|
https://en.wikipedia.org/wiki/Noise%20control
|
Noise control or noise mitigation is a set of strategies to reduce noise pollution or to reduce the impact of that noise, whether outdoors or indoors.
Overview
The main areas of noise mitigation or abatement are: transportation noise control, architectural design, urban planning through zoning codes, and occupational noise control. Roadway noise and aircraft noise are the most pervasive sources of environmental noise. Social activities may generate noise levels that consistently affect the health of populations residing in or occupying areas near entertainment venues, both indoors and outdoors; amplified sound and music present significant challenges for effective noise mitigation strategies.
Multiple techniques have been developed to address interior sound levels, many of which are encouraged by local building codes. In the best project designs, planners work with design engineers to examine trade-offs between roadway design and architectural design. These techniques include design of exterior walls, party walls, and floor and ceiling assemblies; moreover, there are a host of specialized means for damping reverberation in special-purpose rooms such as auditoria, concert halls, entertainment and social venues, dining areas, audio recording rooms, and meeting rooms.
Many of these techniques rely upon material science applications of constructing sound baffles or using sound-absorbing liners for interior spaces. Industrial noise control is a subset of interior architectural control of noise, with emphasis on specific methods of sound isolation from industrial machinery and for protection of workers at their task stations.
Sound masking is the active addition of noise to reduce the annoyance of certain sounds, the opposite of soundproofing.
Standards, recommendations, and guidelines
Organizations each have their own standards, recommendations/guidelines, and directives for what levels of noise workers are permitted to be ar
|
https://en.wikipedia.org/wiki/Advanced%20planning%20and%20scheduling
|
Advanced planning and scheduling (APS, also known as advanced manufacturing) refers to a manufacturing management process by which raw materials and production capacity are optimally allocated to meet demand. APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. Production scheduling is intrinsically very difficult due to the (approximately) factorial dependence of the size of the solution space on the number of items/products to be manufactured.
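To make the factorial growth concrete (a worked illustration added here, not from the original text): sequencing $n$ distinct products on a single production line admits $n!$ possible orderings, so
$$10! = 3\,628\,800, \qquad 15! = 1\,307\,674\,368\,000 \approx 1.3 \times 10^{12},$$
and exhaustive enumeration of schedules becomes infeasible beyond a handful of products, which is why practical systems must rely on some form of pruned, heuristic, or constraint-based search rather than brute force.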
Difficulty of production planning
Traditional production planning and scheduling systems (such as manufacturing resource planning) use a stepwise procedure to allocate material and production capacity. This approach is simple but cumbersome, and does not readily adapt to changes in demand, resource capacity or material availability. Materials and capacity are planned separately, and many systems do not consider material or capacity constraints, leading to infeasible plans. However, attempts to change to newer systems have not always been successful, which has called for combining management philosophy with manufacturing practice.
Unlike previous systems, APS simultaneously plans and schedules production based on available materials, labor and plant capacity.
APS has commonly been applied where one or more of the following conditions are present:
make to order (as distinct from make to stock) manufacturing
capital-intensive production processes, where plant capacity is constrained
products 'competing' for plant capacity: where many different products are produced in each facility
products that require a large number of components or manufacturing tasks
production necessitates frequent schedule changes which cannot be predicted before the event
Advanced planning & scheduling software enables manufacturing scheduling and advanced scheduling optimization within these environments.
Further reading
|
https://en.wikipedia.org/wiki/Oxygen%E2%80%93hemoglobin%20dissociation%20curve
|
The oxygen–hemoglobin dissociation curve, also called the oxyhemoglobin dissociation curve or oxygen dissociation curve (ODC), is a curve that plots the proportion of hemoglobin in its saturated (oxygen-laden) form on the vertical axis against the prevailing oxygen tension on the horizontal axis. This curve is an important tool for understanding how our blood carries and releases oxygen. Specifically, the oxyhemoglobin dissociation curve relates oxygen saturation (SO2) and partial pressure of oxygen in the blood (PO2), and is determined by what is called "hemoglobin affinity for oxygen"; that is, how readily hemoglobin acquires and releases oxygen molecules into the fluid that surrounds it.
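The sigmoidal shape of this curve is commonly summarized by the empirical Hill equation, a standard model supplied here for reference (it does not appear in the excerpt); for normal adult hemoglobin the half-saturation pressure is $P_{50} \approx 26.6$ mmHg and the Hill coefficient is $n \approx 2.7$:
$$S_{\mathrm{O}_2} = \frac{P_{\mathrm{O}_2}^{\,n}}{P_{50}^{\,n} + P_{\mathrm{O}_2}^{\,n}}.$$
The exponent $n > 1$ reflects cooperative binding: a value of 1 would give a simple hyperbola, while cooperativity steepens the curve around $P_{50}$.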
Background
Hemoglobin (Hb) is the primary vehicle for transporting oxygen in the blood. Each hemoglobin molecule has the capacity to carry four oxygen molecules. These molecules of oxygen bind to the iron of the heme prosthetic group.
When hemoglobin has no bound oxygen, nor bound carbon dioxide, it has the unbound conformation (shape). The binding of the first oxygen molecule induces a change in the shape of the hemoglobin that increases its ability to bind the remaining three oxygen molecules.
In the presence of dissolved carbon dioxide, the pH of the blood changes; this causes another change in the shape of hemoglobin, which increases its ability to bind carbon dioxide and decreases its ability to bind oxygen. With the loss of the first oxygen molecule, and the binding of the first carbon dioxide molecule, yet another change in shape occurs, which further decreases the ability to bind oxygen, and increases the ability to bind carbon dioxide. The oxygen bound to the hemoglobin is released into the blood's plasma and absorbed into the tissues, and the carbon dioxide in the tissues is bound to the hemoglobin.
In the lungs the reverse of this process takes place. With the loss of the first carbon dioxide molecule the shape again changes and makes it easier to release the other th
|
https://en.wikipedia.org/wiki/Premelting
|
Premelting (also surface melting) refers to a quasi-liquid film that can occur on the surface of a solid even below its melting point $T_m$. The thickness of the film depends on the temperature $T$. This effect is common for all crystalline materials.
Premelting shows its effects in frost heave, and, taking grain boundary interfaces into account, maybe even in the movement of glaciers.
Considering a solid–vapour interface, complete and incomplete premelting can be distinguished. During a temperature rise from below to above $T_m$, in the case of complete premelting, the solid melts homogeneously from the outside to the inside; in the case of incomplete premelting, the liquid film stays very thin during the beginning of the melting process, but droplets start to form on the interface. In either case, the solid always melts from the outside inwards, never from the inside.
History
The first to mention premelting might have been Michael Faraday in 1842, for ice surfaces. He compared the effect which holds a snowball together to that which makes buildings of moistened sand stable. He also noted that two blocks of ice can freeze together. Later, Tammann (1910) and Stranski (1942) suggested that all crystals might, due to the reduction of surface energy, start melting at their surfaces. Frenkel strengthened this by noting that, in contrast to liquids, no superheating is found for solids. After extensive studies on many materials, it can be concluded that it is a common attribute of the solid state that the melting process begins at the surface.
Theoretical explanations
There are several ways to approach the topic of premelting; the most intuitive is thermodynamic. A more detailed or abstract view of the physics important for premelting is given by Lifshitz theory and Landau theory.
One always starts with looking at a crystalline solid phase (fig. 1: (1) solid) and another phase. This second phase (fig. 1: (2)) can either be vap
|
https://en.wikipedia.org/wiki/Ensoniq%20AudioPCI
|
The Ensoniq AudioPCI is a Peripheral Component Interconnect (PCI)-based sound card released in 1997. It was Ensoniq's last sound card product before they were acquired by Creative Technology. The card represented a shift in Ensoniq's market positioning. Whereas the Soundscape line had been made up primarily of low-volume high-end products full of features, the AudioPCI was designed to be a very simple, low-cost product to appeal to system OEMs and thus hopefully sell in mass quantities.
Low cost
Towards the end of the 1990s, Ensoniq was struggling financially. Their cards were very popular with PC OEMs, but their costs were too high and their musical instrument division was fading in revenue. Pressure from intense competition, especially with the dominant Creative Labs, was forcing audio card makers to try to keep their prices low.
The AudioPCI, released in July 1997, was designed primarily to be cheap. In comparison to the wide variety of chips on, and sheer size of, the older Soundscape boards, the highly integrated two-chip design of the AudioPCI is an obvious shift in design philosophy. The board consists only of a very small software-driven audio chip (one of the following: S5016, ES1370, ES1371) and a companion digital-to-analog converter (DAC). In another cost-cutting move, the previously typical ROM chip used for storage of samples for sample-based synthesis was replaced with the facility to use system RAM as storage for this audio data. This was made possible by the move to the PCI bus, with its far greater bandwidth and more efficient bus-mastering interface when compared to the older ISA bus standard.
Features
AudioPCI, while designed to be cheap, is still quite functional. It offers many of the audio capabilities of the Soundscape ELITE card, including several digital effects (reverb, chorus, and spatial enhancement) when used with Microsoft Windows 95 and later versions of Windows.
AudioPCI was one of the first cards to have Microsoft DirectSound3
|
https://en.wikipedia.org/wiki/Newtonian%20motivations%20for%20general%20relativity
|
Some of the basic concepts of general relativity can be outlined outside the relativistic domain. In particular, the idea that mass–energy generates curvature in space and that curvature affects the motion of masses can be illustrated in a Newtonian setting. We use circular orbits as our prototype. This has the advantage that we know the kinetics of circular orbits. This allows us to calculate curvature of orbits in space directly and compare the results with dynamical forces.
The equivalence of gravitational and inertial mass
A unique feature of the gravitational force is that all massive objects accelerate in the same manner in a gravitational field. This is often expressed as "The gravitational mass is equal to the inertial mass." This allows us to think of gravity as a curvature of spacetime.
Test for flatness in spacetime
If initially parallel paths of two particles on nearby geodesics remain parallel within some accuracy, then spacetime is flat to within that accuracy. [Ref. 2, p. 30]
Two nearby particles in a radial gravitational field
Newtonian mechanics for circular orbits
The geodesic and field equations for circular orbits
Consider the situation in which there are two particles in nearby circular polar orbits of the Earth at radius $r$ and speed $v$. Since the orbits are circular, the gravitational force on the particles must equal the centripetal force,
$$\frac{GMm}{r^2} = \frac{mv^2}{r},$$
where $G$ is the gravitational constant and $M$ is the mass of the earth.
The particles execute simple harmonic motion about the earth and with respect to each other. They are at their maximum distance from each other as they cross the equator. Their trajectories intersect at the poles.
From Newton's law of gravitation the separation vector $\mathbf{h}$ between the two particles can be shown to be given by the "geodesic equation"
$$\frac{d^2\mathbf{h}}{ds^2} = -R\,\mathbf{h},$$
where $R$ is the curvature of the trajectory and $s = ct$ is the speed of light $c$ times the time.
The curvature of the trajectory is generated by the mass of the earth $M$. This is represented by the "field equation"
$$R = \frac{GM}{c^2 r^3}.$$
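As a consistency check (added here; not part of the original text), combining the geodesic equation with the field equation eliminates $R$ and recovers the familiar Newtonian tidal result:
$$\frac{d^2\mathbf{h}}{d(ct)^2} = -\frac{GM}{c^2 r^3}\,\mathbf{h} \quad\Longrightarrow\quad \frac{d^2\mathbf{h}}{dt^2} = -\frac{GM}{r^3}\,\mathbf{h},$$
which is simple harmonic motion of the separation vector at the orbital angular frequency $\omega = \sqrt{GM/r^3}$, exactly as the earlier description of the two orbiting particles requires.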
In this
|
https://en.wikipedia.org/wiki/Surface%20freezing
|
Surface freezing is the appearance of long-range crystalline order in a near-surface layer of a liquid. The surface freezing effect is the opposite of the far more common surface melting, or premelting. Surface freezing was experimentally discovered in melts of alkanes and related chain molecules in the early 1990s, independently, by two groups. John Earnshaw and his group (Queen's University of Belfast) used light scattering, which did not allow a determination of the frozen layer's thickness or whether it is laterally ordered. A group led by Ben Ocko (Brookhaven National Laboratory), Eric Sirota (Exxon) and Moshe Deutsch (Bar-Ilan University, Israel) independently discovered the same effect using x-ray surface diffraction, which allowed them to show that the frozen layer is a crystalline monolayer, with molecules oriented roughly along the surface normal and ordered in a hexagonal lattice. A related effect, the existence of a smectic phase at the surface of a bulk nematic liquid, was observed in liquid crystals by Jens Als-Nielsen (Risø National Laboratory, Denmark) and Peter Pershan (Harvard University) in the early 1980s. However, the surface layer there was neither ordered nor confined to a single layer. Surface freezing has since been found in a wide range of chain molecules and at various interfaces: liquid–air, liquid–solid and liquid–liquid.
|
https://en.wikipedia.org/wiki/IOM%20soybeans
|
IOM soybeans is an industrial designation for soybeans from the U.S. states of Indiana, Ohio, and Michigan. Beans grown in those states have a high protein content that is valued by processors, in particular in Japan. IOM soybeans are traded on the following Japanese commodity exchanges:
Kansai Commodities Exchange (KEX)
Tokyo Grain Exchange (TGE) (including other US state origins);
and in the past were traded on:
Central Japan Commodity Exchange (C-COM)
Fukuoka Futures Exchange (FFE)
The Japanese contracts called "IOM soybeans" are unsegregated, meaning they may contain any mixture of genetically modified and non-GM beans. Non-GM IOM soybeans in Japan are usually just called "Non-GM soybeans".
|
https://en.wikipedia.org/wiki/List%20of%20permutation%20topics
|
This is a list of topics on mathematical permutations.
Particular kinds of permutations
Alternating permutation
Circular shift
Cyclic permutation
Derangement
Even and odd permutations—see Parity of a permutation
Josephus permutation
Parity of a permutation
Separable permutation
Stirling permutation
Superpattern
Transposition (mathematics)
Unpredictable permutation
Combinatorics of permutations
Bijection
Combination
Costas array
Cycle index
Cycle notation
Cycles and fixed points
Cyclic order
Direct sum of permutations
Enumerations of specific permutation classes
Factorial
Falling factorial
Permutation matrix
Generalized permutation matrix
Inversion (discrete mathematics)
Major index
Ménage problem
Permutation graph
Permutation pattern
Permutation polynomial
Permutohedron
Rencontres numbers
Robinson–Schensted correspondence
Sum of permutations:
Direct sum of permutations
Skew sum of permutations
Stanley–Wilf conjecture
Symmetric function
Szymanski's conjecture
Twelvefold way
Permutation groups and other algebraic structures
Groups
Alternating group
Automorphisms of the symmetric and alternating groups
Block (permutation group theory)
Cayley's theorem
Cycle index
Frobenius group
Galois group of a polynomial
Jucys–Murphy element
Landau's function
Oligomorphic group
O'Nan–Scott theorem
Parker vector
Permutation group
Place-permutation action
Primitive permutation group
Rank 3 permutation group
Representation theory of the symmetric group
Schreier vector
Strong generating set
Symmetric group
Symmetric inverse semigroup
Weak order of permutations
Wreath product
Young symmetrizer
Zassenhaus group
Zolotarev's lemma
Other algebraic structures
Burnside ring
Mathematical analysis
Conditionally convergent series
Riemann series theorem
Lévy–Steinitz theorem
Mathematics applicable to physical sciences
Antisymmetrizer
Identical particles
Levi-Civita symbol
Number theory
Permutable prime
Algorithms and information processing
Bit-reversal permutation
Claw-
|
https://en.wikipedia.org/wiki/Identity%20theorem
|
In real analysis and complex analysis, branches of mathematics, the identity theorem for analytic functions states: given functions $f$ and $g$ analytic on a domain D (an open and connected subset of $\mathbb{R}$ or $\mathbb{C}$), if $f = g$ on some set $S \subseteq D$, where $S$ has an accumulation point in D, then $f = g$ on D.
Thus an analytic function is completely determined by its values on a single open neighborhood in D, or even a countable subset of D (provided this contains a converging sequence together with its limit). This is not true in general for real-differentiable functions, even infinitely real-differentiable functions. In comparison, analytic functions are a much more rigid notion. Informally, one sometimes summarizes the theorem by saying analytic functions are "hard" (as opposed to, say, continuous functions which are "soft").
The underpinning fact from which the theorem is established is the expandability of a holomorphic function into its Taylor series.
The connectedness assumption on the domain D is necessary. For example, if D consists of two disjoint open sets, $f$ can be $0$ on one open set and $1$ on the other, while $g$ is $0$ on the first and $2$ on the other: the functions then agree on the first open set, which has accumulation points in D, yet $f \neq g$ on D.
Lemma
If two holomorphic functions $f$ and $g$ on a domain D agree on a set S which has an accumulation point $c$ in $D$, then $f = g$ on a disk in $D$ centered at $c$.
To prove this, it is enough to show that $f^{(n)}(c) = g^{(n)}(c)$ for all $n \geq 0$.
If this is not the case, let $m$ be the smallest nonnegative integer with $f^{(m)}(c) \neq g^{(m)}(c)$. By holomorphy, we have the following Taylor series representation in some open neighborhood $U$ of $c$:
$$(f - g)(z) = (z - c)^m \, h(z), \qquad h(c) = \frac{(f - g)^{(m)}(c)}{m!} \neq 0,$$
with $h$ holomorphic on $U$.
By continuity, $h$ is non-zero in some small open disk $B$ around $c$. But then $f - g$ is non-zero on the punctured set $B \setminus \{c\}$. This contradicts the assumption that $c$ is an accumulation point of $S$, on which $f - g$ vanishes.
This lemma shows that for a complex number $a$, the fiber $f^{-1}(a)$ is a discrete (and therefore countable) set, unless $f \equiv a$.
Proof
Define the set on which $f$ and $g$ have the same Taylor expansion:
$$S = \{ z \in D : f^{(k)}(z) = g^{(k)}(z) \ \text{for all} \ k \geq 0 \}.$$
We'll show $S$ is nonempty, open, and closed. Then by connectedness of $D$, $S$ must be all of $D$, which implies $f = g$ on $S = D$.
By the lemma, $f = g$ in a disk centered at
|
https://en.wikipedia.org/wiki/Sakurai%20Prize
|
The J. J. Sakurai Prize for Theoretical Particle Physics is presented by the American Physical Society at its annual April Meeting, and honors outstanding achievement in particle physics theory. The prize consists of a monetary award (US$10,000), a certificate citing the contributions recognized by the award, and a travel allowance for the recipient to attend the presentation. The award is endowed by the family and friends of particle physicist J. J. Sakurai. The prize has been awarded annually since 1985.
Prize recipients
The following have won this prize:
2023 Heinrich Leutwyler: "For fundamental contributions to the effective field theory of pions at low energies, and for proposing that the gluon is a color octet."
2022 Nima Arkani-Hamed: "For the development of transformative new frameworks for physics beyond the standard model with novel experimental signatures, including work on large extra dimensions, the little Higgs, and more generally for new ideas connected to the origin of the electroweak scale."
2021 Vernon Barger: "For pioneering work in collider physics contributing to the discovery and characterization of the W boson, top quark, and Higgs boson, and for the development of incisive strategies to test theoretical ideas with experiments."
2020 Pierre Sikivie: "For seminal work recognizing the potential visibility of the invisible axion, devising novel methods to detect it, and for theoretical investigations of its cosmological implications."
2019 Lisa Randall and Raman Sundrum: "For creative contributions to physics beyond the Standard Model, in particular the discovery that warped extra dimensions of space can solve the hierarchy puzzle, which has had a tremendous impact on searches at the Large Hadron Collider."
2018 Ann Nelson and Michael Dine: "For groundbreaking explorations of physics beyond the standard model of particle physics, including their seminal joint work on dynamical super-symmetry breaking, and for their innovative contributio
|
https://en.wikipedia.org/wiki/Therapeutic%20irrigation
|
In medicine, therapeutic irrigation or lavage is cleaning or rinsing.
Types
Specific types include:
Antiseptic lavage
Bronchoalveolar lavage
Gastric lavage
Peritoneal lavage
Arthroscopic lavage
Ductal lavage
Nasal irrigation
Ear lavage
Pulsed lavage is the delivery of an irrigant (usually normal saline) under direct pressure produced by an electrically powered device; it is useful in cleaning, for example, chronic wounds.
See also
Douche
Enema
|
https://en.wikipedia.org/wiki/Quad-edge
|
A quad-edge data structure is a computer representation of the topology of a two-dimensional or three-dimensional map, that is, a graph drawn on a (closed) surface. It was first described by Jorge Stolfi and Leonidas J. Guibas. It is a variant of the earlier winged edge data structure.
Overview
The fundamental idea behind the quad-edge structure is the recognition that a single edge, in a closed polygonal mesh topology, sits between exactly two faces and exactly two vertices.
The Quad-Edge Data Structure
The quad-edge data structure represents an edge, along with the edges connected to it around the adjacent vertices and faces, to encode the topology of the graph.
An example implementation of the quad-edge data type is as follows:
/* Forward declaration: the two types refer to each other. */
typedef struct quadedge quadedge;

/* A reference to one of the four directed edges ("arms") of a
   quad-edge record: a pointer to the record plus a rotation 0-3. */
typedef struct {
    quadedge *next;
    unsigned int rot;
} quadedge_ref;

struct quadedge {
    quadedge_ref e[4];   /* next-counter-clockwise link for each arm */
};
Each quad-edge contains four references to adjacent quad-edges. Each of the four references points to the next edge counter-clockwise around either a vertex or a face. Each of these references represent either the origin vertex of the edge, the right face, the destination vertex, or the left face. Each quad-edge reference points to a quad-edge and the rotation (from 0 to 3) of the 'arm' it points at.
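A minimal sketch of the navigation operators this encodes follows; the helper names rot, sym, and onext follow Guibas and Stolfi's terminology, but these particular functions are our illustration, not code from the article:

/* Quarter-turn to the next arm of the same quad-edge record:
   origin -> right face -> destination -> left face -> origin. */
static quadedge_ref rot(quadedge_ref a) {
    a.rot = (a.rot + 1) % 4;
    return a;
}

/* Half-turn: the same edge directed the opposite way. */
static quadedge_ref sym(quadedge_ref a) {
    a.rot = (a.rot + 2) % 4;
    return a;
}

/* Follow the stored link: next edge counter-clockwise around the
   vertex or face this arm belongs to. */
static quadedge_ref onext(quadedge_ref a) {
    return a.next->e[a.rot];
}

All other traversals are compositions of these; for example, the next edge clockwise around the origin is obtained as rot(onext(rot(a))).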
Due to this representation, the quad-edge:
represents a graph, its dual, and its mirror image;
yields the dual of the graph simply by reversing the convention on what is a vertex and what is a face; and
can represent the most general form of a map, admitting vertices and faces of degree 1 and 2.
Details
The quad-edge structure gets its name from the general mechanism by which it is stored. A single edge structure conceptually stores references to up to two faces, two vertices, and four edges. The four edges stored are the edges starting with the two vertices that are attached to the two stored faces.
Uses
Much like Winged Edge, quad-edge structures are used in programs to store the topology of a 2D or
|
https://en.wikipedia.org/wiki/Tap%20code
|
The tap code, sometimes called the knock code, is a way to encode text messages on a letter-by-letter basis in a very simple way. The message is transmitted using a series of tap sounds, hence its name.
The tap code has been commonly used by prisoners to communicate with each other. The method of communicating is usually by tapping either the metal bars, pipes or the walls inside a cell.
Design
The tap code is based on a Polybius square using a 5×5 grid of letters representing all the letters of the Latin alphabet, except for K, which is represented by C.
Each letter is communicated by tapping two numbers, the first designating the row and the second (after a pause) designating the column. For example, to specify the letter "B", one taps once, pauses, and then taps twice. The listener only needs to discriminate the timing of the taps to isolate letters.
To communicate the word "hello", the tapping would be as follows, with the pause between the two numbers in a pair being shorter than the pause between letters: 2,3 (H); 1,5 (E); 3,1 (L); 3,1 (L); 3,4 (O).
The letter "X" is used to break up sentences, and "K" for acknowledgements.
Because of the difficulty and length of time required for specifying a single letter, prisoners often devise abbreviations and acronyms for common items or phrases, such as "GN" for Good night, or "GBU" for God bless you.
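A small sketch of an encoder for this scheme (an illustration written to match the description above, not a standard implementation; the grid layout and the K-to-C merge follow the design described earlier):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* The 5x5 Polybius-style grid; K shares a cell with C. */
static const char *GRID[5] = { "ABCDE", "FGHIJ", "LMNOP", "QRSTU", "VWXYZ" };

static void tap_letter(char ch) {
    ch = (char)toupper((unsigned char)ch);
    if (ch == 'K') ch = 'C';                  /* K is represented by C */
    for (int row = 0; row < 5; row++) {
        const char *hit = strchr(GRID[row], ch);
        if (hit != NULL) {
            int r = row + 1;
            int c = (int)(hit - GRID[row]) + 1;
            for (int i = 0; i < r; i++) putchar('.');  /* row taps */
            putchar(' ');                              /* short pause */
            for (int i = 0; i < c; i++) putchar('.');  /* column taps */
            printf("   ");                             /* letter gap */
            return;
        }
    }
}

int main(void) {
    const char *msg = "hello";   /* encodes as 2,3  1,5  3,1  3,1  3,4 */
    for (const char *p = msg; *p != '\0'; p++)
        tap_letter(*p);
    putchar('\n');
    return 0;
}

Running it prints one group of row taps and column taps per letter, e.g. ".. ..." for H.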
By comparison, Morse code is harder to send by tapping or banging because a single tap fades out and thus has no discernible length. Morse code requires the ability to create two distinguishable lengths (or types) of taps. To simulate Morse by tapping therefore requires either two different sounds (pitch, volume) or very precise timing, so that a dash within a character (e.g. the character N, dah-dit) remains distinguishable from a dot at the end of a character (e.g. E-E, dit dit). Morse code also takes longer to learn. Learning the tap system simply requires one to know the alphabet and the short sequence "AFLQV" (the initial letter of each row), without mem
|
https://en.wikipedia.org/wiki/Valuation%20of%20options
|
In finance, a price (premium) is paid or received for purchasing or selling options. This article discusses the calculation of this premium in general. For further detail, see Mathematical finance for discussion of the mathematics and Financial engineering for the implementation.
Premium components
This price can be split into two components: intrinsic value, and time value (also called "extrinsic value").
Intrinsic value
The intrinsic value is the difference between the underlying spot price and the strike price, to the extent that this is in favor of the option holder. For a call option, the option is in-the-money if the underlying spot price is higher than the strike price; then the intrinsic value is the underlying price minus the strike price. For a put option, the option is in-the-money if the strike price is higher than the underlying spot price; then the intrinsic value is the strike price minus the underlying spot price. Otherwise the intrinsic value is zero.
For example, when a DJI call (bullish/long) option has a strike price of 18,000 and the underlying DJI Index is priced at 18,050, there is a $50 advantage even if the option were to expire today. This $50 is the intrinsic value of the option.
In summary, intrinsic value:
= current stock price − strike price (call option)
= strike price − current stock price (put option)
Extrinsic (Time) value
The option premium is always greater than the intrinsic value up to the expiration event. This extra money is for the risk which the option writer/seller is undertaking. This is called the time value.
Time value is the amount the option trader is paying for a contract above its intrinsic value, with the belief that prior to expiration the contract value will increase because of a favourable change in the price of the underlying asset. The longer the length of time until the expiry of the contract, the greater the time value. So,
Time value = option premium − intrinsic value
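The arithmetic above is easy to capture in code. A small illustrative sketch follows; the helper names and the quoted premium of 170 are our assumptions, while the strike and index values are the article's DJI example:

#include <stdio.h>

/* Intrinsic value of a call: max(spot - strike, 0). */
static double intrinsic_call(double spot, double strike) {
    return spot > strike ? spot - strike : 0.0;
}

/* Intrinsic value of a put: max(strike - spot, 0). */
static double intrinsic_put(double spot, double strike) {
    return strike > spot ? strike - spot : 0.0;
}

/* Time value: whatever part of the premium is not intrinsic. */
static double time_value(double premium, double intrinsic) {
    return premium - intrinsic;
}

int main(void) {
    /* The article's example: strike 18,000, underlying index at 18,050. */
    double iv = intrinsic_call(18050.0, 18000.0);        /* = 50.0 */
    /* A hypothetical quoted premium of 170 for that option: */
    printf("intrinsic = %.2f, time value = %.2f\n",
           iv, time_value(170.0, iv));                   /* 50.00, 120.00 */
    (void)intrinsic_put;  /* put helper shown for symmetry */
    return 0;
}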
Other factors affecting prem
|
https://en.wikipedia.org/wiki/Rosenbrock%20function
|
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function.
The global minimum is inside a long, narrow, parabolic shaped flat valley. To find the valley is trivial. To converge to the global minimum, however, is difficult.
The function is defined by
$$f(x, y) = (a - x)^2 + b\,(y - x^2)^2.$$
It has a global minimum at $(x, y) = (a, a^2)$, where $f(x, y) = 0$. Usually these parameters are set such that $a = 1$ and $b = 100$. Only in the trivial case where $a = 0$ is the function symmetric with the minimum at the origin.
Multidimensional generalizations
Two variants are commonly encountered.
One is the sum of $N/2$ uncoupled 2D Rosenbrock problems, and is defined only for even $N$:
$$f(\mathbf{x}) = \sum_{i=1}^{N/2} \left[ 100\,(x_{2i-1}^2 - x_{2i})^2 + (x_{2i-1} - 1)^2 \right].$$
This variant has predictably simple solutions.
A second, more involved variant is
$$f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right].$$
It has exactly one minimum for $N = 3$ (at $(1, 1, 1)$) and exactly two minima for $4 \le N \le 7$: the global minimum at $(1, 1, \ldots, 1)$ and a local minimum near $(-1, 1, \ldots, 1)$. This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of $x$. For small $N$ the polynomials can be determined exactly and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region $|x_i| < 2.4$. For larger $N$ this method breaks down due to the size of the coefficients involved.
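For reference, a direct C implementation of this coupled variant (a straightforward transcription of the formula above, added here as an illustration):

#include <stddef.h>
#include <stdio.h>

/* N-dimensional coupled Rosenbrock function with the standard
   parameters a = 1, b = 100; for n = 2 this is the classic
   banana function (1 - x)^2 + 100 (y - x^2)^2. */
static double rosenbrock(const double *x, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i + 1 < n; i++) {
        double coupled = x[i + 1] - x[i] * x[i];
        double offset  = 1.0 - x[i];
        sum += 100.0 * coupled * coupled + offset * offset;
    }
    return sum;
}

int main(void) {
    double minimum[4] = { 1.0, 1.0, 1.0, 1.0 };   /* global minimum */
    double start[4]   = { -1.2, 1.0, -1.2, 1.0 }; /* a common test start */
    printf("f(min)   = %f\n", rosenbrock(minimum, 4));  /* 0.000000 */
    printf("f(start) = %f\n", rosenbrock(start, 4));
    return 0;
}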
Stationary points
Many of the stationary points of the function exhibit a regular pattern when plotted. This structure can be exploited to locate them.
Optimization examples
The Rosenbrock function can be efficiently optimized by adapting an appropriate coordinate system, without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). The following figure illustrates an example of 2-dimensional Rosenbrock function optimization by
adaptive coordinate descent from starting point . The solution with the function va
|
https://en.wikipedia.org/wiki/Weak%20solution
|
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions which are not differentiable; and the weak formulation allows one to find such solutions.
Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit of sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough.
A concrete example
As an illustration of the concept, consider the first-order wave equation:
$$\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0, \qquad (1)$$
where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function $\varphi$ of compact support, known as a test function, taking
$$\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} u(t, x)\,\varphi(t, x)\,dx\,dt.$$
For example, if $\varphi$ is a smooth probability distribution concentrated near a point $(t_0, x_0)$, the integral is approximately $u(t_0, x_0)$. Notice that while the integrals go from $-\infty$ to $\infty$, they are essentially over a finite box where $\varphi$ is non-zero.
Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply the equation (1) by a test function $\varphi$ (smooth of compact sup
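The excerpt breaks off at this point; for completeness, the standard next step (supplied here, and easily checked by integration by parts) moves both derivatives onto the test function:
$$\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} u(t, x)\left( \frac{\partial \varphi}{\partial t}(t, x) + \frac{\partial \varphi}{\partial x}(t, x) \right) dx\,dt = 0.$$
A function u is then called a weak solution of equation (1) if this identity holds for every smooth, compactly supported test function $\varphi$; no derivative of u appears, so u need not be differentiable at all.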
|
https://en.wikipedia.org/wiki/Kappa%20effect
|
The kappa effect or perceptual time dilation is a temporal perceptual illusion that can arise when observers judge the elapsed time between sensory stimuli applied sequentially at different locations. In perceiving a sequence of consecutive stimuli, subjects tend to overestimate the elapsed time between two successive stimuli when the distance between the stimuli is sufficiently large, and to underestimate the elapsed time when the distance is sufficiently small.
In different sensory modalities
The kappa effect can occur with visual (e.g., flashes of light), auditory (e.g., tones), or tactile (e.g. taps to the skin) stimuli. Many studies of the kappa effect have been conducted using visual stimuli. For example, suppose three light sources, X, Y, and Z, are flashed successively in the dark with equal time intervals between each of the flashes. If the light sources are placed at different positions, with X and Y closer together than Y and Z, the temporal interval between the X and Y flashes is perceived to be shorter than that between the Y and Z flashes. The kappa effect has also been demonstrated with auditory stimuli that move in frequency. However, in some experimental paradigms the auditory kappa effect has not been observed. For example, Roy et al. (2011) found that, opposite to the prediction of the kappa effect, "Increasing the distance between sound sources marking time intervals leads to a decrease of the perceived duration". In touch, the kappa effect was first described as the "S-effect" by Suto (1952). Goldreich (2007) refers to the kappa effect as "perceptual time dilation" in analogy with the physical time dilation of the theory of relativity.
Theories based in velocity expectation
Physically, traversed space and elapsed time are linked by velocity. Accordingly, several theories regarding the brain's expectations about stimulus velocity have been put forward to account for the kappa effect.
Constant velocity expectation
According to the constant velo
|
https://en.wikipedia.org/wiki/Akoustolith
|
Akoustolith is a porous ceramic material resembling stone. Akoustolith was a patented product of a collaboration between Rafael Guastavino Jr. (the son of Rafael Guastavino) and Harvard professor Wallace Sabine over a period of years starting in 1911. It was used to limit acoustic reflection and noise in large vaulted ceilings. Akoustolith was bonded as an additional layer to the structural tile of the Tile Arch System ceilings built by the Rafael Guastavino Company of New Jersey. The most prevalent use was to aid speech intelligibility in cathedrals and churches prior to the widespread use of public address systems.
History
Akoustolith was first introduced by the Guastavino Fireproof Construction Company, in collaboration with Wallace Sabine of Harvard University, in 1915. The founder of the Guastavino Company, Rafael Guastavino Sr., had immigrated to the United States from Spain in 1881, bringing with him the method of timbrel-vault construction, also known as cohesive construction. The Rafael Guastavino Company's vaulting technique created monolithic assemblies by layering thin bricks and structural tiles with fast-drying mortar. The Guastavino Technique, as it came to be known, consisted of multiple layers of plaster and tile in the construction of masonry vaulting; the first course of tile was set in its position with quick-setting mortar, creating form-work for the subsequent layers. Tiles were placed in concentric circles in the construction of domes, while in ribbed vaults, ribs served as the general form-work. Upon Guastavino Sr.'s death in 1908, his son, Rafael Guastavino Jr., took over the Guastavino Fireproof Construction Company; he was largely responsible for the company's development of acoustical finishes, including the incorporation and development of Rumford and Akoustolith tiles.
Rafael Guastavino Jr. and Wallace Sabine patented Akoustolith in 1916, to be used as a facing for Guastavino's timbrel vaults. The two had previously collaborated in
|
https://en.wikipedia.org/wiki/Calcium%20propanoate
|
Calcium propanoate or calcium propionate has the formula Ca(C2H5COO)2. It is the calcium salt of propanoic acid.
Uses
As a food additive, it is listed as E number 282 in the Codex Alimentarius. Calcium propionate is used as a preservative in a wide variety of products, including: bread, other baked goods, processed meat, whey, and other dairy products. In agriculture, it is used, amongst other things, to prevent milk fever in cows and as a feed supplement. Propionates prevent microbes from producing the energy they need, like benzoates do. However, unlike benzoates, propionates do not require an acidic environment.
Calcium propionate is used in bakery products as a mold inhibitor, typically at 0.1–0.4% (though animal feed may contain up to 1%). Mold contamination is considered a serious problem amongst bakers, and the environment commonly found in baking is near-optimal for mold growth.
A few decades ago, Bacillus mesentericus (rope) was a serious problem, but today's improved sanitary practices in the bakery, combined with rapid turnover of the finished product, have virtually eliminated this form of spoilage. Calcium propionate and sodium propionate are effective against both B. mesentericus rope and mold.
Metabolism of propionate begins with its conversion to propionyl coenzyme A (propionyl-CoA), the usual first step in the metabolism of carboxylic acids. Since propanoic acid has three carbons, propionyl-CoA cannot directly enter the beta oxidation or the citric acid cycles. In most vertebrates, propionyl-CoA is carboxylated to D-methylmalonyl-CoA, which is isomerised to L-methylmalonyl-CoA. A vitamin B12-dependent enzyme catalyzes rearrangement of L-methylmalonyl-CoA to succinyl-CoA, which is an intermediate of the citric acid cycle and can be readily incorporated there.
Children were challenged with calcium propionate or placebo through daily bread in a double‐blind placebo‐controlled crossover trial. Although there was no significant differe
|
https://en.wikipedia.org/wiki/Neuroimaging
|
Neuroimaging is the use of quantitative (computational) techniques to study the structure and function of the central nervous system, developed as an objective way of scientifically studying the healthy human brain in a non-invasive manner. Increasingly it is also being used for quantitative research studies of brain disease and psychiatric illness. Neuroimaging is highly multidisciplinary involving neuroscience, computer science, psychology and statistics, and is not a medical specialty. Neuroimaging is sometimes confused with neuroradiology.
Neuroradiology is a medical specialty and uses non-statistical brain imaging in a clinical setting, practiced by radiologists who are medical practitioners. Neuroradiology primarily focuses on recognising brain lesions, such as vascular disease, strokes, tumors and inflammatory disease. In contrast to neuroimaging, neuroradiology is qualitative (based on subjective impressions and extensive clinical training) but sometimes uses basic quantitative methods. Functional brain imaging techniques, such as functional magnetic resonance imaging (fMRI), are common in neuroimaging but rarely used in neuroradiology. Neuroimaging falls into two broad categories:
Structural imaging, which is used to quantify brain structure using, e.g., voxel-based morphometry (a minimal code sketch follows this list).
Functional imaging, which is used to study brain function, often using fMRI and other techniques such as PET and MEG (see below).
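As a minimal illustration of the quantitative workflow, the sketch below loads a structural MRI volume and computes a crude voxel statistic. It assumes the open-source nibabel library; the file name is hypothetical, and the intensity threshold is a stand-in for real brain extraction, not an actual method.

import nibabel as nib
import numpy as np

img = nib.load("subject_T1.nii.gz")    # hypothetical NIfTI structural scan
data = img.get_fdata()                 # 3D array of voxel intensities

voxel_dims = img.header.get_zooms()    # voxel size in mm, e.g. (1.0, 1.0, 1.0)
voxel_volume_mm3 = float(np.prod(voxel_dims))

# Crude intensity threshold as a placeholder for proper brain extraction.
mask = data > np.percentile(data, 75)
print("approximate masked volume:", mask.sum() * voxel_volume_mm3, "mm^3")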
History
The first chapter of the history of neuroimaging traces back to the Italian neuroscientist Angelo Mosso who invented the 'human circulation balance', which could non-invasively measure the redistribution of blood during emotional and intellectual activity.
In 1918, the American neurosurgeon Walter Dandy introduced the technique of ventriculography. X-ray images of the ventricular system within the brain were obtained by injection of filtered air directly into one or both lateral ventricles of the brain. Dandy also observed that air i
|
https://en.wikipedia.org/wiki/Canalisation%20%28genetics%29
|
Canalisation is a measure of the ability of a population to produce the same phenotype regardless of variability of its environment or genotype. It is a form of evolutionary robustness. The term was coined in 1942 by C. H. Waddington to capture the fact that "developmental reactions, as they occur in organisms submitted to natural selection...are adjusted so as to bring about one definite end-result regardless of minor variations in conditions during the course of the reaction". He used this word rather than robustness to consider that biological systems are not robust in quite the same way as, for example, engineered systems.
Biological robustness or canalisation comes about when developmental pathways are shaped by evolution. Waddington introduced the concept of the epigenetic landscape, in which the state of an organism rolls "downhill" during development. In this metaphor, a canalised trait is illustrated as a valley (which he called a creode) enclosed by high ridges, safely guiding the phenotype to its "fate". Waddington claimed that canals form in the epigenetic landscape during evolution, and that this heuristic is useful for understanding the unique qualities of biological robustness.
Genetic assimilation
Waddington used the concept of canalisation to explain his experiments on genetic assimilation. In these experiments, he exposed Drosophila pupae to heat shock. This environmental disturbance caused some flies to develop a crossveinless phenotype. He then selected for crossveinless. Eventually, the crossveinless phenotype appeared even without heat shock. Through this process of genetic assimilation, an environmentally induced phenotype had become inherited. Waddington explained this as the formation of a new canal in the epigenetic landscape.
It is, however, possible to explain genetic assimilation using only quantitative genetics and a threshold model, with no reference to the concept of canalisation. However, theoretical models that incorporate a com
|
https://en.wikipedia.org/wiki/Huntingtin
|
Huntingtin (Htt) is the protein coded for in humans by the HTT gene, also known as the IT15 ("interesting transcript 15") gene. Mutated HTT is the cause of Huntington's disease (HD), and has been investigated for this role and also for its involvement in long-term memory storage.
It is variable in its structure, as the many polymorphisms of the gene can lead to variable numbers of glutamine residues present in the protein. In its wild-type (normal) form, the polymorphic locus contains 6-35 glutamine residues. However, in individuals affected by Huntington's disease (an autosomal dominant genetic disorder), the polymorphic locus contains more than 36 glutamine residues (highest reported repeat length is about 250). Its commonly used name is derived from this disease; previously, the IT15 label was commonly used.
The mass of huntingtin protein is dependent largely on the number of glutamine residues it has; the predicted mass is around 350 kDa. Normal huntingtin is generally accepted to be 3144 amino acids in size. The exact function of this protein is not known, but it plays an important role in nerve cells. Within cells, huntingtin may be involved in signaling, transporting materials, binding proteins and other structures, and protecting against apoptosis, a form of programmed cell death. The huntingtin protein is required for normal development before birth. It is expressed in many tissues in the body, with the highest levels of expression seen in the brain.
Gene
The 5'-end (five prime end) of the HTT gene has a sequence of three DNA bases, cytosine-adenine-guanine (CAG), coding for the amino acid glutamine, that is repeated multiple times. This region is called a trinucleotide repeat. The usual CAG repeat count is between seven and 35 repeats.
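The arithmetic of the repeat region is simple to make concrete. Below is a toy Python sketch (the function name and sequence are made up for illustration) that counts consecutive CAG codons from the start of a string, the way a repeat length in the usual 7–35 range would be tallied:

def count_cag_repeats(seq: str) -> int:
    """Count consecutive CAG codons from the start of a DNA string."""
    count = 0
    while seq[count * 3 : count * 3 + 3] == "CAG":
        count += 1
    return count

# A 22-repeat tract lies inside the usual 7-35 range described above.
print(count_cag_repeats("CAG" * 22 + "CCGCCA"))  # -> 22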
The HTT gene is located on the short arm (p) of chromosome 4 at position 16.3, from base pair 3,074,510 to base pair 3,243,960.
Protein
Function
The function of huntingtin (Htt) is not well understood but
|
https://en.wikipedia.org/wiki/Convex%20cone
|
In linear algebra, a cone—sometimes called a linear cone for distinguishing it from other sorts of cones—is a subset of a vector space that is closed under positive scalar multiplication; that is, C is a cone if x ∈ C implies sx ∈ C for every positive scalar s.
When the scalars are real numbers, or belong to an ordered field, one generally defines a cone as a subset of a vector space that is closed under multiplication by a positive scalar. In this context, a convex cone is a cone that is closed under addition, or, equivalently, a subset of a vector space that is closed under linear combinations with positive coefficients. It follows that convex cones are convex sets.
In this article, only the case of scalars in an ordered field is considered.
Definition
A subset C of a vector space V over an ordered field F is a cone (or sometimes called a linear cone) if for each x in C and positive scalar α in F, the product αx is in C. Note that some authors define a cone with the scalar α ranging over all non-negative scalars rather than all positive scalars (the latter does not include 0).
A cone C is a convex cone if αx + βy belongs to C, for any positive scalars α, β, and any x, y in C.
A cone C is convex if and only if C + C ⊆ C.
This concept is meaningful for any vector space that allows the concept of "positive" scalar, such as spaces over the rational, algebraic, or (more commonly) the real numbers. Also note that the scalars in the definition are positive, meaning that the origin does not have to belong to C. Some authors use a definition that ensures the origin belongs to C. Because of the scaling parameters α and β, cones are infinite in extent and not bounded.
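These closure properties are easy to sanity-check numerically for a concrete example, the nonnegative orthant in R^3, which is a convex cone. A Python sketch (the helper name and sample counts are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def in_orthant(v):
    """Membership test for the nonnegative orthant, a convex cone in R^3."""
    return np.all(v >= 0)

for _ in range(1000):
    x, y = rng.random(3), rng.random(3)          # points in the orthant
    a, b = 10 * rng.random(), 10 * rng.random()  # positive scalars
    assert in_orthant(a * x)                     # closed under positive scaling
    assert in_orthant(a * x + b * y)             # closed under positive combinations
print("closure checks passed")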
If C is a convex cone, then for any positive scalar α and any x in C the vector αx = (α/2)x + (α/2)x is in C. It follows that a convex cone C is a special case of a linear cone.
It follows from the above property that a convex cone can also be defined as a linear cone that is closed under convex combinations, or just under additions. More succinctly, a set C is a convex cone i
|
https://en.wikipedia.org/wiki/Wuerhosaurus
|
Wuerhosaurus is a genus of stegosaurid dinosaur from the Early Cretaceous Period of China and Mongolia. As such, it was one of the last genera of stegosaurians known to have existed, since most others lived in the late Jurassic.
Discovery and species
Wuerhosaurus homheni is the type species, described by Dong Zhiming in 1973 from the Tugulu Group in Xinjiang, western China. The generic name is derived from the city of Wuerho. Three separate localities in the Wuerho Valley, among them 64043 and 64045, were discovered to contain material from the new stegosaur. The remains consisted of the holotype, Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) V.4006, a skull-less fragmentary skeleton, and the paratype IVPP V.4007. Holotype material includes a mostly complete pelvis and sacrum lacking the ischium, the first caudal vertebra, two dorsal vertebrae, a scapulocoracoid, humerus and phalanx, as well as two dermal plates. Three posterior caudal vertebrae from the tail and a partial ulna of a second individual form the paratype, and Dong referred a partial ischium from a third locality to Wuerhosaurus.
A smaller stegosaur from the Ejinhoro Formation in the Ordos Basin in Inner Mongolia was found in 1988. When the specimen (IVPP V.6877) was described by Dong in 1993, it was named W. ordosensis, as it was from a similar age and had a similar anatomy. The holotype of the species includes a nearly complete torso, consisting of three cervical vertebrae, all eleven dorsal vertebrae (with attached ribs), a complete sacrum with a right ilium, and the first five caudal vertebrae, all articulated. An additional dorsal vertebra and dermal plate were referred to the taxon when it was named. In 2014 Ulansky named a new species of Wuerhosaurus, "W. mongoliensis" for vertebrae and pelvic material, but the name is an invalid nomen nudum. It was formally described as Mongolostegus in 2018.
Description
Wuerhosaurus homheni was probably a broad-bodied animal, reaching in length
|
https://en.wikipedia.org/wiki/Axiomatic%20quantum%20field%20theory
|
Axiomatic quantum field theory is a mathematical discipline which aims to describe quantum field theory in terms of rigorous axioms. It is strongly associated with functional analysis and operator algebras, but has also been studied in recent years from a more geometric and functorial perspective.
There are two main challenges in this discipline. First, one must propose a set of axioms which describe the general properties of any mathematical object that deserves to be called a "quantum field theory". Then, one gives rigorous mathematical constructions of examples satisfying these axioms.
Analytic approaches
Wightman axioms
The first set of axioms for quantum field theories, known as the Wightman axioms, were proposed by Arthur Wightman in the early 1950s. These axioms attempt to describe QFTs on flat Minkowski spacetime by regarding quantum fields as operator-valued distributions acting on a Hilbert space. In practice, one often uses the Wightman reconstruction theorem, which guarantees that the operator-valued distributions and the Hilbert space can be recovered from the collection of correlation functions.
Osterwalder–Schrader axioms
The correlation functions of a QFT satisfying the Wightman axioms often can be analytically continued from Lorentz signature to Euclidean signature. (Crudely, one replaces the time variable t with imaginary time τ = it; the factors of i change the sign of the time-time components of the metric tensor.) The resulting functions are called Schwinger functions. For the Schwinger functions there is a list of conditions — analyticity, permutation symmetry, Euclidean covariance, and reflection positivity — which a set of functions defined on various powers of Euclidean space-time must satisfy in order to be the analytic continuation of the set of correlation functions of a QFT satisfying the Wightman axioms.
Haag–Kastler axioms
The Haag–Kastler axioms axiomatize QFT in terms of nets of algebras.
Euclidean CFT axioms
These axioms (see e.
|
https://en.wikipedia.org/wiki/Background%20field%20method
|
In theoretical physics, the background field method is a useful procedure to calculate the effective action of a quantum field theory by expanding a quantum field φ around a classical "background" value B:

φ(x) = B(x) + η(x).
After this is done, the Green's functions are evaluated as a function of the background. This approach has the advantage that the gauge invariance is manifestly preserved if the approach is applied to gauge theory.
Method
We typically want to calculate expressions like

Z[J] = ∫ Dφ exp(i ∫ d^dx (ℒ(φ(x)) + J(x)φ(x)))

where J(x) is a source, ℒ is the Lagrangian density of the system, d is the number of dimensions and φ is a field.
In the background field method, one starts by splitting this field into a classical background field B(x) and a field η(x) containing additional quantum fluctuations:

φ(x) = B(x) + η(x).

Typically, B(x) will be a solution of the classical equations of motion

δS/δφ [φ = B] = 0

where S is the action, i.e. the spacetime integral of the Lagrangian density. Switching on a source J(x) will change the equations into

δS/δφ [φ = B] + J(x) = 0.
Then the action is expanded around the background B(x):

S[B + η] = S[B] + ∫ d^dx (δS/δφ)[B] η(x) + ½ ∫ d^dx d^dy (δ²S/δφ(x)δφ(y))[B] η(x)η(y) + …

The second term in this expansion is zero by the equations of motion. The first term does not depend on any fluctuating fields, so that it can be brought out of the path integral. The result is

Z[J] = e^{iS[B] + i ∫ d^dx J(x)B(x)} ∫ Dη exp(½ i ∫ d^dx d^dy η(x) (δ²S/δφ(x)δφ(y))[B] η(y) + …)

The path integral which now remains is (neglecting the corrections in the dots) of Gaussian form and can be integrated exactly:

Z[J] = C e^{iS[B] + i ∫ d^dx J(x)B(x)} (det (δ²S/δφ(x)δφ(y))[B])^(−1/2)
where "det" signifies a functional determinant and C is a constant. The power of minus one half will naturally be plus one for Grassmann fields.
The above derivation gives the Gaussian approximation to the functional integral. Corrections to this can be computed, producing a diagrammatic expansion.
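The det^(−1/2) structure can be sanity-checked in a finite-dimensional analogue, where the Gaussian integral of exp(−½ ηᵀAη) over Rⁿ equals (2π)^(n/2) (det A)^(−1/2). A numpy sketch follows; the matrix A is an arbitrary stand-in for the second variation of the action:

import numpy as np

A = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # symmetric positive-definite "second variation"
n = A.shape[0]
closed_form = (2 * np.pi) ** (n / 2) * np.linalg.det(A) ** -0.5

# Brute-force grid quadrature; the integrand decays fast, so a finite box suffices.
xs = np.linspace(-8, 8, 801)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
pts = np.stack([X, Y], axis=-1)              # shape (801, 801, 2)
quad = ((pts @ A) * pts).sum(axis=-1)        # eta^T A eta at every grid point
numeric = np.exp(-0.5 * quad).sum() * dx * dx

print(closed_form, numeric)  # the two values agree to several decimal places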
See also
BF theory
Effective action
Source field
|
https://en.wikipedia.org/wiki/R%C3%B6ntgen%20equivalent%20physical
|
The Röntgen equivalent physical or rep (symbol rep) is a legacy unit of absorbed dose first introduced by Herbert Parker in 1945 to replace an improper application of the roentgen unit to biological tissue. It is the absorbed energetic dose before the biological efficiency of the radiation is factored in. The rep has variously been defined as 83 or 93 ergs per gram of tissue (8.3/9.3 mGy) or per cm3 of tissue.
At the time, this was thought to be the amount of energy deposited by 1 roentgen. Improved measurements have since found that one roentgen of air kerma deposits 8.77 mGy in dry air, or 9.6 mGy in soft tissue, but the rep was defined as a fixed number of ergs per gram.
A 1952 handbook from the US National Bureau of Standards affirms that "The numerical coefficient of the rep has been deliberately changed to 93, instead of the earlier 83, to agree with L. H. Gray's 'energy-unit'." Gray's 'energy unit' was "one roentgen of hard gamma resulted in about 93 ergs per gram energy absorption in water". The lower value, 83.8 ergs per gram, was the value in air corresponding to wet tissue. The rep was commonly used until the 1960s, but was gradually displaced by the rad starting in 1954 and later the gray starting in 1977.
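The erg-per-gram figures convert to the SI doses quoted above by direct unit conversion (a worked check, in LaTeX):

\[
93\ \mathrm{erg/g} = 93 \times \frac{10^{-7}\,\mathrm{J}}{10^{-3}\,\mathrm{kg}}
= 9.3\times 10^{-3}\,\mathrm{Gy} = 9.3\ \mathrm{mGy},
\qquad
83\ \mathrm{erg/g} = 8.3\ \mathrm{mGy}.
\]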
|
https://en.wikipedia.org/wiki/Pion%20decay%20constant
|
In particle physics, the pion decay constant is the square root of the coefficient in front of the kinetic term for the pion in the low-energy effective action. It is dimensionally an energy scale and it determines the strength of the chiral symmetry breaking. The values are approximately fπ ≈ 130 MeV, or equivalently fπ ≈ 92 MeV in the convention smaller by a factor of √2.
Beware: There are several conventions which differ by factors of √2. The textbook by Weinberg uses the value 184 MeV. The textbook by Peskin and Schroeder uses the value 93 MeV.
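The quoted numbers are mutually consistent up to these conventional factors (a worked check):

\[
92\ \mathrm{MeV} \times \sqrt{2} \approx 130\ \mathrm{MeV},
\qquad
92\ \mathrm{MeV} \times 2 = 184\ \mathrm{MeV}.
\]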
According to Brown–Rho scaling, the masses of nucleons and most light mesons decrease at finite density as the ratio of the in-medium pion decay constant to the free-space pion decay constant. The pion mass is an exception to Brown–Rho scaling because the pion's mass is protected by its Goldstone boson nature.
|
https://en.wikipedia.org/wiki/Chiral%20symmetry%20breaking
|
In particle physics, chiral symmetry breaking generally refers to the dynamical spontaneous breaking of a chiral symmetry associated with massless fermions. This is usually associated with a gauge theory such as quantum chromodynamics, the quantum field theory of the strong interaction, and it also occurs through the Brout-Englert-Higgs mechanism in the electroweak interactions of the standard model. This phenomenon is analogous to magnetization and superconductivity in condensed matter physics. The basic idea was introduced to particle physics by Yoichiro Nambu, in particular, in the Nambu–Jona-Lasinio model, which is a solvable theory of composite bosons that exhibits dynamical spontaneous chiral symmetry when a 4-fermion coupling constant becomes sufficiently large. Nambu was awarded the 2008 Nobel prize in physics "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics."
Overview
Quantum chromodynamics
Massless fermions in 4 dimensions are described by either left or right-handed spinors
that each have 2 complex components. These have spin either aligned with their momenta (right-handed chirality) or counter-aligned (left-handed chirality). In this case the chirality is a conserved quantum number of the given fermion, and the left- and right-handed spinors can be independently phase transformed. More generally they can form multiplets under some symmetry group.
A Dirac mass term explicitly breaks the chiral symmetry. In quantum electrodynamics (QED) the electron mass unites left- and right-handed spinors, forming a 4-component Dirac spinor. In the absence
of mass and quantum loops, QED would have a U(1)_L × U(1)_R chiral symmetry, but the Dirac mass of the electron breaks this to a single U(1) symmetry that allows a common phase rotation of left and right together, which is the gauge symmetry of electrodynamics. (At the quantum loop level, the chiral symmetry is broken, even for massless electrons, by the chiral anomaly, but the gauge s
|
https://en.wikipedia.org/wiki/Isoscalar
|
In particle physics, isoscalar refers to the scalar transformation of a particle or field under the SU(2) group of isospin. An isoscalar is a singlet state, with total isospin 0 and third component of isospin 0, much like a singlet state in a 2-particle addition of spin. Mesons that have all flavor quantum numbers equal to zero are known as isoscalars.
See also
Isovector
|
https://en.wikipedia.org/wiki/Morphea
|
Morphea is a form of scleroderma that mainly involves isolated patches of hardened skin on the face, hands, and feet, or anywhere else on the body, usually with no internal organ involvement. However, in deep morphea, inflammation and sclerosis can be found in the deep dermis, panniculus, fascia, superficial muscle and bone.
Signs and symptoms
Morphea most often presents as macules or plaques a few centimeters in diameter, but also may occur as bands or in guttate lesions or nodules.
Morphea is a thickening and hardening of the skin and subcutaneous tissues from excessive collagen deposition. Morphea includes specific conditions ranging from very small plaques only involving the skin to widespread disease causing functional and cosmetic deformities. Morphea is distinguished from systemic sclerosis by its supposed lack of internal organ involvement. This classification scheme does not include the mixed form of morphea in which different morphologies of skin lesions are present in the same individual. Up to 15% of morphea patients may fall into this previously unrecognized category.
Cause
Physicians and scientists do not know what causes morphea. Case reports and observational studies suggest there is a higher frequency of family history of autoimmune diseases in patients with morphea. Tests for autoantibodies associated with morphea have shown higher frequencies of anti-histone and anti-topoisomerase IIa antibodies. Case reports of morphea co-existing with other systemic autoimmune diseases such as primary biliary cirrhosis, vitiligo, and systemic lupus erythematosus lend support to morphea as an autoimmune disease.
Borrelia burgdorferi infection may be relevant for the induction of a distinct autoimmune type of scleroderma; it may be called "Borrelia-associated early onset morphea" and is characterized by the combination of disease onset at younger age, infection with B. burgdorferi, and evident autoimmune phenomena as reflected by high-titer antinuc
|
https://en.wikipedia.org/wiki/Isovector
|
In particle physics, isovector refers to the vector transformation of a particle under the SU(2) group of isospin. An isovector state is a triplet state with total isospin 1, with the third component of isospin either 1, 0, or −1, much like a triplet state in the two-particle addition of spin.
See also
Isoscalar
|
https://en.wikipedia.org/wiki/DUAL%20%28cognitive%20architecture%29
|
DUAL is a general cognitive architecture integrating the connectionist and symbolic approaches at the micro level. DUAL is based on decentralized representation and emergent computation. It was inspired by the Society of Mind idea proposed by Marvin Minsky, but departs from the initial proposal in many ways. Computations in DUAL emerge from the interaction of many micro-agents, each of which is a hybrid symbolic/connectionist device. The agents exchange messages and activation via links that can be learned and modified; they form coalitions which collectively represent concepts, episodes, and facts.
Several models have been developed on the basis of DUAL. These include: AMBR (a model of analogy-making and memory), JUDGEMAP (a model of judgment), PEAN (a model of perception), etc.
DUAL is developed by a team at the New Bulgarian University led by Boicho Kokinov. The second version was co-authored by Alexander Petrov. The third version is co-authored by Georgi Petkov and Ivan Vankov.
|
https://en.wikipedia.org/wiki/Staggered%20conformation
|
In organic chemistry, a staggered conformation is a chemical conformation of an ethane-like moiety abcX–Ydef in which the substituents a, b, and c are at the maximum distance from d, e, and f; this requires the torsion angles to be 60°. It is the opposite of an eclipsed conformation, in which those substituents are as close to each other as possible.
Such a conformation exists in any open chain single chemical bond connecting two sp3-hybridised atoms, and is normally a conformational energy minimum. For some molecules such as those of n-butane, there can be special versions of staggered conformations called gauche and anti; see first Newman projection diagram in Conformational isomerism.
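The torsion angle itself is computed from four atomic positions by the standard dihedral formula. A numpy sketch follows; the four points are constructed to give exactly 60° and are not a real ethane geometry:

import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle in degrees about the p1-p2 bond, from four 3D points."""
    b0 = p0 - p1
    b1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    b2 = p3 - p2
    v = b0 - np.dot(b0, b1) * b1   # components perpendicular to the bond axis
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

p0 = np.array([1.0, 0.0, 0.0])
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([0.0, 0.0, 1.0])
p3 = np.array([0.5, np.sqrt(3) / 2, 1.0])
print(round(dihedral(p0, p1, p2, p3)))  # -> 60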
Staggered/eclipsed configurations also distinguish different crystalline structures of e.g. cubic/hexagonal boron nitride, and diamond/lonsdaleite.
See also
Alkane stereochemistry
Eclipsed conformation
|
https://en.wikipedia.org/wiki/Bootstrap%20model
|
The term "bootstrap model" is used for a class of theories that use very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles. It is a form of S-matrix theory.
Overview
In the 1960s and '70s, the ever-growing list of strongly interacting particles — mesons and baryons — made it clear to physicists that none of these particles is elementary. Geoffrey Chew and others went so far as to question the distinction between composite and elementary particles, advocating a "nuclear democracy" in which the idea that some particles were more elementary than others was discarded. Instead, they sought to derive as much information as possible about the strong interaction from plausible assumptions about the S-matrix, which describes what happens when particles of any sort collide, an approach advocated by Werner Heisenberg two decades earlier.
The reason the program had any hope of success was because of crossing, the principle that the forces between particles are determined by particle exchange. Once the spectrum of particles is known, the force law is known, and this means that the spectrum is constrained to bound states which form through the action of these forces. The simplest way to solve the consistency condition is to postulate a few elementary particles of spin less than or equal to one, and construct the scattering perturbatively through field theory, but this method does not allow for composite particles of spin greater than 1, and without the then-undiscovered phenomenon of confinement it is naively inconsistent with the observed Regge behavior of hadrons.
Chew and followers believed that it would be possible to use crossing symmetry and Regge behavior to formulate a consistent S-matrix for infinitely many particle types. The Regge hypothesis would determine the spectrum, crossing and analyticity would determine the scattering amplitude (the forces), while unitarity would determine the self-con
|
https://en.wikipedia.org/wiki/Syncytiotrophoblast
|
Syncytiotrophoblast (from the Greek 'syn'- "together"; 'cytio'- "of cells"; 'tropho'- "nutrition"; 'blast'- "bud") is the epithelial covering of the highly vascular embryonic placental villi, which invades the wall of the uterus to establish nutrient circulation between the embryo and the mother. It is a multinucleate, terminally differentiated syncytium, extending to 13 cm.
Function
It is the outer layer of the trophoblasts and actively invades the uterine wall, during implantation, rupturing maternal capillaries and thus establishing an interface between maternal blood and embryonic extracellular fluid, facilitating passive exchange of material between the mother and the embryo.
The syncytial property is important since the mother's immune system includes white blood cells that are able to migrate into tissues by "squeezing" in between cells. If they were to reach the fetal side of the placenta, many foreign proteins would be recognized, triggering an immune reaction. However, the syncytium acts as a giant cell, so there are no gaps for immune cells to migrate through.
One way in which it accomplishes this task is by suppressing the expression of immunity-related genes HLA-A and HLA-B, which are classically known to be expressed by all nucleated cells. These genes normally express the MHC-I ligand that acts as a major binding mechanism for T-cells. By decreasing the translation of these gene products, the syncytiotrophoblast reduces the chances of an attack by the maternal immune system mediated by T-cells.
The syncytiotrophoblast secretes progesterone and leptin in addition to human chorionic gonadotropin (hCG) and human placental lactogen (HPL); hCG prevents degeneration of the corpus luteum. Progesterone serves to maintain the integrity of the uterine lining and, until the syncytiotrophoblast is mature enough to secrete enough progesterone to support pregnancy (in the fourth month of embryonic development), it is aided by the corpus luteum graviditatis.
Fo
|
https://en.wikipedia.org/wiki/ISO%2031-8
|
ISO 31-8 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to physical chemistry and molecular physics.
Quantities and units
Notes
In the tables of quantities and their units, the ISO 31-8 standard shows symbols for substances as subscripts (e.g., cB, wB, pB). It also notes that it is generally advisable to put symbols for substances and their states in parentheses on the same line, as in c(H2SO4).
Normative annexes
Annex A: Names and symbols of the chemical elements
This annex contains a list of elements by atomic number, giving the names and standard symbols of the chemical elements from atomic number 1 (hydrogen, H) to 109 (unnilennium, Une).
The list given in ISO 31-8:1992 was quoted from the 1988 IUPAC "Green Book" Quantities, Units and Symbols in Physical Chemistry, adding in some cases the Latin name in parentheses for information, where the standard symbol has no relation to the English name of the element. Since the 1992 edition of the standard was published, some elements with atomic number above 103 have been discovered and renamed.
Annex B: Symbols for chemical elements and nuclides
Symbols for chemical elements shall be written in roman (upright) type. The symbol is not followed by a full-stop.
Examples:
H He C Ca
Attached subscripts or superscripts specifying a nuclide or molecule have the following meanings and positions (a LaTeX typesetting sketch follows this list):
The nucleon number (mass number) is shown in the left superscript position (e.g., 14N)
The number of atoms of a nuclide is shown in the right subscript position (e.g., 14N2)
The proton number (atomic number) may be indicated in the left subscript position (e.g., 64Gd)
If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., state of ionization Na+)
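These four index positions can be reproduced in LaTeX source, using an empty brace group to anchor the left-hand indices (a minimal sketch; plain LaTeX, no extra packages assumed):

${}^{14}\mathrm{N}$      % nucleon (mass) number: left superscript
${}^{14}\mathrm{N}_2$    % number of atoms: right subscript
${}_{64}\mathrm{Gd}$     % proton (atomic) number: left subscript
$\mathrm{Na}^{+}$        % state of ionization: right superscript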
Annex C: pH
pH is defined operationally as follows. For a solution X, first measure the electromotive force EX of the galvanic cell
reference electrode
|
https://en.wikipedia.org/wiki/Protocol%20spoofing
|
Protocol spoofing is used in data communications to improve performance in situations where an existing protocol is inadequate, for example due to long delays or high error rates.
Spoofing techniques
In most applications of protocol spoofing, a communications device such as a modem or router simulates ("spoofs") the remote endpoint of a connection to a locally attached host, while using a more appropriate protocol to communicate with a compatible remote device that performs the equivalent spoof at the other end of the communications link.
File transfer spoofing
Error correction and file transfer protocols typically work by calculating a checksum or CRC for a block of data known as a packet, and transmitting the resulting number at the end of the packet. At the other end of the connection, the receiver re-calculates the number based on the data it received and compares that result to what was sent from the remote machine. If the two match, the packet was transmitted correctly, and the receiver sends an ACK to signal that it's ready to receive the next packet.
The time to transmit the ACK back to the sender is a function of the phone lines, as opposed to the modem's speed, and is typically about 1/10 of a second on short links and may be much longer on long-distance links or data networks like X.25. For a protocol using small packets, this delay can be larger than the time needed to send a packet. For instance, the UUCP "g" protocol and Kermit both use 64-byte packets, which on a 9600 bit/s link take about 1/15 of a second to send (at the usual 10 bits per byte of asynchronous framing, 64 bytes is 640 bits). XMODEM used a slightly larger 128-byte packet, which takes about 2/15 of a second to send.
The next packet of data cannot be sent until the ACK for the previous packet is received. In the case of XMODEM, for instance, that means it takes a minimum of about 7/30 of a second (2/15 to send plus roughly 1/10 waiting for the ACK) for the entire cycle to complete for a single packet. This means that the overall speed is only about half the theoretical maximum, roughly a 50% channel efficiency.
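The arithmetic generalizes into a small stop-and-wait efficiency calculator (a Python sketch; the 10 bits per byte of asynchronous framing and the 0.1 s ACK latency are illustrative assumptions):

def stop_and_wait_efficiency(packet_bytes, line_bps, ack_delay_s, bits_per_byte=10):
    """Fraction of the line rate achieved when every packet must be
    acknowledged before the next one can be sent."""
    send_time = packet_bytes * bits_per_byte / line_bps
    return send_time / (send_time + ack_delay_s)

# XMODEM-style 128-byte packets on a 9600 bit/s modem link:
print(stop_and_wait_efficiency(128, 9600, 0.1))  # ~0.57, i.e. roughly half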
Protocol spoofing addresses this pr
|
https://en.wikipedia.org/wiki/Carpometacarpal%20joint
|
The carpometacarpal (CMC) joints are five joints in the wrist that articulate the distal row of carpal bones and the proximal bases of the five metacarpal bones.
The CMC joint of the thumb or the first CMC joint, also known as the trapeziometacarpal (TMC) joint, differs significantly from the other four CMC joints and is therefore described separately.
Thumb
The carpometacarpal joint of the thumb (pollex), also known as the first carpometacarpal joint, or the trapeziometacarpal joint (TMC) because it connects the trapezium to the first metacarpal bone, plays an irreplaceable role in the normal functioning of the thumb. It is the most important joint connecting the wrist to the metacarpus; osteoarthritis of the TMC is a severely disabling condition, up to twenty times more common among elderly women than average.
Pronation-supination of the first metacarpal is especially important for the action of opposition. The movements of the first CMC are limited by the shape of the joint, by the capsulo-ligamentous complex surrounding the joint, and by the balance among involved muscles. If the first metacarpal fails to sit well 'on the saddle', for example because of hypoplasia, the first CMC joint tends to be subluxated (i.e. slightly displaced) towards the radius.
The capsule is sufficiently slack to allow a wide range of movements and a distraction of roughly 3 mm, while reinforcing ligaments and tendons give stability to the joint. It is slightly thicker on its dorsal side than on the other.
The first carpometacarpal joint is a frequent site of osteoarthritis in postmenopausal women.
Ligaments
The description of the number and names of the ligaments of the first CMC varies considerably in the anatomical literature. One account describes three intracapsular and two extracapsular ligaments as the most important in stabilizing the thumb:
Anterior oblique ligament (AOL) A strong, thick, and intracapsular ligament originating on the palmar tubercle of the trapezium to be inserted o
|
https://en.wikipedia.org/wiki/Hardware%20architect
|
(In the automation and engineering environments, the hardware engineer or architect encompasses the electronics engineering and electrical engineering fields, with subspecialities in analog, digital, or electromechanical systems.)
The hardware systems architect or hardware architect is responsible for:
Interfacing with a systems architect or client stakeholders. Nowadays it is extraordinarily rare for a hardware system sufficiently large or complex to require a hardware architect not to also require substantial software and a systems architect. The hardware architect will therefore normally interface with a systems architect, rather than directly with user(s), sponsor(s), or other client stakeholders. However, in the absence of a systems architect, the hardware systems architect must be prepared to interface directly with the client stakeholders in order to determine their (evolving) needs to be realized in hardware. The hardware architect may also need to interface directly with a software architect or engineer(s), or with other mechanical or electrical engineers.
Generating the highest level of hardware requirements, based on the user's needs and other constraints such as cost and schedule.
Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined.
Performing cost–benefit analyses to determine the best methods or approaches for meeting the hardware requirements; making maximum use of commercial off-the-shelf or already developed components.
Developing partitioning algorithms (and other processes) to allocate all present and foreseeable (hardware) requirements into discrete hardware partitions such that a minimum of communications is needed among partitions, and between the user and the system.
Partitioning large hardware systems into (successive layers of) subsystems and components each of which can be handled by a single hardware engineer or team of engineers.
Ensuring that maximally robust hardware architec
|
https://en.wikipedia.org/wiki/Garou%3A%20Mark%20of%20the%20Wolves
|
Garou: Mark of the Wolves is a 1999 fighting game produced by SNK, originally for the Neo Geo system and then released as Fatal Fury: Mark of the Wolves for the Dreamcast. It is the eighth (or ninth, if one counts Fatal Fury: Wild Ambition) installment of the Fatal Fury series.
Gameplay
The two-plane system in which characters would fight from two different planes was removed from the game. The game introduces the "Tactical Offense Position" (T.O.P.), which is a special area on the life gauge. When the gauge reaches this area, the character enters the T.O.P. mode, granting the player's character the ability to use a T.O.P. attack, gradual life recovery, and increased attack damage. The game also introduces the "Just Defend" system, which rewards the player who successfully blocks an attack at the last moment with a small amount of health recovery and the ability to immediately counterattack out of block stun. Just Defend was later added as a feature of the K-Groove in Capcom's Capcom vs. SNK 2. Similar to previous titles, the player is given a fighting rank after every round. If the player manages to win all rounds from the Arcade Mode with at least an "AAA" rank, they will face the boss Kain R. Heinlein, which unlocks an ending after he is defeated. If the requirements are not met, then Grant will be the final boss and there will be no special endings. Additionally, through Arcade Mode, before facing Grant, the player will face a mid-boss which can be any character from the cast depending on the character they use.
Playable characters
Plot
Ten years after crime lord Geese Howard's death, the city of Southtown has become more peaceful, leading it to be known as the Second Southtown in reference to having formerly been corrupted by Geese. A new fighting tournament called "King of Fighters: Maximum Mayhem" starts in the area, and several characters related with the fighters from the previous King of Fighters tournaments participate in it.
Development
Multiple changes to Garou were made to show a
|
https://en.wikipedia.org/wiki/Water%2C%20Water%20Every%20Hare
|
Water, Water Every Hare is a 1952 Warner Bros. Looney Tunes cartoon directed by Chuck Jones. The cartoon was released on April 19, 1952 and stars Bugs Bunny. The short is a return to the themes of the 1946 cartoon Hair-Raising Hare and brings the monster Gossamer back to the screen.
The title is a pun on the line "Water, water, everywhere / Nor any drop to drink" from the poem The Rime of the Ancient Mariner, by Samuel Taylor Coleridge. The cartoon is available on Disc 1 of the Looney Tunes Golden Collection: Volume 1.
Plot
After being flooded out of his rabbit hole while sleeping during a heavy storm, Bugs winds up in the castle of an "evil scientist", a caricature of Boris Karloff, who needs a living brain to complete the construction of his giant robot. Bugs awakens with a mummy resting on top of him, then leaps in terror around the room and flees down the hall. The annoyed scientist dispatches an orange, hairy monster he calls "Rudolph" to retrieve him, with the promise of being rewarded with a spider goulash.
Bugs keeps running until a trap door opens to reveal a water pit below with hungry crocodiles snapping their jaws. He steps backward while praying aloud and bumps into "Rudolph". Bugs quickly poses as a gabby hairdresser, giving the monster a new hairdo. He retrieves dynamite sticks from the "high explosives" room and puts them in the monster's hair to mimic curlers. He lights them and runs off just before the explosion, which leaves the monster with a bald head.
The enraged monster ties up his hair to cover the bald spot and darts after Bugs. The chase leads to a chemical storage room, where Bugs uses "vanishing fluid" to gain invisibility. As the monster looks around, invisible Bugs slams a trash can over the monster's head and whacks it with a mallet. Then Bugs pulls the rug out from beneath the monster's feet, causing him to crash on the floor. For the coup de grâce, Bugs pours "reducing oil" on the dazed monster, shrinking him as he lets out a ro
|
https://en.wikipedia.org/wiki/An%20Essay%20on%20the%20Principle%20of%20Population
|
The book An Essay on the Principle of Population was first published anonymously in 1798, but the author was soon identified as Thomas Robert Malthus. The book warned of future difficulties, on an interpretation of the population increasing in geometric progression (so as to double every 25 years) while food production increased in arithmetic progression, a widening difference that would result in the want of food and famine unless birth rates decreased.
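A tiny numerical illustration of how the two progressions diverge (a Python sketch with made-up starting values):

# Geometric population growth (doubling every 25 years) versus
# arithmetic food growth (a fixed increment per generation).
population, food = 10, 10          # arbitrary starting units
for generation in range(6):        # six 25-year generations = 150 years
    print(f"year {generation * 25:3}: population {population:4}, food {food:3}")
    population *= 2                # geometric progression
    food += 10                     # arithmetic progression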
While it was not the first book on population, Malthus's book fuelled debate about the size of the population in Britain and contributed to the passing of the Census Act 1800. This Act enabled the holding of a national census in England, Wales and Scotland, starting in 1801 and continuing every ten years to the present. The book's 6th edition (1826) was independently cited as a key influence by both Charles Darwin and Alfred Russel Wallace in developing the theory of natural selection.
A key portion of the book was dedicated to what is now known as the Malthusian Law of Population. The theory claims that growing population rates contribute to a rising supply of labour and inevitably lowers wages. In essence, Malthus feared that continued population growth lends itself to poverty.
In 1803, Malthus published, under the same title, a heavily revised second edition of his work. His final version, the 6th edition, was published in 1826. In 1830, 32 years after the first edition, Malthus published a condensed version entitled A Summary View on the Principle of Population, which included responses to criticisms of the larger work.
Overview
Between 1798 and 1826 Malthus published six editions of his famous treatise, updating each edition to incorporate new material, to address criticism, and to convey changes in his own perspectives on the subject. He wrote the original text in reaction to the optimism of his father and his father's associates (notably Rousseau) regarding the future improvement of society. Malthu
|
https://en.wikipedia.org/wiki/Life%20Sciences%20Research%20Office
|
Life Sciences Research Organization (LSRO) is a non-profit organization based in Maryland, United States, that specializes in assembling "ad hoc" expert panels to evaluate scientific literature, data, systems, and proposals in the biomedical sciences.
Overview
LSRO was founded in 1962 as an office within the Federation of American Societies for Experimental Biology (FASEB) to fulfill a US military need for independent scientific counsel. In 2000, LSRO became an independent non-profit organization. It changed its name from Life Sciences Research Office to Life Sciences Research Organization in 2010, and in that same year announced the formation of LSRO Solutions which along with LSRO provides independent, impartial scientific analysis and advice. The organization has a reputation for conducting studies on politically charged issues which are of concern to federal agencies or corporations. Some issues include the dental amalgam controversy, dietary supplement monitoring, and "reduced risk" cigarette products.
It has faced scrutiny for its private clients, particularly in relation to tobacco research.
Past and current clients
Federal government
Centers for Disease Control and Prevention (CDC)
Federal Aviation Administration (FAA)
U.S. Food and Drug Administration (FDA)
NASA
National Center for Health Services Research
National Institutes of Health (NIH)
Office of Naval Research
U.S. Army Medical Research and Materiel Command
United States Department of Agriculture (USDA)
United States Department of Health and Human Services
United States National Library of Medicine
Private sector
American Physiological Society
American Society for Nutritional Sciences
American Society for Pharmacology and Experimental Therapeutics
Amoco BioProducts Corp
Biothera
California Walnut Commission
Calorie Control Council
ChemiNutra
Dow AgroSciences
Kellogg Company
Keller and Heckman LLP
Monsanto Company
Philip Morris
Porter Novelli
Procter & Gamble
Researc
|
https://en.wikipedia.org/wiki/Purchase%20line
|
The Purchase Line is the name commonly given to the line dividing Indian from British Colonial lands established in the Treaty of Fort Stanwix of 1768 in western Pennsylvania. In New York State documents, it is referred to as the Line of Property. The article on the Treaty of Fort Stanwix contains the treaty text and other sections.
History
The relevant section of the treaty reads:
"from thence" (Kittanning) "a direct Line to the nearest Fork of the west branch of Susquehanna"
This line was not clearly defined until a meeting between Indian and Pennsylvania representatives in 1773 at the well-known "Canoe Place", the upper limit of canoe navigation on the Susquehanna at its confluence with Cush Cushion Creek at present-day Cherry Tree, Pennsylvania. This was agreed to as the "nearest point" of the treaty. This became the tri-point between present-day Clearfield, Cambria, and Indiana counties, although the borough of Cherry Tree, Pennsylvania was later included entirely in Indiana County for convenience.
The line is still a boundary through most of its length: in Armstrong County running ESE from Kittanning it separates the townships of Rayburn and Valley (north) from Manor and Kittanning (south). There is a gap through the town of Cowanshannock, but at the knick point of the boundary between Armstrong County and Indiana County the line begins again, separating the Indiana County townships of South Mahoning, East Mahoning, Grant and Montgomery (north) from Washington, Rayne and Green (south). The hamlet of Purchase Line is on PA 286 in Green township just south of the actual line.
Maps
|
https://en.wikipedia.org/wiki/Hypergamy
|
Hypergamy (colloquially referred to as "dating up" or "marrying up") is a term used in social science for the act or practice of a person dating or marrying a spouse of higher social status or sexual capital than themselves.
The antonym "hypogamy" refers to the inverse: marrying a person of lower social class or status (colloquially "marrying down"). Both terms were invented in the Indian subcontinent in the 19th century while translating classical Hindu law books, which used the Sanskrit terms anuloma and pratiloma, respectively, for the two concepts.
The term hypergyny is used to describe the overall practice of women marrying up, since the men would be marrying down.
Research
One study found that women are more selective in their choice of marriage partners than are men.
A study done by the University of Minnesota in 2017 found that females generally prefer dominant males as mates. Research conducted throughout the world strongly supports the position that women prefer marriage with partners who are culturally successful or have high potential to become culturally successful. The most extensive of these studies included 10,000 people in 37 cultures across six continents and five islands. Women rated "good financial prospect" higher than men did in all cultures. In 29 samples, the "ambition and industriousness" of a prospective mate were more important for women than for men. Meta-analysis of research published from 1965 to 1986 revealed the same sex difference (Feingold, 1992). Across studies, 3 out of 4 women rated socioeconomic status as more important in a prospective marriage partner than did the average man.
Gilles Saint-Paul (2008) argued that, based on mathematical models, human female hypergamy occurs because women have greater lost mating opportunity costs from monogamous mating (given their slower reproductive rate and limited window of fertility), and thus must be compensated for this cost of marriage. Marriage reduces the overall genetic qualit
|
https://en.wikipedia.org/wiki/Anterior%20inferior%20cerebellar%20artery
|
The anterior inferior cerebellar artery (AICA) is one of three pairs of arteries that supplies blood to the cerebellum.
It arises from the basilar artery on each side at the level of the junction between the medulla oblongata and the pons in the brainstem. It has a variable course, passing backward to be distributed to the anterior part of the undersurface of the cerebellum, anastomosing with both the posterior inferior cerebellar (PICA) branch of the vertebral artery and the superior cerebellar artery.
It also gives off the internal auditory or labyrinthine artery in most cases; however, the labyrinthine artery can less commonly emerge as a branch of the basilar artery.
The amount of tissue supplied by the AICA is variable, depending upon whether the PICA is more or less dominant, but usually includes the anteroinferior surface of the cerebellum, the flocculus, middle cerebellar peduncle and inferolateral portion of the pons.
Clinical significance
Occlusion of AICA is considered rare, but generally results in a lateral pontine syndrome, also known as AICA syndrome. The symptoms include sudden onset of vertigo and vomiting, nystagmus, dysarthria, falling to the side of the lesion (due to damage to vestibular nuclei), and a variety of ipsilateral features including hemiataxia, loss of all modalities of sensation of the face (due to damage to the principal sensory trigeminal nucleus), facial paralysis (due to damage to the facial nucleus), and hearing loss and tinnitus (due to damage to the cochlear nuclei).
Vertigo may sometimes present as an isolated symptom several weeks or months before acute ischemia and cerebral infarction occurs, probably reflecting transient ischemia of the inner ear or the vestibular nerve.
There is also loss of pain and temperature sensation from the contralateral limbs and trunk, which can lead to diagnostic confusion with lateral medullary syndrome, which also gives rise to "crossed" neurological signs but does not normally
|
https://en.wikipedia.org/wiki/Fingering%20%28music%29
|
In music, fingering, or on stringed instruments sometimes also called stopping, is the choice of which fingers and hand positions to use when playing certain musical instruments. Fingering typically changes throughout a piece; the challenge of choosing good fingering for a piece is to make the hand movements as comfortable as possible without changing hand position too often. A fingering can be the result of the working process of the composer, who puts it into the manuscript, an editor, who adds it into the printed score, or the performer, who puts his or her own fingering in the score or in performance.
A substitute fingering is an alternative to the indicated fingering, not to be confused with finger substitution. Depending on the instrument, not all the fingers may be used. For example, saxophonists do not use the right thumb and string instruments (usually) only use the fingers and not the thumbs.
Instruments
Brass instruments
Fingering applies to the rotary and piston valves employed on many brass instruments.
The trombone, a fully chromatic brass instrument without valves, employs equivalent numbered notation for slide positions rather than fingering.
Keyboard instruments
In notation for keyboard instruments, numbers are used to relate to the fingers themselves, not the hand position on the keyboard. In modern scores, the fingers are numbered from 1 to 5 on each hand: the thumb is 1, the index finger is 2, the middle finger is 3, the ring finger is 4 and the little finger is 5.
Earlier usage varied by region. In Britain in the 19th century, the thumb was shown by a cross (+) or number 0 and the fingers were numbered from 1 to 4. This was known as "English fingering" while the other way (from 1 to 5) was known as "Continental fingering." However, from the beginning of the 20th century the British adopted the Continental (1 to 5) fingering, which remains in use everywhere.
Piano
After Cristofori invented the pianoforte from the harpsichord in 1700, a
|
https://en.wikipedia.org/wiki/Palamedes%20%28video%20game%29
|
Palamedes is a puzzle video game released by Taito in 1990.
Gameplay
Palamedes is a puzzle game requiring the players to match the dice they are holding to the dice at the top of the screen. Using the "B" button, the player can change the number on their dice, then throw it using the "A" button when it matches the dice at the top of the screen, which wipes the target dice off the board. By matching dice in some combinations, like doing it with the same number several times in a row, or by doing a 1-to-6 sequence, the player is awarded a special move where they can eliminate three to five lines of dice on the game field. At regular time intervals (which get smaller as the game progresses) new dice lines are added, and when a die touches the bottom of the screen, the game ends.
The player can play in "solitaire" mode against the computer or another player, or "tournament" mode against AI opponents. Each die has six faces, making it a challenge to match and eliminate all the numbers on the screen.
Ports
Ports of the game were published for the NES, MSX, FM Towns and Game Boy by HOT-B. The Japan-only sequel, Palamedes 2: Star Twinkles, was released in 1991 for the NES by HOT-B. It featured most of the same basic gameplay elements as the original but with the play field scrolling in the opposite direction.
Reception
In Japan, Game Machine listed Palamedes on their December 15, 1990 issue as being the sixteenth most-successful table arcade unit of the month.
David Wilson of Your Sinclair magazine reviewed the arcade game, giving it an 80% score. Zero magazine rated it three out of five.
Famitsu magazine reviewed the Game Boy version, scoring the game a 22 out of 40.
|
https://en.wikipedia.org/wiki/Natural%20burial
|
Natural burial is the interment of the body of a dead person in the soil in a manner that does not inhibit decomposition but allows the body to be naturally recycled. It is an alternative to typical contemporary Western burial methods and modern funerary customs.
The body may be prepared without chemical preservatives or disinfectants such as embalming fluid, which are designed to inhibit the microbial decomposers that break the body down. It may be buried in a biodegradable coffin, casket, or shroud. The grave does not use a burial vault or outer burial container that would prevent the body's contact with soil. The grave should be shallow enough to allow microbial activity similar to that found in composting.
Natural burial grounds have been used throughout human history and are used in many countries.
History
Although natural burials present themselves as a relatively modern concept in Western societies, they have been practiced for many years in different cultures out of "religious obligation, necessity, or tradition". For example, many Muslims perform natural burial out of a duty to their religion. Others, like those in African countries, bury naturally because they cannot afford the cost of embalming. In China, the Cultural Revolution saw the popularity of burial rise over cremation. Truly natural burials also include the burial of bodies within tree roots in the Amazon rainforest in Peru, and burying the deceased in the Tanzanian bush. According to Nature, the earliest known human burial dates back to the Middle Stone Age (about 74–82 thousand years ago) of a toddler in what is now Kenya.
Natural burial has been practiced for thousands of years, but has been interrupted in modern times by new methods such as vaults, liners, embalming, and mausoleums that mitigate the decomposition process. In the late 19th century Sir Francis Seymour Hayden proposed "earth to earth burial" in a pamphlet of the same name, as an alternative to both cremation and the slow
|
https://en.wikipedia.org/wiki/Cereal%20coffee
|
A cereal coffee (also known as grain coffee, roasted grain drink or roasted grain beverage) is a hot drink made from one or more cereal grains roasted and commercially processed into crystal or powder form to be reconstituted later in hot water. The product is often marketed as a caffeine-free alternative to coffee and tea, or in other cases where those drinks are scarce or expensive.
Several well-known cereal coffee brands are Nestlé Caro, Postum, and Inka. Other brands can be found at health food stores and at some grocery stores. Some common ingredients include toasted barley, malted barley, rye, chicory, molasses, and beet root.
Use
Asia
Cereal coffee is popular in East Asian cuisines—Korea, Japan, and China each having one or more versions (usually roasted grains simply steeped in hot water).
Barley tea (bori-cha, dàmài-chá, mugi-cha)
Rice tea
Brown rice tea (hyeonmi-cha, nước gạo lứt)
Sungnyung
Corn tea (oksusu-cha)
Job's tears tea (yulmu-cha)
Grain-like seeds and pseudocereals are used to make similar drinks.
Buckwheat tea (memil-cha, soba-cha)
Sicklepod tea (gyeolmyeongja-cha)
Grain teas can also be blended with green tea or other tea drinks.
Brown rice green tea (hyeonmi-nokcha)
Genmaicha
Europe
Some notable Polish brands which specialize in cereal coffee are Inka, Krakus and Anatol.
In the Czech Republic, Kávoviny's Melta brand of grain coffee has been roasted since 1896.
Such roasted grain mixes are also used as a base to make podpiwek, a type of non-alcoholic beverage.
See also
Coffee substitute
Grain milk
List of barley-based drinks
Mash ingredients
|
https://en.wikipedia.org/wiki/Yips
|
In sports, the yips are a sudden and unexplained loss of ability to execute certain skills in experienced athletes. Symptoms of the yips are losing fine motor skills and psychological issues that impact the muscle memory and decision-making of athletes, leaving them unable to perform the basic skills of their sport.
Common treatments include clinical sport psychology therapy as well as refocusing attention on the underlying biomechanics of their physical actions. The impact varies widely. A yips event may last a short time before the athlete regains their composure or it can require longer term adjustments to technique before recovery occurs. The worst cases are those where the athlete does not recover at all, forcing the player to abandon the sport at the highest level.
In golf
In golf, the yips is a movement disorder known to interfere with putting. The term yips is said to have been popularized by Tommy Armour—a golf champion and later golf teacher—to explain the difficulties that led him to abandon tournament play. In describing the yips, golfers have used terms such as twitches, staggers, jitters and jerks. The yips affects between a quarter and a half of all mature golfers. Researchers at the Mayo Clinic found that 33% to 48% of all serious golfers have experienced the yips. Golfers who have played for more than 25 years appear most prone to the condition.
Although the exact cause of the yips has yet to be determined, one possibility is biochemical changes in the brain that accompany aging. Excessive use of the involved muscles and intense demands of coordination and concentration may exacerbate the problem. Giving up golf for a month sometimes helps. Focal dystonia has been mentioned as another possibility for the cause of yips.
Professional golfers seriously afflicted by the yips include Ernie Els, David Duval, Pádraig Harrington, Bernhard Langer, Ben Hogan, Harry Vardon, Sam Snead, Ian Baker-Finch and Keegan Bradley, who missed a six-inch putt in the fi
|
https://en.wikipedia.org/wiki/Particle%20physics%20and%20representation%20theory
|
There is a natural connection between particle physics and representation theory, as first noted in the 1930s by Eugene Wigner. It links the properties of elementary particles to the structure of Lie groups and Lie algebras. According to this connection, the different quantum states of an elementary particle give rise to an irreducible representation of the Poincaré group. Moreover, the properties of the various particles, including their spectra, can be related to representations of Lie algebras, corresponding to "approximate symmetries" of the universe.
General picture
Symmetries of a quantum system
In quantum mechanics, any particular one-particle state is represented as a vector in a Hilbert space $V$. To help understand what types of particles can exist, it is important to classify the possibilities for $V$ allowed by symmetries, and their properties. Let $V$ be a Hilbert space describing a particular quantum system and let $G$ be a group of symmetries of the quantum system. In a relativistic quantum system, for example, $G$ might be the Poincaré group, while for the hydrogen atom, $G$ might be the rotation group SO(3). The particle state is more precisely characterized by the associated projective Hilbert space $PV$, also called ray space, since two vectors that differ by a nonzero scalar factor correspond to the same physical quantum state represented by a ray in Hilbert space, which is an equivalence class in $V$ and, under the natural projection map $V \to PV$, an element of $PV$.
By definition of a symmetry of a quantum system, there is a group action on $PV$. For each $g \in G$, there is a corresponding transformation $\Pi(g)$ of $PV$. More specifically, if $g$ is some symmetry of the system (say, rotation about the x-axis by 12°), then the corresponding transformation $\Pi(g)$ of $PV$ is a map on ray space. For example, when rotating a stationary (zero momentum) spin-5 particle about its center, $g$ is a rotation in 3D space (an element of $\mathrm{SO}(3)$), while $\Pi(g)$ is an operator whose domain and range are each the space of possible quantum state
|
https://en.wikipedia.org/wiki/Sourdough%20Sam
|
Sourdough Sam is a mascot for the NFL's San Francisco 49ers.
History
Before the introduction of Sourdough Sam, the 49ers' first mascot was a mule named Clementine that wore a red saddle blanket and appeared in the 1950s and 1960s.
A gold rush prospector–themed character first appeared in the 1970s. The character's design reflected the cover art of programs created by William Kay between 1946 and 1949—when the 49ers were a part of the All-America Football Conference—which depicted a bushy-mustached prospector with two pistols.
Sourdough Sam's persona later underwent a slight change from prospector to miner. As a miner, he was depicted as a large man with an oversized football helmet and a plaid shirt matching the one worn in the original William Kay cover art. Several elements of this version of Sourdough Sam, such as a bushy beard and suspenders, remained part of his image in later iterations. In 1985, this version of Sourdough Sam appeared in a cookbook titled 49er Fixens.
Another design change switched his helmet for a wide-brimmed ten-gallon hat with a chunk taken out of its brim and gave him a longer brown beard and larger, brown eyes.
Just prior to the 2006 NFL season, Sam's appearance was altered somewhat: He appeared as a clean-shaven gold panner with blue eyes and a hat without any imperfections.
Sourdough Sam returned for the 2011 season with a beard and blue eyes.
Outfit
Sourdough Sam typically wears a cardinal football jersey, despite the 49ers' current selection of team color being 49ers Red, and his jersey number is 49. He wears a white long-sleeved shirt underneath the jersey and sports light brown gloves as well as a gold handkerchief around his neck. He also wears a large, dark brown cowboy hat emblazoned with the logo for the 49ers and dark brown boots. His jeans are held up by suspenders, and in 2014 he was outfitted with a new pair of Levi's jeans after 60 years of wearing a non-branded pair, with promotional images of his entering Levi's Taylor Shop a
|
https://en.wikipedia.org/wiki/Nuclear%20data
|
Nuclear data represents measured (or evaluated) probabilities of various physical interactions involving the nuclei of atoms. It is used to understand the nature of such interactions by providing the fundamental input to many models and simulations, such as fission and fusion reactor calculations, shielding and radiation protection calculations, criticality safety, nuclear weapons, nuclear physics research, medical radiotherapy, radioisotope therapy and diagnostics, particle accelerator design and operations, geological and environmental work, radioactive waste disposal calculations, and space travel calculations.
It groups all experimental data relevant for nuclear physics and nuclear applications. It includes a large number of physical quantities, like scattering and reaction cross sections (which are generally functions of energy and angle), nuclear structure and nuclear decay parameters, etc. It can involve neutrons, protons, deuterons, alpha particles, and virtually all nuclear isotopes which can be handled in a laboratory.
There are two major reasons to need high-quality nuclear data: theoretical model development of nuclear physics, and applications involving radiation and nuclear power. There is often an interplay between these two aspects, since applications often motivate research in particular theoretical fields, and theory can be used to predict quantities or phenomena which can lead to new or improved technological concepts.
Nuclear Data Evaluations
To ensure a level of quality required to protect the public, experimental nuclear data results are occasionally evaluated by a Nuclear Data Organization to form a nuclear data library. These organizations review multiple measurements and agree upon the highest-quality measurements before publishing the libraries. For unmeasured or very complex data regimes, the parameters of nuclear models are adjusted until the resulting data matches well with critical experiments. The result of an evaluation is almost u
|
https://en.wikipedia.org/wiki/Inverse%20polymerase%20chain%20reaction
|
Inverse polymerase chain reaction (Inverse PCR) is a variant of the polymerase chain reaction that is used to amplify DNA with only one known sequence. One limitation of conventional PCR is that it requires primers complementary to both termini of the target DNA, but this method allows PCR to be carried out even if only one sequence is available from which primers may be designed.
Inverse PCR is especially useful for the determination of insert locations. For example, various retroviruses and transposons randomly integrate into genomic DNA. To identify the sites where they have entered, the known, "internal" viral or transposon sequences can be used to design primers that will amplify a small portion of the flanking, "external" genomic DNA. The amplified product can then be sequenced and compared with DNA databases to locate the sequence which has been disrupted.
The inverse PCR method involves a series of restriction digests and ligation, resulting in a looped fragment that can be primed for PCR from a single section of known sequence. Then, like other polymerase chain reaction processes, the DNA is amplified by the thermostable DNA polymerase:
A target region with an internal section of known sequence and unknown flanking regions is identified
Genomic DNA is digested into fragments of a few kilobases, usually by a restriction enzyme that cuts at low to moderate frequency (one with a 6–8 base recognition site).
Under low DNA concentrations or quick ligation conditions, self-ligation is induced to give a circular DNA product.
PCR is carried out as usual with the circular template, with primers complementary to sections of the known internal sequence pointing outwards.
Finally, the PCR product is sequenced and compared against sequence databases.
The technique is also used for chromosome crawling.
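As a rough illustration of the template logic in the steps above, the Python sketch below simulates the digestion and self-ligation steps on a toy sequence; the enzyme choice (EcoRI, a 6-base cutter), the 20-base primer length, and the simplification of ignoring cut offsets within the recognition site are all illustrative assumptions, not part of any standard protocol code.

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

ECORI = "GAATTC"  # EcoRI recognition site; any 6-8 base cutter would do

def inverse_pcr_setup(genomic, known):
    """Cut at EcoRI sites, keep the fragment carrying the known internal
    sequence, treat it as self-ligated into a circle, and design
    outward-facing primers from within the known block."""
    fragments = genomic.split(ECORI)                  # crude in-silico digest
    circle = next(f for f in fragments if known in f)  # the self-ligated circle
    # Each primer anneals inside the known sequence and extends outward,
    # around the circle and through the unknown flanking DNA.
    fwd = known[-20:]            # reads off the 3' end of the known block
    rev = revcomp(known[:20])    # reads off the 5' end, on the opposite strand
    return circle, fwd, rev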
|
https://en.wikipedia.org/wiki/Adaptive%20simulated%20annealing
|
Adaptive simulated annealing (ASA) is a variant of the simulated annealing (SA) algorithm in which the algorithm parameters that control the temperature schedule and random step selection are automatically adjusted according to algorithm progress. This makes the algorithm more efficient and less sensitive to user-defined parameters than canonical SA. In the standard variant these parameters are often selected on the basis of experience and experimentation (since optimal values are problem-dependent), which represents a significant deficiency in practice.
The algorithm works by representing the parameters of the function to be optimized as continuous numbers, and as dimensions of a hypercube (N dimensional space). Some SA algorithms apply Gaussian moves to the state, while others have distributions permitting faster temperature schedules. Imagine the state as a point in a box and the moves as a rugby-ball shaped cloud around it. The temperature and the step size are adjusted so that all of the search space is sampled to a coarse resolution in the early stages, whilst the state is directed to favorable areas in the late stages. Another ASA variant, thermodynamic simulated annealing, automatically adjusts the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics.
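The control flow can be sketched as follows, in Python. This is a minimal illustration under stated assumptions: the function and parameter names are ours, a single control constant c drives the schedule, moves are clipped to the box, and the reannealing step of the reference ASA implementation (which rescales temperatures using measured sensitivities) is omitted.

import math
import random

def adaptive_simulated_annealing(cost, lower, upper, n_iter=20000, c=1.0, seed=0):
    """Minimal ASA sketch: per-dimension generating temperatures with the
    exponentially fast schedule T(k) = exp(-c * k**(1/D))."""
    rng = random.Random(seed)
    dim = len(lower)
    x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
    fx = cost(x)
    best, fbest = x[:], fx
    T = [1.0] * dim      # generating temperature for each dimension
    T_accept = 1.0       # acceptance temperature, annealed separately
    for k in range(1, n_iter + 1):
        # ASA generating distribution: heavy-tailed, so occasional long
        # jumps survive even at low temperature.
        y = []
        for i in range(dim):
            u = rng.random()
            sgn = 1.0 if u > 0.5 else -1.0
            step = sgn * T[i] * ((1.0 + 1.0 / T[i]) ** abs(2.0 * u - 1.0) - 1.0)
            yi = x[i] + step * (upper[i] - lower[i])
            y.append(min(max(yi, lower[i]), upper[i]))  # clip to the box
        fy = cost(y)
        # Metropolis acceptance at the separately annealed temperature.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T_accept):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
        # Exponentially fast annealing of all temperatures.
        t = math.exp(-c * k ** (1.0 / dim))
        T = [t] * dim
        T_accept = t
    return best, fbest

For example, minimizing a two-dimensional quadratic bowl with adaptive_simulated_annealing(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [-5.0, -5.0], [5.0, 5.0]) returns a point near (1, -2).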
See also
Simulated annealing
Combinatorial optimization
Optimization
|
https://en.wikipedia.org/wiki/ITSEC
|
The Information Technology Security Evaluation Criteria (ITSEC) is a structured set of criteria for evaluating computer security within products and systems. The ITSEC was first published in May 1990 in France, Germany, the Netherlands, and the United Kingdom based on existing work in their respective countries. Following extensive international review, Version 1.2 was subsequently published in June 1991 by the Commission of the European Communities for operational use within evaluation and certification schemes.
Since the launch of the ITSEC in 1990, a number of other European countries have agreed to recognize the validity of ITSEC evaluations.
The ITSEC has been largely replaced by Common Criteria, which provides similarly-defined evaluation levels and implements the target of evaluation concept and the Security Target document.
Concepts
The product or system being evaluated, called the target of evaluation, is subjected to a detailed examination of its security features culminating in comprehensive and informed functional and penetration testing. The degree of examination depends upon the level of confidence desired in the target. To provide different levels of confidence, the ITSEC defines evaluation levels, denoted E0 through E6. Higher evaluation levels involve more extensive examination and testing of the target.
Unlike earlier criteria, notably the TCSEC developed by the US defense establishment, the ITSEC did not require evaluated targets to contain specific technical features in order to achieve a particular assurance level. For example, an ITSEC target might provide authentication or integrity features without providing confidentiality or availability. A given target's security features were documented in a Security Target document, whose contents had to be evaluated and approved before the target itself was evaluated. Each ITSEC evaluation was based exclusively on verifying the security features identified in the Security Target.
Use
The formal Z
|
https://en.wikipedia.org/wiki/Adenomatous%20polyposis%20coli
|
Adenomatous polyposis coli (APC) also known as deleted in polyposis 2.5 (DP2.5) is a protein that in humans is encoded by the APC gene. The APC protein is a negative regulator that controls beta-catenin concentrations and interacts with E-cadherin, which are involved in cell adhesion. Mutations in the APC gene may result in colorectal cancer and desmoid tumors.
APC is classified as a tumor suppressor gene. Tumor suppressor genes prevent the uncontrolled growth of cells that may result in cancerous tumors. The protein made by the APC gene plays a critical role in several cellular processes that determine whether a cell may develop into a tumor. The APC protein helps control how often a cell divides, how it attaches to other cells within a tissue, how the cell polarizes, how 3D structures form (morphogenesis), and whether a cell moves within or away from a tissue. This protein also helps ensure that the chromosome number in cells produced through cell division is correct. The APC protein accomplishes these tasks mainly through association with other proteins, especially those that are involved in cell attachment and signaling. The activity of one protein in particular, beta-catenin, is controlled by the APC protein (see: Wnt signaling pathway). Regulation of beta-catenin prevents genes that stimulate cell division from being turned on too often and prevents cell overgrowth.
The human APC gene is located on the long (q) arm of chromosome 5 in band q22.2 (5q22.2). The APC gene has been shown to contain an internal ribosome entry site. APC orthologs have also been identified in all mammals for which complete genome data are available.
Structure
The full-length human protein comprises 2,843 amino acids with a (predicted) molecular mass of 311646 Da. Several N-terminal domains have been structurally elucidated in unique atomistic high-resolution complex structures. Most of the protein is predicted to be intrinsically disordered. It is not known if this large predicte
|
https://en.wikipedia.org/wiki/Nested%20polymerase%20chain%20reaction
|
Nested polymerase chain reaction (nested PCR) is a modification of polymerase chain reaction intended to reduce non-specific binding in products due to the amplification of unexpected primer binding sites.
Polymerase chain reaction
Polymerase chain reaction itself is the process used to amplify DNA samples, via a temperature-mediated DNA polymerase. The products can be used for sequencing or analysis, and this process is a key part of many genetics research laboratories, along with uses in DNA fingerprinting for forensics and other human genetic cases. Conventional PCR requires primers complementary to the termini of the target DNA. The amount of product from the PCR increases with the number of temperature cycles that the reaction is subjected to. A commonly occurring problem is primers binding to incorrect regions of the DNA, giving unexpected products. This problem becomes more likely with an increased number of cycles of PCR.
Primers
Nested polymerase chain reaction involves two sets of primers, used in two successive runs of polymerase chain reaction, the second set intended to amplify a secondary target within the first run product. This allows amplification for a low number of runs in the first round, limiting non-specific products. The second nested primer set should only amplify the intended product from the first round of amplification and not non-specific product. This allows running more total cycles while minimizing non-specific products. This is useful for rare templates or PCR with high background.
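In silico, the two-round logic can be mimicked with simple string searches (Python; the sequences and primer-pair structure are invented for the illustration, and the reverse primer is given already mapped onto the template's top strand to keep the sketch simple):

def amplicon(template, fwd, rev):
    """Return the substring "amplified" by a primer pair, or None if
    either site is missing."""
    i = template.find(fwd)
    j = template.find(rev, i + len(fwd)) if i != -1 else -1
    if i == -1 or j == -1:
        return None
    return template[i:j + len(rev)]

def nested_pcr(template, outer, inner):
    first = amplicon(template, *outer)                   # round 1: outer pair
    return amplicon(first, *inner) if first else None    # round 2: nested pair

Only a first-round product that also contains both inner primer sites yields a second-round product, which is why nonspecific first-round amplicons are filtered out.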
Processes
The target DNA undergoes the first run of polymerase chain reaction with the first set of primers, shown in green. The selection of alternative and similar primer binding sites gives a selection of products, only one containing the intended sequence.
The product from the first reaction undergoes a second run with the second set of primers, shown in red. It is very unlikely that any of the unwanted PCR products contain binding sites for both
|
https://en.wikipedia.org/wiki/Recombination%20hotspot
|
Recombination hotspots are regions in a genome that exhibit elevated rates of recombination relative to a neutral expectation. The recombination rate within hotspots can be hundreds of times that of the surrounding region. Recombination hotspots result from higher DNA break formation in these regions, and apply to both mitotic and meiotic cells. This appellation can refer to recombination events resulting from the uneven distribution of programmed meiotic double-strand breaks.
Meiotic recombination
Meiotic recombination through crossing over is thought to be a mechanism by which a cell promotes correct segregation of homologous chromosomes and the repair of DNA damages. Crossing over requires a DNA double-stranded break followed by strand invasion of the homolog and subsequent repair. Initiation sites for recombination are usually identified by mapping crossing over events through pedigree analysis or through analysis of linkage disequilibrium. Linkage disequilibrium has identified more than 30,000 hotspots within the human genome. In humans, the average number of crossover recombination events per hotspot is one crossover per 1,300 meioses, and the most extreme hotspot has a crossover frequency of one per 110 meioses.
Genomic rearrangements
Recombination can also occur due to errors in DNA replication that lead to genomic rearrangements. These events are often associated with pathology. However, genomic rearrangement is also thought to be a driving force in evolutionary development as it gives rise to novel gene combinations.
Recombination hotspots may arise from the interaction of the following selective forces: the benefit of driving genetic diversity through genomic rearrangement coupled with selection acting to maintain favorable gene combinations.
Initiation sites
DNA contains "fragile sites" within the sequence that are more prone to recombination. These fragile sites are associated with the following trinucleotide repeats: CGG-CCG, GAG-CTG, GAA-TTC, an
|
https://en.wikipedia.org/wiki/Graft-chimaera
|
In horticulture, a graft-chimaera may arise in grafting at the point of contact between rootstock and scion and will have properties intermediate between those of its "parents". A graft-chimaera is not a true hybrid but a mixture of cells, each with the genotype of one of its "parents": it is a chimaera. Hence, the once widely used term "graft-hybrid" is not descriptive; it is now frowned upon.
Propagation is by cloning only. In practice graft-chimaeras are not noted for their stability and may easily revert to one of the "parents".
Nomenclature
Article 21 of the ICNCP stipulates that a graft-chimaera can be indicated either by
a formula: the names of both "parents", in alphabetical order, joined by the plus sign "+":
Crataegus + Mespilus
a name:
if the "parents" belong to different genera a name may be formed by joining part of one generic name to the whole of the other generic name. This name must not be identical to a generic name published under the International Code of Nomenclature for algae, fungi, and plants (ICN). For example + Crataegomespilus is the name for the graft-chimaera which may also be indicated by the formula Crataegus + Mespilus. This name is clearly different from ×Crataemespilus, the name under the ICN for the true hybrid between Crataegus and Mespilus, which can also be designated by the formula Crataegus × Mespilus.
if both "parents" belong to the same genus the graft-chimaera may be given a cultivar name. For example Syringa 'Correlata' is a graft-chimaera involving Syringa vulgaris (common lilac) and Syringa × chinensis (Rouen lilac, which is itself a hybrid between S. vulgaris and S. persica). No plus sign is used, because both "parents" belong to the genus Syringa.
A graft-chimaera cannot have a species name, because it is simultaneously two species. Although +Laburnocytisus 'Adamii', for example, is sometimes seen written as if it were a species (+Laburnocytisus adamii), this is incorrect.
In Darwin's works
Charles Darwin "
|
https://en.wikipedia.org/wiki/Ciena
|
Ciena Corporation is an American telecommunications networking equipment and software services supplier based in Hanover, Maryland. The company has been described by The Baltimore Sun as the "world's biggest player in optical connectivity". The company reported revenues of $3.63 billion for 2022. Ciena had over 8,000 employees, as of October 2022. Gary Smith serves as president and chief executive officer (CEO).
Customers include AT&T, Deutsche Telekom, KT Corporation and Verizon Communications.
History
Early history and initial public offering
Ciena was founded in 1992 under the name HydraLite by electrical engineer David R. Huber. Huber served as chief executive officer, while Optelecom, a company building optical networking products, provided "management assistance and production facilities," and co-founder Kevin Kimberlin "provided initial equity capital during the formation of the Company". Dave Huber engaged William K. Woodruff & Co. to raise $3.0 million in venture funding in September 1993. Woodruff presented the idea to Jon Bayless at Sevin Rosen in November 1993, which resulted in Sevin Rosen investing $3.0 million on April 10, 1994. William K. Woodruff & Co. was a co-manager of Ciena's IPO in February 1997. The company subsequently received funding from Sevin Rosen Funds as a result of a demonstration at its laboratory attended by Jon Bayless, a partner at the firm, who saw the value in applying HydraLite's fiber-optic technology to cable television. Sevin Rosen offered funding immediately, investing $1.25 million in April 1994.
Ciena received $40 million in venture capital financing, including $3.3 million from Sevin Rosen Funds. Other early investors in the company included Charles River Ventures, Japan Associated Finance Co., Star Venture, and Vanguard Venture Partners. Bayless also recruited physicist Patrick Nettles, a former colleague at the telecommunications company Optilink, to serve as Ciena's first CEO, and Lawrence P. Huang, another former
|
https://en.wikipedia.org/wiki/IMelody
|
iMelody is a non-polyphonic ringtone exchange object format for mobile phones (file extension: .imy), defined by Ericsson and Sony Ericsson together with other manufacturers, and based on Ericsson's proprietary eMelody format. The format also supports codes that control the vibration motor, backlight, LED lights and volume of the device. iMelody was created because eMelody had musical limitations.
Transferring melody to mobile phones
In order to transfer these ringtones to a mobile phone, one can simply send an SMS message with the iMelody/eMelody text as the text of the message, or make a plain text file containing the iMelody/eMelody text, using the extension of either .imy for iMelody or .emy for eMelody, and transfer the file to the mobile phone by Bluetooth, IrDA (infrared), or by a data cable. The file could also be attached to an MMS message or an e-mail message.
iMelody
MIME: "text/x-iMelody" or "audio/iMelody"
Extension: ".imy"
Here is an example of an advanced ringtone in the iMelody format, this is a silent ringtone that only makes the phone's vibrating motor vibrate constantly:
BEGIN:IMELODY
VERSION:1.2
FORMAT:CLASS1.0
BEAT:25
MELODY:vibeonr2vibeonr2vibeon (r2vibeonvibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2vibeonr2ledonr2vibeonr2vibeon
END:IMELODY
Another example of an advanced ringtone, which makes the vibrating motor vibrate constantly, the backlight of the phone's display blink and plays a simple tone:
BEGIN:IMELODY
VERSION:1.2
FORMAT:CLASS1.0
BEAT:900
STYLE:S1
MELODY:(vibeonbackoff*6c5ledon*6d5ledoff*6e5ledon*6f5ledoff*6g5ledon*6a5ledoff*6b5ledon*6a5ledoff*6g5*6f5*6e5*6d5*6c5backon*6d5*6e5*6f5*6g5*6a5*6b5*6a5*6g5*6f5*6e5*6d5@0)
END:IMELODY
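For illustration, here is a small Python helper that emits files of this shape; the fixed header fields mirror the examples above, while the CRLF line endings and the omission of optional headers such as NAME and COMPOSER are assumptions rather than requirements checked against the full specification.

def write_imelody(path, melody, beat=120):
    """Write a minimal iMelody (.imy) file around the given MELODY string."""
    lines = [
        "BEGIN:IMELODY",
        "VERSION:1.2",
        "FORMAT:CLASS1.0",
        f"BEAT:{beat}",
        f"MELODY:{melody}",
        "END:IMELODY",
    ]
    # newline="\r\n" turns each "\n" into CRLF on write.
    with open(path, "w", newline="\r\n") as f:
        f.write("\n".join(lines) + "\n")

# A short rising run in octave 5, with the vibration motor switched on first.
write_imelody("example.imy", "vibeon*5c3d3e3f3g3")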
eMelody
MIME: "text/x-eMelody" or "audio/e-melody"
Extension: ".emy"
Here is an example of a ringtone in eMelody format:
BEGIN:EMELODY
VERSION:1.0
NAME:Test melody 1
COMPOSER:
|
https://en.wikipedia.org/wiki/Motif%20%28visual%20arts%29
|
In art and iconography, a motif () is an element of an image. The term can be used both of figurative and narrative art, and ornament and geometrical art. A motif may be repeated in a pattern or design, often many times, or may just occur once in a work.
A motif may be an element in the iconography of a particular subject or type of subject that is seen in other works, or may form the main subject, as the Master of Animals motif in ancient art typically does. The related motif of confronted animals is often seen alone, but may also be repeated, for example in Byzantine silk and other ancient textiles. Where the main subject of an artistic work such as a painting is a specific person, group, or moment in a narrative, that should be referred to as the "subject" of the work, not a motif, though the same thing may be a "motif" when part of another subject, or part of a work of decorative art such as a painting on a vase.
Ornamental or decorative art can usually be analysed into a number of different elements, which can be called motifs. These may often, as in textile art, be repeated many times in a pattern. Important examples in Western art include acanthus, egg and dart, and various types of scrollwork.
Some examples
Geometric, typically repeated: Meander, palmette, rosette, gul in Oriental rugs, acanthus, egg and dart, Bead and reel, Pakudos, Swastika, Adinkra symbols.
Figurative: Master of Animals, confronted animals, velificatio, Death and the Maiden, Three hares, Sheela na gig, puer mingens. In the Nativity of Jesus in art, the detail of showing Saint Joseph as asleep, which was common in medieval depictions, can be regarded as a "motif".
Many designs in Islamic culture are motifs, including those of the sun, moon, animals such as horses and lions, flowers, and landscapes. Motifs can have emotional effects and be used for propaganda. In kilim flatwoven carpets, motifs such as the hands-on-hips elibelinde are woven in to the design to express the hopes a
|
https://en.wikipedia.org/wiki/Medical%20students%27%20disease
|
Medical students' disease (also known as second year syndrome or intern's syndrome) is a condition frequently reported in medical students, who perceive themselves to be experiencing the symptoms of a disease that they are studying.
The condition is associated with the fear of contracting the disease in question. Some authors have suggested that the condition should be referred to as nosophobia rather than "hypochondriasis", because the quoted studies show a very low percentage of hypochondriacal character in the condition, and the term "hypochondriasis" would carry ominous therapeutic and prognostic implications. The reference suggests that the condition is associated with immediate preoccupation with the symptoms in question, leading the student to become unduly aware of various casual psychological and physiological dysfunctions; cases show little correlation with the severity of psychopathology, but rather with accidental factors related to learning and experience.
Overview
Baars (2001) writes that medical students who study "frightening diseases" for the first time routinely experience vivid delusions of having contracted such diseases, and describes it as a "temporary kind of hypochondria". Baars says that the experience is so common that it has become known as "medical student syndrome".
Hodges (2004), reviewing the literature, said that "the first descriptions of medical students' disease appeared in the 1960s." He may have been referring to the phrase, for the phenomenon itself was noted much earlier. George Lincoln Walton (1908) reported that
Medical instructors are continually consulted by students who fear that they have the diseases they are studying. The knowledge that pneumonia produces pain in a certain spot leads to a concentration of attention upon that region which causes any sensation there to give alarm. The mere knowledge of the location of the appendix transforms the most harmless sensations in that region into symptoms of serious menace.
Ho
|
https://en.wikipedia.org/wiki/Autodesk%20Vault
|
Autodesk Vault is a data management tool integrated with Autodesk Inventor Series, Autodesk Inventor Professional, AutoCAD Mechanical, AutoCAD Electrical, Autodesk Revit and Civil 3D products. It helps design teams track work in progress and maintain version control in multi-user environments. It allows them to organize and reuse designs by consolidating product information and reducing the need to re-create designs from scratch. Users can store and search both CAD data (such as Autodesk Inventor, DWG, and DWF files) and non-CAD documents (such as Microsoft Word and Microsoft Excel files).
Overview
The Vault environment functions as a client-server application with the central SQL database and Autodesk Data Management Server (ADMS) applications installed on a Windows-based server, with client access granted via various clients such as the Thick Client (Vault Explorer) and Application Integrations. ADMS acts as the middleware that handles client transactions with the SQL database. Vault Explorer functions as the client application and is intended to run alongside the companion CAD software. The Vault Explorer UI (User Interface) is intended to have an appearance similar to Microsoft Outlook and can display the Vault folder structure, file metadata in the form of a grid, and a preview pane for more detailed information.
Autodesk Vault is a file versioning system that "records" the progression of all edits a file has undergone. All files and their associated metadata are indexed in the SQL-based data management system and are searchable from the Vault client interface. Other information about the files includes version history, uses (composed of a list of children), and "Where Used" (a list of all parents), as well as a lightweight viewable in the form of an Autodesk Design Web Format (DWF) file, which is automatically published upon check-in. When users intend to edit a file, the file is checked out and edits are made. When the user is satisfied with the changes, the file chec
|
https://en.wikipedia.org/wiki/Nautical%20star
|
The nautical star is a symbolic star representing the North Star, associated with the sea services of the United States armed forces and with tattoo culture. It is usually rendered as a five-pointed star in dark and light shades counterchanged in a style similar to a compass rose.
In Unicode, this symbol is in the Dingbats block as U+272F ✯ PINWHEEL STAR, referencing a pinwheel toy.
Nautical charts
Modern nautical charts use the star to indicate true north on the outer of the two compass circles of a compass rose, symbolizing the North Star. The US Coast and Geodetic Survey started using this symbol in its double-circle compass roses around 1900.
Use as a symbol
Sea services
The nautical star is an informal signifier indicating membership in the United States Coast Guard, United States Navy, or Marine Corps. The symbol recalls both the five-pointed star of the US national flag and the color pattern of the compass rose found on many nautical charts.
Insignia including nautical stars:
United States Coast Guard officer rank insignia
German Navy officer rank insignia
Ships
The Endurance, in which Ernest Shackleton and crew sailed on the 1914–1917 Imperial Trans-Antarctic Expedition, was originally named after the pole star and retained a large badge in the shape of a five-pointed star on her stern.
Other
The nautical star is common in insignia, flags, and logos. Examples:
Sixpoint Brewery in Red Hook, Brooklyn, uses a six-pointed version of the star in its logo to reflect the neighborhood's maritime history.
Blue Stars Drum and Bugle Corps
The California flag includes a red five-pointed star, which is sometimes stylized like a nautical star:
Called the NorCal Star, it is sometimes used to represent Northern California on clothing and tattoos.
Sacramento Republic FC, a Sacramento, California soccer team, uses a red nautical star in its crest.
Tattoo culture
This symbol is part of the tradition of sailor tattoos. A nautical star represented the North Star, with the idea t
|
https://en.wikipedia.org/wiki/Cross-linking%20immunoprecipitation
|
Cross-linking and immunoprecipitation (CLIP, or CLIP-seq) is a method used in molecular biology that combines UV crosslinking with immunoprecipitation in order to identify RNA binding sites of proteins on a transcriptome-wide scale, thereby increasing our understanding of post-transcriptional regulatory networks. CLIP can be used either with antibodies against endogenous proteins, or with common peptide tags (including FLAG, V5, HA, and others) or affinity purification, which enables the possibility of profiling model organisms or RBPs otherwise lacking suitable antibodies.
Workflow
CLIP begins with the in-vivo cross-linking of RNA-protein complexes using ultraviolet light (UV). Upon UV exposure, covalent bonds are formed between proteins and nucleic acids that are in close proximity (on the order of angstroms apart). The cross-linked cells are then lysed, RNA is fragmented, and the protein of interest is isolated via immunoprecipitation. In order to allow for priming of reverse transcription, RNA adapters are ligated to the 3' ends, and RNA fragments are labelled to enable the analysis of the RNA-protein complexes after they have been separated from free RNA using gel electrophoresis and membrane transfer. Proteinase K digestion is then performed in order to remove protein from the crosslinked RNA, which leaves a few amino acids at the crosslink site. This often leads to truncation of cDNAs at the crosslinked nucleotide, which is exploited in variants such as iCLIP to increase the resolution of the method. cDNA is then synthesized by reverse transcription and amplified, then subjected to high-throughput sequencing; the reads are mapped back to the transcriptome and analyzed computationally to identify the interaction sites.
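As a rough sketch of that final computational step, the Python fragment below (using the pysam library; the BAM file name is hypothetical, and secondary alignments are not filtered for simplicity) counts putative crosslink sites as the nucleotide immediately upstream of each read's 5' end, the position where iCLIP cDNAs tend to truncate.

from collections import Counter

import pysam  # opens a BAM file of reads aligned to the genome/transcriptome

def crosslink_sites(bam_path):
    """Counter keyed by (reference, strand, 0-based position) of the base
    one nucleotide upstream of each read's 5' end."""
    sites = Counter()
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_unmapped:
                continue
            if read.is_reverse:
                # The 5' end of a minus-strand read is its rightmost base;
                # one base upstream is reference_end (0-based, half-open).
                sites[(read.reference_name, "-", read.reference_end)] += 1
            else:
                sites[(read.reference_name, "+", read.reference_start - 1)] += 1
    return sites

top_sites = crosslink_sites("iclip_sample.bam").most_common(10)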
History and applications
CLIP was originally undertaken to study interactions between the neuron-specific RNA-binding protein and splicing factors NOVA1 and NOVA2 in the mouse brain, identifying RNA binding sites that contained the expected Nova-binding motifs. Sequen
|
https://en.wikipedia.org/wiki/GPS%20for%20the%20visually%20impaired
|
Since the Global Positioning System (GPS) was introduced in the late 1980s there have been many attempts to integrate it into a navigation-assistance system for blind and visually impaired people.
Software
Android
RightHear
RightHear was first released in December 2015. It uses data from OpenStreetMap alongside their own databases and with this information, RightHear provides their users with multilingual audio-descriptions of the environment, indoors and outdoors.
RightHear's main features are as follows:
Informing the user about their current location on request and automatically in predefined intervals. Also, providing the user with a link to a relevant online destination (if applicable) like the menu at restaurants and description of exhibits at monuments.
Saving user points of interest as recordings. Users can be notified when they approach these points and hear their personal recordings.
Automatic announcements of public points of interest, street intersections, and points saved by the user.
Supporting third-party public transportation apps like Moovit, Uber, Lyft, Gett, and many more. RightHear users can plan a journey from their current location and look up schedules and routes to their destination.
Simulation of locations, letting users explore distant places before traveling there.
Announcing public and user points of interest and intersections located in the direction the user points their device in. RightHear also provides 3D sounds, which allow the user to hear the information from the relevant direction when their headphones are on.
Announcing the Sky direction that the user is facing by holding the device vertically.
Indoor navigation via Bluetooth beacons. RightHear supports the open Wayfindr standard.
Calling for a local assistant if needed from the relevant person at the RightHear enabled building (like receptions at hotels).
Corsair GPS
Corsair is a GPS for pedestrians. It allows you to discover places aroun
|
https://en.wikipedia.org/wiki/Wigner%27s%20theorem
|
Wigner's theorem, proved by Eugene Wigner in 1931, is a cornerstone of the mathematical formulation of quantum mechanics. The theorem specifies how physical symmetries such as rotations, translations, and CPT are represented on the Hilbert space of states.
The physical states in a quantum theory are represented by unit vectors in Hilbert space up to a phase factor, i.e. by the complex line or ray the vector spans. In addition, by the Born rule the absolute value of the unit vectors' inner product, or equivalently the cosine squared of the angle between the lines the vectors span, corresponds to the transition probability. Ray space, in mathematics known as projective Hilbert space, is the space of all unit vectors in Hilbert space up to the equivalence relation of differing by a phase factor. By Wigner's theorem, any transformation of ray space that preserves the absolute value of the inner products can be represented by a unitary or antiunitary transformation of Hilbert space, which is unique up to a phase factor. As a consequence, the representation of a symmetry group on ray space can be lifted to a projective representation or sometimes even an ordinary representation on Hilbert space.
Rays and ray space
It is a postulate of quantum mechanics that vectors in Hilbert space that are scalar nonzero multiples of each other represent the same pure state. A ray belonging to the vector $\Psi \in H \setminus \{0\}$ is the complex line through the origin
\[ \underline{\Psi} = \{ \lambda \Psi : \lambda \in \mathbb{C} \}. \]
Two nonzero vectors $\Psi_1, \Psi_2$ define the same ray if and only if they differ by some nonzero complex number: $\Psi_1 = \lambda \Psi_2$, $\lambda \neq 0$.
Alternatively, we can consider a ray as a set of vectors with norm 1 that span the same line, a unit ray, by intersecting the line with the unit sphere
\[ \underline{\Psi} \cap \{ \Psi' \in H : \|\Psi'\| = 1 \}. \]
Two unit vectors $\Psi_1, \Psi_2$ then define the same unit ray if they differ by a phase factor: $\Psi_1 = e^{i\alpha} \Psi_2$.
This is the more usual picture in physics.
The set of rays is in one-to-one correspondence with the set of unit rays, and we can identify them.
There is also a one-to-one correspondence between p
|
https://en.wikipedia.org/wiki/Hinge%20theorem
|
In geometry, the hinge theorem (sometimes called the open mouth theorem) states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is given as Proposition 24 in Book I of Euclid's Elements.
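In the Euclidean plane the statement can be verified directly with the law of cosines, a different route from Euclid's synthetic proof. Suppose both triangles have sides $b$ and $c$, with included angles $\alpha_1 > \alpha_2$ in $(0, \pi)$. Then the third sides satisfy
\[ a_i^2 = b^2 + c^2 - 2bc\cos\alpha_i, \qquad i = 1, 2, \]
and since the cosine is strictly decreasing on $(0, \pi)$, $\alpha_1 > \alpha_2$ gives $\cos\alpha_1 < \cos\alpha_2$, hence $a_1^2 > a_2^2$ and $a_1 > a_2$.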
Scope and generalizations
The hinge theorem holds in Euclidean spaces and more generally in simply connected non-positively curved space forms.
It can be also extended from plane Euclidean geometry to higher dimension Euclidean spaces (e.g., to tetrahedra and more generally to simplices), as has been done for orthocentric tetrahedra (i.e., tetrahedra in which altitudes are concurrent) and more generally for orthocentric simplices (i.e., simplices in which altitudes are concurrent).
Converse
The converse of the hinge theorem is also true: If the two sides of one triangle are congruent to two sides of another triangle, and the third side of the first triangle is greater than the third side of the second triangle, then the included angle of the first triangle is larger than the included angle of the second triangle.
In some textbooks, the theorem and its converse are written as the SAS Inequality Theorem and the AAS Inequality Theorem respectively.
|
https://en.wikipedia.org/wiki/Energy%20Conversion%20Devices
|
Energy Conversion Devices (ECD) was an American photovoltaics manufacturer of thin-film solar cells made of amorphous silicon used in flexible laminates and in building-integrated photovoltaics. The company was also a manufacturer of rechargeable batteries and other renewable energy related products. ECD was headquartered in Rochester Hills, Michigan.
Through its wholly owned Auburn Hills, Michigan, subsidiary United Solar Ovonic, LLC, better known as Uni-Solar, ECD was at one time the world's largest producer of flexible solar panels. Uni-Solar panels consisted of long rectangular strips with wiring at one end, which could be glued to any suitable supporting surface. They were widely used on flat roofs, motorhomes, semi-trailer cabs and similar roles.
On February 14, 2012, Energy Conversion Devices, Inc. and its subsidiaries, United Solar Ovonic LLC and Solar Integrated Technologies, Inc., filed for bankruptcy in the United States District Court for the Eastern District of Michigan.
Company
Energy Conversion Devices, Inc. (ECD), through its United Solar Ovonic (USO) subsidiary, was engaged in building-integrated and rooftop photovoltaics (PV). The Company manufactured, sold and installed thin-film solar laminates that converted sunlight to electrical energy.
The Company operated in two segments: United Solar Ovonic and Ovonic Materials. The Company's USO segment consisted of its wholly owned subsidiary, United Solar Ovonic LLC, which was engaged in manufacturing of PV laminates designed to be integrated directly with roofing materials. The Ovonic Materials segment invented, designed and developed materials and products based on ECD's materials science technology. ECD, through its subsidiaries, commercialized materials, products and production processes for the alternative energy generation (primarily solar energy), energy storage and information technology markets.
Ovonics (coined from "Ovshinsky" and "electronics") is a field of electronics that uses m
|
https://en.wikipedia.org/wiki/Socle%20%28mathematics%29
|
In mathematics, the term socle has several related meanings.
Socle of a group
In the context of group theory, the socle of a group G, denoted soc(G), is the subgroup generated by the minimal normal subgroups of G. It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups.
As an example, consider the cyclic group Z_12 with generator u, which has two minimal normal subgroups, one generated by u^4 (which gives a normal subgroup with 3 elements) and the other by u^6 (which gives a normal subgroup with 2 elements). Thus the socle of Z_12 is the group generated by u^4 and u^6, which is just the group generated by u^2.
The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however.
If a group G is a finite solvable group, then the socle can be expressed as a product of elementary abelian p-groups. Thus, in this case, it is just a product of copies of Z/pZ for various p, where the same p may occur multiple times in the product.
Socle of a module
In the context of module theory and ring theory the socle of a module M over a ring R is defined to be the sum of the minimal nonzero submodules of M. It can be considered as a dual notion to that of the radical of a module. In set notation,
\[ \mathrm{soc}(M) = \sum \{ N : N \text{ is a minimal submodule of } M \}. \]
Equivalently,
\[ \mathrm{soc}(M) = \bigcap \{ E : E \text{ is an essential submodule of } M \}. \]
The socle of a ring R can refer to one of two sets in the ring. Considering R as a right R-module, $\mathrm{soc}(R_R)$ is defined, and considering R as a left R-module, $\mathrm{soc}({}_R R)$ is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal.
If M is an Artinian module, soc(M) is itself an essential submodule of M.
A module is semisimple if and only if soc(M) = M. Rings for which soc(M) = M for all M are precisely semisimple rings.
soc(soc(M)) = soc(M).
M is a finit
|
https://en.wikipedia.org/wiki/National%20Infrastructure%20Protection%20Plan
|
The National Infrastructure Protection Plan (NIPP) is a document called for by Homeland Security Presidential Directive 7, which aims to unify Critical Infrastructure and Key Resource (CIKR) protection efforts across the country. The latest version of the plan was produced in 2013. The NIPP's goals are to protect critical infrastructure and key resources and ensure resiliency. It is generally considered unwieldy and not an actual plan to be carried out in an emergency, but it is useful as a mechanism for developing coordination between government and the private sector. The NIPP is based on the model laid out in the 1998 Presidential Decision Directive-63, which identified critical sectors of the economy and tasked relevant government agencies to work with them on sharing information and on strengthening responses to attack.
The NIPP is structured to create partnerships between Government Coordinating Councils (GCC) from the public sector and Sector Coordinating Councils (SCC) from the private sector for the eighteen sectors DHS has identified as critical.
Sector Specific Agencies
United States Department of Agriculture
United States Department of Defense
United States Department of Energy
United States Department of Health and Human Services
United States Department of the Interior
United States Department of the Treasury
United States Environmental Protection Agency
United States Department of Homeland Security
Cybersecurity and Infrastructure Security Agency
Transportation Security Administration
United States Coast Guard
United States Immigration and Customs Enforcement
Federal Protective Service
Sector Coordinating Councils
Agriculture and Food
Defense Industrial Base
Energy
Public Health and Healthcare
Financial Services
Water and Wastewater Systems
Chemical
Commercial Facilities
Dams
Emergency Services
Nuclear Reactors, Materials, and Waste
Information Technology
Communications
Postal and Shipping
Transportation Systems
Gove
|
https://en.wikipedia.org/wiki/Broad-billed%20parrot
|
The broad-billed parrot or raven parrot (Lophopsittacus mauritianus) is a large extinct parrot in the family Psittaculidae. It was endemic to the Mascarene island of Mauritius. The species was first referred to as the "Indian raven" in Dutch ships' journals from 1598 onwards. Only a few brief contemporary descriptions and three depictions are known. It was first scientifically described from a subfossil mandible in 1866, but this was not linked to the old accounts until the rediscovery of a detailed 1601 sketch that matched both the subfossils and the accounts. It is unclear what other species it was most closely related to, but it has been classified as a member of the tribe Psittaculini, along with other Mascarene parrots. It had similarities with the Rodrigues parrot (Necropsittacus rodricanus), and may have been closely related.
The broad-billed parrot's head was large in proportion to its body, and there was a distinct crest of feathers on the front of the head. The bird had a very large beak, comparable in size to that of the hyacinth macaw, which would have enabled it to crack hard seeds. Its bones indicate that the species exhibited greater sexual dimorphism in overall size and head size than any living parrot. The exact colouration is unknown, but a contemporary description indicates that it had multiple colours, including a blue head, and perhaps a red body and beak. It is believed to have been a weak flier, but not flightless. The bird became extinct in the 17th century owing to a combination of deforestation, predation by introduced invasive species, and probably hunting as well.
Taxonomy
The earliest known descriptions of the broad-billed parrot were provided by Dutch travellers during the Second Dutch Expedition to Indonesia, led by the Dutch Admiral Jacob Cornelis van Neck in 1598. They appear in reports published in 1601, which also contain the first illustration of the bird, along with the first of a dodo. The description for the illustration rea
|
https://en.wikipedia.org/wiki/Stop%20the%20Express
|
Stop the Express (also known as Bousou Tokkyuu SOS (暴走特急SOS, "Runaway Express SOS") in Japan) is a video game developed by Hudson Soft and published in 1983. It was written for the Sharp X1 and later ported to the ZX Spectrum, Commodore 64, and MSX.
It was remade for Nintendo Family Computer as Challenger (チャレンジャー) in 1985.
Gameplay
In Stage 1, the player runs along the top of an express train, jumping between carriages while avoiding enemy knives and obstacles. Halfway along the train, the player enters the train, and Stage 2 begins. The player must then proceed through the carriages, towards the front of the train, so that it can be stopped.
Upon the completion of each level, the game displays the Engrish message "Congraturation! You Sucsess!". The game then repeats from Stage 1, with more enemies. Enemies, known as "redmen", initially pursue from the rear on the roof of the train, and the front once inside, and will throw knives which the player must dodge by ducking under, or jumping over, them. In addition, once inside the train, the player can jump up and hang from the overhead straps out of the way of the redmen. However, ghosts flit up and down the carriages making it extremely dangerous to stay there too long. Once a few levels have been completed, redmen will approach from both front and rear.
The player has only two weapons at his disposal. When on the roof of the train, he can catch birds that fly overhead and then release them to run along the carriage and knock the redmen off, as well as high kicking them. Whilst inside, the high kick is the only option.
Reception
Stop the Express was rated as the 4th best Spectrum game by Your Sinclair, in their list of the top 100 Spectrum games. Retro Gamer, meanwhile, ranked it as the twelfth best game for the Spectrum.
Legacy
An NES/Famicom port was planned, but due to only having the first train level, three levels were added and became Challenger, which was released only in Japan.
|
https://en.wikipedia.org/wiki/Methyl%20cellulose
|
Methyl cellulose (or methylcellulose) is a compound derived from cellulose. It is sold under a variety of trade names and is used as a thickener and emulsifier in various food and cosmetic products, and also as a bulk-forming laxative. Like cellulose, it is not digestible, non-toxic, and not an allergen.
In addition to culinary uses, it is used in arts and crafts such as Papier-mâché and is often the main ingredient of wallpaper paste.
In 2020, it was the 422nd most commonly prescribed medication in the United States, with more than 100 thousand prescriptions.
Uses
Methyl cellulose has a wide range of uses.
Medical
Constipation
Methyl cellulose is used to treat constipation. Effects generally occur within three days. It is taken by mouth and is recommended with sufficient water. Side effects may include abdominal pain. It is classified as a bulk forming laxative. It works by increasing the amount of stool present which improves intestinal contractions.
It is available over the counter. It is sold under the brand name Citrucel among others.
Artificial tears and saliva
The lubricating property of methylcellulose is of particular benefit in the treatment of dry eyes. Solutions containing methyl cellulose or similar cellulose derivatives are used as substitute for tears or saliva if the natural production of these fluids is disturbed.
Medication manufacturing
Methyl cellulose is used in the manufacture of drug capsules; its edible and nontoxic properties provide a vegetarian alternative to the use of gelatin.
Consumer products
Thickener and emulsifier
Methyl cellulose is occasionally added to hair shampoos, toothpastes and liquid soaps to generate their characteristic thick consistency. This is also done for foods, for example ice cream or croquettes. Methyl cellulose is also an important emulsifier, acting as an emulsion stabilizer that prevents two mixed liquids from separating.
Food
The E number of methyl cellulose as food additive is E461. E464 is
|
https://en.wikipedia.org/wiki/X-ray%20crystal%20truncation%20rod
|
X-ray crystal truncation rod scattering is a powerful method in surface science, based on analysis of surface X-ray diffraction (SXRD) patterns from a crystalline surface.
For an infinite crystal, the diffracted pattern is concentrated in Dirac-delta-function-like Bragg peaks. The presence of a crystalline surface results in additional structure along so-called truncation rods (linear regions in momentum space normal to the surface). Crystal truncation rod (CTR) measurements allow detailed determination of the atomic structure at the surface, which is especially useful in studies of oxidation, epitaxial growth, and adsorption on crystalline surfaces.
Theory
A particle incident on a crystalline surface with momentum $\vec{k}_0$ will undergo scattering through a momentum change of $\vec{q}$. If $x$ and $y$ represent directions in the plane of the surface and $z$ is perpendicular to the surface, then the scattered intensity as a function of all possible values of $\vec{q}$ is given by
\[
I(\vec{q}) \propto \frac{\sin^2\left(\frac{1}{2} N_x q_x a_1\right)}{\sin^2\left(\frac{1}{2} q_x a_1\right)} \, \frac{\sin^2\left(\frac{1}{2} N_y q_y a_2\right)}{\sin^2\left(\frac{1}{2} q_y a_2\right)} \, \frac{1}{1 + \alpha^2 - 2\alpha \cos(q_z a_3)},
\]
where $\alpha$ is the penetration coefficient, defined as the ratio of x-ray amplitudes scattered from successive planes of atoms in the crystal; $a_1$, $a_2$, and $a_3$ are the lattice spacings in the x, y, and z directions, respectively; and $N_x$ and $N_y$ are the numbers of illuminated unit cells in the two in-plane directions.
In the case of perfect absorption, $\alpha = 0$, and the intensity becomes independent of $q_z$, with a maximum for any $\vec{q}_\parallel$ (the component of $\vec{q}$ parallel to the crystal surface) that satisfies the 2D Laue condition in reciprocal space
\[ q_x a_1 = 2\pi h, \qquad q_y a_2 = 2\pi k \]
for integers $h$ and $k$. This condition results in rods of intensity in reciprocal space, oriented perpendicular to the surface and passing through the reciprocal lattice points of the surface, as in Fig. 1. These rods are known as diffraction rods, or crystal truncation rods.
When $\alpha$ is allowed to vary from 0, the intensity along the rods varies according to Fig. 2. Note that in the limit as $\alpha$ approaches unity, the x-rays are fully penetrating, and the scattered intensity approaches a periodic delta function, as in bulk diffraction.
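A minimal numerical sketch of the rod profile (Python; prefactors, structure factors, and the in-plane interference factors are dropped, so only the $q_z$ dependence at a fixed in-plane Laue condition is shown):

import math

def rod_intensity(qz_a3, alpha):
    """Relative intensity along a truncation rod for penetration
    coefficient alpha, in the kinematic (single-scattering) picture."""
    return 1.0 / (1.0 + alpha ** 2 - 2.0 * alpha * math.cos(qz_a3))

# At the Bragg condition (qz*a3 = 0 mod 2*pi) the intensity grows as
# 1/(1 - alpha)^2; midway between Bragg points it falls to 1/(1 + alpha)^2.
for alpha in (0.0, 0.5, 0.9):
    print(alpha, rod_intensity(0.0, alpha), rod_intensity(math.pi, alpha))

With alpha = 0 the profile is flat, and as alpha approaches 1 it sharpens toward the bulk-like delta peaks described above.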
This calculation has been done according to the kinematic (single-scattering)
|
https://en.wikipedia.org/wiki/Water%20tunnel%20%28hydrodynamic%29
|
A water tunnel is an experimental facility used for testing the hydrodynamic behavior of submerged bodies in flowing water. It functions similarly to a recirculating wind tunnel but uses water as the working fluid, and related phenomena are investigated, such as measuring the forces on scale models of submarines or the lift and drag on hydrofoils. Water tunnels are sometimes used in place of wind tunnels to perform measurements because techniques like particle image velocimetry (PIV) are easier to implement in water. In many cases, as long as the Reynolds number is equivalent, the results are valid, whether a submerged water vehicle model is tested in air or an aerial vehicle is tested in water. For low-Reynolds-number flows, tunnels can be made to run oil instead of water. The advantage is that the increased viscosity allows the flow to run at a faster speed (and thus be easier to maintain in a stable manner) for a given low Reynolds number.
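As a back-of-the-envelope illustration of that Reynolds-number matching (Python; the kinematic viscosities are nominal room-temperature values and the helper name is ours):

# Re = v * L / nu, so matching Re between fluids fixes the speed ratio.
NU_AIR = 1.5e-5    # kinematic viscosity of air, m^2/s (approximate)
NU_WATER = 1.0e-6  # kinematic viscosity of water, m^2/s (approximate)

def matched_water_speed(v_air, scale=1.0):
    """Water-tunnel speed that reproduces the Reynolds number of an air
    test at speed v_air, for a model `scale` times the air-test size."""
    return v_air * (NU_WATER / NU_AIR) / scale

print(matched_water_speed(30.0))  # a 30 m/s air test matches ~2 m/s in water

The factor of roughly fifteen between the two kinematic viscosities is why a water tunnel reaches wind-tunnel Reynolds numbers at much lower flow speeds.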
Whereas in wind tunnels the driving force is usually sophisticated multiblade propellers with adjustable blade pitch, in water and oil tunnels the fluid is circulated with pumps, effectively using a net pressure head difference to move the fluid rather than imparting momentum on it directly. Thus the return section of water and oil tunnels does not need any flow management; typically it is just a pipe sized for the pump and desired flow speeds. The upstream section of a water tunnels generally consists of a pipe (outlet from the pump) with several holes along its side and with the end open followed by a series of coarse and fine screens to even the flow before the contraction into the test section. Wind tunnels may also have screens before the contraction, but in water tunnels they may be as fine as the screen used in window openings and screen doors.
Additionally, many water tunnels are sealed and can reduce or increase the internal static pressure, to perform cavitation studies. These are referred to as cavitation tunnels.
Methods
B
|
https://en.wikipedia.org/wiki/Tsallis%20entropy
|
In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy.
Overview
The concept was introduced in 1988 by Constantino Tsallis as a basis for generalizing the standard statistical mechanics and is identical in form to Havrda–Charvát structural α-entropy, introduced in 1967 within information theory. In scientific literature, the physical relevance of the Tsallis entropy has been debated. However, from the years 2000 on, an increasingly wide spectrum of natural, artificial and social complex systems have been identified which confirm the predictions and consequences that are derived from this nonadditive entropy, such as nonextensive statistical mechanics, which generalizes the Boltzmann–Gibbs theory.
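For a discrete probability distribution $p_i$ and entropic index $q$, the Tsallis entropy takes the standard form
\[ S_q = \frac{k}{q - 1}\left(1 - \sum_i p_i^{\,q}\right), \]
which recovers the Boltzmann–Gibbs entropy $S = -k \sum_i p_i \ln p_i$ in the limit $q \to 1$. For two independent systems $A$ and $B$ it satisfies $S_q(A + B) = S_q(A) + S_q(B) + \frac{1 - q}{k} S_q(A) S_q(B)$, which is the sense in which the entropy is nonadditive.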
Among the various experimental verifications and applications presently available in the literature, the following ones deserve a special mention:
The distribution characterizing the motion of cold atoms in dissipative optical lattices, predicted in 2003 and observed in 2006.
The fluctuations of the magnetic field in the solar wind enabled the calculation of the q-triplet (or Tsallis triplet).
The velocity distributions in a driven dissipative dusty plasma.
Spin glass relaxation.
Trapped ion interacting with a classical buffer gas.
High energy collisional experiments at LHC/CERN (CMS, ATLAS and ALICE detectors) and RHIC/Brookhaven (STAR and PHENIX detectors).
Among the various available theoretical results which clarify the physical conditions under which Tsallis entropy and associated statistics apply, the following ones can be selected:
Anomalous diffusion.
Uniqueness theorem.
Sensitivity to initial conditions and entropy production at the edge of chaos.
Probability sets that make the nonadditive Tsallis entropy extensive in the thermodynamical sense.
Strongly quantum entangled systems and thermodynamics.
Thermostatistics of overdamped motion of interacting particles.
Nonlinear generalizations of the Schrödinger equation.
|
https://en.wikipedia.org/wiki/Wireless%20Intelligent%20Network
|
Wireless Intelligent Network (also referred to as WIN) is a concept developed by the TR-45 Mobile and Personal Communications Systems Standards engineering committee of the Telecommunications Industry Association (TIA). Its objective is to bring the capabilities of the Intelligent Network to the wireless network, using the TIA-41 set of technical standards. Basing WIN standards on this protocol allows operators to move to an intelligent network without making their current network infrastructure obsolete.
Overview
Today's wireless subscribers are much more sophisticated telecommunications users than they were five years ago. No longer satisfied with just completing a clear call, today's subscribers demand innovative ways to use the wireless phone. They want multiple services that allow them to handle or select incoming calls in a variety of ways.
Enhanced services are very important to wireless customers. They have come to expect, for instance, services such as caller ID and voice messaging bundled in the package when they buy and activate a cellular or personal communications service (PCS) phone. Whether prepaid, voice/data messaging, Internet surfing, or location-sensitive billing, enhanced services will become an important differentiator in an already crowded, competitive service-provider market. Enhanced services will also entice potentially new subscribers to sign up for service and will drive up airtime through increased usage of PCS or cellular services. As the wireless market becomes increasingly competitive, rapid deployment of enhanced services becomes critical to a successful wireless strategy.
Intelligent Network (IN) solutions have revolutionized wireline networks. Rapid creation and deployment of services has become the hallmark of a wireline network based on IN concepts. Wireless Intelligent Network (WIN) will bring those same successful strategies into the wireless networks.
The evolution of wireless networks to a WIN concept of service deployment deliv
|
https://en.wikipedia.org/wiki/Vibrational%20partition%20function
|
The vibrational partition function traditionally refers to the component of the canonical partition function resulting from the vibrational degrees of freedom of a system. The vibrational partition function is only well-defined in model systems where the vibrational motion is relatively uncoupled from the system's other degrees of freedom.
Definition
For a system (such as a molecule or solid) with uncoupled vibrational modes, the vibrational partition function is defined by

$$Q_{\text{vib}}(T) = \prod_j \sum_{n} e^{-E_{j,n}/(k_B T)}$$

where $T$ is the absolute temperature of the system, $k_B$ is the Boltzmann constant, and $E_{j,n}$ is the energy of the $j$-th mode when it has vibrational quantum number $n$. For an isolated molecule of n atoms, the number of vibrational modes (i.e. values of $j$) is 3n − 5 for linear molecules and 3n − 6 for non-linear ones. In crystals, the vibrational normal modes are commonly known as phonons.
Approximations
Quantum harmonic oscillator
The most common approximation to the vibrational partition function uses a model in which the vibrational eigenmodes or normal modes of the system are considered to be a set of uncoupled quantum harmonic oscillators. It is a first-order approximation to the partition function which allows one to calculate the contribution of the vibrational degrees of freedom of a molecule toward its thermodynamic variables. A quantum harmonic oscillator has an energy spectrum characterized by:

$$E_{j,n_j} = \hbar \omega_j \left(n_j + \tfrac{1}{2}\right)$$

where $j$ runs over vibrational modes and $n_j$ is the vibrational quantum number in the $j$-th mode, $\hbar$ is Planck's constant, h, divided by $2\pi$, and $\omega_j$ is the angular frequency of the $j$-th mode. Using this approximation we can derive a closed-form expression for the vibrational partition function:

$$Q_{\text{vib}}(T) = \prod_j \sum_{n_j} e^{-E_{j,n_j}/(k_B T)} = \prod_j \frac{e^{-\hbar\omega_j/(2 k_B T)}}{1 - e^{-\hbar\omega_j/(k_B T)}} = e^{-E_{\text{ZP}}/(k_B T)} \prod_j \frac{1}{1 - e^{-\hbar\omega_j/(k_B T)}}$$

where $E_{\text{ZP}} = \tfrac{1}{2}\sum_j \hbar\omega_j$ is the total vibrational zero-point energy of the system.

Often the wavenumber $\tilde{\nu}$, with units of cm−1, is given instead of the angular frequency of a vibrational mode (and is also often misnamed frequency). One can convert to angular frequency using $\omega = 2\pi c \tilde{\nu}$, where $c$ is the speed of light in vacuum. In terms of the vibrational wavenumbers, the partition function can be written

$$Q_{\text{vib}}(T) = \prod_j \frac{e^{-h c \tilde{\nu}_j/(2 k_B T)}}{1 - e^{-h c \tilde{\nu}_j/(k_B T)}}.$$
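A minimal numeric sketch of this closed form, assuming a small set of illustrative harmonic wavenumbers (the values below are rough literature numbers for the three normal modes of H2O, used only as an example):

import math

# Physical constants (SI, with c in cm/s so wavenumbers can stay in cm^-1)
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s
KB = 1.380649e-23       # Boltzmann constant, J/K

def vibrational_partition_function(wavenumbers_cm, temperature_k, zero_point=True):
    """Harmonic vibrational partition function from mode wavenumbers (cm^-1).

    Each uncoupled mode contributes exp(-h*c*nu/(2*k*T)) / (1 - exp(-h*c*nu/(k*T)));
    with zero_point=False the energy origin is shifted to the vibrational ground state.
    """
    q = 1.0
    for nu in wavenumbers_cm:
        x = H * C * nu / (KB * temperature_k)   # dimensionless h*c*nu / (k*T)
        q *= (math.exp(-x / 2) if zero_point else 1.0) / (1.0 - math.exp(-x))
    return q

# Approximate normal-mode wavenumbers of water, cm^-1 (illustrative values)
modes = [1595.0, 3657.0, 3756.0]
print(vibrational_partition_function(modes, 298.15, zero_point=False))

At room temperature the result is very close to 1, reflecting that these stiff modes are almost entirely in their ground states.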
|
https://en.wikipedia.org/wiki/Branching%20quantifier
|
In logic a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering

$$\langle Qx_1, \dots, Qx_n \rangle$$

of quantifiers for Q ∈ {∀,∃}. It is a special case of a generalized quantifier. In classical logic, quantifier prefixes are linearly ordered such that the value of a variable $y_m$ bound by a quantifier $Q_m$ depends on the values of the variables $y_1, \dots, y_{m-1}$ bound by the quantifiers $Qy_1, \dots, Qy_{m-1}$ preceding $Q_m$. In a logic with (finite) partially ordered quantification this is not in general the case.
Branching quantification first appeared in a 1959 conference paper of Leon Henkin. Systems of partially ordered quantification are intermediate in strength between first-order logic and second-order logic. They are used as a basis for Hintikka's and Gabriel Sandu's independence-friendly logic.
Definition and properties
The simplest Henkin quantifier is

$$(Q_H x_1, x_2, y_1, y_2)\,\varphi(x_1, x_2, y_1, y_2) \equiv \begin{pmatrix} \forall x_1\, \exists y_1 \\ \forall x_2\, \exists y_2 \end{pmatrix} \varphi(x_1, x_2, y_1, y_2).$$

It (in fact every formula with a Henkin prefix, not just the simplest one) is equivalent to its second-order Skolemization, i.e.

$$\exists f\, \exists g\, \forall x_1\, \forall x_2\, \varphi(x_1, x_2, f(x_1), g(x_2)).$$

It is also powerful enough to define the quantifier $Q_{\geq\omega}$ (i.e. "there are infinitely many") defined as

$$(Q_{\geq\omega} x)\,\varphi(x) \equiv (\exists a)(Q_H x_1, x_2, y_1, y_2)\,[\varphi(a) \wedge (x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge (\varphi(x_1) \rightarrow (\varphi(y_1) \wedge y_1 \neq a))].$$
Several things follow from this, including the nonaxiomatizability of first-order logic with $Q_H$ (first observed by Ehrenfeucht), and its equivalence to the $\Sigma_1^1$-fragment of second-order logic (existential second-order logic); the latter result was published independently in 1970 by Herbert Enderton and W. Walkoe.

The following quantifiers are also definable by $Q_H$.
Rescher: "The number of φs is less than or equal to the number of ψs"
Härtig: "The φs are equinumerous with the ψs"
Chang: "The number of φs is equinumerous with the domain of the model"
The Henkin quantifier $Q_H$ can itself be expressed as a type (4) Lindström quantifier.
Relation to natural languages
Hintikka in a 1973 paper advanced the hypothesis that some sentences in natural languages are best understood in terms of branching quantifiers, for example: "some relative of each villager and some relative of each townsman hate each other."
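Such a sentence can be rendered with a Henkin prefix, reading $V$, $T$, $R$ and $H$ as "villager", "townsman", "relative of" and "hates" (a sketch of the usual rendering, with these predicate letters chosen here for illustration):

$$\begin{pmatrix} \forall x\, \exists y \\ \forall z\, \exists w \end{pmatrix} \left[ (V(x) \wedge T(z)) \rightarrow (R(x,y) \wedge R(z,w) \wedge H(y,w)) \right].$$

The choices of $y$ and $w$ depend only on $x$ and $z$ respectively, which no linear first-order prefix can express.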
|
https://en.wikipedia.org/wiki/Interconnect%20agreement
|
An interconnect agreement is a business contract between telecommunications organizations for the purpose of interconnecting their networks and exchanging telecommunications traffic. Interconnect agreements are found both in the public switched telephone network and the Internet.
In the public switched telephone network, an interconnect agreement invariably involves settlement fees based on call source and destination, connection times and duration, when these fees do not cancel out between operators.
On the Internet, where the concept of a "call" is generally hard to define, settlement-free peering and Internet transit are common forms of interconnection. A contract for interconnection within the Internet is usually called a peering agreement.
Interconnect agreements are typically complex contractual agreements involving payment schemes and schedules, coordination of routing policies, acceptable use policies, traffic balancing requirements, technical standards, coordination of network operations, dispute resolution, etc. Legal and regulatory requirements are often an issue. For example, network operators may be forced by law to interconnect with their competitors. In the United States, the Telecommunications Act of 1996 mandated methods of interconnection and the compensation models for doing so.
|
https://en.wikipedia.org/wiki/Brillouin%20and%20Langevin%20functions
|
The Brillouin and Langevin functions are a pair of special functions that appear when studying an idealized paramagnetic material in statistical mechanics. These functions are named after French physicists Paul Langevin and Léon Brillouin who contributed to the microscopic understanding of magnetic properties of matter.
Brillouin function
The Brillouin function is a special function defined by the following equation:

$$B_J(x) = \frac{2J+1}{2J} \coth\!\left(\frac{2J+1}{2J}\,x\right) - \frac{1}{2J} \coth\!\left(\frac{1}{2J}\,x\right)$$

The function is usually applied (see below) in the context where $x$ is a real variable and $J$ is a positive integer or half-integer. In this case, the function varies from −1 to 1, approaching +1 as $x \to +\infty$ and −1 as $x \to -\infty$.

The function is best known for arising in the calculation of the magnetization of an ideal paramagnet. In particular, it describes the dependency of the magnetization $M$ on the applied magnetic field $B$ and the total angular momentum quantum number J of the microscopic magnetic moments of the material. The magnetization is given by:

$$M = N g \mu_B J \cdot B_J(x)$$
where
$N$ is the number of atoms per unit volume,
$g$ the g-factor,
$\mu_B$ the Bohr magneton,
$x$ is the ratio of the Zeeman energy of the magnetic moment in the external field to the thermal energy $k_B T$:

$$x = \frac{g \mu_B J B}{k_B T},$$

$k_B$ is the Boltzmann constant and $T$ the temperature.

Note that in the SI system of units $B$, given in tesla, stands for the magnetic field, $B = \mu_0 H$, where $H$ is the auxiliary magnetic field given in A/m and $\mu_0$ is the permeability of vacuum.
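A small numeric sketch of these functions, using the definitions above together with the Langevin function $L(x) = \coth x - 1/x$ (the $J \to \infty$ limit of the Brillouin function):

import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def brillouin(J, x):
    """Brillouin function B_J(x) for total angular momentum quantum number J."""
    a = (2 * J + 1) / (2 * J)
    b = 1 / (2 * J)
    return a * coth(a * x) - b * coth(b * x)

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, the J -> infinity limit of B_J."""
    return coth(x) - 1 / x

# B_J(x) approaches the Langevin function as J grows; B_{1/2}(x) equals tanh(x)
for J in (0.5, 1, 5, 50):
    print("J = %4.1f : B_J(1.0) = %.4f" % (J, brillouin(J, 1.0)))
print("L(1.0) = %.4f" % langevin(1.0))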
A derivation of this law describing the magnetization of an ideal paramagnet is as follows. Let z be the direction of the magnetic field. The z-component of the angular momentum of each magnetic moment (a.k.a. the azimuthal quantum number) can take on one of the 2J+1 possible values −J, −J+1, ..., +J. Each of these has a different energy, due to the external field B: the energy associated with quantum number m is

$$E_m = -m g \mu_B B = -\frac{m x k_B T}{J}$$

(where g is the g-factor, μB is the Bohr magneton, and x is as defined above).
|
https://en.wikipedia.org/wiki/Mechanical%20resonance
|
Mechanical resonance is the tendency of a mechanical system to respond at greater amplitude when the frequency of its oscillations matches the system's natural frequency of vibration (its resonance frequency or resonant frequency) more closely than it does other frequencies. It may cause violent swaying motions and potentially catastrophic failure in improperly constructed structures including bridges, buildings and airplanes, a phenomenon known as resonance disaster.
Avoiding resonance disasters is a major concern in every building, tower and bridge construction project. The Taipei 101 building relies on a 660-ton pendulum—a tuned mass damper—to modify the response at resonance. The structure is also designed to resonate at a frequency which does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. Engineers designing objects having engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts.
Many resonant objects have more than one resonance frequency. Such objects will vibrate easily at those frequencies, and less so at other frequencies. Many clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal.
Description
The natural frequency of a simple mechanical system consisting of a weight suspended by a spring is:

$$f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$$

where m is the mass and k is the spring constant.
A swing set is a simple example of a resonant system with which most people have practical experience. It is a form of pendulum. If the system is excited (pushed) with a period between pushes equal to the inverse of the pendulum's natural frequency, the swing will swing higher and higher, but if excited at a different frequency, it will be difficult to move. The resonance frequency of a pendulum, the only frequency at which it will vibrate, is given approximately, for small displacements, by:

$$f = \frac{1}{2\pi}\sqrt{\frac{g}{L}}$$

where g is the acceleration due to gravity and L is the length of the pendulum.
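A minimal sketch computing both of these natural frequencies (the mass, spring constant, and pendulum length are arbitrary example values):

import math

G = 9.81  # gravitational acceleration, m/s^2

def spring_mass_frequency(k_n_per_m, mass_kg):
    """Natural frequency (Hz) of a mass on a spring: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_n_per_m / mass_kg) / (2 * math.pi)

def pendulum_frequency(length_m):
    """Small-angle natural frequency (Hz) of a pendulum: f = sqrt(g/L) / (2*pi)."""
    return math.sqrt(G / length_m) / (2 * math.pi)

print("spring (k=100 N/m, m=1 kg): %.3f Hz" % spring_mass_frequency(100.0, 1.0))
print("pendulum (L=1 m):           %.3f Hz" % pendulum_frequency(1.0))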
|
https://en.wikipedia.org/wiki/Electrical%20resonance
|
Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedances or admittances of circuit elements cancel each other. In some circuits, this happens when the impedance between the input and output of the circuit is almost zero and the transfer function is close to one.
Resonant circuits exhibit ringing and can generate higher voltages or currents than are fed into them. They are widely used in wireless (radio) transmission for both transmission and reception.
LC circuits
Resonance of a circuit involving capacitors and inductors occurs because the collapsing magnetic field of the inductor generates an electric current in its windings that charges the capacitor, and then the discharging capacitor provides an electric current that builds the magnetic field in the inductor. This process is repeated continually. An analogy is a mechanical pendulum, and both are a form of simple harmonic oscillator.
At resonance, the series impedance of the LC circuit is at a minimum and the parallel impedance is at a maximum. Resonance is used for tuning and filtering, because it occurs at a particular frequency for given values of inductance and capacitance. It can be detrimental to the operation of communications circuits by causing unwanted sustained and transient oscillations that may cause noise, signal distortion, and damage to circuit elements.
Parallel resonance or near-to-resonance circuits can be used to prevent the waste of electrical energy, which would otherwise occur while the inductor built its field or the capacitor charged and discharged. As an example, asynchronous motors waste inductive current while synchronous ones waste capacitive current. The use of the two types in parallel makes the inductor feed the capacitor, and vice versa, maintaining the same resonant current in the circuit, and converting all the current into useful work.
Since the inductive reactance and the capacitive reactance are of equal magnitude,

$$\omega L = \frac{1}{\omega C},$$

so

$$\omega = \frac{1}{\sqrt{LC}},$$

where ω = 2πf, in which f is the resonant frequency in hertz, L is the inductance in henries, and C is the capacitance in farads.
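A quick numeric sketch of this relation (the component values are arbitrary examples):

import math

def lc_resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in Hz: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example: a 100 uH inductor with a 100 pF capacitor resonates near 1.59 MHz
print("%.3f MHz" % (lc_resonant_frequency(100e-6, 100e-12) / 1e6))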
|
https://en.wikipedia.org/wiki/Acoustic%20resonance
|
Acoustic resonance is a phenomenon in which an acoustic system amplifies sound waves whose frequency matches one of its own natural frequencies of vibration (its resonance frequencies).
The term "acoustic resonance" is sometimes used to narrow mechanical resonance to the frequency range of human hearing, but since acoustics is defined in general terms concerning vibrational waves in matter, acoustic resonance can occur at frequencies outside the range of human hearing.
An acoustically resonant object usually has more than one resonance frequency, especially at harmonics of the strongest resonance. It will easily vibrate at those frequencies, and vibrate less strongly at other frequencies. It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance.
Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of a drum membrane. Acoustic resonance is also important for hearing. For example, resonance of a stiff structural element, called the basilar membrane within the cochlea of the inner ear allows hair cells on the membrane to detect sound. (For mammals the membrane has tapering resonances across its length so that high frequencies are concentrated on one end and low frequencies on the other.)
Like mechanical resonance, acoustic resonance can result in catastrophic failure of the vibrator. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass.
Vibrating string
In musical instruments, strings under tension, as in lutes, harps, guitars, pianos, violins and so forth, have resonant frequencies directly related to the mass, length, and tension of the string. The wavelength that will create the first resonance on the string is equal to twice the length of the string.
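A short sketch of the resulting harmonic series, using the standard ideal-string relations $v = \sqrt{T/\mu}$ and $f_n = n v / (2L)$ (the tension, linear density, and length below are rough values resembling a guitar's low E string, assumed for illustration):

import math

def string_harmonics(tension_n, mass_per_length_kg_m, length_m, n_harmonics=4):
    """Resonant frequencies (Hz) of an ideal string fixed at both ends.

    Wave speed v = sqrt(T/mu); the n-th resonance is f_n = n*v / (2*L),
    with the fundamental's wavelength equal to twice the string length.
    """
    v = math.sqrt(tension_n / mass_per_length_kg_m)
    return [n * v / (2 * length_m) for n in range(1, n_harmonics + 1)]

for i, f in enumerate(string_harmonics(70.0, 0.0063, 0.65), start=1):
    print("harmonic %d: %7.1f Hz" % (i, f))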
|