https://en.wikipedia.org/wiki/Cal%20Henderson
|
Callum James Henderson-Begg (born 17 January 1981), known as Cal Henderson, is a British computer programmer and author based in San Francisco.
Education
Henderson attended Sharnbrook Upper School and Community College, and Birmingham City University where he graduated with a degree in software engineering in 2002.
Career
Henderson is best known as the co-founder and chief technology officer at Slack, as well as co-owning and developing the online creative community B3ta with Denise Wilton and Rob Manuel; being the chief software architect for the photo-sharing application Flickr (originally working for Ludicorp and then Yahoo); and writing the book Building Scalable Web Sites for O'Reilly Media.
He has also worked for EMAP as its technical director of special web projects and wrote City Creator, among many other websites, services and desktop applications. Henderson was the co-founder and VP of engineering at Tiny Speck, the company whose internal tool became Slack.
Henderson's connection to Stewart Butterfield and Slack began through a game developed by Butterfield's first company, Ludicorp, called Game Neverending. Henderson ran a fan website dedicated to the game and broke into an internal Ludicorp mailing list. Rather than facing repercussions, Henderson was hired by Butterfield to work for the company.
Personal life
Henderson is color blind, and has worked on applications to make the web more accessible to the color blind. He is also a frequent contributor to open-source software projects and runs a number of utility websites, such as Unicodey, to make certain programming tasks easier.
Politics
In August 2022, Henderson contributed $50,000 to The Next 50, a liberal political action committee (PAC).
|
https://en.wikipedia.org/wiki/C-theorem
|
In quantum field theory the C-theorem states that there exists a positive real function, $C(g_i, \mu)$, depending on the coupling constants of the quantum field theory considered, $g_i$, and on the energy scale, $\mu$, which has the following properties:
$C$ decreases monotonically under the renormalization group (RG) flow.
At fixed points of the RG flow, which are specified by a set of fixed-point couplings $g_i^*$, the function $C(g_i^*, \mu) = C_*$ is a constant, independent of energy scale.
The theorem formalizes the notion that theories at high energies have more degrees of freedom than theories at low energies and that information is lost as we flow from the former to the latter.
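In symbols, and with conventions that vary between authors (the form below is only a schematic illustration, with $g_i(\mu)$ the running couplings and $\beta_i$ their beta functions), the two properties read:

$\dfrac{d}{d\ln\mu}\, C\big(g_i(\mu), \mu\big) \;\ge\; 0$, so that $C$ decreases as the flow runs toward the infrared, and $\beta_i(g^*) = 0 \;\Rightarrow\; C(g_i^*, \mu) = C_*$, a constant independent of $\mu$.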
Two-dimensional case
Alexander Zamolodchikov proved in 1986 that two-dimensional quantum field theory always has such a C-function. Moreover, at fixed points of the RG flow, which correspond to conformal field theories, Zamolodchikov's C-function is equal to the central charge of the corresponding conformal field theory, which lends the name C to the theorem.
Four-dimensional case: A-theorem
John Cardy in 1988 considered the possibility to generalise the C-theorem to higher-dimensional quantum field theory. He conjectured that in four spacetime dimensions, the quantity behaving monotonically under renormalization group flows, and thus playing the role analogous to the central charge in two dimensions, is a certain anomaly coefficient which came to be denoted as $a$.
For this reason, the analog of the C-theorem in four dimensions is called the A-theorem.
In perturbation theory, that is for renormalization flows which do not deviate much from free theories, the A-theorem in four dimensions was proved by Hugh Osborn using the local renormalization group equation. However, the problem of finding a proof valid beyond perturbation theory remained open for many years.
In 2011, Zohar Komargodski and Adam Schwimmer of the Weizmann Institute of Science proposed a nonperturbative proof for the A-theorem, which has gained acceptance. (Still, simult
|
https://en.wikipedia.org/wiki/Flag%20of%20Buffalo%2C%20New%20York
|
The municipal flag of Buffalo is the official banner of the city of Buffalo, New York. The navy blue flag contains a large central emblem consisting of the city seal with 13 "electric flashes" (depicted as lightning bolts) and interspaced 5-pointed white stars emanating from it.
History
Following a request from the New York City publisher Julius Bien Company to provide a copy of the banner graphic for a work depicting the flags of large municipalities, Mayor Louis P. Fuhrmann and Commissioner of Public Works Francis G. Ward proposed a design. The city's first flag was composed of the city seal superimposed on the state coat-of-arms in blue over a buff-colored background.
1912 proposal
From Common Council Proceedings, June 3, 1912:
To the left center a lighthouse on pier with ship passing it into harbor. To the lower right canal boat passing into canal to the right surrounded in circle by the legend "City of Buffalo, Incorporated 1832."
The municipalities of the United States having as a rule a Municipal Flag, examination shows the general practice to be the use of the Coat of Arms of the City or the Coat of Arms of the State upon which is superimposed the seal of the City.
In accordance with the latter rule we submit as a design for the Municipal Flag of the City of Buffalo, the following:
"The Coat of Arms of the State of New York with the Seal of the City of Buffalo superimposed upon the shield of the same all in blue upon the field of the flag in Continental buff."
(Despite the assertion above, no such "general practice" of superimposing city seals over state seals has been documented.)
Flag seal
Though the Common Council passed an ordinance describing the official seal of the city and its flag, the seal described was not the one included on the banner. At the time there were several seals being used by various city officials. The seal depicted on the flag was actually the seal being used by the Mayor. There are a few differences, the most
|
https://en.wikipedia.org/wiki/B-factory
|
In particle physics, a B-factory, or sometimes a beauty factory, is a particle collider experiment designed to produce and detect a large number of B mesons so that their properties and behavior can be measured with small statistical uncertainty. Tau leptons and D mesons are also copiously produced at B-factories.
History and development
A sort of "prototype" or "precursor" B-factory was the HERA-B experiment at DESY that was planned to study B-meson physics in the 1990–2000s, before the actual B-factories were constructed/operational. However, usually HERA-B is not considered a B-factory.
Two B-factories were designed and built in the 1990s, and they operated from late 1999 onward: the Belle experiment at the KEKB collider in Tsukuba, Japan, and the BaBar experiment at the PEP-II collider at SLAC in California, United States. They were both electron-positron colliders with the center of mass energy tuned to the ϒ(4S) resonance peak, which is just above the threshold for decay into two B mesons (both experiments took smaller data samples at different center of mass energies). BaBar prematurely ceased data collection in 2008 due to budget cuts, but Belle ran until 2010, when it stopped data collection both because it had reached its intended integrated luminosity and because construction was to begin on upgrades to the experiment (see below).
Current experiments
Three "next generation" B-factories were to be built in the 2010s and 2020s: SuperB near Rome in Italy; Belle II, an upgrade to Belle, and SuperPEP-II, an upgrade to the PEP-II accelerator. SuperB was canceled, and the proposal for SuperPEP-II was never acted upon. However, Belle II successfully started taking data in 2018 and is currently the only next-generation B-factory in operation.
In addition to Belle II there is the LHCb-experiment at the LHC (CERN), which started operations in 2010 and studies primarily the physics of bottom-quark containing hadrons, and thus could be understood to be a B-factory
|
https://en.wikipedia.org/wiki/Spin-weighted%20spherical%20harmonics
|
In special functions, a topic in mathematics, spin-weighted spherical harmonics are generalizations of the standard spherical harmonics and—like the usual spherical harmonics—are functions on the sphere. Unlike ordinary spherical harmonics, the spin-weighted harmonics are gauge fields rather than scalar fields: mathematically, they take values in a complex line bundle. The spin-weighted harmonics are organized by degree $\ell$, just like ordinary spherical harmonics, but have an additional spin weight $s$ that reflects the additional symmetry. A special basis of harmonics can be derived from the Laplace spherical harmonics $Y_{\ell m}$, and are typically denoted by ${}_sY_{\ell m}$, where $\ell$ and $m$ are the usual parameters familiar from the standard Laplace spherical harmonics. In this special basis, the spin-weighted spherical harmonics appear as actual functions, because the choice of a polar axis fixes the gauge ambiguity. The spin-weighted spherical harmonics can be obtained from the standard spherical harmonics by application of spin raising and lowering operators. In particular, the spin-weighted spherical harmonics of spin weight $s = 0$ are simply the standard spherical harmonics: ${}_0Y_{\ell m} = Y_{\ell m}$.
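In one common convention (signs and normalizations differ between references, so the following should be read as illustrative), the spin raising and lowering operators $\eth$ and $\bar\eth$ act on this basis as

$\eth\, {}_sY_{\ell m} = \sqrt{(\ell - s)(\ell + s + 1)}\; {}_{s+1}Y_{\ell m}, \qquad \bar\eth\, {}_sY_{\ell m} = -\sqrt{(\ell + s)(\ell - s + 1)}\; {}_{s-1}Y_{\ell m},$

so that repeated application of $\eth$ to ${}_0Y_{\ell m} = Y_{\ell m}$ generates the harmonics of positive spin weight.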
Spaces of spin-weighted spherical harmonics were first identified in connection with the representation theory of the Lorentz group. They were subsequently and independently rediscovered by Newman and Penrose and applied to describe gravitational radiation, and again by Wu and Yang as so-called "monopole harmonics" in the study of Dirac monopoles.
Spin-weighted functions
Regard the sphere $S^2$ as embedded into the three-dimensional Euclidean space $\mathbb{R}^3$. At a point $x$ on the sphere, a positively oriented orthonormal basis of tangent vectors at $x$ is a pair of vectors $\mathbf{a}$, $\mathbf{b}$ such that

$\mathbf{a}\cdot x = \mathbf{b}\cdot x = 0, \qquad \mathbf{a}\cdot\mathbf{a} = \mathbf{b}\cdot\mathbf{b} = 1, \qquad \mathbf{a}\cdot\mathbf{b} = 0, \qquad \mathbf{a}\times\mathbf{b} = x,$

where the first pair of equations states that $\mathbf{a}$ and $\mathbf{b}$ are tangent at $x$, the second pair states that $\mathbf{a}$ and $\mathbf{b}$ are unit vectors, the penultimate equation that $\mathbf{a}$ and $\mathbf{b}$ are orthogonal, and the final equation that $(x, \mathbf{a}, \mathbf{b})$ is a right-handed basis of $\mathbb{R}^3$.
A spin-weight $s$ function is a function accepting a
|
https://en.wikipedia.org/wiki/Site%20Multihoming%20by%20IPv6%20Intermediation
|
Site Multihoming by IPv6 Intermediation (SHIM6) is an Internet Layer protocol defined in RFC 5533.
Architecture
The SHIM6 architecture defines failure detection and locator pair exploration functions. The first is used to detect outages through the path defined by the current locator pair for a communication. To achieve this, hints provided by upper protocols such as Transmission Control Protocol (TCP) are used, or specific SHIM6 packet probes. The second function is used to determine valid locator pairs that could be used when an outage is detected.
The ability to change locators while a communication is in progress introduces security problems, so mechanisms based on applying cryptography to the address generation process (Cryptographically Generated Addresses, CGA), or on binding the addresses to the prefixes assigned to a host through hash-based addresses (HBA), were defined. These approaches cannot be applied to IPv4 because of the short address length (32 bits).
An implementation of shim6 in the Linux kernel is available under the name LinShim6.
See also
Locator/Identifier Separation Protocol
|
https://en.wikipedia.org/wiki/MOSQUITO
|
In cryptography, MOSQUITO was a stream cipher algorithm designed by Joan Daemen and Paris Kitsos. It was submitted to the eSTREAM Project of the eCRYPT network. After the initial design was broken by Joux and Muller, a tweaked version named MOUSTIQUE was proposed which made it to Phase 3 of the eSTREAM evaluation process as the only self-synchronizing cipher remaining. However, MOUSTIQUE was subsequently broken by Käsper et al., leaving the design of a secure and efficient self-synchronising stream cipher as an open research problem.
|
https://en.wikipedia.org/wiki/SFINKS
|
In cryptography, SFINKS is a stream cypher algorithm developed by An Braeken, Joseph Lano, Nele Mentens, Bart Preneel, and Ingrid Verbauwhede. It includes a message authentication code. It has been submitted to the eSTREAM Project of the eCRYPT network.
|
https://en.wikipedia.org/wiki/Graphics%20address%20remapping%20table
|
The graphics address remapping table (GART), also known as the graphics aperture remapping table, or graphics translation table (GTT), is an I/O memory management unit (IOMMU) used by Accelerated Graphics Port (AGP) and PCI Express (PCIe) graphics cards. The GART allows the graphics card direct memory access (DMA) to the host system memory, through which buffers of textures, polygon meshes and other data are loaded. AMD later reused the same mechanism for I/O virtualization with other peripherals including disk controllers and network adapters.
A GART is used as a means of data exchange between main memory and video memory, through which buffers of textures, polygon meshes and other data are paged or swapped in. It can also be used to expand the amount of video memory available to systems with only integrated or shared graphics (i.e. no discrete graphics processor), such as Intel HD Graphics. However, this type of memory remapping has a caveat that affects the entire system: any memory pre-allocated to the GART is pooled and cannot be used for any purpose other than graphics memory and display rendering.
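The core idea of the remapping table can be illustrated with a toy model (a conceptual sketch only, not the actual data structures used by any driver): a contiguous aperture of GPU-visible pages is translated, page by page, onto scattered physical pages of system memory.

# Toy illustration of a GART-style remapping table (not real driver code):
# a contiguous aperture of page indices is translated to scattered
# physical page frames of system memory.

PAGE_SIZE = 4096

class ToyGart:
    def __init__(self):
        self.table = {}          # aperture page index -> physical page frame

    def map_page(self, aperture_page, physical_frame):
        self.table[aperture_page] = physical_frame

    def translate(self, aperture_address):
        page, offset = divmod(aperture_address, PAGE_SIZE)
        frame = self.table[page]           # raises KeyError if unmapped
        return frame * PAGE_SIZE + offset

gart = ToyGart()
# Map three consecutive aperture pages onto non-contiguous system pages.
for aperture_page, frame in enumerate([0x2a0, 0x113, 0x5f7]):
    gart.map_page(aperture_page, frame)

print(hex(gart.translate(0x1234)))  # address in aperture page 1 -> frame 0x113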
Operating system support
Linux
Jeff Hartmann served as the primary maintainer of the Linux kernel's agpgart driver, which began as part of Brian Paul's Utah GLX accelerated Mesa 3D driver project. The developers primarily targeted Linux 2.4.x kernels, but made patches available against older 2.2.x kernels. Dave Jones heavily reworked agpgart for the Linux 2.6.x kernels, along with more contributions from Jeff Hartmann.
FreeBSD
In FreeBSD, the agpgart driver appeared in its 4.1 release.
Solaris
AGPgart support was introduced into Solaris Express Developer Edition as of its 7/05 release.
See also
Direct Rendering Manager
|
https://en.wikipedia.org/wiki/Multicomplex%20number
|
In mathematics, the multicomplex number systems $C_n$ are defined inductively as follows: Let $C_0$ be the real number system. For every $n > 0$ let $i_{n+1}$ be a square root of $-1$, that is, an imaginary unit. Then $C_{n+1} = \{ z = x + y\, i_{n+1} : x, y \in C_n \}$. In the multicomplex number systems one also requires that $i_n i_m = i_m i_n$ (commutativity). Then $C_1$ is the complex number system, $C_2$ is the bicomplex number system, $C_3$ is the tricomplex number system of Corrado Segre, and $C_n$ is the multicomplex number system of order n.
Each $C_n$ forms a Banach algebra. G. Bayley Price has written about the function theory of multicomplex systems, providing details for the bicomplex system $C_2$.
The multicomplex number systems are not to be confused with Clifford numbers (elements of a Clifford algebra), since Clifford's square roots of −1 anti-commute ($i_n i_m + i_m i_n = 0$ when $m \neq n$ for Clifford).
Because the multicomplex numbers have several square roots of −1 that commute, they also have zero divisors: $(i_n - i_m)(i_n + i_m) = i_n^2 - i_m^2 = 0$ despite $i_n - i_m \neq 0$ and $i_n + i_m \neq 0$, and $(i_n i_m + 1)(i_n i_m - 1) = (i_n i_m)^2 - 1 = 0$ despite $i_n i_m + 1 \neq 0$ and $i_n i_m - 1 \neq 0$. Any product $i_n i_m$ of two distinct multicomplex units behaves as the $j$ of the split-complex numbers, and therefore the multicomplex numbers contain a number of copies of the split-complex number plane.
With respect to the subalgebra $C_k$, $k = 0, 1, \ldots, n-1$, the multicomplex system $C_n$ is of dimension $2^{n-k}$ over $C_k$.
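A minimal sketch of bicomplex ($C_2$) arithmetic, written here with an ad hoc Bicomplex class (the representation as a pair of ordinary complex numbers is an implementation choice, not standard notation), makes the zero divisors above concrete:

# Minimal sketch of bicomplex arithmetic (C_2), representing a bicomplex
# number as z = a + b*i2 with a, b ordinary complex numbers (which carry i1).
# Multiplication uses i2**2 = -1 and the fact that i1 and i2 commute.

class Bicomplex:
    def __init__(self, a, b):
        self.a = complex(a)   # component without i2
        self.b = complex(b)   # coefficient of i2

    def __mul__(self, other):
        # (a + b*i2)(c + d*i2) = (ac - bd) + (ad + bc)*i2
        return Bicomplex(self.a * other.a - self.b * other.b,
                         self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"({self.a}) + ({self.b})*i2"

i1 = Bicomplex(1j, 0)   # the first imaginary unit
i2 = Bicomplex(0, 1)    # the second imaginary unit

# Zero divisors: (i1 - i2)(i1 + i2) = i1^2 - i2^2 = 0, though neither factor is 0.
u = Bicomplex(1j, -1)   # i1 - i2
v = Bicomplex(1j, 1)    # i1 + i2
print(u * v)            # -> (0j) + (0j)*i2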
|
https://en.wikipedia.org/wiki/Louis%20Rougier
|
Louis Auguste Paul Rougier (birth name: Paul Auguste Louis Rougier; 10 April 1889 – 14 October 1982) was a French philosopher. Rougier made many important contributions to epistemology, philosophy of science, political philosophy and the history of Christianity.
Early life
Rougier was born in Lyon. Debilitated by pleurisy in his youth, he was declared unfit for service in World War I and devoted his adolescence to intellectual pursuits. He studied philosophy under Edmond Goblot.
After receiving the agrégation in philosophy from the University of Lyon, he qualified as a philosophy teacher in 1914 and worked as a teacher in several high schools, before teaching at the École Chateaubriand de Rome, Besançon, the Cairo University, the Institut Universitaire des Hautes Études Internationales de Genève (Geneva Graduate Institute of International Studies), and the Fondation Édouard-Herriot (Édouard-Herriot Foundation) in Lyon.
In 1920 he obtained his doctorate from the Sorbonne and published his doctoral thesis as La philosophie géometrique de Poincaré and Les paralogismes du rationalisme. Rougier already had several publications to his name, however, beginning with a 1914 paper on the use of non-Euclidean geometry in relativity theory.
Career
Rougier taught in Algiers from 1917 to 1920 and then in Rome from 1920 to 1924. His first university appointment in France was at the University of Besançon in 1925, where he served on the faculty until his dismissal in 1948 for political reasons. Further university appointments were in Cairo from 1931 to 1936, the New School for Social Research from 1941 to 1943 and the Université de Montréal in 1945. Rougier's final academic appointment was to the Université de Caen in 1954, but he retired at the age of 66 after only one year there.
Philosophy
Under the influence of Henri Poincaré and Ludwig Wittgenstein, Rougier developed a philosophy based on the idea that systems of logic are neither apodictic (i.e., necessarily true and
|
https://en.wikipedia.org/wiki/William%20Fogg%20Osgood
|
William Fogg Osgood (March 10, 1864 – July 22, 1943) was an American mathematician.
Education and career
William Fogg Osgood was born in Boston on March 10, 1864. In 1886, he graduated from Harvard, where, after studying at the universities of Göttingen (1887–1889) and Erlangen (Ph.D., 1890), he was instructor (1890–1893), assistant professor (1893–1903), and thenceforth professor of mathematics. From 1918 to 1922, he was chairman of the department of mathematics at Harvard. He became professor emeritus in 1933. From 1934 to 1936, he was visiting professor of mathematics at Peking University.
From 1899 to 1902, he served as editor of the Annals of Mathematics, and in 1905–1906 was president of the American Mathematical Society, whose Transactions he edited in 1909–1910.
Contributions
The works of Osgood dealt with complex analysis, in particular conformal mapping and uniformization of analytic functions, and calculus of variations. He was invited by Felix Klein to write an article on complex analysis in the Enzyklopädie der mathematischen Wissenschaften which was later expanded in the book Lehrbuch der Funktionentheorie.
Osgood curves – Jordan curves with positive area – are named after Osgood, who published a paper proving their existence in 1903.
Besides his research on analysis, Osgood was also interested in mathematical physics and wrote on the theory of the gyroscope.
Awards and honors
In 1904, he was elected to the National Academy of Sciences.
Personal life
Osgood's cousin, Louise Osgood, was the mother of Bernard Koopman.
William Fogg Osgood died at his home in Belmont, Massachusetts on July 22, 1943.
Selected publications
Osgood's books include:
Introduction to Infinite Series (Harvard University Press 1897; third edition, 1906)
Lehrbuch der Funktionentheorie (Teubner, Berlin, 1907; second edition, 1912)
First Course in Differential and Integral Calculus (1907; revised edition, 1909)
(with W. C. Graustein) Plane and Solid Analytic Geometry
|
https://en.wikipedia.org/wiki/Hydlide
|
Hydlide is an action role-playing game developed and published by T&E Soft. It was originally released for the NEC PC-6001 and PC-8801 computers in 1984, in Japan only; ports for the MSX, MSX2, FM-7 and NEC PC-9801 were released the following year.
A Famicom version was released under the name Hydlide Special in Japan in 1986. Three years later, it was localized and released in English regions for the Nintendo Entertainment System by Fujisankei Communications International, known as simply Hydlide. The game sold two million copies in Japan across all platforms. A Sega Genesis version of Hydlide Special was showcased at the 1989 SCES but never released.
The game spawned the Hydlide series, followed by the sequels Hydlide II: Shine of Darkness in 1985 and Hydlide 3: The Space Memories (Super Hydlide) in 1987. A 1995 remake was released for the Sega Saturn as Virtual Hydlide.
Plot
In the kingdom of Fairyland, three magic jewels were enshrined in the palace to maintain peace in the kingdom. One day, an evil man broke into the palace and stole one of the three magic jewels. Without the third jewel, the two remaining jewels lost their magic sparkle. The magic spell that sealed the power of Varalys, the most vicious demon in the kingdom, was broken. During the turmoil which followed, the last two jewels were stolen. Varalys cast a special magic on Princess Ann, turning her into three fairies, and hid her somewhere in the kingdom. He then let loose a horde of monsters across the land and became the ruler of the kingdom. The young knight Jim stood up and took action to restore peace in the kingdom. He bravely made his way into the wilderness in full armor to fight the monsters.
Development
The game was created by T&E Soft's Tokihiro Naito. His idea behind Hydlide was to mix together action and RPG elements into a new "action RPG" genre. He was inspired by The Tower of Druaga and The Black Onyx, especially the former, as Hydlide's design leans more towards action than role-playing
|
https://en.wikipedia.org/wiki/Mobile%20signature
|
A mobile signature is a digital signature generated either on a mobile phone or on a SIM card on a mobile phone.
Origins of the term
mSign
The term first appeared in articles introducing mSign (short for Mobile Electronic Signature Consortium). It was founded in 1999 and comprised 35 member companies. In October 2000, the consortium published an XML-interface defining a protocol allowing service providers to obtain a mobile (digital) signature from a mobile phone subscriber.
In 2001, mSign gained industry-wide coverage when it became apparent that Brokat (one of the founding companies) had also obtained a process patent in Germany for using the mobile phone to generate digital signatures.
ETSI-MSS standardization
The term was then used by Paul Gibson (G&D) and Romary Dupuis (France Telecom) in their standardisation work at the European Telecommunications Standards Institute (ETSI) and published in ETSI Technical Report TR 102 203.
The ETSI-MSS specifications, ETSI TS 102 204 and ETSI TS 102 207, define a SOAP interface and mobile signature roaming for systems implementing mobile signature services.
Today
A mobile signature can be the legal equivalent of a handwritten ("wet") signature, hence the term "Mobile Ink", a commercial term coined by the Swiss company Sicap. Other terms include "Mobile ID" and "Mobile Certificate", used by a circle of trust of three Finnish mobile network operators implementing the roaming mobile signature framework Mobiilivarmenne.
According to the EU directives for electronic signatures the mobile signature can have the same level of protection as the handwritten signature if all components in the signature creation chain are appropriately certified. The governing standard for the mobile signature creation devices and equivalent of a handwritten signature is described in the Commission Decision 2003/511/EC of 14 July 2003 on the publication of reference numbers of generally recognised standards for electronic signature products in accordance with the Electronic Sign
|
https://en.wikipedia.org/wiki/Soft%20sensor
|
Soft sensor or virtual sensor is a common name for software in which several measurements are processed together. Soft sensors are commonly based on control theory and are also known as state observers. There may be dozens or even hundreds of measurements. The interaction of the signals can be used to calculate new quantities that need not be measured directly. Soft sensors are especially useful in data fusion, where measurements of different characteristics and dynamics are combined. They can be used for fault diagnosis as well as for control applications.
Well-known software algorithms that can be seen as soft sensors include Kalman filters. More recent implementations of soft sensors use neural networks or fuzzy computing.
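As a minimal sketch of the idea (all process and noise values below are made-up illustrations, not drawn from any particular application), the following soft sensor estimates an unmeasured velocity from noisy position readings using a linear Kalman filter:

import numpy as np

# Minimal soft-sensor sketch: a constant-velocity Kalman filter that
# estimates an unmeasured velocity from noisy position readings.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

rng = np.random.default_rng(0)
true_velocity = 0.7
measurements = [true_velocity * k * dt + rng.normal(0, 0.5) for k in range(50)]

for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("estimated velocity:", x[1, 0])   # close to the unmeasured 0.7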
Examples of soft sensor applications:
Kalman filters for estimating the location
Velocity estimators in electric motors
Estimating process data using self-organizing neural networks
Fuzzy computing in process control
Estimators of food quality
See also
Virtual sensing
State observer
|
https://en.wikipedia.org/wiki/Olympiade%20Math%C3%A9matique%20Belge
|
The Olympiade Mathématique Belge (OMB) is a mathematical competition for students in grades 7 to 12, organised each year since 1976. Only students from the French Community participate; Dutch-speaking students can compete in the Vlaamse Wiskunde Olympiade.
The competition is split up in three age categories:
Mini-Olympiade for grades 7 and 8
Midi-Olympiade for grades 9 and 10
Maxi-Olympiade for grades 11 and 12
Among the participants, three are selected to represent Belgium in the International Mathematical Olympiad, together with three students selected through the Vlaamse Wiskunde Olympiade.
These three participants are chosen through a series of contests. The first round is the « éliminatoire », in which anyone eligible may participate in their own category. Of these students, roughly the top 10% of scorers are selected for the « demi-finale ». In this round, a similar multiple-choice test (almost identical in layout to the first round) is given to the contestants. The top scorers from this round are invited to take part in the « finale ». In this final test, 4 or 5 questions are given (instead of 30) and the answers and reasoning must be thoroughly explained. Finally, some promising students from this final test are picked to attend a maths camp at which the three IMO team members are selected.
|
https://en.wikipedia.org/wiki/Dynamite%20Dan
|
Dynamite Dan is a platform game written by Rod Bowkett for the ZX Spectrum and published by Mirrorsoft in 1985. It was ported to the Amstrad CPC, Commodore 64, and MSX.
A sequel, Dynamite Dan II, was released the following year.
Gameplay
The game starts where Dan lands his airship on the top of the evil Dr Blitzen's hideout. The aim of the game is to find eight sticks of dynamite that are placed randomly around the playing area whilst avoiding the perils of the game such as moving monsters, drowning and falling from great heights. Once Dan has all eight sticks of dynamite, the player must make their way to the central safe to blow it open and steal the plans for the evil doctor's Death Ray and escape to his airship.
The playing area is one large building split up into multiple screens that wrap around a central elevator. Each screen contains a number of moving monsters that are destroyed when the player walks into them, but which take a life in return. The only exceptions are Dr Blitzen and his assistant Donner (Donner and Blitzen), who are both located on the same screen as the safe and cannot be destroyed. Other perils to Dan's life include running out of energy (caused by not collecting enough food, falling from heights and being hit by laser beams). If Dan falls into the underground river that flows beneath the building, the player receives a game over unless Dan has picked up oxygen, in which case he is sent back to the start of the game.
Once completed, the game provides a secret code to be deciphered and a telephone number to call with the answer. The number no longer works, but the prize was a ride in the Mirrorsoft blimp.
The background music when choosing the game settings and waiting for the game to start is the third movement (Rondo Alla Turca) from Wolfgang Amadeus Mozart's Piano Sonata No. 11 in A major, K. 331, which had been used the previous year in Jet Set Willy in the Commodore 64 conversion on some screen
|
https://en.wikipedia.org/wiki/Relaxation%20%28approximation%29
|
In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.
For example, a linear programming relaxation of an integer programming problem removes the integrality constraint and so allows non-integer rational solutions. A Lagrangian relaxation of a complicated problem in combinatorial optimization penalizes violations of some constraints, allowing an easier relaxed problem to be solved. Relaxation techniques complement or supplement branch and bound algorithms of combinatorial optimization; linear programming and Lagrangian relaxations are used to obtain bounds in branch-and-bound algorithms for integer programming.
The modeling strategy of relaxation should not be confused with iterative methods of relaxation, such as successive over-relaxation (SOR); iterative methods of relaxation are used in solving problems in differential equations, linear least-squares, and linear programming. However, iterative methods of relaxation have been used to solve Lagrangian relaxations.
Definition
A relaxation of the minimization problem

$z = \min \{ c(x) : x \in X \subseteq \mathbb{R}^n \}$

is another minimization problem of the form

$z_R = \min \{ c_R(x) : x \in X_R \subseteq \mathbb{R}^n \}$

with these two properties

$X \subseteq X_R$
$c_R(x) \le c(x)$ for all $x \in X$.
The first property states that the original problem's feasible domain is a subset of the relaxed problem's feasible domain. The second property states that the original problem's objective-function is greater than or equal to the relaxed problem's objective-function.
Properties
If $x^*$ is an optimal solution of the original problem, then $x^* \in X \subseteq X_R$ and $z = c(x^*) \ge c_R(x^*) \ge z_R$. Therefore, $x^*$ provides an upper bound on $z_R$.
If, in addition to the previous assumptions, $c_R(x) = c(x)$ for all $x \in X$, the following holds: If an optimal solution for the relaxed problem is feasible for the original problem, then it is optimal for the original problem.
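These definitions and bounds can be illustrated on a small made-up instance using the linear programming relaxation mentioned above (the specific numbers are arbitrary):

from itertools import product
import numpy as np
from scipy.optimize import linprog

# Small made-up covering-style integer program (minimization, matching the
# definition above): minimize x1 + x2 + x3 subject to
#   x1 + x2 >= 1,  x2 + x3 >= 1,  x1 + x3 >= 1,  x binary.
c = np.array([1.0, 1.0, 1.0])
A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
b = np.ones(3)

# Original problem: brute force over {0,1}^3 (feasible set X).
z = min(c @ np.array(x)
        for x in product([0, 1], repeat=3)
        if (A @ np.array(x) >= b).all())

# Relaxation: drop integrality, keep 0 <= x <= 1 (larger feasible set X_R).
# linprog uses "<=" constraints, so multiply the covering constraints by -1.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 3)
z_R = res.fun

print("integer optimum z:  ", z)     # 2.0  (e.g. x = (1, 1, 0))
print("relaxed optimum z_R:", z_R)   # 1.5  (x = (0.5, 0.5, 0.5)), a lower bound on z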
Some relaxation techniques
Linear programming relaxation
Lagrangian re
|
https://en.wikipedia.org/wiki/Lagrangian%20relaxation
|
In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information.
The method penalizes violations of inequality constraints using a Lagrange multiplier, which imposes a cost on violations. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem.
The problem of maximizing the Lagrangian function of the dual variables (the Lagrangian multipliers) is the Lagrangian dual problem.
Mathematical description
Suppose we are given a linear programming problem, with $x \in \mathbb{R}^n$ and $A \in \mathbb{R}^{m \times n}$, of the following form:

max $c^T x$
s.t. $Ax \le b$
If we split the constraints in $A$ such that $A_1 \in \mathbb{R}^{m_1 \times n}$, $A_2 \in \mathbb{R}^{m_2 \times n}$ and $m_1 + m_2 = m$, we may write the system:

max $c^T x$
s.t.
(1) $A_1 x \le b_1$
(2) $A_2 x \le b_2$
We may introduce the constraint (2) into the objective:

max $c^T x + \lambda^T (b_2 - A_2 x)$
s.t.
(1) $A_1 x \le b_1$
If we let $\lambda = (\lambda_1, \ldots, \lambda_{m_2})$ be nonnegative weights, we get penalized if we violate the constraint (2), and we are also rewarded if we satisfy the constraint strictly. The above system is called the Lagrangian relaxation of our original problem.
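A small numerical sketch (with arbitrary, made-up data) of relaxing one constraint of a tiny linear program for a fixed multiplier $\lambda$ might look as follows; it only illustrates the construction above, not any particular method for choosing $\lambda$:

import numpy as np
from scipy.optimize import linprog

# Tiny made-up LP (all numbers illustrative):
#   max  3x1 + 2x2
#   s.t. (1)  x1 <= 2,  x2 <= 2          (kept constraints, A1 x <= b1)
#        (2)  x1 + x2 <= 3               (relaxed constraint, A2 x <= b2)
#        x >= 0
c = np.array([3.0, 2.0])
A1, b1 = np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([2.0, 2.0])
A2, b2 = np.array([[1.0, 1.0]]), np.array([3.0])

# Original problem (linprog minimizes, so negate the objective).
orig = linprog(-c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
               bounds=[(0, None)] * 2)
z_original = -orig.fun

# Lagrangian relaxation for a fixed nonnegative multiplier lambda:
#   max  c^T x + lam^T (b2 - A2 x)   s.t.  A1 x <= b1, x >= 0
lam = np.array([1.5])
relaxed = linprog(-(c - A2.T @ lam), A_ub=A1, b_ub=b1, bounds=[(0, None)] * 2)
z_relaxed = -relaxed.fun + float(lam @ b2)

print("original optimum:", z_original)   # 8.0 (x = (2, 1))
print("Lagrangian bound:", z_relaxed)    # 8.5, never smaller than 8.0 for lam >= 0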
The LR solution as a bound
Of particular use is the property that for any fixed set of $\tilde{\lambda}$ values, the optimal result to the Lagrangian relaxation problem will be no smaller than the optimal result to the original problem. To see this, let $\hat{x}$ be the optimal solution to the original problem, and let $\bar{x}$ be the optimal solution to the Lagrangian relaxation. We can then see that

$c^T \hat{x} \;\le\; c^T \hat{x} + \tilde{\lambda}^T (b_2 - A_2 \hat{x}) \;\le\; c^T \bar{x} + \tilde{\lambda}^T (b_2 - A_2 \bar{x})$

The first inequality is true because $\hat{x}$ is feasible in the original problem and the second inequalit
|
https://en.wikipedia.org/wiki/High-capacity%20data%20radio
|
High-capacity data radio (HCDR) is a development of the Near-Term Digital Radio (NTDR) for the UK government as a part of the Bowman communication system. It is a secure wideband 225–450 MHz UHF radio system that provides a self-managing IP-based Internet backbone capability without the need for other infrastructure communications (mobile phone, fixed communications).
There is also an export version that incorporates Advanced Encryption Standard (AES) encryption rather than UK Government Type 1 Crypto. The radio offers a link throughput (terminal to terminal) of 500 kbit/s. A deployment of over 200 HCDR-equipped military vehicles can automatically configure and self manage into a fully connected autonomous mesh network intercommunicating using mobile ad hoc network (MANET) protocols. The radio is an IPv4-compliant three-port router having a radio port, Ethernet port and PPP serial port. The 20-watt radio has adaptive transmit power and adaptive forward error correction and can optimally achieve ground ranges up to 15 km with omnidirectional antennas. A maritime version allows radio LAN operation within flotillas of naval ships up to 20 km apart. The radio features coded modulation with internal wide-band or narrow band radio data modems.
|
https://en.wikipedia.org/wiki/Current%20Opinion%20in%20Cell%20Biology
|
Current Opinion in Cell Biology is a bimonthly peer-reviewed scientific journal published by Elsevier covering all aspects of cell biology including genetics, cell communication, and metabolism. It was established in 1989 and is part of the Elsevier Current Opinion series of journals. The editors-in-chief are Tom Misteli (National Institutes of Health) and Anne Ridley.
The journal has a 2018 impact factor of 8.233.
|
https://en.wikipedia.org/wiki/Wireless%20grid
|
Wireless grids are wireless computer networks consisting of different types of electronic devices with the ability to share their resources with any other device in the network in an ad hoc manner.
A definition of the wireless grid can be given as: "Ad hoc, distributed resource-sharing networks between heterogeneous wireless devices"
The following key characteristics further clarify this concept:
No centralized control
Small, low powered devices
Heterogeneous applications and interfaces
New types of resources like cameras, GPS trackers and sensors
Dynamic and unstable users / resources
The technologies that make up the wireless grid can be divided into two main categories: ad hoc networking and grid computing.
(Wireless) Ad hoc networking
In traditional networks, both wired and wireless, the connected devices, or nodes, depend on dedicated devices (edge devices) such as routers and/or servers for facilitating the throughput of information from one node to the other. These 'routing nodes' have the ability to determine where information is coming from and where it is supposed to go. They give out names and addresses (IP addresses) to each connected node and regulate the traffic between them. In wireless grids, such dedicated routing devices are not (always) available and the bandwidth that is permanently available to traditional networks has to be either 'borrowed' from an already existing network or publicly accessible bandwidth (open spectrum) has to be used.
A group addressing this problem is MANET (Mobile Ad Hoc Network).
Resource sharing
One of the intended aspects of wireless grids is that they will facilitate the sharing of a wide variety of resources. These include both technical and information resources. The former include bandwidth, QoS, and web services, but also computational power and data storage capacity. Information resources can include virtually any kind of data, from databases and membership lists to pictures and directories.
Ad hoc resource s
|
https://en.wikipedia.org/wiki/Transflective%20liquid-crystal%20display
|
A transflective liquid-crystal display is a liquid-crystal display (LCD) with an optical layer that reflects and transmits light (transflective is a portmanteau of transmissive and reflective). Under bright illumination (e.g. when exposed to daylight) the display acts mainly as a reflective display with the contrast being constant with illuminance. However, under dim and dark ambient situations the light from a backlight is transmitted through the transflective layer to provide light for the display. The transflective layer is called a transflector. It is typically made from a sheet polymer. It is similar to a one-way mirror but is not specular.
An application is digital LCD wristwatches. In dim ambient light or at night a backlight allows reading of the display in its transmissive mode. Digital time displays in alarm clocks for bedrooms may also work this way. If they are battery-powered, the backlight may be push-button operated. The backlighting is usually dim, so that the display is comfortably readable at night. Some 21st century smartwatches such as the Pebble Smartwatch and the Amazfit Stratos also use transflective LCDs.
When an illuminance sensor is added for control of the backlight, such a transflective LCD can be read over a wide range of illuminance levels. This technique is often found in automotive instrumentation. In portable electronic devices the transflective mode of operation helps to save battery charge, since in bright environments no backlighting is required.
Some displays that transmit light and have minor reflectivity are best readable in the dark and fairly readable in bright sunlight, but only under a particular angle; they are least readable in bright daylight without direct sunlight.
Trade names
Display manufacturers label their transflective screens under a variety of trade names:
BE+: SolarbON
Boe Hydis: Viewiz
Motion Computing: View Anywhere
LG Display: Shine-Out
NEC Displays: ST-NLT
DEMCO CSI: SOLARBON
Pixel Qi: 3Qi
|
https://en.wikipedia.org/wiki/Certolizumab%20pegol
|
Certolizumab pegol, sold under the brand name Cimzia, is a biopharmaceutical medication for the treatment of Crohn's disease, rheumatoid arthritis, psoriatic arthritis and ankylosing spondylitis. It is a fragment of a monoclonal antibody specific to tumor necrosis factor alpha (TNF-α) and is manufactured by UCB.
It is on the World Health Organization's List of Essential Medicines.
Medical uses
Crohn's disease
On April 22, 2008, the U.S. Food and Drug Administration (FDA) approved Cimzia for the treatment of Crohn's disease in people who did not respond sufficiently or adequately to standard therapy.
Rheumatoid arthritis
On June 26, 2009, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) issued a positive opinion recommending that the European Commission grant a marketing authorisation for Cimzia for the treatment of rheumatoid arthritis only; the CHMP refused approval for the treatment of Crohn's disease. The marketing authorisation was granted to UCB Pharma SA on October 1, 2009.
Psoriatic arthritis
On September 27, 2013, the U.S. FDA approved Cimzia for the treatment of adult patients with active psoriatic arthritis.
Method of action
Certolizumab pegol is a monoclonal antibody directed against tumor necrosis factor alpha. More precisely, it is a PEGylated Fab' fragment of a humanized TNF inhibitor monoclonal antibody.
Clinical trials
Crohn's disease
Positive results have been demonstrated in two phase III trials (PRECiSE 1 and 2) of certolizumab pegol versus placebo in moderate to severe active Crohn's disease.
Axial spondyloarthritis
In 2013, a phase 3 double-blind randomized placebo-controlled study found significantly positive results in patient self-reported questionnaires, with rapid improvement of function and pain reduction, in patients with axial spondyloarthritis.
Rheumatoid arthritis
Certolizumab appears beneficial in those with rheumatoid arthritis.
|
https://en.wikipedia.org/wiki/Adam%20Adamandy%20Kocha%C5%84ski
|
Adam Adamandy Kochański (5 August 1631 – 17 May 1700) was a Polish mathematician, physicist, clock-maker, pedagogue and librarian. He was the Court Mathematician of John III Sobieski.
Kochański was born in Dobrzyń nad Wisłą. He began his education in Toruń, and in 1652 he entered the Society of Jesus in Vilnius. He studied philosophy at Vilnius University (then called Vilnius Academy). He also studied mathematics, physics and theology. He went on to lecture on those subjects at several European universities: in Florence, Prague, Olomouc, Wrocław, Mainz and Würzburg. In 1680 he accepted an offer from John III Sobieski, the king of Poland, returning to Poland and taking the position of the king's chaplain, mathematician, clock maker, librarian, and tutor of the king's son, Jakub.
He wrote many scientific papers, mainly on mathematics and mechanics, but also on physics, astronomy and philosophy. The best known of his works, Observationes Cyclometricae ad facilitandam Praxin accommodatae, is devoted to squaring the circle (the quadrature of the circle) and was published in 1685 in the leading scientific periodical of the time, Acta Eruditorum. He also found a famous approximation of π, today called Kochański's approximation:

$\pi \approx \sqrt{\tfrac{40}{3} - 2\sqrt{3}} \approx 3.141533.$
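A quick numerical check of this approximation (the script below is only illustrative):

import math

# Kochanski's approximation to pi: sqrt(40/3 - 2*sqrt(3))
approx = math.sqrt(40 / 3 - 2 * math.sqrt(3))
print(f"approximation: {approx:.6f}")                 # 3.141533
print(f"pi:            {math.pi:.6f}")                # 3.141593
print(f"error:         {abs(math.pi - approx):.2e}")  # about 6e-05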
Kochański cooperated and corresponded with many scientists, Johannes Hevelius and Gottfried Leibniz among them. He was apparently the only one of the contemporary Poles to know elements of the newly invented calculus. As a mechanic he was a renowned clock maker. He suggested replacing the clock's pendulum with a spring, and standardizing the number of escapements per hour.
He died in Teplice in Bohemia.
|
https://en.wikipedia.org/wiki/Differentiator
|
In electronics, a differentiator is a circuit designed to produce an output approximately proportional to the rate of change (the time derivative) of the input. A true differentiator cannot be physically realized, because it has infinite gain at infinite frequency. A similar effect can be achieved, however, by limiting the gain above some frequency. The differentiator circuit is essentially a high-pass filter.
An active differentiator includes some form of amplifier, while a passive differentiator is made only of resistors, capacitors and inductors.
Passive differentiator
The simple four-terminal passive circuits depicted in figure, consisting of a resistor and a capacitor, or alternatively a resistor and an inductor, behave as differentiators.
Indeed, according to Ohm's law, the voltages at the two ends of the capacitive differentiator are related by a transfer function that has a zero in the origin and a pole in $-\frac{1}{RC}$, and that is consequently a good approximation of an ideal differentiator at frequencies below the natural frequency of the pole:

$\dfrac{V_{\text{out}}(s)}{V_{\text{in}}(s)} = \dfrac{sRC}{1 + sRC}.$

Similarly, the transfer function of the inductive differentiator has a zero in the origin and a pole in $-\frac{R}{L}$:

$\dfrac{V_{\text{out}}(s)}{V_{\text{in}}(s)} = \dfrac{sL/R}{1 + sL/R}.$
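A short numerical sketch (with arbitrary component values, chosen only for illustration) shows how the passive RC differentiator approximates the ideal response $sRC$ well below the pole frequency:

import numpy as np

# Frequency response of the passive RC differentiator H(s) = sRC / (1 + sRC)
# compared with the ideal differentiator H_ideal(s) = sRC.
# Component values below are illustrative, not from the article.
R, C = 10e3, 10e-9            # 10 kOhm, 10 nF  ->  pole at 1/(RC) = 1e4 rad/s
freqs = np.logspace(2, 6, 5)  # rad/s

for w in freqs:
    s = 1j * w
    h = s * R * C / (1 + s * R * C)
    h_ideal = s * R * C
    print(f"w = {w:8.0f} rad/s   |H| = {abs(h):.4f}   |H_ideal| = {abs(h_ideal):.4f}")
# Well below the pole (w << 1/(R*C)) the two agree; near and above the
# pole the passive circuit's gain flattens out toward 1.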
Active differentiator
Ideal differentiator
A differentiator circuit (also known as a differentiating amplifier or inverting differentiator) consists of an ideal operational amplifier with a resistor R providing negative feedback and a capacitor C at the input, such that:
$V_{\text{in}}$ is the voltage across C (from the op amp's virtual ground negative terminal).
$V_{\text{out}}$ is the voltage across R (also from the op amp's virtual ground negative terminal).
$I$ is the current flowing from the output through both R and C to the circuit's input.
No current flows into the ideal op amp's inputs because they have very high input impedance.
By utilizing the capacitor's current–voltage relation, this circuit's current flowing from the output to the input will be proportional to the derivative of the voltage across the capacitor:

$I = C \dfrac{dV_{\text{in}}}{dt}.$
This same curr
|
https://en.wikipedia.org/wiki/Intervention%20theory
|
In social studies and social policy, intervention theory is the analysis of the decision making problems of intervening effectively in a situation in order to secure desired outcomes. Intervention theory addresses the question of when it is desirable not to intervene and when it is appropriate to do so. It also examines the effectiveness of different types of intervention. The term is used across a range of social and medical practices, including health care, child protection and law enforcement. It is also used in business studies.
Within the theory of nursing, intervention theory is included within a larger scope of practice theories. Burns and Grove point out that it directs the implementation of a specific nursing intervention and provides theoretical explanations of how and why the intervention is effective in addressing a particular patient care problem. These theories are tested through programs of research to validate the effectiveness of the intervention in addressing the problem.
In Intervention Theory and Method Chris Argyris argues that in organization development, effective intervention depends on appropriate and useful knowledge that offers a range of clearly defined choices and that the target should be for as many people as possible to be committed to the option chosen and to feel responsibility for it. Overall, interventions should generate a situation in which actors believe that they are working to internal rather than external influences on decisions.
See also
Cognitive interventions, a set of techniques and therapies practiced in counseling
Psychological intervention, any action by psychological professionals designed to bring about change in a client
Health intervention, an effort to promote good health behaviour or to prevent bad health behaviour
Human Systems Intervention, the design and implementation of interventions in social settings where adults are confronted with the need to change their perspectives, attitudes, and actions
Int
|
https://en.wikipedia.org/wiki/Representation%20%28mathematics%29
|
In mathematics, a representation is a very general relationship that expresses similarities (or equivalences) between mathematical objects or structures. Roughly speaking, a collection Y of mathematical objects may be said to represent another collection X of objects, provided that the properties and relationships existing among the representing objects yi conform, in some consistent way, to those existing among the corresponding represented objects xi. More specifically, given a set Π of properties and relations, a Π-representation of some structure X is a structure Y that is the image of X under a homomorphism that preserves Π. The label representation is sometimes also applied to the homomorphism itself (such as group homomorphism in group theory).
Representation theory
Perhaps the most well-developed example of this general notion is the subfield of abstract algebra called representation theory, which studies the representing of elements of algebraic structures by linear transformations of vector spaces.
Other examples
Although the term representation theory is well established in the algebraic sense discussed above, there are many other uses of the term representation throughout mathematics.
Graph theory
An active area of graph theory is the exploration of isomorphisms between graphs and other structures.
A key class of such problems stems from the fact that, like adjacency in undirected graphs, intersection of sets
(or, more precisely, non-disjointness) is a symmetric relation.
This gives rise to the study of intersection graphs for innumerable families of sets.
One foundational result here, due to Paul Erdős and his colleagues, is that every n-vertex graph may be represented in terms of intersection among subsets of a set of size no more than $n^2/4$.
Representing a graph by such algebraic structures as its adjacency matrix and Laplacian matrix gives rise to the field of spectral graph theory.
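As a small illustration (a made-up four-vertex graph), the adjacency and Laplacian matrices can be built directly from an edge list:

import numpy as np

# A small made-up graph on 4 vertices, given by its edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix A: A[i, j] = 1 iff {i, j} is an edge.
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Laplacian matrix L = D - A, with D the diagonal matrix of vertex degrees.
D = np.diag(A.sum(axis=1))
L = D - A

print(A)
print(L)
# Spectral graph theory studies the eigenvalues of these matrices; for
# example, the multiplicity of the eigenvalue 0 of L equals the number
# of connected components of the graph.
print(np.linalg.eigvalsh(L).round(3))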
Order theory
Dual to the observation above that every graph
|
https://en.wikipedia.org/wiki/353%20%28number%29
|
353 (three hundred fifty-three) is the natural number following 352 and preceding 354. It is a prime number.
In mathematics
353 is a palindromic prime, an irregular prime, a super-prime, a Chen prime, a Proth prime, and an Eisenstein prime.
In connection with Euler's sum of powers conjecture, 353 is the smallest number whose 4th power is equal to the sum of four other 4th powers, as discovered by R. Norrie in 1911:

$353^4 = 30^4 + 120^4 + 272^4 + 315^4.$
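The identity is easy to verify directly (a one-off check, shown here in Python):

# Verify Norrie's identity for 353.
print(30**4 + 120**4 + 272**4 + 315**4 == 353**4)   # True
print(353**4)                                       # 15527402881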
In a seven-team round robin tournament, there are 353 combinatorially distinct outcomes in which no subset of teams wins all its games against the teams outside the subset; mathematically, there are 353 strongly connected tournaments on seven nodes.
353 is one of the solutions to the stamp folding problem: there are exactly 353 ways to fold a strip of eight blank stamps into a single flat pile of stamps.
The Mertens function returns 0 at 353.
353 is an index of a prime Lucas number.
|
https://en.wikipedia.org/wiki/Probe%20card
|
A probe card (commonly referred to as a DUT board) is used in automated integrated circuit testing. It is an interface between an electronic test system and a semiconductor wafer.
Use and manufacture
A probe card or DUT board is a printed circuit board (PCB), and is the interface between the integrated circuit and a test head, which in turn attaches to automatic test equipment (ATE) (or "tester"). Typically, the probe card is mechanically docked to a Wafer testing prober and electrically connected to the ATE . Its purpose is to provide an electrical path between the test system and the circuits on the wafer, thereby permitting the testing and validation of the circuits at the wafer level, usually before they are diced and packaged. It normally comprises a PCB and some form of contact elements, usually metallic.
A semiconductor manufacturer will typically require a new probe card for each new device wafer and for device shrinks (when the manufacturer reduces the size of the device while keeping its functionality) because the probe card is effectively a custom connector that takes the universal pattern of a given tester and translates the signals to connect to electrical pads on the wafer. For testing of Dynamic random-access memory (DRAM) and Flash memory (FLASH) devices, these pads are typically made of aluminum and are 40–90 per side. Other devices may have flat pads, or raised bumps or pillars made of copper, copper alloys or many types of solders such as lead-tin, tin-silver and others.
The probe card must make good electrical contact to these pads or bumps during the testing of the device. When the testing of the device is complete, the prober will index the wafer to the next device to be tested.
Normally a probe card is inserted into a wafer prober, inside which the position of the wafer to be tested will be adjusted to ensure a precise contact between the probe card and wafer. Once the probe card and the wafer are loaded, a camera in the prober will opti
|
https://en.wikipedia.org/wiki/Gerontophobia
|
Gerontophobia is the fear of age-related self-degeneration (similar to gerascophobia), or a hatred or fear of the elderly due to memento mori. The term comes from the Greek γέρων – gerōn, "old man" and φόβος – phobos, "fear". Gerontophobia has been linked to thanatophobia, as fear of old age can be a precursor to fear of death. Gerontophobia can be caused by harmful stereotypes of elderly people displayed in the media.
Ageism
Discriminatory aspects of ageism have been strongly linked to gerontophobia. This irrational fear or hatred of the elderly can be associated with the expectation that someday all young people including oneself will be old inevitably and suffer from the irreversible health decline that comes with old age, which is associated with disability, disease, and death. The sight of aged people could be a possible reminder of death (memento mori) and inevitable biological vulnerability. This unwillingness to accept these can manifest in feelings of hostility and discriminatory acts towards the elderly.
History
Old age was previously seen as a golden age in the Middle Ages. Around the time of the Anglo-Saxons there was a shift towards more negative views of the elderly, which led to more and more literature developing a gerontophobic view.
Portrayal in Literature and the Media
Gerontophobia is heavily portrayed in literature and the media, starting as early as Anglo-Saxon poetry, but is also found in common literary classics such as Shakespeare's King Lear, Jonathan Swift's Gulliver's Travels, and Jane Austen's Persuasion. Gerontophobia can also be found in many TV shows and movies.
Treatments for Gerontophobia
Treatment for Gerontophobia can include better education about the elderly and aging as well as an increase in exposure and insight therapy.
See also
Gerascophobia
Gerontocracy
Intergenerational equity
List of phobias
|
https://en.wikipedia.org/wiki/BNU%20%28software%29
|
BNU is a high-performance communications device driver designed to provide enhanced support for serial port communications. The BNU serial port driver was specifically targeted for use with early (late 1980s to 1990s) DOS-based BBS software. The reason for BNU and other similar enhanced serial port drivers was to provide better support for serial communications software than was offered by the machine's BIOS and/or DOS. Having serial port support provided by BNU and similar drivers allowed communications software programmers to spend more time on their actual applications instead of on the details of how to talk to the serial ports and the modems connected to them. Sending communications data across a modem link was far more involved than sending data to a serial printer, which was essentially all the existing serial port software support was originally designed to do.
BNU was written by David Nugent as an experimental driver for serial communications following the FOSSIL specification. David released BNU to the public in 1989 and its use in the BBS world spread rapidly. BNU was one of only two or three available FOSSIL drivers for the IBM PC compatible hardware and MS-DOS/PC DOS operating system. Because of this, BNU has been one of the most widely used MS-DOS FOSSIL communications drivers.
BNU was mainly used with DOS-based Bulletin Board System (BBS) software written in the late 1980s to mid-1990s. It is not used by Windows-based BBS software, but BNU can be used under Windows NTVDM to run DOS-based BBS software under Windows. BNU and other similar drivers were not limited solely to being used in the BBS world. The enhanced capabilities they offered were also used to easily communicate with other serially connected devices for the same reasons that the FOSSIL specification and FOSSIL drivers were originally created. That reason, as noted above, was to separate the details of serial port communica
|
https://en.wikipedia.org/wiki/NetFoss
|
NetFoss is a popular Network FOSSIL driver for Windows.
A FOSSIL is a serial communications layer to allow DOS based software to talk to modems without dealing with hardware I/O and interrupts.
A Network FOSSIL redirects such software to a TCP/IP address rather than to a serial modem.
NetFoss is faster than other FOSSIL drivers, due to being written in 32-bit assembly language.
It allows Zmodem transfers at up to 280,000 CPS.
NetFoss was developed in 2001 by pcmicro, and was released as freeware. Several minor updates have been released since then. The current version can be downloaded from http://netfoss.com.
|
https://en.wikipedia.org/wiki/X00
|
X00 was a popular DOS-based FOSSIL driver, commonly used from the mid-1980s to the late 1990s and still in use today. FOSSIL drivers were mainly used to run BBS software under MS-DOS. X00 can also be run under Windows, or even Linux and DOSEMU environments, to allow FOSSIL-aware MS-DOS based applications to function.
X00 was developed by Raymond L. Gwinn from 1989 until 1993. The final release version was version 1.50, with a later beta version 1.53 which added support for baud rates above 38400. X00 is free for non-commercial usage. X00 included many enhancements to the FTSC FOSSIL revision 5 specifications, which were later used in other FOSSIL drivers such as ADF and NetFoss.
Gwinn moved on to develop a replacement serial port driver for OS/2 called SIO. SIO contained a virtualized FOSSIL (VX00) that could be loaded if applications needed FOSSIL support.
|
https://en.wikipedia.org/wiki/Pcmicro
|
pcmicro was a large Bulletin Board System (BBS) support site from 1981 to 1998. Before the World Wide Web became popular, the pcmicro BBS served as a central file repository for all non-commercial BBS software and related utilities. The BBS was a FidoNet member from 1991 to 1997, and was a support and distribution site for several shareware and freeware BBS packages including RemoteAccess, Proboard, and EleBBS. pcmicro later released a telnet communications driver named NetFoss which allows DOS-based BBS software to be used over telnet.
While the BBS is no longer in service today, its entire collection of freeware and shareware BBS software and utilities can be found on the BBS Archives at http://archives.thebbs.org, which contains thousands of third-party add-ons for BBS packages.
See also
List of BBS software
FidoNet
DOS on IBM PC compatibles
|
https://en.wikipedia.org/wiki/Nucleic%20acid%20metabolism
|
Nucleic acid metabolism is a collective term that refers to the variety of chemical reactions by which nucleic acids (DNA and/or RNA) are either synthesized or degraded. Nucleic acids are polymers (so-called "biopolymers") made up of a variety of monomers called nucleotides. Nucleotide synthesis is an anabolic mechanism generally involving the chemical reaction of phosphate, pentose sugar, and a nitrogenous base. Degradation of nucleic acids is a catabolic reaction and the resulting parts of the nucleotides or nucleobases can be salvaged to recreate new nucleotides. Both synthesis and degradation reactions require multiple enzymes to facilitate the event. Defects or deficiencies in these enzymes can lead to a variety of diseases.
Synthesis of nucleotides
Nucleotides are the monomers which polymerize into nucleic acids. All nucleotides contain a sugar, a phosphate, and a nitrogenous base. The bases found in nucleic acids are either purines or pyrimidines. In the more complex multicellular animals, they are both primarily produced in the liver but the two different groups are synthesized in different ways. However, all nucleotide synthesis requires the use of phosphoribosyl pyrophosphate (PRPP) which donates the ribose and phosphate necessary to create a nucleotide.
Purine synthesis
Adenine and guanine are the two nucleotides classified as purines. In purine synthesis, PRPP is turned into inosine monophosphate, or IMP. Production of IMP from PRPP requires glutamine, glycine, aspartate, and 6 ATP, among other things. IMP is then converted to AMP (adenosine monophosphate) using GTP and aspartate, which is converted into fumarate. While IMP can be directly converted to AMP, synthesis of GMP (guanosine monophosphate) requires an intermediate step, in which NAD+ is used to form the intermediate xanthosine monophosphate, or XMP. XMP is then converted into GMP by using the hydrolysis of 1 ATP and the conversion of glutamine to glutamate. AMP and GMP can then be converted
|
https://en.wikipedia.org/wiki/Cryptoloop
|
Cryptoloop is a disk encryption module for the Linux kernel that relies on the Crypto API, a cryptography framework introduced in version 2.5.45 of the Linux kernel mainline. Cryptoloop was first introduced in the 2.5.x kernel series; its functionality was later incorporated into the device mapper, a generic framework used to map one block device onto another.
Cryptoloop can create an encrypted file system within a partition or from within a regular file in the regular file system. Once a file is encrypted, it can be moved to another storage device. This is accomplished by making use of a loop device, a pseudo device that enables a normal file to be mounted as if it were a physical device. By encrypting I/O to the loop device, any data being accessed must first be decrypted before passing through the regular file system; conversely, any data being stored will be encrypted.
Cryptoloop is vulnerable to watermarking attacks, making it possible to determine the presence of watermarked data on the encrypted filesystem:
This attack exploits weakness in IV computation and knowledge of how file systems place files on disk. This attack works with file systems that have soft block size of 1024 or greater. At least ext2, ext3, reiserfs and minix have such property. This attack makes it possible to detect presence of specially crafted watermarked files. Watermarked files contain special bit patterns that can be detected without decryption.
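The weakness described above can be illustrated with a small, self-contained sketch. The snippet below is not cryptoloop code; it assumes a simplified CBC scheme whose per-sector IV is just the publicly known sector number, and shows that two sectors crafted to cancel their IVs produce identical first ciphertext blocks, which an observer can detect without the key.

```python
# Hypothetical illustration of a watermarking attack on predictable,
# sector-derived IVs (simplified; not the exact cryptoloop IV format).
import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(32)            # unknown to the attacker
BLOCK = 16                      # AES block size in bytes

def sector_iv(sector: int) -> bytes:
    # Predictable IV: little-endian sector number, zero-padded to 16 bytes.
    return struct.pack("<Q", sector) + b"\x00" * 8

def encrypt_sector(sector: int, plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.CBC(sector_iv(sector))).encryptor()
    return enc.update(plaintext) + enc.finalize()

# The attacker chooses a 16-byte watermark X and crafts each sector so that
# its first plaintext block is X XOR IV; CBC then produces E(K, X) in both.
X = b"WATERMARK_BLOCK!"
def crafted_sector(sector: int) -> bytes:
    first = bytes(a ^ b for a, b in zip(X, sector_iv(sector)))
    return first + os.urandom(BLOCK * 31)     # pad to a 512-byte sector

c10 = encrypt_sector(10, crafted_sector(10))
c11 = encrypt_sector(11, crafted_sector(11))
print(c10[:BLOCK] == c11[:BLOCK])             # True: watermark is detectable
```

Deriving the per-sector IV from the key as well as the sector number (as dm-crypt's ESSIV mode does) removes this predictability, which is one reason the successor is described as less vulnerable.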
Newer versions of cryptoloop's successor, dm-crypt, are less vulnerable to this type of attack if used correctly.
See also
Comparison of disk encryption software
Disk encryption
|
https://en.wikipedia.org/wiki/Trager%20approach
|
The Trager approach is a form of somatic education. Proponents claim the Trager approach helps release deep-seated physical and mental patterns and facilitates deep relaxation, increased physical mobility, and mental clarity.
History
The founder, Milton Trager, called his work Psychophysical Integration. He was an athlete, dancer, and bodybuilder. He began doing bodywork with no training and later worked under a variety of practitioner licenses, including an MD earned in Mexico followed by a two-year residency in psychiatry. Trager wanted Western medicine to accept his proposed mind–body connection in treating challenging conditions such as post-polio syndrome, Parkinson's disease, and other neuromuscular conditions. Doctors reportedly referred patients to him and were surprised by the results, but "none seemed to consider his drugless treatments as effective as surgery or medication" and the medical approach to these conditions did not fundamentally shift away from them as he had envisioned.
Late in life, at the Esalen Institute, he was encouraged to begin teaching, which he did for the last 22 years of his life.
Practice
At the beginning of a session, the practitioner enters into a state of meditation that Milton Trager originally termed "hook-up". From this state of mind, the practitioner uses gentle touch and a combination of passive and active movement with the intent of teaching the recipient how to move with less effort. The contact is gentle in a sense; it may be quite firm but is without strain or resistance. Regarding pain, Trager practitioners avoid causing pain, and attempt to contact the body in a way that allows the client to have decreased fear of pain and increased willingness to be present with the full range of sensations. Practitioners are taught to allow a tone of curiosity, playfulness, and effortlessness to guide their work. In addition to hands-on bodywork, clients are taught a series of movements called "Mentastics" to be performed with a certain mental atte
|
https://en.wikipedia.org/wiki/Nuclease%20protection%20assay
|
Nuclease protection assay is a laboratory technique used in biochemistry and genetics to identify individual RNA molecules in a heterogeneous RNA sample extracted from cells. The technique can identify one or more RNA molecules of known sequence even at low total concentration. The extracted RNA is first mixed with antisense RNA or DNA probes that are complementary to the sequence or sequences of interest and the complementary strands are hybridized to form double-stranded RNA (or a DNA-RNA hybrid). The mixture is then exposed to ribonucleases that specifically cleave only single-stranded RNA but have no activity against double-stranded RNA. When the reaction runs to completion, susceptible RNA regions are degraded to very short oligomers or to individual nucleotides; the surviving RNA fragments are those that were complementary to the added antisense strand and thus contained the sequence of interest.
Probe
The probes are prepared by cloning part of the gene of interest in a vector under the control of one of the following promoters: SP6, T7, or T3. These promoters are recognized by DNA-dependent RNA polymerases originally characterized from bacteriophages. The probes produced are radioactive as they are prepared by in vitro transcription using radioactive UTPs. Uncomplemented DNA or RNA is cleaved off by nucleases. When the probe is a DNA molecule, S1 nuclease is used; when the probe is RNA, any single-strand-specific ribonuclease can be used. Thus the surviving probe-mRNA complement is simply detected by autoradiography.
Uses
Nuclease protection assays are used to map introns and 5' and 3' ends of transcribed gene regions. Quantitative results can be obtained regarding the amount of the target RNA present in the original cellular extract - if the target is a messenger RNA, this can indicate the level of transcription of the gene in the cell.
They are also used to detect the presence of double stranded RNA, presence of which could mean RNA interference.
N
|
https://en.wikipedia.org/wiki/Nurse%20log
|
A nurse log is a fallen tree which, as it decays, provides ecological facilitation to seedlings. Broader definitions include providing shade or support to other plants. Some of the advantages a nurse log offers to a seedling are: water, moss thickness, leaf litter, mycorrhizae, disease protection, nutrients, and sunlight. Recent research into soil pathogens suggests that in some forest communities, pathogens hostile to a particular tree species appear to gather in the vicinity of that species, and to a degree inhibit seedling growth. Nurse logs may therefore provide some measure of protection from these pathogens, thus promoting greater seedling survivorship.
Occurrence
Various mechanical and biological processes contribute to the breakdown of lignin in fallen trees, resulting in the formation of niches of increasing size, which tend to fill with forest litter such as soil from spring floods, needles, moss, mushrooms and other flora. Mosses also can cover the outside of a log, hastening its decay and supporting other species as rooting media and by retaining water. Small animals such as various squirrels often perch or roost on nurse logs, adding to the litter by food debris and scat. The decay of this detritus contributes to the formation of a rich humus that provides a seedbed and adequate conditions for germination.
Nurse logs often provide a seedbed to conifers in a temperate rain forest ecosystem.
The oldest nurse log fossils date to the earliest Permian, approximately 300 million years ago.
|
https://en.wikipedia.org/wiki/Enhanceosome
|
An enhanceosome is a protein complex that assembles at an enhancer region on DNA and helps to regulate the expression of a target gene.
Formation
Enhancers are bound by transcription activator proteins, and transcriptional regulation is typically controlled by more than one activator. Enhanceosomes are formed in special cases when these activators cooperatively bind together along the enhancer sequence to create a distinct three-dimensional structure. Each enhanceosome is unique to its specific enhancer. This assembly is facilitated by energetically favorable protein–protein and protein–DNA interactions. Therefore, all the necessary activators need to be present for the enhanceosome to be formed and able to function.
Function
Once the enhanceosome has been formed, it recruits coactivators and general transcription factors to the promoter region of the target gene to begin transcription. The effectiveness of this is dependent on DNA conformation. As a result, the enhanceosome also recruits non-histone architectural transcription factors, called high-mobility group (HMG) proteins, which are responsible for regulating chromatin structure. These factors do not bind to the enhancer, but instead are used to restructure the DNA to ensure that the genes can be accessed by the transcription factors.
Role
Most enhanceosomes have been discovered pertaining to genes requiring tight regulation, like those associated with the cell's defense system. Using more than one kind of transcriptional activator protein could help to ensure that a gene is not transcribed prematurely. Furthermore, the use of multiple factors enables gene regulation through a combination of cellular stimuli that function through multiple signaling cascades.
Examples
IFN-β
The best known example of the enhanceosome acts on the human interferon-beta gene, which is upregulated in cells that are infected by viruses. Three activator proteins—NF-κB, an interferon activator protein such as IRF-3, and
|
https://en.wikipedia.org/wiki/Ultra%20Network%20Technologies
|
Ultra Network Technologies (previously called Ultra Corporation) was a networking company. It offered high-speed network products for the scientific computing market as well as some commercial companies. It was founded in 1986 by James N. Perdue (formerly of NASA, Ames Research Center), Drew Berding, and Wes Meador (of Control Data Corporation) to provide higher speed connectivity and networking for supercomputers and their peripherals and workstations. At the time, the only other companies offering high speed networking and connectivity for the supercomputer and high-end workstation market were Network Systems Corporation (NSC) and Computer Network Technology Corporation (CNT). They both offered 50 megabytes per second (MB/s) bandwidth between controllers, but at that time their architecture was not implemented using standard networking protocols and their applications were generally focused on supporting connectivity at high speed between large mainframes and peripherals, often implementing only point-to-point connections. Ethernet was available in 1986 and was used by most computer centers for general networking purposes. Its bandwidth was not high enough to manage the high data rate required by the 100 MB/s supercomputer channels and 4 MB/s VMEbus channels on workstations.
Ultra's first customer, Apple Computer, purchased a system to connect their Cray 1 supercomputer to a high speed graphics framebuffer so that Apple could simulate new personal computers on the Cray Research computer (at the hardware level) and use the framebuffer as the simulated computer display device. Although not a networking application, this first contract allowed Ultra to demonstrate the basic technologies and gave them capital to continue development on a true networking processor.
In 1988, Ultra introduced ISO TP4 (level 4 networking protocol) as part of their controllers and implemented a type of star configuration network using coax and fiber optic connections. They c
|
https://en.wikipedia.org/wiki/Fold%20%28higher-order%20function%29
|
In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure's hierarchy, using the function in a systematic way.
Folds are in a sense dual to unfolds, which take a seed value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, versus anamorphism of unfolds).
As structural transformations
Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly called nil ([]), or is constructed by prefixing an element in front of another list, creating what is called a cons node ( Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil))))) ), resulting from application of a cons function (written down as a colon (:) in Haskell). One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:
There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:
These pictures illustrate right and left fold of a list visually. They also highlight
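As a rough illustration of the replacements described above (not drawn from the article itself), the following Python sketch replaces nil with an initial value z and each cons with a combining function f; the left fold feeds the two links of each node to f in the flipped order, which is what functools.reduce implements.

```python
# Minimal right and left folds over Python lists, mirroring the
# "replace nil with z, replace cons with f" view described above.
from functools import reduce

def foldr(f, z, xs):
    # Cons(x1, Cons(x2, ... Cons(xn, nil))) -> f(x1, f(x2, ... f(xn, z)))
    acc = z
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def foldl(f, z, xs):
    # f(f(f(z, x1), x2), ... xn) -- the links are combined in flipped order
    acc = z
    for x in xs:
        acc = f(acc, x)
    return acc

xs = [1, 2, 3, 4]
print(foldr(lambda x, acc: x - acc, 0, xs))   # 1 - (2 - (3 - (4 - 0))) = -2
print(foldl(lambda acc, x: acc - x, 0, xs))   # (((0 - 1) - 2) - 3) - 4 = -10
print(reduce(lambda acc, x: acc - x, xs, 0))  # same as the left fold: -10
```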
|
https://en.wikipedia.org/wiki/Kramers%27%20theorem
|
In quantum mechanics, the Kramers' degeneracy theorem states that for every energy eigenstate of a time-reversal symmetric system with half-integer total spin, there is another eigenstate with the same energy related by time-reversal. In other words, the degeneracy of every energy level is an even number if it has half-integer spin. The theorem is named after Dutch physicist H. A. Kramers.
In theoretical physics, the time reversal symmetry is the symmetry of physical laws under a time reversal transformation: $T: t \mapsto -t$.
If the Hamiltonian operator commutes with the time-reversal operator, that is $[H, T] = 0$,
then, for every energy eigenstate $|n\rangle$, the time reversed state $T|n\rangle$ is also an eigenstate with the same energy. These two states are sometimes called a Kramers pair. In general, this time-reversed state may be identical to the original one, but that is not possible in a half-integer spin system: since time reversal reverses all angular momenta, reversing a half-integer spin cannot yield the same state (the magnetic quantum number is never zero).
Mathematical statement and proof
In quantum mechanics, the time reversal operation is represented by an antiunitary operator $T$ acting on a Hilbert space $\mathcal{H}$. If it happens that $T^2 = -1$, then we have the following simple theorem:
If $T$ is an antiunitary operator acting on a Hilbert space $\mathcal{H}$ satisfying $T^2 = -1$ and $v$ is a vector in $\mathcal{H}$, then $Tv$ is orthogonal to $v$.
Proof
By the definition of an antiunitary operator, $\langle Tu, Tw \rangle = \langle w, u \rangle$, where $u$ and $w$ are vectors in $\mathcal{H}$. Replacing $u \to Tv$ and $w \to v$ and using that $T^2 = -1$, we get $-\langle v, Tv \rangle = \langle T^2 v, Tv \rangle = \langle v, Tv \rangle$, which implies that $\langle v, Tv \rangle = 0$.
Consequently, if a Hamiltonian is time-reversal symmetric, i.e. it commutes with $T$, then all its energy eigenspaces have even degeneracy, since applying $T$ to an arbitrary energy eigenstate gives another energy eigenstate that is orthogonal to the first one. The orthogonality property is crucial, as it means that the two eigenstates $|n\rangle$ and $T|n\rangle$ represent different physical states. If, on the contrary, they were the same physical state, then $T|n\rangle = e^{i\alpha} |n\rangle$ for an angle $\alpha \in \mathbb{R}$, which would imply
T
|
https://en.wikipedia.org/wiki/GMS%20%28software%29
|
GMS (Groundwater Modeling System) is a water modeling application for building and simulating groundwater models from Aquaveo. It features 2D and 3D geostatistics, stratigraphic modeling and a unique conceptual model approach. Currently supported models include MODFLOW, MODPATH, MT3DMS, RT3D, FEMWATER, SEEP2D, and UTEXAS.
Version 6 introduced the use of XMDF (eXtensible Model Data Format), which is a compatible extension of HDF5. The purpose of this is to allow internal storage and management of data in a single HDF file, rather than using many flat files.
History
GMS was initially developed in the late 1980s and early 1990s on Unix workstations by the Engineering Computer Graphics Laboratory at Brigham Young University. The development of GMS was funded primarily by The United States Army Corps of Engineers and was known—until version 4.0, released in late 1999—as the Department of Defense Groundwater Modeling System, or DoD GMS. It was ported to Microsoft Windows in the mid 1990s. Version 3.1 was the last version that supported HP-UX, IRIX, OSF/1, and Solaris platforms. Development of GMS—along with WMS and SMS—was transferred to Aquaveo when it formed in April 2007.
A study published in the Journal of Agricultural and Applied Economics in August 2000 stated that "GMS provides an interface to the groundwater flow model, MODFLOW, and the contaminant transport model, MT3D. MODFLOW is a three-dimensional, cell-centered, finite-difference, saturated-flow model capable of both steady-state and transient analyses...These two models, when put together, provide a comprehensive tool for examining groundwater flow and nitrate transport and accumulation". The study was designed to help develop a "permit scheme to effectively manage nitrate pollution of groundwater supplies for communities in rural areas without hindering agricultural production in watersheds".
Version history
Reception
A 2001 report prepared for the Iowa Comprehensive Petroleum Underground Storage Tank Fu
|
https://en.wikipedia.org/wiki/Payload%20specialist
|
A payload specialist (PS) was an individual selected and trained by commercial or research organizations for flights of a specific payload on a NASA Space Shuttle mission. People assigned as payload specialists included individuals selected by the research community, a company or consortium flying a commercial payload aboard the spacecraft, and non-NASA astronauts designated by international partners.
The term refers to both the individual and to the position on the Shuttle crew.
History
The National Aeronautics and Space Act of 1958 states that NASA should provide the "widest practicable and appropriate dissemination of information concerning its activities and the results thereof". The Naugle panel of 1982 concluded that carrying civilians—those not part of the NASA Astronaut Corps—on the Space Shuttle was part of "the purpose of adding to the public's understanding of space flight".
Payload specialists usually flew for a single specific mission. Chosen outside the standard NASA mission specialist selection process, they were exempt from certain NASA requirements, such as those concerning color blindness. Roger Crouch and Ulf Merbold are examples of those who flew in space despite not meeting NASA physical requirements; the agency's director of crew training Jim Bilodeau said in April 1981 "we'll be able to take everybody but the walking wounded". Payload specialists were not required to be United States citizens, but had to be approved by NASA and undergo rigorous but shorter training. In contrast, a Space Shuttle mission specialist was selected as a NASA astronaut first and then assigned to a mission.
Payload specialists on early missions were technical experts to join specific payloads such as a commercial or scientific satellite. On Spacelab and other missions with science components, payload specialists were scientists with expertise in specific experiments. The term also applied to representatives from partner nations who were given the opportunity of a first flight on boar
|
https://en.wikipedia.org/wiki/Autodyne
|
The autodyne circuit was an improvement to radio signal amplification using the De Forest Audion vacuum tube amplifier. By allowing the tube to oscillate at a frequency slightly different from the desired signal, the sensitivity over other receivers was greatly improved. The autodyne circuit was invented by Edwin Howard Armstrong of Columbia University, New York, NY. He inserted a tuned circuit in the output circuit of the Audion vacuum tube amplifier. By adjusting the tuning of this tuned circuit, Armstrong was able to dramatically increase the gain of the Audion amplifier. Further increase in tuning resulted in the Audion amplifier reaching self-oscillation.
This oscillating receiver circuit meant that continuous wave (CW) transmissions, then the latest technology, could be demodulated. Previously, only spark, interrupted continuous wave (ICW, signals produced by a motor chopping or switching the signal on and off at an audio rate), or modulated continuous wave (MCW) signals could produce intelligible output from a receiver.
When the autodyne oscillator was advanced to self-oscillation, continuous wave Morse code dots and dashes would be clearly heard from the headphones as short or long periods of sound of a particular tone, instead of an all but impossible to decode series of thumps. Spark and chopped CW (ICW) were amplitude modulated signals which didn't require an oscillating detector.
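A small numerical sketch may make the beat-note mechanism concrete. It is not from the article, and the frequencies are arbitrary illustration values; it simply multiplies an incoming carrier by a local oscillation offset by roughly 1 kHz and shows that the product contains an audible difference frequency.

```python
# Toy heterodyne/autodyne mixing: carrier times local oscillation yields a
# beat at the difference frequency (plus a sum component outside audio).
import numpy as np

fs = 1_000_000                                   # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)                   # 50 ms of signal
carrier = np.cos(2 * np.pi * 100_000 * t)        # incoming CW at 100 kHz
local = np.cos(2 * np.pi * 101_000 * t)          # receiver's own oscillation
mixed = carrier * local                          # multiplication in the detector

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audio = freqs < 5_000                            # keep only the audio range
print(freqs[audio][np.argmax(spectrum[audio])])  # ~1000.0 Hz beat note
```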
Such a regenerative circuit is capable of receiving weak signals, if carefully coupled to an antenna. Antenna coupling interacts with tuning, making optimum adjustments difficult.
Heterodyne detection
Damped wave transmission
Early transmitters emitted damped waves, which were radio frequency sine wave bursts of a number of cycles duration, of decreasing amplitude with each cycle. These bursts recurred at an audio frequency rate, producing an amplitude modulated transmission. The damped waves were a result of the available technologies to generate radio frequencies. See
|
https://en.wikipedia.org/wiki/Ecological%20thinning
|
Ecological thinning is a silvicultural technique used in forest management that involves cutting trees to improve functions of a forest other than timber production.
Although thinning originated as a forest management tool aimed at increasing timber yields, the shift from production forests to multifunctional forests brought with it the cutting of trees to manipulate an ecosystem for various reasons, ranging from removing non-native species from a plot to removing poplars growing on a riverside beach intended for recreational use.
Since the 1970s, leaving the thinned trees on the forest floor has become an increasingly common policy: wood can be decomposed in a more natural fashion, playing an important role in increasing biodiversity by providing habitat to various invertebrates, birds and small mammals. Many fungi (e.g. Calocera viscosa) and mosses are saproxylic or epixylic as well (e.g. Marchantiophyta) – some moss species completing their entire life-cycle on a single log.
Where trees are managed under a commercial regime, competition is reduced by removing adjacent stems that exhibit less favourable timber quality potential. When left in a natural state, trees will "self-thin", but this process can be unreliable in some circumstances. Examples of this can be found in the Box–Ironbark forests and woodlands of Victoria (Australia), where a large proportion of trees are coppice, resulting from timber cutting in decades gone by.
Ecophysiological repercussions
Thinning decreases canopy closure and increases the penetration of solar radiation into the canopy. The photosynthetic efficiency of this energy is improved, and needle retention is prolonged, especially in the lower parts of the crown. The root system, crown length, crown diameter, and crown area all increase after thinning. Even if soil evaporation and individual tree transpiration increases after thinning, total evapo-transpiration at stand level tends to decrease; canopy water interception is r
|
https://en.wikipedia.org/wiki/Rawflow
|
RawFlow was a provider of live peer-to-peer (P2P) streaming technology that enabled internet broadcasting of audio and video. The company's technology was similar to Abacast and Octoshape.
RawFlow was incorporated in 2002 by Mikkel Dissing, Daniel Franklin and Stephen Dicks. Its main office was in London, UK. Velocix acquired RawFlow in July 2008. Alcatel-Lucent acquired Velocix in July 2009. The streaming media CDN solution by Alcatel-Lucent is called Velocix Digital Media Delivery Platform.
A peer-to-peer (or P2P) computer network relies on the computing power and bandwidth of the participants in the network rather than concentrating it in a relatively low number of servers. When using this technology, the bandwidth requirement of the broadcast is intelligently distributed over the entire network of participants, instead of being centralised at the broadcast's origin; as the audience grows so do the network resources available to distribute that broadcast without adding any additional bandwidth costs.
How does it work?
The RawFlow ICD (Intelligent Content Distribution) Server provides the initial contact point for clients in the network. When initially installed, the ICD Server connects to the broadcaster’s existing media server and begins receiving the stream, maintaining a buffer of stream data in memory at all times. It then begins accepting connections from clients and serving them with the stream. When launched by a user, the ICD Client first contacts the ICD Server and begins receiving the stream from it. The media player plays the stream as it is received by the ICD Client. If it has available resources, the ICD Client also accepts connections from other clients in the grid to which it may relay a portion or the whole stream it receives as requested.
The ICD Client monitors the quality of the stream it is receiving and upon any reduction of quality or loss of connection it again searches the grid for available resources while continuing to serve the media player fro
|
https://en.wikipedia.org/wiki/Micral
|
Micral is a series of microcomputers produced by the French company Réalisation d'Études Électroniques (R2E), beginning with the Micral N in early 1973. The Micral N was the first commercially available microprocessor-based computer.
In 1986, three judges at The Computer Museum, Boston – Apple II designer and Apple Inc. co-founder Steve Wozniak, early MITS employee and PC World publisher David Bunnell, and the museum's associate director and curator Oliver Strimpel – awarded the title of "first personal computer using a microprocessor" to the 1973 Micral. The Micral N was the earliest commercial, non-kit personal computer based on a microprocessor (in this case, the Intel 8008).
The Computer History Museum currently says that the Micral is one of the earliest commercial, non-kit personal computers. The 1971 Kenbak-1, invented before the first microprocessor, is considered to be the world's first "personal computer". That machine did not have a one-chip CPU but instead was based purely on small-scale integration TTL chips.
Micral N
R2E founder André Truong Trong Thi (EFREI degree, Paris), a French immigrant from Vietnam, asked Frenchman François Gernelle to develop the Micral N computer for the Institut National de la Recherche Agronomique (INRA), starting in June 1972. Alain Perrier of INRA was looking for a computer for process control in his crop evapotranspiration measurements. The software was developed by Benchetrit. Beckmann designed the I/O boards and controllers for peripheral magnetic storage. Lacombe was responsible for the memory system, I/O high speed channel, power supply and front panel. Gernelle invented the Micral N, which was much smaller than existing minicomputers. The January 1974 Users Manual called it "the first of a new generation of mini-computer whose principal feature is its very low cost," and said, "MICRAL's principal use is in process control. It does not aim to be an universal mini-computer."
The computer was to be delivered in
|
https://en.wikipedia.org/wiki/Linear%20programming%20relaxation
|
In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable.
For example, in a 0–1 integer program, all constraints are of the form
$x_i \in \{0, 1\}$.
The relaxation of the original integer program instead uses a collection of linear constraints
$0 \le x_i \le 1.$
The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard optimization problem (integer programming) into a related problem that is solvable in polynomial time (linear programming); the solution to the relaxed linear program can be used to gain information about the solution to the original integer program.
Example
Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász (1975). In this problem, one is given as input a family of sets F = {S0, S1, ...}; the task is to find a subfamily, with as few sets as possible, having the same union as F.
To formulate this as a 0–1 integer program, form an indicator variable xi for each set Si, that takes the value 1 when Si belongs to the chosen subfamily and 0 when it does not. Then a valid cover can be described by an assignment of values to the indicator variables satisfying the constraints
$x_i \in \{0, 1\}$
(that is, only the specified indicator variable values are allowed) and, for each element ej of the union of F,
$\sum_{i \,:\, e_j \in S_i} x_i \ge 1$
(that is, each element is covered). The minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function
$\min \sum_i x_i.$
The linear programming relaxation of the set cover problem describes a fractional cover in which the input sets are assigned weights such that the total weight of the sets containing each element is at least one and the total weight of all sets is minimized.
As a specific example of the set cover problem, consider the instance F = {{a, b}, {b, c}, {a, c}}. There are three optimal set covers, each of which includes two of the three given se
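To make the fractional-cover idea concrete, the following sketch (not from the article) solves the LP relaxation of the instance F = {{a, b}, {b, c}, {a, c}} with SciPy; the variable names are purely illustrative. The integer optimum needs two sets (cost 2), while the relaxation assigns weight 1/2 to each set for a total of 3/2.

```python
# Solve the LP relaxation of set cover for F = {{a,b}, {b,c}, {a,c}}.
import numpy as np
from scipy.optimize import linprog

sets = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
elements = sorted({e for s in sets for e in s})

c = np.ones(len(sets))                 # minimize the total weight of the sets
# Coverage: for each element, the weights of the sets containing it must sum
# to at least 1; written as -A x <= -1 because linprog uses <= constraints.
A_ub = np.array([[-1.0 if e in s else 0.0 for s in sets] for e in elements])
b_ub = -np.ones(len(elements))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(sets))
print(res.x, res.fun)                  # approximately [0.5 0.5 0.5] and 1.5
```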
|
https://en.wikipedia.org/wiki/Roland%20Dobrushin
|
Roland Lvovich Dobrushin () (July 20, 1929 – November 12, 1995) was a mathematician who made important contributions to probability theory, mathematical physics, and information theory.
Life and work
Dobrushin received his Ph.D. at Moscow State University under the supervision of Andrey Kolmogorov.
In statistical mechanics, he introduced (simultaneously with Lanford and Ruelle) the DLR equations for the Gibbs measure. Together with Kotecký and Shlosman, he studied the formation of droplets in Ising-type models, providing mathematical justification of the Wulff construction.
He was a foreign member of the American Academy of Arts and Sciences, Academia Europæa and US National Academy of Sciences.
The Dobrushin prize was established in his honour.
Notes
|
https://en.wikipedia.org/wiki/MULTOS
|
MULTOS is a multi-application smart card operating system, that enables a smart card to carry a variety of applications, from chip and pin application for payment to on-card biometric matching for secure ID and ePassport. MULTOS is an open standard whose development is overseen by the MULTOS Consortium – a body composed of companies which have an interest in the development of the OS and includes smart card and silicon manufacturers, payment card schemes, chip data preparation, card management and personalization system providers, and smart card solution providers. There are more than 30 leading companies involved in the consortium.
One of the key differences of MULTOS with respect to other types of smart card OS, is that it implements a patented public key cryptography-based mechanism by which the manufacture, issuance and dynamic updates of MULTOS smartcards in the field is entirely under an issuer's control using digital certificates rather than symmetric key sharing. This control is enabled through the use of a Key Management Authority (KMA), a special kind of certification authority. The KMA provides card issuers with cryptographic information required to bind the card to the issuer, initialize the card for use, and generate permission certificates for the loading and deleting of applications under the control of the issuer.
Application providers can retrieve and verify the public key certificate of an individual issuer's card, and encrypt their proprietary application code and confidential personalisation data using that card's unique public key. This payload is digitally signed using the private key of the application provider. The KMA, on request from the card issuer, signs the application provider's public key and application code hash and creates a digital certificate (the Application Load Certificate) that authorises the application to be loaded onto an issuer's card or group of cards. Applications are therefore protected for integrity and confidential
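The hash-and-sign step described above can be sketched generically. The snippet below is not the actual MULTOS Application Load Certificate format; the key sizes, field layout, and names are invented, and it only illustrates the general idea of an authority signing a provider's public key together with an application code hash.

```python
# Generic hash-and-sign illustration of a load-certificate style check
# (invented layout; not the MULTOS ALC specification).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

kma_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

application_code = b"...compiled application image..."
provider_public_key = b"...application provider public key bytes..."

digest = hashes.Hash(hashes.SHA256())
digest.update(application_code)
to_certify = provider_public_key + digest.finalize()

# The authority signs the provider key plus code hash; a card holding the
# authority's public key can then verify the certificate before loading.
signature = kma_key.sign(to_certify, padding.PKCS1v15(), hashes.SHA256())
kma_key.public_key().verify(signature, to_certify,
                            padding.PKCS1v15(), hashes.SHA256())
print("load certificate verified")     # verify() raises if the check fails
```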
|
https://en.wikipedia.org/wiki/Sacral%20dimple
|
A sacral dimple (also termed pilonidal dimple or spinal dimple) is a small depression in the skin, located just above the buttocks. The name comes from the sacrum, the bone at the end of the spine, over which the dimples are found. A sacral dimple is defined as a midline dimple less than 5 mm in diameter and no further than 2.5 cm from the anus without associated visible drainage or hairy tuft.
Sacral dimples are common benign congenital anomalies found in up to 4% of the population. Other common benign congenital anomalies include supernumerary digits, third nipples and natal teeth. Most sacral dimple cases are minor and do not relate to any underlying medical problem, but some can result from disease, notably spina bifida. If so, this is usually the spina bifida occulta form, which is the least serious kind.
Simple dimples are typically small, measuring less than 5 mm in size. They are positioned in the midline, within 2.5 cm of the anus, and do not have any other associated skin abnormalities. Atypical dimples, on the other hand, have different characteristics: they are larger than 5 mm in size or are located more than 25 mm from the anus. Atypical dimples can also be deep, positioned above the gluteal crease, located outside the midline, or occur as multiple dimples.
Sacral dimples are often spotted in post-natal checks by pediatricians, who can check:
whether the floor of the dimple is covered with skin;
whether there is a tuft of hair in the dimple;
whether there are potentially related problems such as weak lower limbs;
the distance from the buttocks to the dimple (closer is better).
For clinicians dealing with infants who have sacral dimples, it is essential to be aware of the characteristics of atypical dimples. Careful examinations should be conducted to identify any atypical features in order to appropriately manage and refer these cases in clinical practice.
Understanding the distinction between simple and atypical sacral dimples is crucial fo
|
https://en.wikipedia.org/wiki/Operator%20grammar
|
Operator grammar is a mathematical theory of human language that explains how language carries information. This theory is the culmination of the life work of Zellig Harris, with major publications toward the end of the last century. Operator grammar proposes that each human language is a self-organizing system in which both the syntactic and semantic properties of a word are established purely in relation to other words. Thus, no external system (metalanguage) is required to define the rules of a language. Instead, these rules are learned through exposure to usage and through participation, as is the case with most social behavior. The theory is consistent with the idea that language evolved gradually, with each successive generation introducing new complexity and variation.
Operator grammar posits three universal constraints: dependency (certain words depend on the presence of other words to form an utterance), likelihood (some combinations of words and their dependents are more likely than others) and reduction (words in high likelihood combinations can be reduced to shorter forms, and sometimes omitted completely). Together these provide a theory of language information: dependency builds a predicate–argument structure; likelihood creates distinct meanings; reduction allows compact forms for communication.
Dependency
The fundamental mechanism of operator grammar is the dependency constraint: certain words (operators) require that one or more words (arguments) be present in an utterance. In the sentence John wears boots, the operator wears requires the presence of two arguments, such as John and boots. (This definition of dependency differs from other dependency grammars in which the arguments are said to depend on the operators.)
In each language the dependency relation among words gives rise to syntactic categories in which the allowable arguments of an operator are defined in terms of their dependency requirements. Class N contains the words that do not re
|
https://en.wikipedia.org/wiki/World%20Carrot%20Museum
|
The World Carrot Museum is a website about the collection, preservation, interpretation and exhibition of objects relating to the carrot. It is a virtual museum which has no brick and mortar existence. The website is maintained by John Stolarczyk of Skipton, England, and is run as a not-for-profit organisation.
The website contains an extensive history of the carrot in its wild and domesticated forms including a timeline, showing how its colour has changed over the millennia, from white and purple to the modern orange. It records the resurgence of popularity of the carrot during World War Two rationing, including information on the propaganda material and the alternative recipes and uses for carrot during the food shortages. The site also contains recipes and cultivation advice.
The World Carrot Museum contains one of the largest collections of fine artworks containing an image of carrots, in their various colors. Paintings have often been used as sources in historical studies of crops, and plant biologists have been able to identify old species using historical artworks.
Writing in 2001, Dave Barry described the website as reflecting "a level of interest in carrots that would probably trouble a psychiatric professional". Stolarczyk was lead author of a paper on "Carrot History and Iconography" in 2011.
See also
List of food and beverage museums
|
https://en.wikipedia.org/wiki/Bivalent%20%28genetics%29
|
A bivalent is one pair of chromosomes (homologous chromosomes) in a tetrad. A tetrad is the association of a pair of homologous chromosomes (4 sister chromatids) physically held together by at least one DNA crossover. This physical attachment allows for alignment and segregation of the homologous chromosomes in the first meiotic division. In most organisms, each replicated chromosome (composed of two identical sister chromatids) elicits formation of DNA double-strand breaks during the leptotene phase. These breaks are repaired by homologous recombination, which uses the homologous chromosome as a template for repair. The search for the homologous target, helped by numerous proteins collectively referred to as the synaptonemal complex, causes the two homologs to pair, between the leptotene and the pachytene phases of meiosis I.
Formation
The formation of a bivalent occurs during the first division of meiosis (in the zygotene stage of meiotic prophase 1). In most organisms, each replicated chromosome (composed of two identical sister chromatids) elicits formation of DNA double-strand breaks during the leptotene phase. These breaks are repaired by homologous recombination, which uses the homologous chromosome as a template for repair. The search for the homologous target, helped by numerous proteins collectively referred to as the synaptonemal complex, causes the two homologs to pair, between the leptotene and the pachytene phases of meiosis I. Resolution of the DNA recombination intermediate into a crossover exchanges DNA segments between the two homologous chromosomes at a site called a chiasma (plural: chiasmata). This physical strand exchange and the cohesion between the sister chromatids along each chromosome ensure robust pairing of the homologs in the diplotene phase. The structure, visible by microscopy, is called a bivalent.
|
https://en.wikipedia.org/wiki/Douglas%20N.%20Jackson
|
Douglas Northrop Jackson II (August 14, 1929 – August 22, 2004) was a Canadian psychology professor best known for his work in human assessment and psychological testing.
Life and career
Born in Merrick, New York, Jackson graduated from Cornell University in 1951 with a BSc in Industrial and Labor Relations and from Purdue University in 1955 with a PhD in Clinical Psychology. Jackson taught at Pennsylvania State University (1956–62) and Stanford University (1962–64) before starting at University of Western Ontario in 1964, where he taught for over 32 years.
Jackson created numerous tests in his life, including:
Multidimensional Aptitude Battery (MAB)
Personality Research Form (PRF)
Jackson Vocational Interest Survey (JVIS)
Employee Screening Questionnaire (ESQ)
These were distributed through two companies he founded, Research Psychologists Press and Sigma Assessment Systems.
He collaborated with Samuel Messick at the Educational Testing Service, examining construct validity. Jackson also published several analyses on sex and intelligence that found males applying to medical schools had a small but nontrivial advantage in the general intelligence factor and in reasoning.
Jackson served on the Executive Council of the International Test Commission and was a Fellow of the Royal Society of Canada (1989). He was president of the Society of Multivariate Experimental Research from 1975–1976 and received their Saul Sells Award for Lifetime Contributions in 1997. He was President of APA's Division of Measurement, Evaluation, and Statistics from 1989–1990 and was awarded that division's Samuel J. Messick Award for Distinguished Scientific Contributions in 2004.
In 1994 he was one of 52 signatories on "Mainstream Science on Intelligence," an editorial written by Linda Gottfredson and published in The Wall Street Journal, which declared the consensus of the signing scholars on issues related to the controversy about intelligence research that followed the publication of the
|
https://en.wikipedia.org/wiki/Discontinuous%20deformation%20analysis
|
Discontinuous deformation analysis (DDA) is a type of discrete element method (DEM) originally proposed by Shi in 1988. DDA is somewhat similar to the finite element method for solving stress-displacement problems, but accounts for the interaction of independent particles (blocks) along discontinuities in fractured and jointed rock masses. DDA is typically formulated as a work-energy method, and can be derived using the principle of minimum potential energy or by using Hamilton's principle. Once the equations of motion are discretized, a step-wise linear time marching scheme in the Newmark family is used for the solution of the equations of motion. The relation between adjacent blocks is governed by equations of contact interpenetration and accounts for friction. DDA adopts a stepwise approach to solve for the large displacements that accompany discontinuous movements between blocks. The blocks are said to be "simply deformable". Since the method accounts for the inertial forces of the blocks' mass, it can be used to solve the full dynamic problem of block motion.
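For readers unfamiliar with the Newmark family mentioned above, the sketch below shows a generic implicit Newmark-beta step for a single degree of freedom. It is only an illustration of that class of time-marching schemes, not Shi's DDA formulation; the parameter values and the test oscillator are invented for the example.

```python
# One implicit Newmark-beta step for m*a + c*v + k*u = f (generic SDOF
# illustration of the time-marching family referred to above; not DDA itself).
import numpy as np

def newmark_step(m, c, k, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    # Effective stiffness and load for the constant-average-acceleration scheme.
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    f_eff = (f_next
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_next = f_eff / k_eff
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Free vibration of an undamped oscillator (m = 1, k = 4, so omega = 2 rad/s).
u, v, a, dt = 1.0, 0.0, -4.0, 0.01
for _ in range(int(np.pi / dt)):          # integrate roughly one full period
    u, v, a = newmark_step(1.0, 0.0, 4.0, 0.0, u, v, a, dt)
print(round(u, 3))                        # ~1.0: back near the starting point
```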
Vs DEM
Although DDA and DEM are similar in the sense that they both simulate the behavior of interacting discrete bodies, they are quite different theoretically. While DDA is a displacement method, DEM is a force method. While DDA uses displacements as variables in an implicit formulation, with opening–closing iterations within each time step to achieve equilibrium of the blocks under the constraints of contact, DEM employs an explicit, time marching scheme to solve the equations of motion directly (Cundall and Hart). The system of equations in DDA is derived from minimizing the total potential energy of the system being analyzed. This guarantees that equilibrium is satisfied at all times and that energy consumption is natural, since it is due to frictional forces. In DEM, unbalanced forces drive the solution process, and damping is used to dissipate energy. If a quasi-static solution is desired in which the i
|
https://en.wikipedia.org/wiki/454%20Life%20Sciences
|
454 Life Sciences was a biotechnology company based in Branford, Connecticut that specialized in high-throughput DNA sequencing. It was acquired by Roche in 2007 and shut down by Roche in 2013 when its technology became noncompetitive, although production continued until mid-2016.
History
454 Life Sciences was founded by Jonathan Rothberg and was originally known as 454 Corporation, a subsidiary of CuraGen. For their method for low-cost gene sequencing, 454 Life Sciences was awarded the Wall Street Journal's Gold Medal for Innovation in the Biotech-Medical category in 2005. The name 454 was the code name by which the project was referred to at CuraGen, and the numbers have no known special meaning.
In November 2006, Rothberg, Michael Egholm, and colleagues at 454 published a cover article with Svante Pääbo in Nature describing the first million base pairs of the Neanderthal genome, and initiated the Neanderthal Genome Project to complete the sequence of the Neanderthal genome by 2009.
In late March 2007, Roche Diagnostics acquired 454 Life Sciences for US$154.9 million. It remained a separate business unit.
In October 2013, Roche announced that it would shut down 454, and stop supporting the platform by mid-2016.
In May 2007, 454 published the results of Project "Jim": the sequencing of the genome of James Watson, co-discoverer of the structure of DNA.
Technology
454 Sequencing used a large-scale parallel pyrosequencing system capable of sequencing roughly 400-600 megabases of DNA per 10-hour run on the Genome Sequencer FLX with GS FLX Titanium series reagents.
The system relied on fixing nebulized and adapter-ligated DNA fragments to small DNA-capture beads in a water-in-oil emulsion. The DNA fixed to these beads was then amplified by PCR. Each DNA-bound bead was placed into a ~29 μm well on a PicoTiterPlate, a fiber optic chip. A mix of enzymes such as DNA polymerase, ATP sulfurylase, and luciferase was also packed into the well. The PicoTiterPlate was the
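As a rough, invented illustration of the read-out principle (light intensity roughly proportional to the number of bases incorporated in each nucleotide flow), the toy sketch below rounds per-flow signal values to homopolymer lengths; the flow order and signal values are made up, and the real base-calling pipeline is considerably more involved.

```python
# Toy flowgram base-calling: each flow cycles one nucleotide over the wells,
# and the (idealized) signal is proportional to the homopolymer length.
FLOW_ORDER = "TACG"                     # illustrative flow cycle

def call_bases(flow_values):
    read = []
    for i, signal in enumerate(flow_values):
        base = FLOW_ORDER[i % len(FLOW_ORDER)]
        read.append(base * round(signal))   # 0 means no incorporation
    return "".join(read)

# Invented signals: T=1, A=0, C=2, G=1, T=0, A=3 ...
print(call_bases([1.02, 0.05, 1.97, 0.96, 0.10, 3.04]))  # -> "TCCGAAA"
```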
|
https://en.wikipedia.org/wiki/HTTP%20ETag
|
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for Web cache validation, which allows a client to make conditional requests. This mechanism allows caches to be more efficient and saves bandwidth, as a Web server does not need to send a full response if the content has not changed. ETags can also be used for optimistic concurrency control to help prevent simultaneous updates of a resource from overwriting each other.
An ETag is an opaque identifier assigned by a Web server to a specific version of a resource found at a URL. If the resource representation at that URL ever changes, a new and different ETag is assigned. Used in this manner, ETags are similar to fingerprints and can quickly be compared to determine whether two representations of a resource are the same.
ETag generation
The use of ETags in the HTTP header is optional (not mandatory as with some other fields of the HTTP 1.1 header). The method by which ETags are generated has never been specified in the HTTP specification.
Common methods of ETag generation include using a collision-resistant hash function of the resource's content, a hash of the last modification timestamp, or even just a revision number.
In order to avoid the use of stale cache data, methods used to generate ETags should guarantee (as much as is practical) that each ETag is unique. However, an ETag-generation function could be judged to be "usable", if it can be proven (mathematically) that duplication of ETags would be "acceptably rare", even if it could or would occur.
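A minimal sketch of one common generation strategy noted above (hashing the representation), together with conditional-request handling, is shown below; the helper names are illustrative rather than part of any particular web framework.

```python
# Content-hash ETag generation plus If-None-Match handling (illustrative only).
import hashlib
from typing import Optional

def make_etag(body: bytes, content_encoding: str = "identity") -> str:
    # A strong ETag derived from the exact bytes sent; folding in the content
    # coding keeps the gzip'd and plain representations distinct (see below).
    digest = hashlib.sha256(body).hexdigest()[:16]
    return f'"{digest}-{content_encoding}"'

def respond(body: bytes, if_none_match: Optional[str], encoding: str = "identity"):
    etag = make_etag(body, encoding)
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""      # client's cached copy is still valid
    return 200, {"ETag": etag}, body

status, headers, payload = respond(b"hello world", None)
status_again, _, _ = respond(b"hello world", headers["ETag"])
print(status, status_again)                  # 200 304
```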
RFC-7232 explicitly states that ETags should be content-coding aware, e.g.
ETag: "123-a" – for no Content-Encoding
ETag: "123-b" – for Content-Encoding: gzip
Some earlier checksum functions that were weaker than CRC32 or CRC64 are known to suffer from hash collision problems. Thus they were not good candidates for use in ETag generation.
Strong and weak validation
The ETag mechan
|
https://en.wikipedia.org/wiki/List%20of%20disk%20partitioning%20software
|
This is a list of utilities for performing disk partitioning.
List
|
https://en.wikipedia.org/wiki/Diabolical%20cube
|
The diabolical cube is a three-dimensional dissection puzzle consisting of six polycubes (shapes formed by gluing cubes together face to face) that can be assembled together to form a single 3 × 3 × 3 cube.
The six pieces are: one dicube, one tricube, one tetracube, one pentacube, one hexacube and one heptacube, that is, polycubes of 2, 3, 4, 5, 6 and 7 cubes.
There are many similar variations of this type of puzzle, including the Soma cube and the Slothouber–Graatsma puzzle, two other dissections of a 3 × 3 × 3 cube into polycubes which use seven and nine pieces respectively. However, the diabolical cube appears to be the oldest puzzle of this type, first appearing in an 1893 book, Puzzles Old and New, by Professor Hoffmann (Angelo Lewis).
Because all of the pieces have only a single layer of cubes, their shape is unchanged by a mirror reflection, so a mirror reflection of a solution produces either the same solution or another valid solution. The puzzle has 13 different solutions, if mirrored pairs of solutions are not counted as being distinct from each other.
|
https://en.wikipedia.org/wiki/Weyl%20scalar
|
In the Newman–Penrose (NP) formalism of general relativity, Weyl scalars refer to a set of five complex scalars which encode the ten independent components of the Weyl tensor of a four-dimensional spacetime.
Definitions
Given a complex null tetrad $\{l^a, n^a, m^a, \bar{m}^a\}$ and a fixed metric signature convention, the Weyl-NP scalars are defined (up to a convention-dependent overall sign) by
$\Psi_0 := C_{\alpha\beta\gamma\delta}\, l^\alpha m^\beta l^\gamma m^\delta, \quad \Psi_1 := C_{\alpha\beta\gamma\delta}\, l^\alpha n^\beta l^\gamma m^\delta, \quad \Psi_2 := C_{\alpha\beta\gamma\delta}\, l^\alpha m^\beta \bar{m}^\gamma n^\delta, \quad \Psi_3 := C_{\alpha\beta\gamma\delta}\, l^\alpha n^\beta \bar{m}^\gamma n^\delta, \quad \Psi_4 := C_{\alpha\beta\gamma\delta}\, n^\alpha \bar{m}^\beta n^\gamma \bar{m}^\delta.$
Note: If one adopts the opposite signature convention, the definitions of $\Psi_i$ take the opposite sign; that is to say, $\Psi_i \to -\Psi_i$ after the signature transition.
Alternative derivations
According to the definitions above, one should find the Weyl tensor before calculating the Weyl-NP scalars via contractions with the relevant tetrad vectors. This method, however, does not fully reflect the spirit of the Newman–Penrose formalism. As an alternative, one could first compute the spin coefficients and then use the NP field equations to derive the five Weyl-NP scalars,
where $\Lambda := R/24$ refers to the NP curvature scalar, which can be calculated directly from the spacetime metric $g_{ab}$.
Physical interpretation
Szekeres (1965) gave an interpretation of the different Weyl scalars at large distances:
is a "Coulomb" term, representing the gravitational monopole of the source;
& are ingoing and outgoing "longitudinal" radiation terms;
& are ingoing and outgoing "transverse" radiation terms.
For a general asymptotically flat spacetime containing radiation (Petrov Type I), $\Psi_1$ & $\Psi_3$ can be transformed to zero by an appropriate choice of null tetrad. Thus these can be viewed as gauge quantities.
A particularly important case is the Weyl scalar $\Psi_4$.
It can be shown to describe outgoing gravitational radiation (in an asymptotically flat spacetime) as
$\Psi_4 = \ddot{h}_{+} - i\,\ddot{h}_{\times}.$
Here, $h_{+}$ and $h_{\times}$ are the "plus" and "cross" polarizations of gravitational radiation, and the double dots represent double time-differentiation.
There are, however, certain examples in which the interpretation listed above fails. These are exact vacuum solutions of the Einstein field equations with cylindrical symmetry. For instance, a static (infinitel
|
https://en.wikipedia.org/wiki/Beevor%27s%20axiom
|
Beevor's Axiom is the idea that the brain does not know muscles, only movements. In other words, the brain registers the movements that muscles combine to make, not the individual muscles that are making the movements. This is why one can sign one's name (albeit poorly) with one's foot. Beevor's Axiom was coined by Dr. Charles Edward Beevor, an English neurologist.
Dr. Beevor presented Beevor's Axiom in a series of four lectures from June 3, 1903 to July 4, 1903 before the Royal College of Physicians of London as part of the Croonian Lectures. His experiments showed that when an area of the cortex was stimulated, the body responded with a movement, not just a single muscle. Dr. Beevor concluded that “only co-ordinated movements are represented in the excitable cortex”
In relation to Beevor's Axiom, it has been found that the brain encodes sequences, such as playing the piano, signing our name, wiping off a counter, and chopping vegetables, and once encoded and practiced, it takes less brain activity to perform them. This supports Beevor's Axiom, because the brain can recall movements easier than it can learn them.
Beevor's Axiom is only partially true, however. Most behavior of muscles is encoded in the primary motor cortex (M1) and separated by muscle group. In an effort to understand the encoding in the M1, researchers observed the motor commands of monkeys. Cells in M1 changed their firing rate according to the direction of the arm movements. Each neuron has one direction that elicits the greatest response. Some M1 neurons encode muscle contractions, while others react to particular movements, regardless of the muscles used to perform them. The key characteristic of the primary motor cortex is its dynamic nature; the M1 changes based on experience. The supplementary motor area (SMA) plays a key role in initiating motion sequences. The premotor cortex (PMA) plays a key role when motor sequences are guided by external events. They map behaviors as opposed to the M1 whi
|
https://en.wikipedia.org/wiki/Ancestral%20reconstruction
|
Ancestral reconstruction (also known as Character Mapping or Character Optimization) is the extrapolation back in time from measured characteristics of individuals (or populations) to their common ancestors. It is an important application of phylogenetics, the reconstruction and study of the evolutionary relationships among individuals, populations or species to their ancestors. In the context of evolutionary biology, ancestral reconstruction can be used to recover different kinds of ancestral character states of organisms that lived millions of years ago. These states include the genetic sequence (ancestral sequence reconstruction), the amino acid sequence of a protein, the composition of a genome (e.g., gene order), a measurable characteristic of an organism (phenotype), and the geographic range of an ancestral population or species (ancestral range reconstruction). This is desirable because it allows us to examine parts of phylogenetic trees corresponding to the distant past, clarifying the evolutionary history of the species in the tree. Since modern genetic sequences are essentially a variation of ancient ones, access to ancient sequences may identify other variations and organisms which could have arisen from those sequences. In addition to genetic sequences, one might attempt to track the changing of one character trait to another, such as fins turning to legs.
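As a toy illustration of the general idea (not a method endorsed by the article), the sketch below applies Fitch parsimony, one classical ancestral-state technique, to an invented four-taxon tree with a single binary character; real ancestral reconstruction normally relies on explicit statistical models of evolution, as discussed below.

```python
# Fitch parsimony on a tiny, invented tree: infer a plausible root state
# for one character from the states observed at the leaves.
def fitch(node, states):
    """Return the Fitch state set for `node` in a nested-tuple tree."""
    if isinstance(node, str):                    # leaf: observed character state
        return {states[node]}
    left, right = (fitch(child, states) for child in node)
    common = left & right
    return common if common else left | right    # taking the union costs a change

tree = (("human", "chimp"), ("mouse", "rat"))    # hypothetical topology
observed = {"human": "legs", "chimp": "legs", "mouse": "legs", "rat": "fins"}
print(fitch(tree, observed))                     # {'legs'}: inferred root state
```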
Non-biological applications include the reconstruction of the vocabulary or phonemes of ancient languages, and cultural characteristics of ancient societies such as oral traditions or marriage practices.
Ancestral reconstruction relies on a sufficiently realistic statistical model of evolution to accurately recover ancestral states. These models use the genetic information already obtained through methods such as phylogenetics to determine the route that evolution has taken and when evolutionary events occurred. No matter how well the model approximates the actual evolutionary history, however, one's
|
https://en.wikipedia.org/wiki/Suprapleural%20membrane
|
The suprapleural membrane, eponymously known as Sibson's fascia, is a structure described in human anatomy.
It is named for Francis Sibson.
Anatomy
It refers to a thickening of connective tissue that covers the apex of each human lung. It is an extension of the endothoracic fascia that exists between the parietal pleura and the thoracic cage. The muscular part of Sibson's fascia originates from the scalenus medius muscle; the fascial part originates from the endothoracic fascia. It attaches to the internal border of the first rib and the transverse processes of vertebra C7. It extends approximately an inch more superiorly than the superior thoracic aperture, because the lungs themselves extend higher than the top of the ribcage.
Clinical significance
The function of the suprapleural membrane is to protect the apex of the lung (part of which extends above the rib cage) and to protect the cervical fascia. It helps resist intrathoracic pressure changes, thereby preventing inflation and deflation of the neck during expiration and inspiration respectively, and it also provides rigidity to the thoracic inlet.
Herniation of the cervical fascia may result from injury to the suprapleural membrane.
"The thoracic duct traverses Sibson's Fascia of the thoracic-inlet up to the level of C7 before turning around and emptying into the left (major) duct. The right (minor duct) only traverses the thoracic inlet once." (Kuchera, WA, pp. 86, 210)
|
https://en.wikipedia.org/wiki/Genencor
|
Genencor is a biotechnology company based in Palo Alto, CA, and a subsidiary of IFF. Genencor is a producer of industrial enzymes and low-priced bulk protein. The name Genencor originates with Genencor, Inc., the original joint venture between Genentech and Corning Incorporated, which was founded in 1982. It is considered to have pioneered the field of industrial biotechnology, as distinct from traditional applications of biotechnology to health care and agriculture.
In 2005 Genencor was acquired by Danisco.
In 2008 Genencor entered a joint venture with DuPont, called DuPont Danisco Cellulosic Ethanol LLC, to develop and commercialize low cost technology for the production of cellulosic ethanol. In 2008, Genencor and Goodyear announced they were working to develop BioIsoprene.
In 2011, Dupont acquired Danisco for $6.3 billion.
In 2021, portions of Dupont including the Genencor division were acquired by International Flavors & Fragrances.
Awards
Genencor has received the following awards:
Named No. 2 Best Medium-Sized Company to Work for in America by the Great Place to Work® Institute, Inc. (2004)
Named No. 1 Best Medium-Sized Company to Work for in America by the Great Place to Work® Institute, Inc. (2005)
Named No. 1 Best Place to Work in the Bay Area by the San Francisco Chronicle (2005)
Named No. 11 Best Medium-Sized Company to Work for in America by the Great Place to Work® Institute, Inc. (2011)
See also
Environmental biotechnology
Agricultural biotechnology
|
https://en.wikipedia.org/wiki/Local%20convex%20hull
|
Local convex hull (LoCoH) is a method for estimating size of the home range of an animal or a group of animals (e.g. a pack of wolves, a pride of lions, or herd of buffaloes), and for constructing a utilization distribution. The latter is a probability distribution that represents the probabilities of finding an animal within a given area of its home range at any point in time; or, more generally, at points in time for which the utilization distribution has been constructed. In particular, different utilization distributions can be constructed from data pertaining to particular periods of a diurnal or seasonal cycle.
Utilization distributions are constructed from data providing the location of an individual or several individuals in space at different points in time by associating a local distribution function with each point and then summing and normalizing these local distribution functions to obtain a distribution function that pertains to the data as a whole. If the local distribution function is a parametric distribution, such as a symmetric bivariate normal distribution, then the method is referred to as a kernel method, though more correctly it should be designated a parametric kernel method. On the other hand, if the local kernel element associated with each point is a local convex polygon constructed from the point and its k-1 nearest neighbors, then the method is nonparametric and referred to as a k-LoCoH (fixed number of points) method. This is in contrast to r-LoCoH (fixed radius) and a-LoCoH (adaptive radius) methods.
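As a rough illustration of the k-LoCoH construction step, here is a minimal Python sketch (function names and the simulated fixes are illustrative, and the final merging of hulls into isopleths is omitted):

```python
# A minimal sketch of the k-LoCoH step: each local hull is the convex hull of a fix
# and its k-1 nearest neighbours. Data and names are hypothetical.
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def local_hulls(points: np.ndarray, k: int):
    """For each fix, build the convex hull of that fix and its k-1 nearest neighbours."""
    tree = cKDTree(points)
    hulls = []
    for p in points:
        _, idx = tree.query(p, k=k)          # the point itself plus k-1 neighbours
        hulls.append(ConvexHull(points[idx]))
    # Smaller hulls correspond to more intensively used areas, so they are
    # accumulated first when isopleths of the utilization distribution are built.
    return sorted(hulls, key=lambda h: h.volume)   # in 2-D, .volume is the area

rng = np.random.default_rng(0)
fixes = rng.normal(size=(200, 2))                  # simulated animal locations
hulls = local_hulls(fixes, k=10)
print(f"smallest hull area: {hulls[0].volume:.3f}")
```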
In the case of LoCoH utilization distribution constructions, the home range can be taken as the outer boundary of the distribution (i.e. the 100th percentile). In the case of utilization distributions constructed from unbounded kernel elements, such as bivariate normal distributions, the utilization distribution is itself unbounded. In this case the most often used convention is to regard the 95th percentile of the utilization distribution
|
https://en.wikipedia.org/wiki/Heap%20%28mathematics%29
|
In abstract algebra, a semiheap is an algebraic structure consisting of a non-empty set H with a ternary operation denoted $[x, y, z] \in H$ that satisfies a modified associativity property:
$[[a, b, c], d, e] = [a, [d, c, b], e] = [a, b, [c, d, e]]$
A biunitary element h of a semiheap satisfies [h,h,k] = k = [k,h,h] for every k in H.
A heap is a semiheap in which every element is biunitary.
The term heap is derived from груда, Russian for "heap", "pile", or "stack". Anton Sushkevich used the term in his Theory of Generalized Groups (1937) which influenced Viktor Wagner, promulgator of semiheaps, heaps, and generalized heaps. Груда contrasts with группа (group) which was taken into Russian by transliteration. (Indeed, a heap has been called a groud in English text.)
Examples
Two element heap
Turn $H = \{a, b\}$ into the cyclic group $C_2$ by defining $a$ as the identity element and $bb = a$. Then the operation $[x, y, z] = x y^{-1} z$ produces the following heap: $[a,a,a] = a$, $[a,a,b] = b$, $[a,b,a] = b$, $[a,b,b] = a$, $[b,a,a] = b$, $[b,a,b] = a$, $[b,b,a] = a$, $[b,b,b] = b$.
Defining $b$ as the identity element and $aa = b$ instead would have given the same heap.
Heap of integers
If $x, y, z$ are integers, we can set $[x, y, z] = x - y + z$ to produce a heap. We can then choose any integer $k$ to be the identity of a new group on the set of integers, with the operation
$x * y = x - k + y$
and inverse
$x^{-1} = 2k - x$.
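As a quick sanity check of this construction, the following Python sketch (helper names are illustrative) verifies para-associativity of $[x, y, z] = x - y + z$ on a small sample and recovers the group obtained by fixing an identity $k$:

```python
from itertools import product

def bracket(x, y, z):
    """Heap operation on the integers: [x, y, z] = x - y + z."""
    return x - y + z

# Para-associativity, checked on a small sample of integers.
for a, b, c, d, e in product(range(-2, 3), repeat=5):
    assert bracket(bracket(a, b, c), d, e) \
           == bracket(a, bracket(d, c, b), e) \
           == bracket(a, b, bracket(c, d, e))

# Fixing any integer k as identity recovers a group x * y = [x, k, y] = x - k + y,
# with inverse 2k - x.
k = 7
mul = lambda x, y: bracket(x, k, y)
inv = lambda x: 2 * k - x
assert mul(5, k) == 5 and mul(5, inv(5)) == k
```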
Heap of a groupoid with two objects
One may generalize the notion of the heap of a group to the case of a groupoid which has two objects A and B when viewed as a category. The elements of the heap may be identified with the morphisms from A to B, such that three morphisms x, y, z define a heap operation according to:
$[x, y, z] = x y^{-1} z$
This reduces to the heap of a group if a particular morphism between the two objects is chosen as the identity. This intuitively relates the description of isomorphisms between two objects as a heap and the description of isomorphisms between multiple objects as a groupoid.
Heterogeneous relations
Let A and B be different sets and $\mathcal{B}(A, B)$ the collection of heterogeneous relations between them. For $p, q, r \in \mathcal{B}(A, B)$ define the ternary operator
$[p, q, r] = p\, q^{\mathsf{T}} r$
where $q^{\mathsf{T}}$ is the converse relation of q. The result of this composition is also in $\mathcal{B}(A, B)$ so a mathematical structure has been formed by the ternary operat
|
https://en.wikipedia.org/wiki/Health%20freedom%20movement
|
The health freedom movement is a libertarian coalition that opposes regulation of health practices and advocates for increased access to "non-traditional" health care.
The right-wing John Birch Society has been a prominent advocate for health freedom since at least the 1970s, and the specific term "health freedom movement" has been used in the United States since the 1990s.
Vitamins and supplements have been exempted in the US from regulations requiring evidence of safety and efficacy, largely due to the activism of health freedom advocates. The belief that supplements and vitamins can demonstrably improve health or longevity, and that there are no negative consequences from their use, is not widely accepted in the medical community. Very rarely, large doses of some vitamins lead to vitamin poisoning (hypervitaminosis).
Roots and support base
Health freedom is a libertarian position not aligned to the conventional left/right political axis. Libertarian Republican Congressman Ron Paul introduced the Health Freedom Protection Act in the U.S. House of Representatives in 2005.
Prominent celebrity supporters of the movement include the musician Sir Paul McCartney, who says that people "have a right to buy legitimate health food supplements" and that "this right is now clearly under threat," and the pop star/actress Billie Piper, who joined a march in London in 2003 to protest planned EU legislation to ban high dosage vitamin supplements.
Legislation
United States
The Dietary Supplement Health and Education Act of 1994 (DSHEA) defines supplements as foods and thus permits a supplement to be marketed unless the United States Food and Drug Administration (FDA) proves that it poses a significant or unreasonable risk of harm, rather than requiring the manufacturer to prove the supplement's safety or efficacy. The FDA can take action only if producers make medical claims about their products or if consumers of the products become seriously ill.
An October 2002 na
|
https://en.wikipedia.org/wiki/Symsagittifera%20roscoffensis
|
Symsagittifera roscoffensis, also called the Roscoff worm, the mint-sauce worm, or the shilly-shally worm, is a marine worm belonging to the phylum Xenacoelomorpha. The origin and nature of the green color of this worm stimulated the curiosity of zoologists early on. It is due to the partnership between the animal and a green micro-alga, the species Tetraselmis convolutae, hosted under its epidermis. It is the photosynthetic activity of the micro-alga in hospite that provides the essential nutrients for the worm. This partnership is called photosymbiosis, from "photo" (light) and "symbiosis" (living with). These photosynthetic marine animals live in colonies (up to several million individuals) in the tidal zone.
Biology and ecology
Although roscoffensis means "from Roscoff", this flatworm is not endemic to Roscoff or northern Brittany. Its geographical distribution extends along the Atlantic coast of Europe; colonies have been observed from Wales to southern Portugal.
130 years of history
In 1879, at the Station Biologique de Roscoff founded by Henri de Lacaze-Duthiers, the British biologist Patrick Geddes pondered the nature and origin of the green compound of a local acoela he called Convoluta schultzii. He succinctly described "chlorophyll-containing cells" and the presence of associated starch "as in plant chlorophyll grains".
In 1886, French biologist Yves Delage published a detailed histological study describing (among other things) the nervous system and the sense organs of the same Roscoff acoela, Convoluta schultzii. In this article Delage also inquired as to "the nature of zoochlorella (i.e. micro-algae): are they real algae? Where do they come from? What are the symbiotic relationships that unite them to their commensal?"
In 1891, Ludwig von Graff, a German zoologist from the University of Graz and a specialist in acoela, undertook a taxonomic redescription of the Roscoff acoela at the Station Biologique de Roscoff. His works highlight a
|
https://en.wikipedia.org/wiki/No-broadcasting%20theorem
|
In physics, the no-broadcasting theorem is a result of quantum information theory. In the case of pure quantum states, it is a corollary of the no-cloning theorem. The no-cloning theorem for pure states says that it is impossible to create two copies of an unknown state given a single copy of the state. Since quantum states cannot be copied in general, they cannot be broadcast. Here, the word "broadcast" is used in the sense of conveying the state to two or more recipients. For multiple recipients to each receive the state, there must be, in some sense, a way of duplicating the state. The no-broadcast theorem generalizes the no-cloning theorem for mixed states.
The theorem also includes a converse: if two quantum states do commute, there is a method for broadcasting them. They must have a common basis of eigenstates that diagonalizes them simultaneously, and the map that clones every state of this basis is a legitimate quantum operation (a completely positive map), requiring only physical resources independent of the input state to implement. A corollary is that there is a physical process capable of broadcasting every state in some set of quantum states if, and only if, every pair of states in the set commutes. This broadcasting map, which works in the commuting case, produces an overall state in which the two copies are perfectly correlated in their eigenbasis.
Remarkably, the theorem does not hold if more than one copy of the initial state is provided: for example, broadcasting six copies starting from four copies of the original state is allowed, even if the states are drawn from a non-commuting set. The purity of the state can even be increased in the process, a phenomenon known as superbroadcasting.
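To illustrate the commutation condition in the converse, here is a small NumPy sketch (the example states are chosen arbitrarily) that tests whether a pair of density matrices commutes, and hence whether a broadcasting map for that pair can exist:

```python
# Commutation test for density matrices; the states below are illustrative only.
import numpy as np

def commute(rho1: np.ndarray, rho2: np.ndarray, tol: float = 1e-12) -> bool:
    """True if the commutator [rho1, rho2] vanishes (up to tol)."""
    return np.linalg.norm(rho1 @ rho2 - rho2 @ rho1) < tol

# Two diagonal (hence commuting) qubit states ...
rho_a = np.diag([0.8, 0.2])
rho_b = np.diag([0.3, 0.7])
# ... and a non-commuting pair: |0><0| versus |+><+|.
zero = np.diag([1.0, 0.0])
plus = np.array([[0.5, 0.5], [0.5, 0.5]])

print(commute(rho_a, rho_b))   # True  -> a broadcasting map exists for this pair
print(commute(zero, plus))     # False -> the no-broadcasting theorem applies
```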
Generalized No-Broadcast Theorem
The generalized quantum no-broadcasting theorem, originally proven by Barnum, Caves, Fuchs, Jozsa and Schumacher for mixed states of finite-dimensional quantum systems, says that given a pair of quantum states which do not commu
|
https://en.wikipedia.org/wiki/Immunotoxicology
|
Immunotoxicology (sometimes abbreviated as ITOX) is the study of the toxicity of foreign substances called xenobiotics and their effects on the immune system. Some toxic agents that are known to alter the immune system include: industrial chemicals, heavy metals, agrochemicals, pharmaceuticals, drugs, ultraviolet radiation, air pollutants and some biological materials. The effects of these immunotoxic substances have been shown to alter both the innate and adaptive parts of the immune system. Consequences of xenobiotics affect the organ initially in contact (often the lungs or skin). Some commonly seen problems that arise as a result of contact with immunotoxic substances are: immunosuppression, hypersensitivity, and autoimmunity. The toxin-induced immune dysfunction may also increase susceptibility to cancer.
The study of immunotoxicology began in the 1970s. However, the idea that some substances have a negative effect on the immune system was not a novel concept, as people have observed immune-system alterations resulting from contact with toxins since ancient Egypt. Immunotoxicology has become increasingly important when considering the safety and effectiveness of commercially sold products. In recent years, guidelines and laws have been created in an effort to regulate and minimize the use of immunotoxic substances in the production of agricultural products, drugs, and consumer products. One example of these regulations is the FDA guideline mandating that all drugs be tested for toxicity to avoid negative interactions with the immune system, with in-depth investigations required whenever a drug shows signs of affecting the immune system. Scientists use both in vivo and in vitro techniques when determining the immunotoxic effects of a substance.
Immunotoxic agents can damage the immune system by destroying immune cells and changing signaling pathways. This has wide-reaching consequences in both the innate and adaptive immune systems. Changes in the adaptive immu
|
https://en.wikipedia.org/wiki/David%20Lubinski
|
David J. Lubinski is an American psychology professor known for his work in applied research, psychometrics, and individual differences. His work (with Camilla Benbow) has focussed on exceptionally able children: the nature of exceptional ability and the development of people with exceptional ability, in particular meeting the educational needs of gifted children so as to maximise their talent. He has published widely on the impact of extremely high ability on outputs such as publications, creative writing and art, patents, etc. This work disconfirmed the "threshold hypothesis", which suggested that a certain minimum IQ might be needed but that, beyond it, higher IQ does not translate into greater productivity or creativity. Instead his work shows that higher intelligence leads to higher outcomes, with no apparent threshold or drop-off in its impact.
Education
He earned his B.A. and PhD from the University of Minnesota in 1981 and 1987 respectively. He was a Postdoctoral Fellow at University of Illinois at Urbana-Champaign from 1987 to 1990 with Lloyd G. Humphreys. He taught at Iowa State University from 1990 to 1998 and took a position at Vanderbilt University in 1998, where he currently co-directs the Study of Mathematically Precocious Youth (SMPY), a longitudinal study of intellectual talent, with Camilla Benbow.
In 1994, he was one of 52 signatories on "Mainstream Science on Intelligence", an editorial written by Linda Gottfredson and published in The Wall Street Journal, which declared the consensus of the signing scholars on issues related to intelligence research following the publication of the book The Bell Curve.
In 1996, he won the American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology (Applied Research/Psychometrics). In 2006, he received the Distinguished Scholar Award from the National Association for Gifted Children (NAGC). In addition to this, his work has earned several Mensa Awards for Research Excellence and
|
https://en.wikipedia.org/wiki/Multivalued%20dependency
|
In database theory, a multivalued dependency is a full constraint between two sets of attributes in a relation.
In contrast to the functional dependency, the multivalued dependency requires that certain tuples be present in a relation. Therefore, a multivalued dependency is a special case of tuple-generating dependency. The multivalued dependency plays a role in the 4NF database normalization.
A multivalued dependency is a special case of a join dependency, with only two sets of values involved, i.e. it is a binary join dependency.
A multivalued dependency exists when there are at least three attributes (like X,Y and Z) in a relation and for a value of X there is a well defined set of values of Y and a well defined set of values of Z. However, the set of values of Y is independent of set Z and vice versa.
Formal definition
The formal definition is as follows:
Let $R$ be a relation schema and let $\alpha \subseteq R$ and $\beta \subseteq R$ be sets of attributes. The multivalued dependency $\alpha \twoheadrightarrow \beta$ ("$\alpha$ multidetermines $\beta$") holds on $R$ if, for any legal relation $r(R)$ and all pairs of tuples $t$ and $u$ in $r$ such that $t[\alpha] = u[\alpha]$, there exist tuples $v$ and $w$ in $r$ such that:
$t[\alpha] = u[\alpha] = v[\alpha] = w[\alpha]$
$v[\beta] = t[\beta]$ and $v[R - \alpha - \beta] = u[R - \alpha - \beta]$
$w[\beta] = u[\beta]$ and $w[R - \alpha - \beta] = t[R - \alpha - \beta]$
Informally, if one denotes by $(x, y, z)$ the tuple having values for $\alpha$, $\beta$, $R - \alpha - \beta$ collectively equal to $x$, $y$, $z$, then whenever the tuples $(a, b, c)$ and $(a, d, e)$ exist in $r$, the tuples $(a, b, e)$ and $(a, d, c)$ should also exist in $r$.
The multivalued dependency can be schematically depicted as shown below:
Example
Consider this example of a relation of university courses, the books recommended for the course, and the lecturers who will be teaching the course:
Because the lecturers attached to the course and the books attached to the course are independent of each other, this database design has a multivalued dependency; if we were to add a new book to the AHA course, we would have to add one record for each of the lecturers on that course, and vice versa.
Put formally, there are two multivalued dependencies in this relation: {course} $\twoheadrightarrow$ {book} and equivalently {course} $\twoheadrightarrow$ {lecturer}.
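The defining condition can be checked mechanically. Below is a hedged Python sketch (the relation instance and attribute values are invented to mirror the course/book/lecturer example) that tests whether a multivalued dependency X ↠ Y holds in a relation stored as a list of dictionaries:

```python
# Test X ->> Y on a relation given as a list of dicts: for every pair of tuples
# agreeing on X, the required "swapped" tuple must also be present.
from itertools import product

def mvd_holds(rows, X, Y):
    rest = [a for a in rows[0] if a not in X and a not in Y]   # R - X - Y
    key = lambda t, attrs: tuple(t[a] for a in attrs)
    have = {(key(t, X), key(t, Y), key(t, rest)) for t in rows}
    for t, u in product(rows, repeat=2):
        if key(t, X) == key(u, X):
            # a tuple with Y from t and the remaining attributes from u must exist
            if (key(t, X), key(t, Y), key(u, rest)) not in have:
                return False
    return True

courses = [   # invented sample data
    {"course": "AHA", "book": "Silberschatz", "lecturer": "John D"},
    {"course": "AHA", "book": "Nederpelt",    "lecturer": "John D"},
    {"course": "AHA", "book": "Silberschatz", "lecturer": "William M"},
    {"course": "AHA", "book": "Nederpelt",    "lecturer": "William M"},
]
print(mvd_holds(courses, X=["course"], Y=["book"]))       # True
print(mvd_holds(courses[:-1], X=["course"], Y=["book"]))  # False: a swapped tuple is missing
```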
Databases with multivalued dependencies thus exhibit redun
|
https://en.wikipedia.org/wiki/Oval%20electric%20ray
|
The oval electric ray (Typhlonarke tarakea) is a little-known species of sleeper ray in the family Narkidae. It is endemic to New Zealand, where it is generally found on the sea floor at a depth of . Seldom exceeding in length, this species has a thick, oval pectoral fin disc and a short, stout tail with a single dorsal fin. It is blind, as its tiny eyes are covered by skin. Its pelvic fins are divided in two, with the anterior portion forming a limb-like appendage. These appendages likely allow the ray, which may not be able to swim at all, to "walk" along the bottom. The claspers of adult males extend beyond the disc. Polychaete worms are known to be part of its diet, and its reproduction is aplacental viviparous. It can produce an electric shock for defense. The International Union for Conservation of Nature presently lacks the data to assess its conservation status.
Taxonomy
The oval electric ray was described by William John Phillipps of the Dominion Museum, in a 1929 volume of the New Zealand Journal of Science and Technology. The type specimen was collected off Island Bay, Wellington. A specimen of this ray had been illustrated earlier by Augustus Hamilton, in his 1909 description of the related T. aysoni. Hamilton obtained the specimen at the fish market at Dunedin and made note of its different shape, but did not recognize it as a distinct species. Along with T. aysoni, this species may also be called the blind electric ray.
Distribution and habitat
Confusion between the two Typhlonarke species has led to uncertainty regarding the extent of the oval electric ray's distribution. Both species are found off the eastern coast of New Zealand, between the East Cape of North Island and the Snares Shelf south of South Island, including the Cook and Foveaux Straits and the Stewart and Chatham Islands. Bottom-dwelling in nature, this species is generally found at a depth of but has been recorded from as shallow as and as deep as .
Description
The pectoral fin d
|
https://en.wikipedia.org/wiki/Music%20on%20demand
|
Music-on-demand (MOD) is a music-industry-certified, multi-billion-dollar, subscriber-based music distribution model, conceived with the growth of two-way computing and telecommunications in the early 1990s and originally architected by Dale Schalow. Primarily, high-quality music is made available to purchase, to access by search, and to play back instantly using software on set-top boxes (6 MHz separated guard-band channels), coaxial and fiber-optic networks, cellular mobile devices, Apple Macintosh, and Microsoft Windows, from an available distribution point such as a computer host or server located at a telephone, cable TV, or wireless data center facility.
History
In 1992, computer modem speeds were limited to less than 28 thousand bits per second (28 kbit/s), compared with uncompressed, pulse-code-modulated (PCM) music on compact disc (CD), which required 150 thousand bytes per second. As a result, additional bandwidth was required to deliver real-time audio at CD-quality standards: 16-bit samples, 44.1 kHz sampling rate, stereophonic (two-channel) audio. This prompted telephony, CATV, cellular and satellite providers (in a Virginia Tech DoD cooperative) to consider changing standards, both in terms of building higher capacity into existing telecommunications infrastructures and in considering business use cases for supplemental, U.S.-based, private, affordable monthly on-demand subscription plans, with revenue split as compensation among music artists' representation, licensing groups, the telecommunications provider, and the music-on-demand solutions technology provider.
Early design, long range planning, and development of music-on-demand technology, in accordance with the laws of the United States such as the Home Recording Act of 1992, mechanical, copyright licensing include Access Music Network (AMN) by inventor & technology owner Dale Schalow. Mr. Schalow, in the early 1990s, was an independent audio engineer and programmer in Los Angeles, C
|
https://en.wikipedia.org/wiki/Synthetic%20vaccine
|
A synthetic vaccine is a vaccine consisting mainly of synthetic peptides, carbohydrates, or antigens. They are usually considered to be safer than vaccines from bacterial cultures. Creating vaccines synthetically has the ability to increase the speed of production. This is especially important in the event of a pandemic.
History
The world's first synthetic vaccine was created in 1982 from diphtheria toxin by Louis Chedid (scientist) from the Pasteur Institute and Michael Sela from the Weizmann Institute.
In 1986, Manuel Elkin Patarroyo created the SPf66, the first version of a synthetic vaccine for Malaria.
During the H1N1 outbreak in 2009, vaccines only became available in large quantities after the peak of human infections. This was a learning experience for vaccine companies. Novartis Vaccines and Diagnostics, among other companies, developed a synthetic approach that very rapidly generates vaccine viruses from sequence data in order to be able to administer vaccinations early in a pandemic outbreak. Philip Dormitzer, the leader of viral vaccine research at Novartis, says they have "developed a way of chemically synthesizing virus genomes and growing them in tissue culture cells".
Phase I data of UB-311, a synthetic peptide vaccine targeting amyloid beta, showed that the drug was able to generate antibodies to specific amyloid beta oligomers and fibrils with no decrease in antibody levels in patients of advanced age. Results from the Phase II trial are expected in the second half of 2018.
|
https://en.wikipedia.org/wiki/Luca%20Pagano
|
Luca Pagano (born July 28, 1978 in Treviso) is an Italian-born poker player who finished third place in the Barcelona Open, a European Poker Tour (EPT) event, in 2004. Since then he has reached six more EPT final tables, finishing in the money 20 times, placing him top of the EPT All-Time Leaderboard. He has also placed in two events at the 2006 World Series of Poker.
He was a Team PokerStars Pro member for almost 15 years and, as of 2017, his total live tournament winnings exceeded $2,200,000.
Biography
Pagano studied information science at Ca' Foscari University of Venice; his managerial skills soon took over, and he profitably ran his family's nightlife business.
His love of poker came a little later, during a trip to Slovenia, where he played at the casino in Nova Gorica.
He has been a poker professional since 2004, the year in which he took third place at the EPT Barcelona Open. His rising career led him to play in major tournaments held in prominent cities, and PokerStars chose him as the Italian face of its Team Pro. Pagano and PokerStars parted ways, by mutual agreement, in May 2017.
Pagano added many successes to his record and built up a very solid bankroll, rapidly becoming the player with the most in-the-money placements in Italian poker history.
Together with poker, Pagano is an entrepreneur, Angel Investor, TV commentator and host, main host of the Italian talent and reality show La Casa degli Assi, sponsored by PokerStars and co-owner of Italian professional eSports Team QLASH.
European Poker Tour
In the European Poker Tour, Pagano has earned 20 in-the-money placements with 7 final tables. Despite this consistency, he has never taken first place; his best EPT result is third place in Barcelona in 2004.
On September 11, 2008, Pagano was awarded by the European Poker Tour Awards as "The Player of the Year".
Pagano holds the first position in the EPT All-Time
|
https://en.wikipedia.org/wiki/Power%20Drift
|
Power Drift is a kart racing game released in arcades by Sega in 1988. It is more technologically advanced than Sega's earlier 2.5D racing games, like Hang-On (1985) and Out Run (1986): in Power Drift the entire world and track consist of sprites. The upgraded hardware of the Sega Y Board allows individual sprites and the background to be rotated, even while being scaled, making the visuals more dynamic.
Designed and directed by Yu Suzuki, the game was a critical and commercial success upon release in arcades. It was subsequently ported to various home computers in Europe by Activision in 1989, followed by a PC Engine port published in Japan by Asmik Ace in 1990. It was not released on Sega consoles until the Sega Ages release for the Sega Saturn in 1998.
Gameplay
The objective is to finish each race in third place or better in order to advance to the next stage. Players have the option of continuing if they finish the race in fourth place or lower before the game is over. However, the player's score will not increase upon continuing the game.
The tracks have a roller coaster feel to them, with many steep climbs and falls, as well as the ability to "fall" off higher levels. To add to this feeling, the sit-down cabinet was built atop a raised hydraulic platform, and the machine would tilt and shake quite violently. Each circuit, labeled from "A" to "E" has a certain theme to it (for example, circuit A has cities, circuit B has deserts, circuit C has beaches, etc.) in a series of five tracks. There are also four laps for each course.
Course A was Springfield Ovalshape, Foofy Hilltop, Snowhill Drive, Octopus Oval and Curry De Parl; Course B was Swingshot City, Phantom Riverbend, Octangular Ovalshape, Charlotte Beach and Highland Spheres; Course C was Bum Beach, Jason Bendyline, Nighthawk City, Zanussi Island and Wasteman Freefall; Course D was Mexico Colours, Oxygen Desert, Jamie Road, Monaco Da Farce and Blow Hairpin; Course E was Aisthorpe Springrose Valley, Patterson Nightcity, Lyd
|
https://en.wikipedia.org/wiki/Wireless%20intrusion%20prevention%20system
|
In computing, a wireless intrusion prevention system (WIPS) is a network device that monitors the radio spectrum for the presence of unauthorized access points (intrusion detection), and can automatically take countermeasures (intrusion prevention).
Purpose
The primary purpose of a WIPS is to prevent unauthorized network access to local area networks and other information assets by wireless devices. These systems are typically implemented as an overlay to an existing Wireless LAN infrastructure, although they may be deployed standalone to enforce no-wireless policies within an organization. Some advanced wireless infrastructure has integrated WIPS capabilities.
Large organizations with many employees are particularly vulnerable to security breaches caused by rogue access points. If an employee (trusted entity) in a location brings in an easily available wireless router, the entire network can be exposed to anyone within range of the signals.
In July 2009, the PCI Security Standards Council published wireless guidelines for PCI DSS recommending the use of WIPS to automate wireless scanning for large organizations.
Intrusion detection
A wireless intrusion detection system (WIDS) monitors the radio spectrum for the presence of unauthorized, rogue access points and the use of wireless attack tools. The system monitors the radio spectrum used by wireless LANs and immediately alerts a systems administrator whenever a rogue access point is detected. Conventionally, this is achieved by comparing the MAC addresses of the participating wireless devices.
Rogue devices can spoof the MAC address of an authorized network device as their own. Newer research uses a fingerprinting approach to weed out devices with spoofed MAC addresses. The idea is to compare the unique signatures exhibited by the signals emitted by each wireless device against the known signatures of pre-authorized, known wireless devices.
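As a toy illustration of this comparison, the sketch below (the MAC addresses and "fingerprints" are hypothetical placeholders, not the output of any real sensor) flags an access point when its MAC is unknown or when its signal signature does not match the one recorded for that MAC:

```python
# Toy allowlist/fingerprint check; all identifiers below are invented placeholders.
AUTHORIZED = {
    "00:1a:2b:3c:4d:5e": "fingerprint-A",   # known AP and its recorded signal signature
    "00:1a:2b:3c:4d:5f": "fingerprint-B",
}

def classify(mac: str, fingerprint: str) -> str:
    if mac not in AUTHORIZED:
        return "rogue: unknown MAC"
    if AUTHORIZED[mac] != fingerprint:
        return "rogue: MAC spoofing suspected"
    return "authorized"

observed = [
    ("00:1a:2b:3c:4d:5e", "fingerprint-A"),
    ("00:1a:2b:3c:4d:5e", "fingerprint-X"),   # same MAC, different signature
    ("de:ad:be:ef:00:01", "fingerprint-C"),
]
for mac, fp in observed:
    print(mac, "->", classify(mac, fp))
```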
Intrusion prevention
In addition to intrusion detection, a WIPS also inclu
|
https://en.wikipedia.org/wiki/Quantum%20aesthetics
|
Quantum Aesthetics is a movement that was inaugurated by Gregorio Morales at the end of the 1990s with his work “El cadaver de Balzac” or Balzac's Corpse (1998). Here he defined the objectives of the movement in the phrase “mystery and difference”.
Later the Quantum Aesthetics Group arose, formed by novelists, poets, painters, photographers, film producers and models. Gregorio Morales has applied these aesthetics not only to his novels, such as “Nómadas del Tiempo” (Nomads of Time, 2005), but also to his poetry books such as “Canto Cuántico” (Quantum Song, 2003), in which he engages emotively with the world of subatomic physics and the human mind. Other known artists in this movement are the painter Xaverio, the poets Francisco Plata and Miguel Ángel Contreras, the film director Julio Medem and the North American musician Lawrence Axerold.
External links
Brief Introduction to Quantum Aesthetics
Estetica Cuantica: A New Approach to Culture
Gregorio Morales' Quantum Song
See also
Quantum singularity (fiction)
Literary movements
Spanish literature
Arts in Spain
Physics in fiction
|
https://en.wikipedia.org/wiki/Deceleration%20parameter
|
The deceleration parameter $q$ in cosmology is a dimensionless measure of the cosmic acceleration of the expansion of space in a Friedmann–Lemaître–Robertson–Walker universe. It is defined by:
$q \equiv -\frac{\ddot{a}\,a}{\dot{a}^2}$
where $a$ is the scale factor of the universe and the dots indicate derivatives by proper time. The expansion of the universe is said to be "accelerating" if $\ddot{a} > 0$ (recent measurements suggest it is), and in this case the deceleration parameter will be negative. The minus sign and name "deceleration parameter" are historical; at the time of definition $\ddot{a}$ was expected to be negative, so a minus sign was inserted in the definition to make $q$ positive in that case. Since the evidence for the accelerating universe in the 1998–2003 era, it is now believed that $\ddot{a}$ is positive, and therefore the present-day value of $q$ is negative (though $q$ was positive in the past, before dark energy became dominant). In general $q$ varies with cosmic time, except in a few special cosmological models; the present-day value is denoted $q_0$.
The Friedmann acceleration equation can be written as
$3\frac{\ddot{a}}{a} = -4\pi G \sum_i \rho_i (1 + 3 w_i)$
where the sum extends over the different components (matter, radiation and dark energy), $\rho_i$ is the equivalent mass density of each component, $p_i$ is its pressure, and $w_i = p_i/(\rho_i c^2)$ is the equation of state for each component. The value of $w_i$ is 0 for non-relativistic matter (baryons and dark matter), 1/3 for radiation, and −1 for a cosmological constant; for more general dark energy it may differ from −1, in which case it is denoted $w_{\mathrm{DE}}$ or simply $w$.
Defining the critical density as
$\rho_c = \frac{3H^2}{8\pi G}$
and the density parameters $\Omega_i \equiv \rho_i/\rho_c$, substituting $\rho_i = \Omega_i \rho_c$ in the acceleration equation gives
$q = \frac{1}{2}\sum_i \Omega_i (1 + 3 w_i)$
where the density parameters are at the relevant cosmic epoch.
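As a numerical illustration of this formula, the following Python snippet evaluates $q$ for a set of illustrative present-day density parameters (the values are rounded, illustrative numbers, not a fit to any particular data set):

```python
# Evaluate q = (1/2) * sum_i Omega_i * (1 + 3 w_i) for illustrative density parameters.
def deceleration(components):
    """components: list of (Omega_i, w_i) pairs at the chosen cosmic epoch."""
    return 0.5 * sum(omega * (1 + 3 * w) for omega, w in components)

q0 = deceleration([
    (0.3, 0.0),        # non-relativistic matter, w = 0
    (0.0, 1.0 / 3.0),  # radiation (negligible today), w = 1/3
    (0.7, -1.0),       # cosmological constant, w = -1
])
print(f"q0 = {q0:.2f}")   # -0.55: a negative value, i.e. accelerating expansion
```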
At the present day the radiation density parameter $\Omega_r$ is negligible, and if $w = -1$ (cosmological constant) this simplifies to
$q_0 = \tfrac{1}{2}\Omega_m - \Omega_\Lambda$
where the density parameters are present-day values; with $\Omega_\Lambda + \Omega_m \approx 1$, $\Omega_\Lambda = 0.7$ and $\Omega_m = 0.3$, this evaluates to $q_0 \approx -0.55$ for the parameters estimated from the Planck spacecraft data. (Note that the CMB, as a high-redshift measurement, does not directly
|
https://en.wikipedia.org/wiki/Ultracold%20atom
|
In condensed matter physics, an ultracold atom is an atom with a temperature near absolute zero. At such temperatures, an atom's quantum-mechanical properties become important.
To reach such low temperatures, a combination of several techniques typically has to be used. First, atoms are trapped and pre-cooled via laser cooling in a magneto-optical trap. To reach the lowest possible temperature, further cooling is performed using evaporative cooling in a magnetic or optical trap. Several Nobel prizes in physics are related to the development of the techniques to manipulate quantum properties of individual atoms (e.g. 1995-1997, 2001, 2005, 2012, 2017).
Experiments with ultracold atoms study a variety of phenomena, including quantum phase transitions, Bose–Einstein condensation (BEC), bosonic superfluidity, quantum magnetism, many-body spin dynamics, Efimov states, Bardeen–Cooper–Schrieffer (BCS) superfluidity and the BEC–BCS crossover. Some of these research directions utilize ultracold atom systems as quantum simulators to study the physics of other systems, including the unitary Fermi gas and the Ising and Hubbard models. Ultracold atoms could also be used for realization of quantum computers.
History
Samples of ultracold atoms are typically prepared through the interactions of a dilute gas with a laser field. Evidence for radiation pressure, the force due to light on atoms, was demonstrated independently by Lebedev and by Nichols and Hull in 1901. In 1933, Otto Frisch demonstrated the deflection of individual sodium atoms by light generated from a sodium lamp.
The invention of the laser spurred the development of additional techniques to manipulate atoms with light. Using laser light to cool atoms was first proposed in 1975 by taking advantage of the Doppler effect to make the radiation force on an atom dependent on its velocity, a technique known as Doppler cooling. Similar ideas were also proposed to cool samples of trapped ions. Applying Doppler cooling in th
|
https://en.wikipedia.org/wiki/Embedded%20atom%20model
|
In computational chemistry and computational physics, the embedded atom model, embedded-atom method or EAM, is an approximation describing the energy between atoms
and is a type of interatomic potential. The energy is a function of a sum of functions of the separation between an atom and its neighbors. In the original model, by Murray Daw and Mike Baskes, the latter functions represent the electron density. The EAM is related to the second moment approximation to tight binding theory, also known as the Finnis-Sinclair model. These models are particularly appropriate for metallic systems. Embedded-atom methods are widely used in molecular dynamics simulations.
Model simulation
In a simulation, the potential energy of an atom $i$ is given by
$E_i = F_\alpha\!\left(\sum_{j \neq i} \rho_\beta(r_{ij})\right) + \frac{1}{2}\sum_{j \neq i} \phi_{\alpha\beta}(r_{ij})$,
where $r_{ij}$ is the distance between atoms $i$ and $j$, $\phi_{\alpha\beta}$ is a pair-wise potential function, $\rho_\beta$ is the contribution to the electron charge density from atom $j$ of type $\beta$ at the location of atom $i$, and $F$ is an embedding function that represents the energy required to place atom $i$ of type $\alpha$ into the electron cloud.
Since the electron cloud density is a summation over many atoms, usually limited by a cutoff radius, the EAM potential is a multibody potential. For a single element system of atoms, three scalar functions must be specified: the embedding function, a pair-wise interaction, and an electron cloud contribution function. For a binary alloy, the EAM potential requires seven functions: three pair-wise interactions (A-A, A-B, B-B), two embedding functions, and two electron cloud contribution functions. Generally these functions are provided in a tabularized format and interpolated by cubic splines.
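The sum above can be sketched directly in code. The following Python example uses made-up functional forms for the density, pair, and embedding functions purely for illustration; real EAM potentials use fitted, tabulated functions interpolated by splines, as described above:

```python
# Sketch of the EAM energy sum for a single-element system with illustrative
# (not physically fitted) functional forms.
import numpy as np

def eam_energy(positions: np.ndarray, cutoff: float = 5.0) -> float:
    def rho(r):   return np.exp(-r)          # electron-density contribution (illustrative)
    def phi(r):   return 1.0 / r**6          # pair-wise term (illustrative)
    def embed(n): return -np.sqrt(n)         # embedding function (illustrative)

    total = 0.0
    for i, ri in enumerate(positions):
        dists = np.linalg.norm(positions - ri, axis=1)
        mask = (dists > 0) & (dists < cutoff)        # neighbours within the cutoff
        n_i = rho(dists[mask]).sum()                 # host electron density at atom i
        total += embed(n_i) + 0.5 * phi(dists[mask]).sum()
    return total

# A 2x2x2 fragment of a simple cubic lattice with unit spacing.
grid = np.array([[x, y, z] for x in range(2) for y in range(2) for z in range(2)], float)
print(f"E = {eam_energy(grid):.3f}")
```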
See also
Interatomic potential
Lennard-Jones potential
Bond order potential
Force field (chemistry)
|
https://en.wikipedia.org/wiki/Mental%20age
|
Mental age is a concept related to intelligence. It looks at how a specific individual, at a specific age, performs intellectually, compared to average intellectual performance for that individual's actual chronological age (i.e. time elapsed since birth). The intellectual performance is based on performance in tests and live assessments by a psychologist. The score achieved by the individual is compared to the median scores at various ages, and the mental age (x, say) is derived such that the individual's score equates to the average score at age x.
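As a toy illustration of this derivation, the sketch below (the norm table is entirely made up) finds the age whose median score matches a given raw score by interpolating in a table of medians:

```python
# Toy mental-age lookup: interpolate a raw score against a hypothetical norm table.
import numpy as np

ages          = np.array([4, 6, 8, 10, 12, 14])     # chronological ages in the norms
median_scores = np.array([12, 20, 27, 33, 38, 42])  # hypothetical median raw scores

def mental_age(raw_score: float) -> float:
    """Age whose median score equals the raw score (linear interpolation)."""
    return float(np.interp(raw_score, median_scores, ages))

print(mental_age(27))   # 8.0 -> performs like the average 8-year-old
print(mental_age(30))   # 9.0 -> between the 8- and 10-year-old medians
```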
However, mental age depends on what kind of intelligence is measured. For instance, a child's intellectual age can be average for their actual age, but the same child's emotional intelligence can be immature for their physical age. Psychologists often remark that girls are more emotionally mature than boys around the age of puberty. Also, an intellectually gifted six-year-old can remain at the level of a three-year-old in terms of emotional maturity. Mental age can be considered a controversial concept.
History
Early theories
During much of the 19th century, theories of intelligence focused on measuring the size of human skulls. Anthropologists well known for their attempts to correlate cranial size and capacity with intellectual potential were Samuel Morton and Paul Broca.
The modern theories of intelligence began to emerge along with experimental psychology, at a time when much of psychology was moving from a philosophical to a more biological and medical basis. In 1890, James Cattell published what some consider the first "mental test". Cattell was more focused on heredity than on environment. This spurred much of the debate about the nature of intelligence.
Mental age was first defined by the French psychologist Alfred Binet, who introduced the intelligence test in 1905, with the assistance of Theodore Simon. Binet's experiments on French schoolchildren laid the framework for future experiments into
|
https://en.wikipedia.org/wiki/Germ-free%20animal
|
Germ-free organisms are multi-cellular organisms that have no microorganisms living in or on them. Such organisms are raised using various methods to control their exposure to viral, bacterial or parasitic agents. When known microbiota are introduced to a germ-free organism, it usually is referred to as a gnotobiotic organism; technically speaking, however, germ-free organisms are also gnotobiotic because the status of their microbial community is known. Due to lacking a microbiome, many germ-free organisms exhibit health deficits such as defects in the immune system and difficulties with energy acquisition. Typically germ-free organisms are used in the study of a microbiome where careful control of outside contaminants is required.
Generation and cultivation
Germ-free organisms are generated by a variety of different means, but a common practice shared by many of them is some form of sterilization step followed by seclusion from the surrounding environment to prevent contamination.
Poultry
Germ-free poultry typically undergo multiple sterilization steps while still at the egg life-stage. This can involve either washing with bleach or an antibiotic solution to surface sterilize the egg. The eggs are then transferred to a sterile incubator where they are grown until hatching. Once hatched, they are provided with sterilized water and a gamma-irradiated feed. This prevents introduction of foreign microbes into their intestinal tracts. The incubators and animals' waste products are continuously monitored for possible contamination. Typically, when being used in experiments, a known microbiome is introduced to the animals at a few days of age. Contamination is still monitored and controlled for after this point, but the presence of microbes is expected.
Mice
Mice undergo a slightly different process due to lacking an egg life-stage. To create a germ-free mouse, an embryo is created through in vitro fertilization and then transplanted into a germ-free mother. If thi
|
https://en.wikipedia.org/wiki/Intabulation
|
Intabulation, from the Italian word intavolatura, refers to an arrangement of a vocal or ensemble piece for keyboard, lute, or other plucked string instrument, written in tablature.
History
Intabulation was a common practice in 14th–16th century keyboard and lute music. A direct effect of intabulation was one of the early advantages of keyboards, the ability to render multiple instruments' music on one instrument. The earliest intabulation is from the mid-14th century Robertsbridge Codex, also one of the first sources of keyboard music still in existence. Some other early sources of intabulated music are the Faenza Codex and the Reina manuscripts (from the 14th century) and the Buxheim manuscript (from the 15th century). The Faenza manuscript, the largest of these early manuscripts, written circa 1400, contains pieces written or transcribed in the 14th century, such as those by Francesco Landini and Guillaume de Machaut. More than half of its pieces are intabulations. The large Buxheim manuscript is dominated by intabulations, mainly of prominent composers of the time, including John Dunstaple, Gilles Binchois, Walter Frye, and Guillaume Dufay. The term "intabulation" continued to be popular through the 16th century, but fell out of use in the early 17th century, though the practice continued. The exception is the 16th- and 17th-century Italian keyboard pieces which included both vocal and instrumental music. Intabulations contain all the vocal lines of a polyphonic piece, for the most part, although they are sometimes combined or redistributed in order to work better on the instrument the intabulation is intended for, and idiomatic ornaments are sometimes added.
Intabulations are an important source of information for historically informed performance because they show ornaments as they would have been played on various instruments, and they are a huge clue as to the actual performance of musica ficta, since tablature shows where a musician places their fingers,
|
https://en.wikipedia.org/wiki/Teller%20assist%20unit
|
Teller Assist Units (TAU), also known as Automatic Teller Safes (ATS) or Teller Cash Dispensers (TCD), are devices used in retail banking for the disbursement of money at a bank teller wicket or a centralized area. Other areas of application of TAU include the automation of starting and reconciling teller or cashier drawers (tills) in retail, check cashing, payday loan / advance, grocery, and casino operations.
Cash supplies are held in a vault or safe. Disbursements and acceptance of money take place by means of inputting information through a separate computer to the cash dispensing mechanism inside the vault, which is similar in construction to an automatic teller machine vault.
A TAU provides a secure and auditable way for tellers to handle large amounts of cash without undue risk from robbery. Some TAUs can be networked and monitored remotely from a central location, thereby reducing the oversight and management resources required.
Special security considerations
TAUs may delay the dispensing of large amounts of money by up to a preset number of minutes to discourage bank robberies. It is, however, very likely that someone present on the premises has the means to open the cash vault of the device. TAUs may be accessed by keys, a combination, or a mix of the two.
Construction
A TAU consists of:
A vault
Cash handling mechanism
Alarm sensors
In the TAU's cash handling mechanisms are several money cartridges. These can be loaded with different cash notes or coinage. The input into the controlling computer makes it possible for the unit to disburse the correct amounts. Notes are checked to ensure that they are removed correctly from the cartridges and that no surplus notes are removed. False disbursements are possible, although very rare.
Modern TAUs can be used also for depositing and recycling of banknotes. They use bill validation technology to help ensure the authenticity and fitness of the received cash before it is accepted and recycled to be presented to the customer.
Differences from
|
https://en.wikipedia.org/wiki/Honeyd
|
Honeyd is an open source computer program created by Niels Provos that allows a user to set up and run multiple virtual hosts on a computer network. These virtual hosts can be configured to mimic several different types of servers, allowing the user to simulate an infinite number of computer network configurations. Honeyd is primarily used in the field of computer security.
Primary Applications
Distraction
Honeyd is used primarily for two purposes. Using the software's ability to mimic many different network hosts at once (up to 65,536 hosts), Honeyd can act as a distraction to potential hackers. If a network has only three real servers, but one of them is running Honeyd, the network will appear to a hacker to be running hundreds of servers. The hacker will then have to do more research (possibly through social engineering) in order to determine which servers are real, or the hacker may get caught in a honeypot. Either way, the hacker will be slowed down or possibly caught.
Honeypot
Honeyd gets its name from its ability to be used as a honeypot. On a network, all normal traffic should be to and from valid servers only. Thus, a network administrator running Honeyd can monitor their logs to see if there is any traffic going to the virtual hosts set up by Honeyd. Any traffic going to these virtual servers can be considered highly suspicious. The network administrator can then take preventative action, perhaps by blocking the suspicious IP address or by further monitoring the network for suspicious traffic.
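As a toy sketch of this monitoring idea (the log format and addresses are hypothetical, and this is not Honeyd's own interface), the following Python snippet flags any source address that contacts one of the virtual hosts:

```python
# Toy log monitor: any source talking to a virtual (honeypot) host is suspicious.
VIRTUAL_HOSTS = {"10.0.0.51", "10.0.0.52", "10.0.0.53"}   # addresses simulated by the honeypot

log_lines = [
    "10.0.0.5  -> 10.0.0.2  :80",    # traffic to a real server
    "10.0.0.99 -> 10.0.0.51 :22",    # probe against a virtual host
    "10.0.0.99 -> 10.0.0.52 :445",
]

suspicious = set()
for line in log_lines:
    src, _, dst_port = line.partition("->")
    dst = dst_port.split(":")[0].strip()
    if dst in VIRTUAL_HOSTS:
        suspicious.add(src.strip())

print("suspicious sources:", suspicious)   # {'10.0.0.99'}
```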
|
https://en.wikipedia.org/wiki/Karatsuba%20algorithm
|
The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers and, by repeating this reduction, to at most $n^{\log_2 3} \approx n^{1.58}$ single-digit multiplications. It is therefore asymptotically faster than the traditional algorithm, which performs $n^2$ single-digit products.
The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm.
The Toom–Cook algorithm (1963) is a faster generalization of Karatsuba's method, and the Schönhage–Strassen algorithm (1971) is even faster, for sufficiently large n.
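A minimal Python sketch of the recursion described above, splitting base-10 numbers in half and recursing down to single-digit factors (function and variable names are illustrative):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's three-multiplication recursion."""
    if x < 10 or y < 10:                          # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2        # split position (half the digits)
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                  # low halves
    z2 = karatsuba(high_x, high_y)                # high halves
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0   # cross terms from one extra product
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```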
History
The standard procedure for multiplication of two n-digit numbers requires a number of elementary operations proportional to $n^2$, or $O(n^2)$ in big-O notation. Andrey Kolmogorov conjectured that the traditional algorithm was asymptotically optimal, meaning that any algorithm for that task would require $\Omega(n^2)$ elementary operations.
In 1960, Kolmogorov organized a seminar on mathematical problems in cybernetics at the Moscow State University, where he stated the conjecture and other problems in the complexity of computation. Within a week, Karatsuba, then a 23-year-old student, found an algorithm that multiplies two n-digit numbers in $O(n^{\log_2 3})$ elementary steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he communicated it at the next meeting of the seminar, which was then terminated. Kolmogorov gave some lectures on the Karatsuba result at conferences all over the world (see, for example, "Proceedings of the International Congress of Mathematicians 1962", pp. 351–356, and also "6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in the Proceedings of the USSR Academy of Sciences. The article had been written by Kolmogorov and contained
|
https://en.wikipedia.org/wiki/Dwarf%20planet
|
A dwarf planet is a small planetary-mass object that is in direct orbit of the Sun, smaller than any of the eight classical planets. The prototypical dwarf planet is Pluto. The interest of dwarf planets to planetary geologists is that they may be geologically active bodies, an expectation that was borne out in 2015 by the Dawn mission to Ceres and the New Horizons mission to Pluto.
Astronomers are in general agreement that at least the nine largest candidates are dwarf planets – in rough order of size, Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, Ceres, and Orcus – although there is some doubt for Orcus. Of these nine plus the tenth-largest candidate, Salacia, two have been visited by spacecraft (Pluto and Ceres) and seven others have at least one known moon (Eris, Haumea, Makemake, Gonggong, Quaoar, Orcus, and Salacia), which allows their masses and thus an estimate of their densities to be determined. Mass and density in turn can be fit into geophysical models in an attempt to determine the nature of these worlds. Only one, Sedna, has neither been visited nor has any known moons, making an accurate estimate of mass difficult. Some astronomers include many smaller bodies as well, but there is no consensus that these are likely to be dwarf planets.
The term dwarf planet was coined by planetary scientist Alan Stern as part of a three-way categorization of planetary-mass objects in the Solar System: classical planets, dwarf planets, and satellite planets. Dwarf planets were thus conceived of as a category of planet. In 2006, however, the concept was adopted by the International Astronomical Union (IAU) as a category of sub-planetary objects, part of a three-way recategorization of bodies orbiting the Sun: planets, dwarf planets and small Solar System bodies. Thus Stern and other planetary geologists consider dwarf planets and large satellites to be planets, but since 2006, the IAU and perhaps the majority of astronomers have excluded them from the roster of planets.
History of the concept
Starting in 1801, astronom
|
https://en.wikipedia.org/wiki/Mars%20regional%20atmospheric%20modeling%20system
|
The Mars Regional Atmospheric Modeling System (MRAMS) is a computer program that simulates the circulations of the Martian atmosphere at regional and local scales. MRAMS, developed by Scot Rafkin and Timothy Michaels, is derived from the Regional Atmospheric Modeling System (RAMS) developed by William R. Cotton and Roger A. Pielke to study atmospheric circulations on the Earth.
Key features of MRAMS include non-hydrostatic, fully compressible dynamics; explicit bin microphysics for atmospheric dust, water, and carbon dioxide ice; and a fully prognostic regolith model that includes carbon dioxide deposition and sublimation. Several Mars exploration projects, including the Mars Exploration Rovers, the Phoenix Scout Mission, and the Mars Science Laboratory, have used MRAMS to study a variety of atmospheric circulations.
The MRAMS operates at the mesoscale and microscale, modeling and simulating the Martian atmosphere. The smaller scale modeling of the MRAMS gives it higher resolution data points and models over complex terrain and topography. It is able to identify topography driven flows like katabatic and anabatic winds through valleys and mountains that produce changes in atmospheric circulation.
Structure
Dynamic Core
The dynamic core's role is to solve the fluid-mechanics equations governing atmospheric dynamics. The equations in the dynamic core of MRAMS are based on primitive grid-volume Reynolds-averaged equations, and they solve for momentum, thermodynamics (including atmosphere-surface heat exchange), tracers, and conservation of mass.
Parameterizations
The MRAMS dynamical core was developed from RAMS and has been changed extensively to account for the large differences between the atmospheres of Mars and Earth. MRAMS models parameterize numerous features, including dust and dust lifting, cloud microphysics, radiative transfer, an
|
https://en.wikipedia.org/wiki/Scheil%20equation
|
In metallurgy, the Scheil-Gulliver equation (or Scheil equation) describes solute redistribution during solidification of an alloy.
Assumptions
Four key assumptions in Scheil analysis enable determination of phases present in a cast part. These assumptions are:
No diffusion occurs in solid phases once they are formed ($D_S = 0$)
Infinitely fast diffusion occurs in the liquid at all temperatures by virtue of a high diffusion coefficient, thermal convection, Marangoni convection, etc. ($D_L \to \infty$)
Equilibrium exists at the solid-liquid interface, and so compositions from the phase diagram are valid
Solidus and liquidus are straight segments
The fourth condition (straight solidus/liquidus segments) may be relaxed when numerical techniques are used, such as those used in CALPHAD software packages, though these calculations rely on calculated equilibrium phase diagrams. Calculated diagrams may include odd artifacts (i.e. retrograde solubility) that influence Scheil calculations.
Derivation
The hatched areas in the figure represent the amount of solute in the solid and liquid. Considering that the total amount of solute in the system must be conserved, the areas are set equal as follows:
$(C_L - C_S)\, df_S = (1 - f_S)\, dC_L$.
Since the partition coefficient (related to solute distribution) is
$k = \frac{C_S}{C_L}$ (determined from the phase diagram)
and mass must be conserved
$f_S + f_L = 1$,
the mass balance may be rewritten as
$C_L (1 - k)\, df_S = (1 - f_S)\, dC_L$.
Using the boundary condition
$C_L = C_0$ at $f_S = 0$,
the following integration may be performed:
$\int_0^{f_S} \frac{df_S}{1 - f_S} = \frac{1}{1 - k} \int_{C_0}^{C_L} \frac{dC_L}{C_L}$.
Integrating results in the Scheil-Gulliver equation for the composition of the liquid during solidification:
$C_L = C_0 (1 - f_S)^{k - 1}$
or for the composition of the solid:
$C_S = k C_0 (1 - f_S)^{k - 1}$.
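As a numerical illustration, the following Python snippet evaluates the liquid and solid composition profiles from the equations above for an assumed partition coefficient and nominal composition (the values are illustrative, not those of any specific alloy):

```python
# Scheil-Gulliver profiles C_L = C0*(1 - f_S)**(k-1) and C_S = k*C_L for assumed k and C0.
import numpy as np

k, C0 = 0.2, 1.0                      # partition coefficient and initial liquid composition
f_s = np.linspace(0.0, 0.99, 5)       # fraction solid (stops short of 1: C_L diverges for k < 1)

C_L = C0 * (1 - f_s) ** (k - 1)       # liquid composition
C_S = k * C_L                         # solid composition forming at the interface

for fs, cl, cs in zip(f_s, C_L, C_S):
    print(f"f_s = {fs:4.2f}   C_L = {cl:7.3f}   C_S = {cs:6.3f}")
```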
Applications of the Scheil equation: Calphad Tools for the Metallurgy of Solidification
Nowadays, several Calphad software packages are available, within a framework of computational thermodynamics, to simulate solidification in systems with more than two components; these have recently been defined as Calphad Tools for the Metallurgy of Solidification. In recent years, Calphad-based methodolo
|
https://en.wikipedia.org/wiki/Dysbiosis
|
Dysbiosis (also called dysbacteriosis) is characterized by a disruption to the microbiome resulting in an imbalance in the microbiota, changes in their functional composition and metabolic activities, or a shift in their local distribution. For example, a part of the human microbiota such as the skin flora, gut flora, or vaginal flora, can become deranged, with normally dominating species underrepresented and normally outcompeted or contained species increasing to fill the void. Dysbiosis is most commonly reported as a condition in the gastrointestinal tract.
Typical microbial colonies found on or in the body are benign or beneficial. These appropriately sized colonies carry out helpful and necessary functions, such as aiding digestion, and help protect the body against infiltration by pathogenic microbes. Beneficial colonies also compete with one another for space and resources. When this balance is disturbed, the colonies become less able to check each other's growth, and one or more of them can overgrow, further damaging smaller beneficial colonies in a vicious cycle: as more beneficial colonies are damaged, the imbalance becomes more pronounced and the remaining colonies are even less able to restrain the overgrowing ones. If this goes unchecked long enough, a pervasive and chronic imbalance between colonies sets in, which ultimately minimizes the beneficial nature of these colonies as a whole.
Potential causes of dysbiosis
Any disruption of the body’s microbiota can lead to dysbiosis. Dysbiosis in the gut occurs when the bacteria in the gastrointestinal tract become unbalanced. Its causes are many and include, but are not limited to:
Dietary changes
Antibiotics that affect the gut flora
Psychological and physical stress (weakens immune system)
Use
|
https://en.wikipedia.org/wiki/Synbiotics
|
Synbiotics refer to food ingredients or dietary supplements combining probiotics and prebiotics in a form of synergism, hence synbiotics. The synbiotic concept was first introduced as "mixtures of probiotics and prebiotics that beneficially affect the host by improving the survival and implantation of live microbial dietary supplements in the gastrointestinal tract, by selectively stimulating the growth and/or by activating the metabolism of one or a limited number of health-promoting bacteria, thus improving host welfare". As of 2018, the research on this concept is preliminary, with no high-quality evidence from clinical research that such benefits exist.
Synbiotics may be complementary synbiotics, where each component is independently chosen for its potential effect on host health, or synergistic synbiotics, where the prebiotic component is chosen to support the activity of the chosen probiotic. Research is evaluating whether synbiotics can be optimized (so-called 'optibiotics'), which are purported to enhance the growth and health benefits of existing probiotics.
Probiotics are live bacteria which are intended to colonize the large intestine, although as of 2018, there is no evidence that adding dietary bacteria to healthy people has any added effect. A prebiotic is a food or dietary supplement product that may induce the growth or activity of beneficial microorganisms. A prebiotic may be a fiber, but a fiber is not necessarily a prebiotic.
Using prebiotics and probiotics in combination may be described as synbiotic, but the United Nations Food & Agriculture Organization recommends that the term "synbiotic" be used only if the net health benefit is synergistic. Synbiotic formulations in combination with pasteurized breast milk are under preliminary clinical research for their potential to ameliorate necrotizing enterocolitis in infants, although there was insufficient evidence to warrant recommending synbiotics for this use as of 2016.
Examples
Bifidobacteria an
|
https://en.wikipedia.org/wiki/1p36%20deletion%20syndrome
|
1p36 deletion syndrome is a congenital genetic disorder characterized by moderate to severe intellectual disability, delayed growth, hypotonia, seizures, limited speech ability, malformations, hearing and vision impairment, and distinct facial features. The symptoms may vary, depending on the exact location of the chromosomal deletion.
The condition is caused by a genetic deletion (loss of a segment of DNA) on the outermost band on the short arm (p) of chromosome 1. It is one of the most common deletion syndromes. The syndrome is thought to affect one in every 5,000 to 10,000 births.
Signs and symptoms
There are a number of signs and symptoms characteristic of monosomy 1p36, but no one individual will display all of the possible features. In general, children will exhibit failure to thrive and global delays.
Developmental and behavioral
Most young children with 1p36 deletion syndrome have delayed development of speech and motor skills. Speech is severely affected, with many children learning only a few words or having no speech at all. Behavioral problems are also common, and include temper outbursts, banging or throwing objects, striking people, screaming episodes, and self-injurious behavior (wrist biting, head striking/banging). A significant proportion of affected people are on the autism spectrum, and many exhibit stereotypy.
Neurologic
Most people with 1p36 deletion syndrome have some structural abnormality of the brain, and approximately half have epilepsy or other seizures. Almost all children exhibit some degree of hypotonia. Common structural brain abnormalities include agenesis of the corpus callosum, cerebral cortical atrophy, and ventriculomegaly; gait abnormalities are also frequent. Dysphagia, esophageal reflux, and other feeding difficulties are common as well.
Vision
The most common visual abnormalities associated with 1p36 deletion syndrome include farsightedness (hypermetropia), myopia (nearsightedness), and strabismus (cross-eyes). Less common but still rec
|
https://en.wikipedia.org/wiki/Bucky%20bit
|
In computing, a bucky bit is a bit in a binary representation of a character that is set by pressing on a keyboard modifier key other than the shift key.
Overview
Setting a bucky bit changes the output character. A bucky bit allows the user to type a wider variety of characters and commands while maintaining a reasonable number of keys on a keyboard.
Some of the keys corresponding to bucky bits on modern keyboards are the alt key, control key, meta key, command key (⌘), super key, and option key.
In ASCII, the bucky bit is usually the 8th bit (also known as meta bit). However, in older character representations wider than 8 bits, more high bits could be used as bucky bits. In the modern X Window System, bucky bits are bits 18–23 of an event code.
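As a minimal sketch of the "eighth bit" convention described above, the following Python snippet sets and clears a meta bucky bit on a 7-bit ASCII code; the constant and helper names are invented for this example.
META_BIT = 0x80                      # the 8th bit of an otherwise 7-bit ASCII code
def set_meta(code: int) -> int:
    """Return the character code with the meta bucky bit set."""
    return code | META_BIT
def strip_meta(code: int) -> int:
    """Return the plain 7-bit ASCII code with the bucky bit cleared."""
    return code & ~META_BIT
assert set_meta(ord('a')) == 0xE1    # 0x61 with the high bit set
assert strip_meta(0xE1) == ord('a')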
History
The term was invented at Stanford and is based on Niklaus Wirth's nickname "Bucky". Wirth was the first to suggest an EDIT key to set the eighth bit of a 7-bit ASCII character, sometime in 1964 or 1965.
Bucky bits were used heavily on keyboards designed by Tom Knight at MIT, including space-cadet keyboards used on LISP machines. These could contain as many as seven modifier keys: SHIFT, CTRL, META, HYPER, SUPER, TOP, and GREEK (also referred to as FRONT).
See also
Knight keyboard
|
https://en.wikipedia.org/wiki/Trefoil%20knot%20fold
|
The trefoil knot fold is a protein fold in which the protein backbone is twisted into a trefoil knot shape. "Shallow" knots, in which the tail of the polypeptide chain passes through a loop by only a few residues, are uncommon, but "deep" knots, in which many residues pass through the loop, are extremely rare. Deep trefoil knots have been found in the SPOUT superfamily, including methyltransferase proteins involved in posttranscriptional RNA modification in all three domains of life, from bacteria such as Thermus thermophilus to archaea and eukaryota.
In many cases the trefoil knot is part of the active site or a ligand-binding site and is critical to the activity of the enzyme in which it appears. Before the discovery of the first knotted protein, it was believed that the process of protein folding could not efficiently produce deep knots in protein backbones. Studies of the folding kinetics of a dimeric protein from Haemophilus influenzae have revealed that the folding of trefoil knot proteins may depend on proline isomerization. Computational algorithms have been developed to identify knotted protein structures, both to canvas the Protein Data Bank for previously undetected natural knots and to identify knots in protein structure predictions, where they are unlikely to accurately reproduce the native-state structure due to the rarity of knots in known proteins.
Knottins are small, diverse and stable proteins with important drug design potential. They can be classified into 30 families which cover a wide range of sequences (1621 sequenced), three-dimensional structures (155 solved) and functions (> 10). Inter-knottin similarity lies mainly between 20% and 40% sequence identity and 1.5 to 4 Å backbone deviation, although all knottins share a tightly knotted disulfide core. This important variability likely arises from the highly diverse loops which connect the successive knotted cysteines. The prediction of structural models for all knottin seque
|