https://en.wikipedia.org/wiki/Phytotoxicity
|
Phytotoxicity describes any adverse effects on plant growth, physiology, or metabolism caused by a chemical substance, such as high levels of fertilizers, herbicides, heavy metals, or nanoparticles. General phytotoxic effects include altered plant metabolism, growth inhibition, or plant death. Changes to plant metabolism and growth are the result of disrupted physiological functioning, including inhibition of photosynthesis, water and nutrient uptake, cell division, or seed germination.
Fertilizers
High concentrations of mineral salts in solution within the plant growing medium can result in phytotoxicity, commonly caused by excessive application of fertilizers. For example, urea is used in agriculture as a nitrogenous fertilizer. However, if too much is applied, phytotoxic effects can result from urea toxicity directly or ammonia production from hydrolysis of urea. Organic fertilizers, such as compost, also have the potential to be phytotoxic if not sufficiently humified, as intermediate products of this process are harmful to plant growth.
Herbicides
Herbicides are designed and used to control unwanted plants such as agricultural weeds. However, the use of herbicides can cause phytotoxic effects on non-targeted plants through wind-blown spray drift or from the use of herbicide-contaminated material (such as straw or manure) being applied to the soil. Herbicides can also cause phytotoxicity in crops if applied incorrectly, in the wrong stage of crop growth, or in excess. The phytotoxic effects of herbicides are an important subject of study in the field of ecotoxicology.
Heavy Metals
Heavy metals are high-density metallic compounds which are poisonous to plants at low concentrations, although toxicity depends on plant species, specific metal and its chemical form, and soil properties. The most relevant heavy metals contributing to phytotoxicity in crops are silver (Ag), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), iron (Fe), nickel (Ni), lead (Pb)
|
https://en.wikipedia.org/wiki/System%20time
|
In computer science and computer programming, system time represents a computer system's notion of the passage of time. In this sense, time also includes the passing of days on the calendar.
System time is measured by a system clock, which is typically implemented as a simple count of the number of ticks that have transpired since some arbitrary starting date, called the epoch. For example, Unix and POSIX-compliant systems encode system time ("Unix time") as the number of seconds elapsed since the start of the Unix epoch at 1 January 1970 00:00:00 UT, with exceptions for leap seconds. Systems that implement the 32-bit and 64-bit versions of the Windows API, such as Windows 9x and Windows NT, provide the system time as both SYSTEMTIME, represented as a year/month/day/hour/minute/second/milliseconds value, and FILETIME, represented as a count of the number of 100-nanosecond ticks since 1 January 1601 00:00:00 UT as reckoned in the proleptic Gregorian calendar.
System time can be converted into calendar time, which is a form more suitable for human comprehension. For example, the Unix system time 1000000000 seconds since the beginning of the epoch translates into the calendar time 9 September 2001 01:46:40 UT. Library subroutines that handle such conversions may also deal with adjustments for time zones, daylight saving time (DST), leap seconds, and the user's locale settings. Library routines are also generally provided that convert calendar times into system times.
Many implementations that currently store system times as 32-bit integer values will suffer from the impending Year 2038 problem. These time values will overflow ("run out of bits") after the end of their system time epoch, leading to software and hardware errors. These systems will require some form of remediation, similar to efforts required to solve the earlier Year 2000 problem. This will also be a potentially much larger problem for existing data file formats that contain system timestamps stored as 32-bit values.
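The Unix-to-calendar conversion described above can be sketched with Python's standard library (the choice of Python is mine, not the article's):

```python
from datetime import datetime, timezone

# Unix system time: a count of seconds since the epoch,
# 1 January 1970 00:00:00 UT.
unix_seconds = 1_000_000_000

# Convert to calendar time, as a library routine such as C's gmtime() does.
calendar_time = datetime.fromtimestamp(unix_seconds, tz=timezone.utc)
print(calendar_time.strftime("%d %B %Y %H:%M:%S UT"))  # 09 September 2001 01:46:40 UT
```

The same routine, given a 32-bit signed count, would fail shortly after 2147483647 seconds, which is the Year 2038 problem mentioned below.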
Other tim
|
https://en.wikipedia.org/wiki/Friedrichs%27s%20inequality
|
In mathematics, Friedrichs's inequality is a theorem of functional analysis, due to Kurt Friedrichs. It places a bound on the Lp norm of a function using Lp bounds on the weak derivatives of the function and the geometry of the domain, and can be used to show that certain norms on Sobolev spaces are equivalent. Friedrichs's inequality generalizes the Poincaré–Wirtinger inequality, which deals with the case k = 1.
Statement of the inequality
Let Ω be a bounded subset of Euclidean space R^n with diameter d. Suppose that u lies in the Sobolev space W0^{k,p}(Ω), i.e., u ∈ W^{k,p}(Ω) and the trace of u on the boundary ∂Ω is zero. Then

    ‖u‖_{L^p(Ω)} ≤ d^k ( Σ_{|α| = k} ‖D^α u‖_{L^p(Ω)}^p )^{1/p}.

In the above
‖ · ‖_{L^p(Ω)} denotes the Lp norm;
α = (α1, ..., αn) is a multi-index with norm |α| = α1 + ... + αn;
D^α u is the mixed partial derivative ∂^{|α|} u / (∂x1^{α1} ⋯ ∂xn^{αn}).
See also
Poincaré inequality
|
https://en.wikipedia.org/wiki/Animal%20culture
|
Animal culture can be defined as the ability of non-human animals to learn and transmit behaviors through processes of social or cultural learning.
Culture is increasingly seen as a process, involving the social transmittance of behavior among peers and between generations. It can involve the transmission of novel behaviors or regional variations that are independent of genetic or ecological factors.
The existence of culture in non-humans has been a contentious subject, sometimes forcing researchers to rethink "what it is to be human".
The notion of culture in other animals dates back to Aristotle in classical antiquity, and more recently to Charles Darwin, but the association of other animals' actions with the actual word 'culture' originated with Japanese primatologists' discoveries of socially-transmitted food behaviours in the 1940s. Evidence for animal culture is often based on studies of feeding behaviors, vocalizations, predator avoidance, mate selection, and migratory routes.
An important area of study for animal culture is vocal learning, the ability to make new sounds through imitation. Most species cannot learn to imitate sounds. Some can learn how to use innate vocalizations in new ways. Only a few species can learn new calls. The transmission of vocal repertoires, including some types of bird vocalization, can be viewed as social processes involving cultural transmission. Some evidence suggests that the ability to engage in vocal learning depends on the development of specialized brain circuitry, detected in humans, dolphins, bats and some birds. The lack of common ancestors suggests that the basis for vocal learning has evolved independently through evolutionary convergence.
Animal culture can be an important consideration in conservation management. As of 2020, culture and sociality were included in the aspects of the management framework of the Convention on the Conservation of Migratory Species of Wild Animals (CMS).
Background
Culture
|
https://en.wikipedia.org/wiki/Plant%20reproduction
|
Plant reproduction is the production of new offspring in plants, which can be accomplished by sexual or asexual reproduction. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Asexual reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur.
Asexual reproduction
Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation.
Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis.
Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The dist
|
https://en.wikipedia.org/wiki/Performance%20engineering
|
Performance engineering encompasses the techniques applied during a systems development life cycle to ensure the non-functional requirements for performance (such as throughput, latency, or memory usage) will be met. It may be alternatively referred to as systems performance engineering within systems engineering, and software performance engineering or application performance engineering within software engineering.
As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventive and perfective role within the software development life cycle. As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.
The term performance engineering encompasses more than just the software and supporting infrastructure, and is therefore preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).
Performance engineering has become a separate discipline at a number of large corporations, with tasking separate but parallel to systems engineering. It is pervasive, involving people from multiple organizational units; but predominantly within the information technology organization.
Performance engineering objectives
Increase business revenue by ensuring the system can process transactions within the requisite timeframe
Eliminate system failures, and the consequent scrapping and write-off of the system development effort, caused by missed performance objectives
Eliminate late system deployment due to performance issues
Eliminate avoidable system rework due to performance issues
Eliminate avoidable system tuning efforts
Avoid a
|
https://en.wikipedia.org/wiki/Sackler%20Prize
|
The Sackler Prize is named for the Sackler family and can refer to any of three awards established by Raymond Sackler and his wife Beverly Sackler and currently bestowed by Tel Aviv University. The Sackler family is known for its role in the opioid epidemic in the United States, has been the subject of numerous lawsuits and critical media coverage, and has been dubbed the "most evil family in America" and "the worst drug dealers in history". The family has engaged in extensive efforts to promote the Sackler name, which have been characterized as reputation laundering. In 2023 the Sackler family's name was removed from the Tel Aviv University Faculty of Medicine.
Sackler Prize in the Physical Sciences
The Raymond and Beverly Sackler International Prize in the Physical Sciences is a $40,000 prize in the disciplines of either physics or chemistry awarded by Tel Aviv University each year for young scientists who have made outstanding and fundamental contributions in their fields.
There is an age limit for all nominees. Nominations for the Sackler Prize can be made by individuals in any of the following categories:
1) Faculty of Physics, Astronomy or Chemistry departments in institutions of higher learning worldwide.
2) Presidents, Rectors, vice-presidents, Provosts and Deans, of institutions of higher learning worldwide.
3) Directors of laboratories worldwide.
4) Sackler Prize laureates.
For 2008, the age limit was raised to 45 and the prize money to $50,000.
Winners
Source: Chemistry – Tel Aviv University
Physics – Tel Aviv University
2000 prize for Physics (Theoretical High Energy Physics): Michael R. Douglas (Rutgers University) and Juan Martin Maldacena (Institute for Advanced Study, Princeton), for work "beyond the 1975 synthesis known as the 'Standard Model' and within the framework of (supersymmetrical) String or M-theory."
2001 prize for Chemistry (Physical Chemistry of Advanced Materials): Moungi G. Bawendi (MIT) and James R. Hea
|
https://en.wikipedia.org/wiki/A%20Different%20Universe
|
A Different Universe: Reinventing Physics from the Bottom Down is a 2005 physics book by Robert B. Laughlin, a winner of the Nobel Prize in Physics for the fractional quantum Hall effect. Its title is a play on the P. W. Anderson manifesto More is Different, historically important in claiming that condensed-matter physics deserves greater respect. The book extends his articles The Middle Way and The Theory of Everything, arguing the limits of reductionism. A key concept in Laughlin's works is protectorates, meaning robust physical regimes of behavior that do not depend on (that is, they are protected from the fickle details of) the underlying smaller-scale physics such as quantum noise. Such robust or reliable behavior at macroscopic scales makes possible higher-level entities, from biological life to nanotechnology. The book emphasizes more study of such macroscopic phenomena, sometimes called emergence, over the ever-downward dive into theoretically fundamental ideas such as string theory, which at some point become empirically irrelevant by having no observable consequences in our world. The arguments come full circle with modern dark energy ideas suggesting that spacetime or the vacuum may not be empty, but rather (for all we can observe) a medium, a possibility ironically glimpsed even by Einstein whose career began with demolishing the similar but too-simplistic notion of ether with his special relativity work.
|
https://en.wikipedia.org/wiki/Lycian%20alphabet
|
The Lycian alphabet was used to write the Lycian language of the Asia Minor region of Lycia. It was an extension of the Greek alphabet, with half a dozen additional letters for sounds not found in Greek. It was largely similar to the Lydian and the Phrygian alphabets.
The alphabet
The Lycian alphabet contains letters for 29 sounds. Some sounds are represented by more than one symbol, which is considered one "letter". There are six vowel letters, one for each of the four oral vowels of Lycian, and separate letters for two of the four nasal vowels. Nine of the Lycian letters do not appear to derive from the Greek alphabet.
Numbers
Lycian uses the following number symbols: I (vertical stroke) = 1, < ("less than" sign) (or, rarely, L or C or V or Y) = 5, O (circle) = 10; a horizontal stroke — is one half; a symbol somewhat like our letter H may mean 100.
The number 128½ would therefore be expressed as HOO<III—.
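The additive scheme above is simple enough to sketch in code. This toy encoder (the function name and the half-unit input convention are my own) reproduces the 128½ example, using the canonical symbols only:

```python
def to_lycian(value_halves: int) -> str:
    """Encode a value given in half-units (e.g. 257 for 128½) as Lycian numerals:
    H = 100, O = 10, < = 5, I = 1, and a horizontal stroke — for one half."""
    whole, half = divmod(value_halves, 2)
    digits = ""
    for symbol, worth in (("H", 100), ("O", 10), ("<", 5), ("I", 1)):
        count, whole = divmod(whole, worth)
        digits += symbol * count
    if half:
        digits += "—"  # horizontal stroke for one half
    return digits

print(to_lycian(257))  # HOO<III—
```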
Unicode
The Lycian alphabet was added to the Unicode Standard in April, 2008 with the release of version 5.1.
It is encoded in Plane 1 (Supplementary Multilingual Plane).
The Unicode block for Lycian is U+10280–U+1029F:
See also
Letoon trilingual
Lycian language
Notes
|
https://en.wikipedia.org/wiki/Lydian%20alphabet
|
Lydian script was used to write the Lydian language. Like other scripts of Anatolia in the Iron Age, the Lydian alphabet is based on the Phoenician alphabet. It is related to the East Greek alphabet, but it has unique features.
The first modern codification of the Lydian alphabet was made by Roberto Gusmani in 1964, in a combined lexicon, grammar, and text collection.
Early Lydian texts were written either from left to right or from right to left. Later texts all run from right to left. One surviving text is in the bi-directional boustrophedon manner. Spaces separate words except in one text that uses dots instead. Lydian uniquely features a quotation mark in the shape of a right triangle.
Alphabet
The Lydian alphabet is closely related to the other alphabets of Asia Minor as well as to the Greek alphabet. It contains letters for 26 sounds. Some are represented by more than one symbol, which is considered one "letter." Unlike the Carian alphabet, which had an f derived from Φ, the Lydian f has the peculiar 8 shape also found in the Neo-Etruscan alphabet and in Italic alphabets of Osco-Umbrian languages such as Oscan, Umbrian, Old Sabine and South Picene (Old Volscian), and it is thought to be an invention of speakers of a Sabellian language (Osco-Umbrian languages).
In addition, two digraphs, aa and ii, appear to be allophones of [a] and [i] under speculative circumstances, such as lengthening from stress. Complex consonant clusters often appear in the inscriptions and, if present, an epenthetic schwa was evidently not written: 𐤥𐤹𐤯𐤣𐤦𐤣 wctdid [wt͡stθiθ], 𐤨𐤮𐤡𐤷𐤯𐤬𐤨 kśbλtok- [kspʎ̩tok].
Note: a newer transliteration employing p for b, s for ś, š for s, and/or w for v appears in recent publications and the online Dictionary of the Minor Languages of Ancient Anatolia (eDiAna), as well as Melchert's Lydian corpus.
Examples of words
ora [ora] "month"
laqriša [lakʷriʃa] "wall, dromos" or "inscription"
pira [pira] "house, home"
wcbaqẽnt [w̩t͡spaˈkʷãnd] "to trample
|
https://en.wikipedia.org/wiki/BKL%20singularity
|
A Belinski–Khalatnikov–Lifshitz (BKL) singularity is a model of the dynamic evolution of the universe near the initial gravitational singularity, described by an anisotropic, chaotic solution of the Einstein field equations of gravitation. According to this model, the universe chaotically oscillates around a gravitational singularity in which time and spatial distances go to zero or, equivalently, the spacetime curvature becomes infinitely large. This singularity is physically real in the sense that it is a necessary property of the solution, and will also appear in the exact solution of those equations. The singularity is not artificially created by the assumptions and simplifications made by other special solutions such as the Friedmann–Lemaître–Robertson–Walker, quasi-isotropic, and Kasner solutions.
The model is named after its authors Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, then working at the Landau Institute for Theoretical Physics.
The picture developed by BKL has several important elements. These are:
Near the singularity the evolution of the geometry at different spatial points decouples so that the solutions of the partial differential equations can be approximated by solutions of ordinary differential equations with respect to time for appropriately defined spatial scale factors. This is called the BKL conjecture.
For most types of matter the effect of the matter fields on the dynamics of the geometry becomes negligible near the singularity. Or, in the words of John Wheeler, "matter doesn't matter" near a singularity. The original BKL work posed a negligible effect for all matter but later they theorized that "stiff matter" (equation of state p = ε) equivalent to a massless scalar field can have a modifying effect on the dynamics near the singularity.
The ordinary differential equations describing the asymptotics come from a class of spatially homogeneous solutions which constitute the Mixmaster dynamics: a complicated oscillat
|
https://en.wikipedia.org/wiki/Homovanillic%20acid
|
Homovanillic acid (HVA) is a major catecholamine metabolite that is produced by a consecutive action of monoamine oxidase and catechol-O-methyltransferase on dopamine. Homovanillic acid is used as a reagent to detect oxidative enzymes, and is associated with dopamine levels in the brain.
In psychiatry and neuroscience, brain and cerebrospinal fluid levels of HVA are measured as a marker of metabolic stress caused by 2-deoxy-D-glucose. HVA presence supports a diagnosis of neuroblastoma and malignant pheochromocytoma.
Fasting plasma levels of HVA are known to be higher in females than in males. This does not seem to be influenced by adult hormonal changes, as the pattern is retained in the elderly and post-menopausal as well as transgender people according to their genetic sex, both before and during cross-sex hormone administration. Differences in HVA have also been correlated to tobacco usage, with smokers showing significantly lower amounts of plasma HVA.
See also
Homovanillyl alcohol
|
https://en.wikipedia.org/wiki/Uti%20possidetis%20juris
|
Uti possidetis juris or uti possidetis iuris (Latin for "as [you] possess under law") is a principle of international law which provides that newly-formed sovereign states should retain the internal borders that their preceding dependent area had before their independence.
History
Uti possidetis juris is a modified form of uti possidetis, created for the purpose of avoiding terra nullius. The original version of uti possidetis began as a Roman law governing the rightful possession of property. During the medieval period it evolved into a law governing international relations, and more recently (in the 1820s) it was modified for situations related to newly independent states.
Application
Uti possidetis juris has been applied in modern history to such regions as South America, Africa, the Middle East, and the Soviet Union, and numerous other regions where centralized governments were broken up, where imperial rulers were overthrown, or where League of Nations mandates ended, e.g. Palestine and Nauru. It is often applied to prevent foreign intervention by eliminating any contested terra nullius, or no man's land, that foreign powers could claim, or to prevent disputes that could emerge with the possibility of redrawing the borders of new states after their independence.
The principle was also applied by the Badinter Arbitration Committee in opinions related to the disintegration of Yugoslavia, specifically no. 2, on self-determination, and no. 3, on the nature of the boundaries between Croatia and Serbia and between Bosnia and Herzegovina and Serbia.
Argentina and Chile base their territorial claims in Antarctica on the uti possidetis juris principle in the same manner as their now recognized Patagonian claims.
See also
Uti possidetis
|
https://en.wikipedia.org/wiki/9999%20%28number%29
|
9999 is the natural number following 9998 and preceding 10000.
9999 is an auspicious number in Chinese folklore. Many estimations of the rooms contained in the Forbidden City point to 9999. Chinese tomb contracts often involved being buried with 9999 coins, a practice related to Joss paper, as it was believed the dead would need that amount to buy the burial plot from the Earth goddess.
9999 is also the emergency telephone number in Oman.
Mathematics
9999 can be used as a divisor to generate 4-digit decimal recurrences. For example, 1234 / 9999 = 0.123412341234... .
9999 is a Kaprekar number.
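Both of these properties can be checked directly in a few lines (a sketch using Python's decimal module for the exact expansion):

```python
from decimal import Decimal, getcontext

# Dividing by 9999 repeats a 4-digit block.
getcontext().prec = 16
print(Decimal(1234) / Decimal(9999))  # 0.1234123412341234

# Kaprekar property: the square of 9999 splits into two parts
# that sum back to 9999.
square = 9999 ** 2                     # 99980001
left, right = divmod(square, 10_000)   # 9998 and 0001
print(left + right)                    # 9999
```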
Computer and software
9999 was the last possible line number in some older programming languages such as BASIC. Often the line "9999 END" was the first line written for a new program.
Some very old software used "9999" as an end-of-file marker; however, no significant problems occurred on September 9, 1999.
Videogames
The King of Fighters character K9999 has the number in his name, although it is read as "kay-four-nine".
In Final Fantasy and other RPGs, 9999 is often the maximum damage or healing number the game is allowed to calculate.
|
https://en.wikipedia.org/wiki/Peltric%20set
|
Peltric set is a term referring to the combination of a Pelton wheel and an electric generator, and is a useful water-powered turbine for mountainous regions where the head available is generally high but the flow is low. This set can be economically connected in an existing break pressure tank of a drinking water supply line.
|
https://en.wikipedia.org/wiki/Wireless%20ad%20hoc%20network
|
A wireless ad hoc network (WANET) or mobile ad hoc network (MANET) is a decentralized type of wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers or wireless access points. Instead, each node participates in routing by forwarding data for other nodes. The determination of which nodes forward data is made dynamically on the basis of network connectivity and the routing algorithm in use.
Such wireless networks lack the complexities of infrastructure setup and administration, enabling devices to create and join networks "on the fly".
Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. Each must forward traffic unrelated to its own use, and therefore be a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic. This becomes harder as the scale of the MANET increases due to 1) the desire to route packets to/through every other node, 2) the percentage of overhead traffic needed to maintain real-time routing status, 3) each node routing its own goodput independently, unaware of the needs of others, and 4) the requirement that all nodes share limited communication bandwidth, such as a slice of radio spectrum.
Such networks may operate by themselves or may be connected to the larger Internet. They may contain one or multiple and different transceivers between nodes. This results in a highly dynamic, autonomous topology. MANETs usually have a routable networking environment on top of a link layer ad hoc network.
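The dynamic, connectivity-driven routing described above can be illustrated with a toy sketch (the topology and function names are illustrative, not taken from any real MANET protocol): a route exists only through whichever nodes currently hear each other, and every node is willing to forward.

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first route over the current radio connectivity graph."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            route = []
            while node is not None:          # walk predecessors back to src
                route.append(node)
                node = prev[node]
            return route[::-1]
        for neighbor in links.get(node, ()):
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None  # network is currently partitioned: no path exists

# Radio links as they stand right now: A cannot hear D directly,
# so B and C must act as routers.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(find_route(links, "A", "D"))  # ['A', 'B', 'C', 'D']
```

When a node moves out of range, `links` changes and the route must be recomputed, which is the maintenance burden the paragraph above describes.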
History
Packet radio
The earliest wireless data network was called PRNET, the packet radio network, and was sponsored by Defense Advanced Research Projects Agency (DARPA) in the early 1970s. Bolt, Beranek and Newman Inc. (BBN) and SRI International designed, built, and experimented with these earliest systems. Experimenters included Robert Kahn,
|
https://en.wikipedia.org/wiki/Lindenbaum%27s%20lemma
|
In mathematical logic, Lindenbaum's lemma, named after Adolf Lindenbaum, states that any consistent theory of predicate logic can be extended to a complete consistent theory. The lemma is a special case of the ultrafilter lemma for Boolean algebras, applied to the Lindenbaum algebra of a theory.
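The idea of the construction can be sketched in miniature for propositional logic with finitely many atoms, using brute-force satisfiability as the consistency check (a toy illustration only; the lemma itself concerns predicate logic, and all names here are my own):

```python
from itertools import product

def consistent(theory, atoms):
    """True if some truth assignment satisfies every formula in the theory."""
    return any(all(f(dict(zip(atoms, vals))) for f in theory)
               for vals in product([False, True], repeat=len(atoms)))

def lindenbaum(theory, atoms):
    """Extend a consistent theory by deciding every atom, keeping consistency."""
    complete = list(theory)
    for a in atoms:
        candidate = complete + [lambda v, a=a: v[a]]       # try adding "a"
        if consistent(candidate, atoms):
            complete = candidate
        else:                                              # otherwise add "not a"
            complete = complete + [lambda v, a=a: not v[a]]
    return complete

atoms = ["p", "q"]
theory = [lambda v: v["p"] or v["q"]]   # consistent but incomplete
full = lindenbaum(theory, atoms)
print(consistent(full, atoms))  # True: the completed theory is still consistent
```

The completed theory decides every atom: adding the negation of any decided formula now makes it inconsistent, which is what completeness means here.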
Uses
It is used in the proof of Gödel's completeness theorem, among other places.
Extensions
The effective version of the lemma's statement, "every consistent computably enumerable theory can be extended to a complete consistent computably enumerable theory," fails (provided Peano arithmetic is consistent) by Gödel's incompleteness theorem.
History
The lemma was not published by Adolf Lindenbaum himself; it was originally attributed to him by Alfred Tarski.
Notes
|
https://en.wikipedia.org/wiki/Comparison%20theorem
|
In mathematics, comparison theorems are theorems whose statement involves comparisons between various mathematical objects of the same type, and often occur in fields such as calculus, differential equations and Riemannian geometry.
Differential equations
In the theory of differential equations, comparison theorems assert particular properties of solutions of a differential equation (or of a system thereof), provided that an auxiliary equation/inequality (or a system thereof) possesses a certain property.
Chaplygin inequality
Grönwall's inequality, and its various generalizations, provides a comparison principle for the solutions of first-order ordinary differential equations.
Sturm comparison theorem
Aronson and Weinberger used a comparison theorem to characterize solutions to Fisher's equation, a reaction–diffusion equation.
Hille-Wintner comparison theorem
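The flavor of such comparison principles can be illustrated numerically. In this toy sketch (my own example, not from the article), a solution of u' = u − u², which satisfies u' ≤ u, stays below e^t, the solution of the comparison equation u' = u with the same initial value would only grow faster:

```python
import math

def euler(deriv, u0, t_end, steps):
    """Forward-Euler integration of the scalar ODE u' = deriv(u)."""
    u, dt = u0, t_end / steps
    for _ in range(steps):
        u += dt * deriv(u)
    return u

# u' = u - u**2 satisfies u' <= u, so by a Gronwall-type comparison
# its solution from u(0) = 0.5 stays below 0.5 * e^t.
u_final = euler(lambda u: u - u**2, 0.5, 2.0, 10_000)
print(u_final <= 0.5 * math.exp(2.0))  # True
```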
Riemannian geometry
In Riemannian geometry, it is a traditional name for a number of theorems that compare various metrics and provide various estimates in Riemannian geometry.
Rauch comparison theorem relates the sectional curvature of a Riemannian manifold to the rate at which its geodesics spread apart.
Toponogov's theorem
Myers's theorem
Hessian comparison theorem
Laplacian comparison theorem
Morse–Schoenberg comparison theorem
Berger comparison theorem, Rauch–Berger comparison theorem
Berger–Kazdan comparison theorem
Warner comparison theorem for lengths of N-Jacobi fields (N being a submanifold of a complete Riemannian manifold)
Bishop–Gromov inequality, conditional on a lower bound for the Ricci curvatures
Lichnerowicz comparison theorem
Eigenvalue comparison theorem
Cheng's eigenvalue comparison theorem
See also: Comparison triangle
Other
Limit comparison theorem, about convergence of series
Comparison theorem for integrals, about convergence of integrals
Zeeman's comparison theorem, a technical tool from the theory of spectral sequences
|
https://en.wikipedia.org/wiki/Animal%20track
|
An animal track is an imprint left behind in soil, snow, or mud, or on some other ground surface, by an animal walking across it. Animal tracks are used by hunters in tracking their prey and by naturalists to identify animals living in a given area.
Books are commonly used to identify animal tracks, which may look different based on the weight of the particular animal and the type of strata in which they are made.
Tracks can be fossilized over millions of years. It is for this reason we are able to see fossilized dinosaur tracks in some types of rock formations. These types of fossils are called trace fossils since they are a trace of an animal left behind rather than the animal itself. In paleontology, tracks often preserve as sandstone infill, forming a natural mold of the track.
Gallery
See also
Flukeprint, track of whale on ocean surface
Footprint
Pugmark
Spoor (animal)
|
https://en.wikipedia.org/wiki/Transfersome
|
Transfersome is a proprietary drug delivery technology, an artificial vesicle designed to exhibit the characteristics of a cell vesicle suitable for controlled and potentially targeted drug delivery. Some evidence has shown its efficacy for drug delivery without causing skin irritation, with potential use in treating skin cancer. Transfersome is made by the German company IDEA AG.
|
https://en.wikipedia.org/wiki/FDOA
|
Frequency difference of arrival (FDOA) or differential Doppler (DD), is a technique analogous to TDOA for estimating the location of a radio emitter based on observations from other points. (It can also be used for estimating one's own position based on observations of multiple emitters). TDOA and FDOA are sometimes used together to improve location accuracy and the resulting estimates are somewhat independent. By combining TDOA and FDOA measurements, instantaneous geolocation can be performed in two dimensions.
It differs from TDOA in that the FDOA observation points must be in relative motion with respect to each other and the emitter. This relative motion generally results in a different Doppler shift being observed at each location. The relative motion can be achieved by using airborne observation points in aircraft, for example. The emitter location can then be estimated from knowledge of the observation points' locations and vector velocities and the observed relative Doppler shifts between pairs of locations.
A disadvantage of FDOA is that large amounts of data must be moved between observation points or to a central location to do the cross-correlation that is necessary to estimate the Doppler shift.
The accuracy of the location estimate is related to the bandwidth of the emitter's signal, the signal-to-noise ratio at each observation point, and the geometry and vector velocities of the emitter and the observation points.
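A minimal sketch of the underlying observable (all positions, velocities, and frequencies here are illustrative) computes the Doppler shifts two moving receivers would see from a stationary emitter and takes their difference, which is the raw quantity an FDOA system measures:

```python
import math

C = 3.0e8  # speed of light, m/s

def doppler_hz(freq_hz, emitter, rx_pos, rx_vel):
    """Doppler shift observed by a receiver at rx_pos moving at rx_vel (m/s)."""
    # Unit vector from the receiver toward the emitter.
    dx = [e - p for e, p in zip(emitter, rx_pos)]
    rng = math.hypot(*dx)
    unit = [d / rng for d in dx]
    # Closing speed: component of receiver velocity toward the emitter.
    closing = sum(v * u for v, u in zip(rx_vel, unit))
    return freq_hz * closing / C

emitter = (0.0, 0.0)
f0 = 1.0e9  # a 1 GHz emitter

# Receiver A closes on the emitter at 200 m/s; receiver B recedes at 100 m/s.
shift_a = doppler_hz(f0, emitter, (100e3, 0.0), (-200.0, 0.0))
shift_b = doppler_hz(f0, emitter, (0.0, 100e3), (0.0, 100.0))

fdoa = shift_a - shift_b
print(round(fdoa))  # ≈ 1000 Hz
```

In practice the shifts are not computed from known geometry but estimated by cross-correlating the signals captured at each point; this sketch only shows why relative motion is essential: with zero velocities both shifts, and hence the FDOA, vanish.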
See also
Multilateration
Further reading
Ho, K.C.; Chan, Y.T., "Geolocation of a known altitude object from TDOA and FDOA measurements," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 3, pp. 770–783, July 1997.
Digital signal processing
|
https://en.wikipedia.org/wiki/Keyboard%20protector
|
A keyboard protector or keyboard cover is a device which is placed on top of a computer keyboard in order to reduce contact with the environment. Keyboards are susceptible to corrosion damage from liquid spills and to build-up of dust and debris, requiring frequent cleaning and maintenance. The protector serves as a barrier that prevents ingress of these materials.
Composition
A keyboard protector is usually made from plastic, polyurethane or silicone. It is in the form of a flexible sheet, moulded to fit the key profiles and arrangements on the keyboard.
Working principle
A keyboard protector is placed on top of a keyboard, acting as a physical barrier to the environment. When a key is depressed, the protector material deforms with the key, allowing full key travel and tactile feedback. Some models have sides that extend to the underside of the keyboard and are secured there with adhesive tape. When dirty, the protector can be removed and cleaned.
Advantages and disadvantages
Computer users who are unfamiliar with keyboard protectors may take some time to become accustomed to them, since the keystrokes are dampened and the force needed to depress the keys is different. These factors may also affect typing speed and accuracy.
In some applications, such as laptops and luggables, a protector can be a disadvantage: the computer may not close properly with the protector fitted, and the protector can transfer dirt and debris to the display.
Compatibility
Since there are several major types of keyboards on the market, some with different layouts, the compatibility of a keyboard protector matters for full protection. Different keyboards often feature slightly different key spacing or arrangements, and a mismatched protector will fit poorly.
|
https://en.wikipedia.org/wiki/Transportation%20theory%20%28mathematics%29
|
In mathematics and economics, transportation theory or transport theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781.
In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection Transportation Planning Volume I for the National Commissariat of Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space".
Major advances were made in the field during World War II by the Soviet mathematician and economist Leonid Kantorovich. Consequently, the problem as it is stated is sometimes known as the Monge–Kantorovich transportation problem. The linear programming formulation of the transportation problem is also known as the Hitchcock–Koopmans transportation problem.
Motivation
Mines and factories
Suppose that we have a collection of m mines mining iron ore, and a collection of n factories which use the iron ore that the mines produce. Suppose for the sake of argument that these mines and factories form two disjoint subsets M and F of the Euclidean plane R2. Suppose also that we have a cost function c : R2 × R2 → [0, ∞), so that c(x, y) is the cost of transporting one shipment of iron from x to y. For simplicity, we ignore the time taken to do the transporting. We also assume that each mine can supply only one factory (no splitting of shipments) and that each factory requires precisely one shipment to be in operation (factories cannot work at half- or double-capacity). Having made the above assumptions, a transport plan is a bijection T : M → F.
In other words, each mine m ∈ M supplies precisely one target factory T(m) ∈ F and each factory is supplied by precisely one mine.
We wish to find the optimal transport plan, the plan T whose total cost

c(T) := Σ_{m ∈ M} c(m, T(m))

is the least of all possible transport plans from M to F. This motivating sp
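With finitely many mines and factories, the optimal plan can in principle be found by exhaustive search over all bijections. The sketch below does exactly that (feasible only for tiny m = n; the same assignment problem is solved in polynomial time by the Hungarian algorithm or linear programming):

```python
from itertools import permutations
from math import hypot

def optimal_plan(mines, factories, cost):
    """Exhaustively search all bijections T: mines -> factories and
    return the plan with least total cost."""
    best_plan, best_cost = None, float('inf')
    for perm in permutations(factories):
        plan = list(zip(mines, perm))
        total = sum(cost(m, f) for m, f in plan)
        if total < best_cost:
            best_plan, best_cost = plan, total
    return best_plan, best_cost

# Two mines and two factories in the plane, Euclidean distance as c(x, y)
mines = [(0, 0), (0, 2)]
factories = [(1, 0), (1, 2)]
plan, total = optimal_plan(mines, factories,
                           lambda a, b: hypot(a[0] - b[0], a[1] - b[1]))
# Each mine is matched to the factory directly across from it (total cost 2.0);
# the crossed matching would cost 2*sqrt(5).
```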
|
https://en.wikipedia.org/wiki/CyberSource
|
CyberSource is a United States-based payment gateway founded in 1994.
In November 2007, Cybersource acquired the U.S. small business payment services provider Authorize.net for $565 million.
On April 22, 2010, Visa Inc. acquired Cybersource for $2 billion.
See also
List of on-line payment service providers
|
https://en.wikipedia.org/wiki/Germplasm%20Resources%20Information%20Network
|
Germplasm Resources Information Network or GRIN is an online USDA National Genetic Resources Program software project to comprehensively manage the computer database for the holdings of all plant germplasm collected by the National Plant Germplasm System.
GRIN has extended its role to manage information on the germplasm repositories of insect (invertebrate), microbial, and animal species (see sub-projects).
Description
The site is a resource for identifying taxonomic information (scientific names) as well as common names for more than 500,000 accessions (distinct varieties, cultivars etc.) of plants covering 10,000 species, both economically important ones and wild species. It profiles plants that are invasive or noxious weeds, threatened or endangered, giving data on the worldwide distribution of their habitats as well as passport information. GRIN also incorporates an Economic Plants Database.
The network is maintained by GRIN's Database Management Unit (GRIN/DBMU). GRIN is under the oversight of the National Germplasm Resources Laboratory (NGRL) in Beltsville, Maryland, which in 1990 replaced its forerunner, the Germplasm Services Laboratory (GSL), which had formerly run GRIN. Since November 2015, GRIN has been running on GRIN-Global software produced by a collaborative project between the USDA and the Global Crop Diversity Trust.
Sub-projects
A stated mission of GRIN is to support the following projects:
National Plant Germplasm System (NPGS)
National Animal Germplasm Program (NAGP)
National Microbial Germplasm Program (NMGP)
National Invertebrate Germplasm Program (NIGP)
See also
International Plant Names Index
List of electronic Floras (for online flora databases)
Multilingual Multiscript Plant Name Database
Natural Resources Conservation Service
|
https://en.wikipedia.org/wiki/Line%20source
|
A line source, as opposed to a point source, area source, or volume source, is a source of air, noise, water contamination or electromagnetic radiation that emanates from a linear (one-dimensional) geometry. The most prominent linear sources are roadway air pollution, aircraft air emissions, roadway noise, certain types of water pollution sources that emanate over a range of river extent rather than from a discrete point, elongated light tubes, certain dose models in medical physics and electromagnetic antennas. While point sources of pollution had been studied since the late nineteenth century, linear sources did not receive much attention from scientists until the late 1960s, when environmental regulations for highways and airports began to emerge. At the same time, computers with the processing power to accommodate the data processing needs of the computer models required to tackle these one-dimensional sources became more available.
In addition, this era of the 1960s saw the first emergence of environmental scientists who spanned the disciplines required to accomplish these studies. For example, meteorologists, chemists, and computer scientists in the air pollution field were required to build complex models to address roadway air dispersion modeling. Prior to the 1960s, these specialities tended to work within their own disciplines, but with the advent of NEPA, the Clean Air Act, the Noise Control Act in the United States, and other seminal legislation, the era of multidisciplinary environmental science had begun.
For electromagnetic line sources, the principal early advances in computer modeling arose in the Soviet Union and the United States, as the end of World War II and the Cold War were contested partly through progress in electronic warfare, including the technologies of active antenna arrays.
Linear air pollution source
Air pollution levels near major highways and urban arterials are in violation of U.S. National Ambient Air Quality Standards where millions of Amer
|
https://en.wikipedia.org/wiki/Demographic%20dividend
|
Demographic dividend, as defined by the United Nations Population Fund (UNFPA), is "the economic growth potential that can result from shifts in a population’s age structure, mainly when the share of the working-age population (15 to 64) is larger than the non-working-age share of the population (14 and younger, and 65 and older)". In other words, it is “a boost in economic productivity that occurs when there are growing numbers of people in the workforce relative to the number of dependents”. UNFPA stated that “A country with both increasing numbers of young people and declining fertility has the potential to reap a demographic dividend."
Demographic dividend occurs when the proportion of working people in the total population is high because this indicates that more people have the potential to be productive and contribute to growth of the economy.
Due to the dividend between young and old, many argue that there is great potential for economic gains, which has been termed the "demographic gift". In order for economic growth to occur the younger population must have access to quality education, adequate nutrition and health including access to sexual and reproductive health.
However, this drop in fertility rates is not immediate. The lag between the decline in mortality and the later decline in fertility produces a generational population bulge that surges through society. For a period of time this “bulge” is a burden on society and increases the dependency ratio. Eventually this group begins to enter the productive labor force. With fertility rates continuing to fall and older generations having longer life expectancies, the dependency ratio declines dramatically. This demographic shift initiates the demographic dividend. With fewer younger dependents, due to declining fertility and child mortality rates, and fewer older dependents, due to the older generations having shorter life expectancies, and the largest segment of the population of productive working age, the dependency ratio declines dramatically leading to the
|
https://en.wikipedia.org/wiki/Primitive%20recursive%20arithmetic
|
Primitive recursive arithmetic (PRA) is a quantifier-free formalization of the natural numbers. It was first proposed by the Norwegian mathematician Thoralf Skolem, as a formalization of his finitistic conception of the foundations of arithmetic, and it is widely agreed that all reasoning of PRA is finitistic. Many also believe that all of finitism is captured by PRA, but others believe finitism can be extended to forms of recursion beyond primitive recursion, up to ε0, which is the proof-theoretic ordinal of Peano arithmetic. PRA's proof-theoretic ordinal is ω^ω, where ω is the smallest transfinite ordinal. PRA is sometimes called Skolem arithmetic.
The language of PRA can express arithmetic propositions involving natural numbers and any primitive recursive function, including the operations of addition, multiplication, and exponentiation. PRA cannot explicitly quantify over the domain of natural numbers. PRA is often taken as the basic metamathematical formal system for proof theory, in particular for consistency proofs such as Gentzen's consistency proof of first-order arithmetic.
Language and axioms
The language of PRA consists of:
A countably infinite number of variables x, y, z,....
The propositional connectives;
The equality symbol =, the constant symbol 0, and the successor symbol S (meaning add one);
A symbol for each primitive recursive function.
The logical axioms of PRA are the:
Tautologies of the propositional calculus;
Usual axiomatization of equality as an equivalence relation.
The logical rules of PRA are modus ponens and variable substitution.
The non-logical axioms are, firstly:
S(x) ≠ 0;
S(x) = S(y) → x = y;
where x ≠ y always denotes the negation of x = y, so that, for example, S(0) ≠ 0 is a negated proposition.
Further, recursive defining equations for every primitive recursive function may be adopted as axioms as desired. For instance, the most common characterization of the primitive recursive functions is as the 0 constant and successor function closed under projection, composition and primitive
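To illustrate the recursive defining equations, the sketch below mimics the primitive recursion scheme in Python; the helper `primrec` and these particular definitions are an illustrative device outside PRA's formal language:

```python
def primrec(base, step):
    """Build f by primitive recursion:
         f(0, *args)    = base(*args)
         f(S(n), *args) = step(n, f(n, *args), *args)"""
    def f(n, *args):
        acc = base(*args)
        for i in range(n):
            acc = step(i, acc, *args)
        return acc
    return f

# add(0, m) = m;   add(S(n), m) = S(add(n, m))
add = primrec(lambda m: m, lambda i, acc, m: acc + 1)

# mul(0, m) = 0;   mul(S(n), m) = add(m, mul(n, m))
mul = primrec(lambda m: 0, lambda i, acc, m: add(m, acc))
```

Exponentiation arises the same way, with `mul` in the step clause, and so on up through the primitive recursive hierarchy.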
|
https://en.wikipedia.org/wiki/List%20of%20whale%20vocalizations
|
Whale vocalizations are the sounds made by whales to communicate. The word "song" is used in particular to describe the pattern of regular and predictable sounds made by some species of whales (notably the humpback) in a way that is reminiscent of human singing.
Humans produce sound by expelling air through the larynx. The vocal cords within the larynx open and close as necessary to separate the stream of air into discrete pockets of air. These pockets are shaped by the throat, tongue, and lips into the desired sound.
Cetacean sound production differs markedly from this mechanism. The precise mechanism differs in the two major suborders of cetaceans: the Odontoceti (toothed whales—including dolphins) and the Mysticeti (baleen whales—including the largest whales, such as the blue whale).
Blue whale (Balaenoptera musculus)
Estimates made by Cummings and Thompson (1971) and Richardson et al. (1995) suggest that the source levels of sounds made by blue whales are between 155 and 188 decibels, referenced to one micropascal at one metre. All blue whale groups make calls at a fundamental frequency of between 10 and 40 Hz, while the lowest-frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds. Additionally, blue whales off the coast of Sri Lanka have been recorded repeatedly making four-note "songs" lasting about two minutes each, reminiscent of the well-known humpback whale songs.
All of the baleen whale sound files on this page (with the exception of the humpback vocalizations) are reproduced at 10x speed to bring the sound into the human auditory band.
Vocalizations produced by the Eastern North Pacific population have been well studied. This population produces long-duration, low frequency pulses ("A") and tonal calls ("B"), upswept tones that precede type B calls ("C"), moderate-duration downswept tones ("D"), and variable amplitude-modulated and frequency-modulated sounds. A and B calls are often produce
|
https://en.wikipedia.org/wiki/Cordance
|
Cordance, a measure of brain activity, is a quantitative electroencephalographic (QEEG) method, developed in Los Angeles in the 1990s.
It combines complementary information from absolute (the amount of power in a frequency band at a given electrode) and relative power (the percentage of power contained in a frequency band relative to the total spectrum) of EEG spectra.
Cordance is a measure of regional brain activity, computed using QEEG measures of brain wave patterns in an algorithm developed at the UCLA Laboratory of Brain, Behavior, and Pharmacology by Drs. Andrew Leuchter and Ian Cook.
The cordance algorithm includes steps of (a) reattribution of EEG power, (b) spatial normalization of absolute and relative power, and (c) combination of the transformed absolute and transformed relative power to yield the cordance values themselves.
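As a purely structural illustration of steps (b) and (c) — not the published UCLA algorithm, whose reattribution and combination rules are more involved — one could combine spatially normalized absolute and relative power like this:

```python
import numpy as np

def cordance_sketch(band_power):
    """band_power: (electrodes x bands) array of absolute EEG power.
    (b) z-score absolute and relative power across electrodes;
    (c) sum the two normalized measures.
    The reattribution step (a) and the true combination rule of the
    published algorithm are omitted/simplified here."""
    absolute = band_power
    relative = band_power / band_power.sum(axis=1, keepdims=True)
    z_abs = (absolute - absolute.mean(axis=0)) / absolute.std(axis=0)
    z_rel = (relative - relative.mean(axis=0)) / relative.std(axis=0)
    return z_abs + z_rel

# Four electrodes, three frequency bands (made-up numbers)
band_power = np.array([[1.0, 2.0, 4.0],
                       [3.0, 1.0, 2.0],
                       [2.0, 5.0, 1.0],
                       [4.0, 3.0, 3.0]])
combined = cordance_sketch(band_power)   # one value per electrode and band
```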
In comparison with other QEEG measures, such as absolute power or relative power, cordance appears to have a superior correlation with regional brain perfusion, one of the other standard measures of regional brain activity. Because cordance is derived from EEG signals, assessments of brain function with cordance do not require the use of radioactive tracer molecules, as is the case with some other functional neuroimaging methods (PET or SPECT scanning).
Cordance has been applied to studying brain activity in a variety of neurological and psychiatric disorders. A major area of study has been major depressive disorder, in efforts to develop biomarkers that could help guide treatment. This line of work was begun in the mid 1990s at UCLA and is now the subject of replication studies at other medical centers.
|
https://en.wikipedia.org/wiki/List%20of%20bulldog%20mascots
|
This is a list of organizations that use the bulldog as a mascot.
Because of its tenacity, the bulldog is a symbol of the United Kingdom and is a popular mascot for professional sports teams, universities, secondary schools, military institutions, and other organizations, including the following:
Sports teams
This section includes professional and semi-professional teams, as well as amateur teams not affiliated with an educational institution. School teams are listed in the sections for Universities and Secondary schools below.
Australia
Canterbury-Bankstown Bulldogs (NRL)
Capalaba Bulldogs (BPL)
Central District Football Club (SANFL)
South Fremantle Football Club (WAFL)
Western Bulldogs (AFL)
Canada
Alberni Valley Bulldogs, British Columbia Hockey League
Antigonish Bulldogs, Maritime Junior A Hockey League
Halton Hills Bulldogs, OLA Junior B Lacrosse League
Hamilton Bulldogs, American Hockey League 1996–2015
Hamilton Bulldogs, Ontario Hockey League
Kincardine Bulldogs, Western Junior C Hockey League
Quebec Bulldogs, one-time professional ice hockey team
Tri-City Bulldogs, Canadian Junior Football League 1994–2004
United Kingdom
Barnsley F.C., Football League Championship (mascot name: Toby Tyke)
Batley Bulldogs, Rugby League Championships
Birmingham City F.C., Football League Championship (mascot name: Beau Brummie)
Great Britain men's national Australian rules football team
United States
Boston Bulldogs, American Football League 1926
Boston Bulldogs, National Football League 1929
Boston Bulldogs, A-League and USL Pro Soccer League 1999–2001
Canton Bulldogs, Ohio League and National Football League 1905–1926
Cleveland Bulldogs, National Football League 1924–1927
Dayton Bulldogs, National Indoor Football League
Denver Bulldogs, United States Australian Football League
Flint Bulldogs, Colonial Hockey League 1991–93
Los Angeles Bulldogs, 2nd American Football League and Pacific Coast Professional Football League 1936–1948
New York Bull
|
https://en.wikipedia.org/wiki/Island%20growth
|
Island growth is a physical model of deposited film growth and chemical vapor deposition.
Introduction
When atoms are deposited slowly onto a flat surface, the first one undergoes a random walk on that surface. Eventually a second atom is deposited; in all likelihood it will eventually meet the first atom. Once the two atoms meet they may bond to form a particle with a higher mass and a lower random walk velocity. Because the bonded particles are now more stable and less mobile than before, they are called an "island." Subsequent atoms deposited on the substrate eventually meet and bond with the island, further increasing its size and stability. Eventually the island can grow to fill the entire substrate with a single large grain.
The faster the atoms are deposited, the greater amount of atoms on the substrate before any large stable islands form. As these atoms meet, they will bond to their local neighbors before having the chance to migrate to a distant island. In this way a large number of separate islands are formed and can grow independently. Eventually the separate islands will grow to become separate grains in the final film.
The island growth model is used to explain how fast deposition techniques (such as sputter deposition) can produce films with many randomly oriented grains, whereas slow deposition techniques (such as MBE) tend to produce larger grains with more uniform structure.
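The qualitative picture — slower deposition gives each adatom more time to find an existing island before new islands nucleate — can be caricatured with a one-dimensional toy simulation; the lattice size, step counts, and sticking rule below are invented for illustration:

```python
import random

def deposit_film(n_atoms, lattice, walk_steps):
    """Deposit atoms one by one on a 1-D ring of `lattice` sites.
    Each atom random-walks up to `walk_steps` steps, sticking as soon
    as it is on or next to an occupied site.  Large `walk_steps`
    models slow deposition (long migration time between arrivals)."""
    occupied = set()
    for _ in range(n_atoms):
        pos = random.randrange(lattice)
        for _ in range(walk_steps):
            if {pos - 1, pos, pos + 1} & occupied:
                break                      # joined an existing island
            pos = (pos + random.choice((-1, 1))) % lattice
        occupied.add(pos)
    return occupied

def count_islands(occupied):
    """Contiguous runs of occupied sites (ring wrap ignored)."""
    return sum(1 for p in occupied if p - 1 not in occupied)

random.seed(1)
fast = count_islands(deposit_film(60, 400, walk_steps=5))      # fast deposition
slow = count_islands(deposit_film(60, 400, walk_steps=5000))   # slow deposition
# Typically `slow` ends up with fewer, larger islands than `fast`.
```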
See also
Stranski–Krastanov growth
|
https://en.wikipedia.org/wiki/Wang%20B-machine
|
As presented by Hao Wang (1954, 1957), his basic machine B is an extremely simple computational model equivalent to the Turing machine. It is "the first formulation of a Turing-machine theory in terms of computer-like models" (Minsky, 1967: 200). With only 4 sequential instructions it is very similar to, but even simpler than, the 7 sequential instructions of the Post–Turing machine. In the same paper, Wang introduced a variety of equivalent machines, including what he called the W-machine, which is the B-machine with an "erase" instruction added to the instruction set.
Description
As defined by Wang (1954) the B-machine has at its command only 4 instructions:
(1) → : Move tape-scanning head one tape square to the right (or move tape one square left), then continue to next instruction in numerical sequence;
(2) ← : Move tape-scanning head one tape square to the left (or move tape one square right), then continue to next instruction in numerical sequence;
(3) * : In scanned tape-square print mark * then go to next instruction in numerical sequence;
(4) Cn: Conditional "transfer" (jump, branch) to instruction "n": If scanned tape-square is marked then go to instruction "n" else (if scanned square is blank) continue to next instruction in numerical sequence.
A sample of a simple B-machine instruction is his example (p. 65):
1. *, 2. →, 3. C2, 4. →, 5. ←
He rewrites this as a collection of ordered pairs:
{ ( 1, * ), ( 2, → ), ( 3, C2 ), ( 4, → ), ( 5, ← ) }
Wang's W-machine is simply the B-machine with the one additional instruction
(5) E : In scanned tape-square erase the mark * (if there is one) then go to next instruction in numerical sequence.
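An interpreter for the B-machine (with the W-machine's erase added) takes only a few lines. The encoding below — 'R', 'L', '*', 'E', and ('C', n) — is an ad-hoc choice for this sketch; it runs Wang's sample program from above:

```python
def run(program, max_steps=1000):
    """Interpret a Wang B/W-machine program (instructions numbered from 1).
    Returns the set of marked squares and the final head position."""
    marks, head, pc = set(), 0, 1
    while 1 <= pc <= len(program) and max_steps > 0:
        max_steps -= 1
        ins = program[pc - 1]
        if ins == 'R':                  # move head one square right
            head += 1
        elif ins == 'L':                # move head one square left
            head -= 1
        elif ins == '*':                # print a mark
            marks.add(head)
        elif ins == 'E':                # W-machine erase
            marks.discard(head)
        elif ins[0] == 'C':             # conditional jump if square marked
            if head in marks:
                pc = ins[1]
                continue
        pc += 1
    return marks, head

# Wang's example: 1. *, 2. right, 3. C2, 4. right, 5. left
marks, head = run(['*', 'R', ('C', 2), 'R', 'L'])
# Halts with a single mark at square 0 and the head on square 1.
```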
See also
Codd's cellular automaton
Counter-machine model
|
https://en.wikipedia.org/wiki/Survival%20rate
|
Survival rate is a part of survival analysis. It is the proportion of people in a study or treatment group still alive at a given period of time after diagnosis. It is a method of describing prognosis in certain disease conditions, and can be used for the assessment of standards of therapy. The survival period is usually reckoned from date of diagnosis or start of treatment. Survival rates are based on the population as a whole and cannot be applied directly to an individual. There are various types of survival rates (discussed below). They often serve as endpoints of clinical trials and should not be confused with mortality rates, a population metric.
Overall survival
Patients with a certain disease (for example, colorectal cancer) can die directly from that disease or from an unrelated cause (for example, a car accident). When the precise cause of death is not specified, this is called the overall survival rate or observed survival rate. Doctors often use mean overall survival rates to estimate the patient's prognosis. This is often expressed over standard time periods, like one, five, and ten years. For example, prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis.
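As a minimal numeric illustration (with invented data), the overall survival rate at a given horizon is simply the proportion of the group still alive at that time; real analyses additionally handle censoring, e.g. with Kaplan–Meier estimation:

```python
def overall_survival(times, horizon):
    """Proportion of patients alive `horizon` years after diagnosis.
    `times` holds years from diagnosis to death; None means the
    patient was still alive at the end of follow-up."""
    alive = sum(1 for t in times if t is None or t > horizon)
    return alive / len(times)

# Eight hypothetical patients; four survive past the five-year mark
times = [1.0, 2.5, 4.0, 6.0, None, None, 8.0, 0.5]
rate = overall_survival(times, horizon=5)   # 0.5, i.e. 50% five-year survival
```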
Sometimes the overall survival is reported as a death rate (%) without specifying the period the % applies to (possibly one year) or the period it is averaged over (possibly five years), e.g. Obinutuzumab: A Novel Anti-CD20 Monoclonal Antibody for Chronic Lymphocytic Leukemia.
Net survival rate
When someone is interested in how survival is affected by the disease, there is also the net survival rate, which filters out the effect of mortality from other causes than the disease. The two main ways to calculate net survival are relative survival and cause-specific survival or disease-specific survival.
Relative survival has the advantage that it does not depend on accuracy of the reported cause of death; cause specific survival has the
|
https://en.wikipedia.org/wiki/Olfactory%20glands
|
Olfactory glands, also known as Bowman's glands, are a type of nasal gland situated in the part of the olfactory mucosa beneath the olfactory epithelium: the lamina propria, a connective tissue that also contains fibroblasts, blood vessels, and bundles of fine axons from the olfactory neurons.
An olfactory gland consists of an acinus in the lamina propria and a secretory duct going out through the olfactory epithelium.
Electron microscopy studies show that olfactory glands contain cells with large secretory vesicles. Olfactory glands secrete the gel-forming mucin protein MUC5B. They might secrete proteins such as lactoferrin, lysozyme, amylase and IgA, similarly to serous glands. The exact composition of the secretions from olfactory glands is unclear, but there is evidence that they produce odorant-binding protein.
Function
The olfactory glands are tubuloalveolar glands surrounded by olfactory receptors and sustentacular cells in the olfactory epithelium. These glands produce mucus to lubricate the olfactory epithelium and dissolve odorant-containing gases. Several odorant-binding proteins produced by the olfactory glands help transport odorants to the olfactory receptors. These cells express the mRNA for transforming growth factor α, which stimulates the production of new olfactory receptor cells.
See also
William Bowman
List of distinct cell types in the adult human body
|
https://en.wikipedia.org/wiki/MRC%20%28file%20format%29
|
MRC is a file format that has become an industry standard in cryo-electron microscopy (cryoEM) and electron tomography (ET), where the result of the technique is a three-dimensional grid of voxels each with a value corresponding to electron density or electric potential. It was developed by the MRC (Medical Research Council, UK) Laboratory of Molecular Biology. In 2014, the format was standardised. The format specification is available on the CCP-EM website.
The MRC format is supported by many of the software packages listed in b:Software Tools For Molecular Microscopy.
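The standardised MRC2014 header is 1024 bytes, beginning with the grid dimensions and the data mode as 32-bit integers. A minimal parsing sketch (assuming little-endian data for simplicity; real readers check the machine-stamp field, and the voxel data may be offset further by an extended header):

```python
import os
import struct
import tempfile

def read_mrc_header(path):
    """Return a few fixed fields from an MRC file header:
    words 1-3 are the grid size (nx, ny, nz), word 4 the data mode
    (e.g. mode 2 = 32-bit floating point)."""
    with open(path, 'rb') as f:
        header = f.read(1024)
    nx, ny, nz, mode = struct.unpack('<4i', header[:16])
    return {'nx': nx, 'ny': ny, 'nz': nz, 'mode': mode}

# Round-trip a synthetic header (not a real micrograph)
raw = struct.pack('<4i', 64, 64, 40, 2) + b'\x00' * (1024 - 16)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(raw)
fields = read_mrc_header(f.name)
os.unlink(f.name)
```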
See also
CCP4 (file format)
|
https://en.wikipedia.org/wiki/Supraclavicular%20fossa
|
The supraclavicular fossa is an indentation (fossa) immediately above the clavicle.
In Terminologia Anatomica, it is divided into the fossa supraclavicularis major and the fossa supraclavicularis minor.
Fullness in the supraclavicular fossa can be a sign of upper extremity deep venous thrombosis.
Additional Images
|
https://en.wikipedia.org/wiki/Infraclavicular%20fossa
|
The infraclavicular fossa is an indentation, or fossa, immediately below the clavicle, above the third rib, between the deltoid muscle laterally and the midclavicular line medially.
See also
Supraclavicular fossa
|
https://en.wikipedia.org/wiki/Frog%20%28models%29
|
Frog was a well-known British brand of flying model aircraft and scale model construction kits from the 1930s to the 1970s. The company's first model, an Interceptor Mk. 4, was launched in 1932, followed in 1936 by a range of 1:72 scale model aircraft kits made from cellulose acetate, which were the world's first.
Polystyrene models were introduced in 1955, which offered kits of aircraft, ships and cars in various scales. By the 1970s, Frog's catalogue included a large number of lesser-known aircraft types, manufactured only by the company, as well as a number of ship kits.
The last Frog-branded kits were produced in 1976, whereupon many of the Frog moulds were sold to the Soviet Union and marketed under the Novo name.
History
Founded in 1931 by Charles Wilmot and Joe Mansour, International Model Aircraft Ltd. (IMA) originally used the Frog brand name (said to stand for "Flies Right Off the Ground") on the Interceptor Mk.4 semi-scale rubber-band powered flying model, launched the following year. Also in 1932, a marketing partnership with the toy company Lines Bros Ltd. was formed and other Frog brand flying models followed. In 1936, a range of 1:72 scale aircraft models in kit or pre-built form, moulded in cellulose acetate, was launched under the Frog Penguin name (alluding to the non-flying nature of these models). These were the world's first plastic model construction kits. An early release was the No.21P Empire Flying Boat, issued in 1938.
During the Second World War, the company produced flying models for target purposes and 1:72 scale aircraft recognition models. The Penguin range was dropped in 1949 but a new range of Frog polystyrene kits was introduced in 1955. A wide variety of aircraft, ship and car subjects in various scales were issued during the 1950s and 60s, 1:72 scale being standardised from 1963 onwards for aircraft models.
Production of scale and non-scale flying models continued into the early 1960s.
Frog's 1:72 line-up by the 1970s inclu
|
https://en.wikipedia.org/wiki/Ente%20Scambi%20Coloniali%20Internazionali
|
Ente Scambi Coloniali Internazionali (), mostly known for its acronym ESCI, was an Italian scale model kit manufacturer based in Lombardy. Established in 1930, the company produced model cars and model aircraft.
In 1987, it merged with the American manufacturer Ertl to become "ESCI-ERTL SpA", remaining in the business until it was liquidated in 1993.
History
ESCI was originally founded in the 1930s by Moses Agiman (b. Benghazi 1896), an Italian merchant of Libyan and Jewish descent. Initially, the company dealt in import/export between Italy and its African colonies. The advent of racial laws during World War II forced the owner to move to Switzerland with his family until the end of the conflict. Returning to Italy, he resumed his previous business activities and expanded, thanks to the economic boom of the 1960s. In the mid-1960s, Agiman entered the scale model market with the first imports of kits from Japan. The business expanded, and at the end of the decade two new partners, Dino Coppola and Franco Baldrighi, joined the founder's son, Daniel Agiman, who had succeeded his father. E.S.C.I. became "ESCI Modellistica snc.", located in the industrial area of Via Torino in Cernusco sul Naviglio, Milan.
Compared to other modelling companies at the time, ESCI was primarily a sales office, able to make contacts and cultivate global business. For production, the company relied on third-party companies and technicians, commissioning craftsmen, moulds, and sometimes production and packaging. The company's output was diverse, sourced both from talented producers in the industry such as Italaerei (today Italeri), Otaki, and LS, and from local artisans. Thus, production ranged from moulds of high-level kits to less detailed moulds with raised panel lines.
Real production began in 1972, with the launch of additional decal sheets which allowed modellers to finish the kits in different liveries for the first time. Each decal sheet was packaged in a plasti
|
https://en.wikipedia.org/wiki/Jo-Han
|
Jo-Han was a manufacturer of plastic scale promotional model cars and kits originally based in Detroit. The company was founded in 1947 by tool and die maker John Hanley a year before West Gallogly's competing company AMT was formed and about the same time as PMC. After changing ownership a few times, Jo-Han models were sporadically manufactured by Okey Spaulding in Covington, Kentucky, but apparently none have been offered for several years.
History
Originally called Ideal Models, Hanley's first products were mid-1950s model aircraft and other promotional items. Some of the early projects included scale model kitchen sets and a training model of Chrysler's fluid drive transmission. This awarded Hanley a contract to produce models for Chrysler.
During the 1950s, the U.S. automakers were commissioning models of their cars from suppliers that included AMT and Jo-Han. Automobile sales people realized that, as one slogan of the time put it, "the little ones sell the big ones". The promise of a free toy car for the kids would entice families into showrooms to view the latest car designs and take them for test drives.
Contracts with General Motors soon followed, and Jo-Han produced Pontiac models for the 1955 model year. Over time, Jo-Han became known more for Chrysler models, though Oldsmobile, Cadillac, Studebaker (often Larks), and American Motors were also well represented making Jo-Han a strong competitor to AMT and later to MPC. Oldsmobile and Cadillac models appeared through the 1960s and 1970s, including the 1962 Oldsmobile compact Cutlass F-85. Their last promotional model made was the 1979 Cadillac Coupe de Ville.
Eventually the company name was changed to Jo-Han Models because of the already existing Ideal Toy Company. The new name reflected the first two letters of the founder's first name and the first three letters of his last name. Similar to how AMT simultaneously used the SMP brand name, Jo-Han's 1955 Pontiac Star Chief two door and four door sedan p
|
https://en.wikipedia.org/wiki/List%20of%20model%20aircraft%20manufacturers
|
The following companies manufacture, or have manufactured, model aircraft.
Flying
Ready-to-fly
A ready-to-fly model has the airframe assembled but may or may not have an engine or other equipment fitted.
Almost Ready-to-fly
Guillow
Kits
A flying model kit requires a good deal of work to construct the airframe before it can fly.
Black Horse Models (Vietnam)
Diels Engineering (USA)
Easy Built Models (USA)
FMK Model kits (UK & Bulgaria)
Frog (UK)
Flite Test (USA)
Keil-Kraft (UK)
Kipera Craft (Japan)
Kyosho (Japan)
DiWings Aeromodelismo (ARG)
Mercury (UK)
Minamikawachi Aero Models (Japan) - ex-Lattle Snake
OK Model (Japan)
Peck-Polymers (USA)
Seagull Models (Vietnam)
SIG (USA)
Studio Mid (Japan)
The Miniature Aircraft Factory (UK & Bulgaria)
The World Models (Hong Kong, China)
Tiger Seisakusyo (Japan)
Tough Jets (USA)
Tubame Gangu (Japan)
Veron (UK)
Roban (China)
Yamada (Japan)
Static
To scale
Static scale models are used for collection or display.
Aero le Plane (Hong Kong SAR)
AeroClassics (USA)
Atlantic-Models, Inc. (USA)
Calibre Wings (Singapore)
Corgi Toys (UK)
Blue Box
Dragon Wings (China)
Easy Model (China)
Factory Direct Models (USA)
Fiberworks International (Philippines)
Flight Miniatures (USA)
Shopping Zone Plus (Canada)
GeminiJets (USA)
Herpa Wings (Germany)
Hobby Master (China)
Hogan Wings (UK)
IDT Jets (USA)
Inflight200
JC Wings (Hong Kong SAR)
Konishi Model (Japan)
Long Prosper (China)
Lupa Aircraft Models (Netherlands)
Matchbox (UK)
Mastercraft Collection
PacMin (Pacific Miniatures) (USA)
Modelworks Direct (USA)
ModelBuffs (Philippines)
NG Model (China)
Phoenix Model (Taiwan)
Pinfei Model Aircraft (China)
Postage Stamp (USA)
Showcase Models
Skymarks (UK)
Socatec Aircraft Models
Squadron Nostalgia LLC (USA)
Squadron Toys (U.S.A. Officially Licensed by the U.S. Navy)
Toys and Models Corporation
Wing Factory (Japan)
Wooster
Velocity Models
YourCraftsman/BigBird
Not to scale
These models are not scale replicas of any full-size design.
T
|
https://en.wikipedia.org/wiki/Dennis%20DeTurck
|
Dennis M. DeTurck (born July 15, 1954) is an American mathematician known for his work in partial differential equations and Riemannian geometry, in particular contributions to the theory of the Ricci flow and the prescribed Ricci curvature problem. He first used the DeTurck trick to give an alternative proof of the short time existence of the Ricci flow, which has found other uses since then.
Education
DeTurck received a B.S. (1976) from Drexel University. He received an M.A. (1978) and Ph.D. (1980) in mathematics from the University of Pennsylvania. His Ph.D. supervisor was Jerry Kazdan.
Career
DeTurck is currently Robert A. Fox Leadership Professor and Professor of Mathematics at the University of Pennsylvania, where he has been the Dean of the College of Arts and Sciences since 2005 and Faculty Director of Riepe College House. In 2002, DeTurck won the Haimo Award from the Mathematical Association of America for his teaching. Despite being recognized for excellence in teaching, he has been criticized for his belief that fractions are "as obsolete as Roman numerals" and suggesting that they not be taught to younger students.
In January 2012, he shared the Chauvenet Prize with three mathematical collaborators. In 2012, he became a fellow of the American Mathematical Society.
Selected publications
(explains the DeTurck trick; also see the improved version)
|
https://en.wikipedia.org/wiki/Animal%20migration%20tracking
|
Animal migration tracking is used in wildlife biology, conservation biology, ecology, and wildlife management to study animals' behavior in the wild. One of the first techniques was bird banding: placing passive ID tags on birds' legs to identify individual birds in a future catch-and-release. Radio tracking involves attaching a small radio transmitter to the animal and following the signal with an RDF receiver. Sophisticated modern techniques use satellites to track tagged animals, and GPS tags which keep a log of the animal's location. With the emergence of the Internet of Things (IoT), it has become possible to build tracking devices tailored to a particular species or study. One of the many goals of animal migration research has been to determine where the animals are going; however, researchers also want to know why they are going "there". Researchers look not only at the animals' migration but also at what lies between the migration endpoints, to determine whether a species is moving to new locations in response to food density, a change in water temperature, or some other stimulus, and to assess the animal's ability to adapt to these changes. Migration tracking is a vital tool in efforts to control the impact of human civilization on populations of wild animals, and to prevent or mitigate the ongoing extinction of endangered species.
Technologies
In the fall of 1803, the American naturalist John James Audubon wondered whether migrating birds returned to the same place each year, so he tied a string around the leg of a bird before it flew south. The following spring, Audubon saw that the bird had indeed come back.
Scientists today still attach tags, such as metal bands, to track movement of animals. Metal bands require the re-capture of animals for the scientists to gather data; the data is thus limited to the animal's release and destination points.
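Logged GPS fixes are typically post-processed into movement statistics such as track length. A minimal sketch of that computation using the haversine great-circle formula; the coordinates below are hypothetical, not from any real tag:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two GPS fixes, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical GPS-tag log: (latitude, longitude) fixes along a migration route.
fixes = [(52.5, 13.4), (50.1, 8.7), (48.1, 11.6), (41.9, 12.5)]

# Total track length: sum the leg distances between consecutive fixes.
path_km = sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))
print(f"track length: {path_km:.0f} km")
```

Real studies refine this with fix-quality filtering and timestamps, but the leg-summing structure is the same.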
Recent technologies have helped solve this problem. Some electronic tags give off repeating signals that are picked up by radio devices or satellites while other electronic tags could inc
|
https://en.wikipedia.org/wiki/Genetic%20assimilation
|
Genetic assimilation is a process described by Conrad H. Waddington by which a phenotype originally produced in response to an environmental condition, such as exposure to a teratogen, later becomes genetically encoded via artificial selection or natural selection. Despite superficial appearances, this does not require the (Lamarckian) inheritance of acquired characters, although epigenetic inheritance could potentially influence the result. Waddington stated that genetic assimilation overcomes the barrier to selection imposed by what he called canalization of developmental pathways; he supposed that the organism's genetics evolved to ensure that development proceeded in a certain way regardless of normal environmental variations.
The classic example of genetic assimilation was a pair of experiments in 1942 and 1953 by Waddington. He exposed Drosophila fruit fly embryos to ether, producing an extreme change in their phenotype: they developed a double thorax, resembling the effect of the bithorax gene. This is called a homeotic change. Flies which developed halteres (the modified hindwings of true flies, used for balance) with wing-like characteristics were chosen for breeding for 20 generations, by which point the phenotype could be seen without other treatment.
Waddington's explanation has been controversial, and has been accused of being Lamarckian. More recent evidence appears to confirm the existence of genetic assimilation in evolution; in yeast, when a stop codon is lost by mutation, the reading frame is preserved much more often than would be expected.
History
Waddington's experiments
Conrad H. Waddington's classic experiment (1942) induced an extreme environmental reaction in the developing embryos of Drosophila. In response to ether vapor, a proportion of embryos developed a radical phenotypic change, a second thorax. At this point in the experiment bithorax is not innate; it is induced by an unusual environment. Waddington then repeatedly selected Dr
|
https://en.wikipedia.org/wiki/Billiard-ball%20computer
|
A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
Simulating circuits with billiard balls
This model can be used to simulate Boolean circuits in which the wires of the circuit correspond to paths on which one of the balls may travel, the signal on a wire is encoded by the presence or absence of a ball on that path, and the gates of the circuit are simulated by collisions of balls at points where their paths cross. In particular, it is possible to set up the paths of the balls and the buffers around them to form a reversible Toffoli gate, from which any other Boolean logic gate may be simulated. Therefore, suitably configured billiard-ball computers may be used to perform any computational task.
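The logical behavior that the collisions realize can be checked in software. Below is a minimal truth-table sketch of the reversible Toffoli gate mentioned above (purely logical, not a physical simulation of ball trajectories), showing both its reversibility and how ordinary gates fall out of it:

```python
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple:
    """Reversible Toffoli (controlled-controlled-NOT) gate:
    flips c exactly when both controls a and b are 1."""
    return a, b, c ^ (a & b)

# Reversibility: the gate is its own inverse, so applying it twice
# restores every possible input triple.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

# Universality in miniature: fixing inputs yields AND and NOT.
AND = lambda a, b: toffoli(a, b, 0)[2]   # c = 0  ->  c' = a AND b
NOT = lambda c: toffoli(1, 1, c)[2]      # a = b = 1  ->  c' = NOT c
assert [AND(a, b) for a, b in product((0, 1), repeat=2)] == [0, 0, 0, 1]
assert (NOT(0), NOT(1)) == (1, 0)
```

In the billiard-ball realization, each bit of the triple corresponds to the presence or absence of a ball on a path, and the XOR-with-AND update is produced by arranging collisions and buffers.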
Simulating billiard balls in other models of computation
It is possible to simulate billiard-ball computers on several types of reversible cellular automaton, including block cellular automata and second-order cellular automata. In these simulations, the balls are only allowed to move at a constant speed in an axis-parallel direction, assumptions that were in any case already present in the use of the billiard-ball model to simulate logic circuits. In these cellular automaton simulations, both the balls and the buffers are represented by certain patterns of live cells, while the field across which the balls move is represented by regions of dead cells.
Logic gates based on billiard-ball computer designs have also been made to operate using live soldier crabs of the species Mictyris guinotae in place of the billiard balls.
|
https://en.wikipedia.org/wiki/Alternatives%20to%20general%20relativity
|
Alternatives to general relativity are physical theories that attempt to describe the phenomenon of gravitation in competition with Einstein's theory of general relativity. There have been many different attempts at constructing an ideal theory of gravity.
These attempts can be split into four broad categories based on their scope. In this article, straightforward alternatives to general relativity are discussed, which do not involve quantum mechanics or force unification. Other theories which do attempt to construct a theory using the principles of quantum mechanics are known as theories of quantized gravity. Thirdly, there are theories which attempt to explain gravity and other forces at the same time; these are known as classical unified field theories. Finally, the most ambitious theories attempt to both put gravity in quantum mechanical terms and unify forces; these are called theories of everything.
None of these alternatives to general relativity have gained wide acceptance. General relativity has withstood many tests, remaining consistent with all observations so far. In contrast, many of the early alternatives have been definitively disproven. However, some of the alternative theories of gravity are supported by a minority of physicists, and the topic remains the subject of intense study in theoretical physics.
History of gravitational theory through general relativity
At the time it was published in the 17th century, Isaac Newton's theory of gravity was the most accurate theory of gravity. Since then, a number of alternatives were proposed. The theories which predate the formulation of general relativity in 1915 are discussed in history of gravitational theory.
General relativity
This theory is what we now call "general relativity" (included here for comparison). Discarding the Minkowski metric entirely, Einstein arrives at the field equations
R_{μν} − (1/2) R g_{μν} = (8πG/c⁴) T_{μν},
which can also be written in the trace-reversed form
R_{μν} = (8πG/c⁴) (T_{μν} − (1/2) T g_{μν}).
Five days before Einstein presented the last equation above, Hilbert had submitted a paper containing a
|
https://en.wikipedia.org/wiki/Acid%20mantle
|
The acid mantle is a very fine, slightly acidic film on the surface of human skin that acts as a barrier to bacteria, viruses and other potential contaminants that might penetrate the skin. Sebum secreted by the sebaceous glands, when mixed with sweat, forms the acid mantle. Unlike the acid mantle on the skin's surface, the viable epidermis (i.e., the layers below the stratum corneum) has a near-neutral pH of around 7.0. The general assumption is that skin surface pH averages between 5.0 and 6.0, though the pH of the acid mantle spans a broad range that depends on the condition of the skin, and other estimates place the slightly acidic range between 4.5 and 6.5. More recent research has challenged these proposed ranges. When healthy human skin has had no contact with skin products or water for extended periods, it has been found to return naturally to acidity levels below 5.0. A value of 4.7 was considered the natural average and ideal, and some subjects within the standard deviation reached values as low as 4.3. Study subjects with a skin pH below 5.0 showed statistically significantly less scaling, higher hydration levels, and better resident flora presence than subjects with skin pH above 5.0, leading to the conclusion that skin with a natural pH below 5.0 is in better condition than skin at a naturally higher pH.
The acidic surface pH is an important determinant for the growth conditions of resident microflora (i.e. normally found on the skin). Human skin has a mutualistic symbiotic relationship with its microflora. The skin provides the right environmental condition for the resident flora and the resident flora in turn strengthen skin’s defence by prevention of the colonization of harmful bacteria as well as playing a role in the acidification of the skin. Using skincare products to alter the skin pH down to 4.0-4.5 kept the resident bacterial flora attached to the skin while using alkaline personal care products on the skin promote
|
https://en.wikipedia.org/wiki/Rejuvenation%20Research
|
Rejuvenation Research is a bimonthly peer-reviewed scientific journal published by Mary Ann Liebert that covers research on rejuvenation and biogerontology. The journal was established in 1998. The current acting editor-in-chief is Ben Zealley. It is the official journal of the European Society of Preventive, Regenerative and Anti-Aging Medicine as well as PYRAMED: World Federation and World Institute of Preventive & Regenerative Medicine.
The journal exhibited unusual levels of self-citation, and its 2019 journal impact factor was suspended from Journal Citation Reports in 2020, a sanction which hit 33 journals in total. The journal's 2020 impact factor was, however, made available in June 2021.
History
The journal was established in 1998 as the Journal of Anti-Aging Medicine with Michael Fossel (Michigan State University) as editor-in-chief. It obtained its current title in 2004, when Aubrey de Grey took over as editor-in-chief. The current acting editor-in-chief is Ben Zealley.
SENS conferences
The journal publishes the abstracts of the biennial conferences of the SENS Research Foundation.
Abstracting and indexing
Rejuvenation Research is abstracted and indexed in:
MEDLINE
Current Contents/Clinical Medicine
Science Citation Index Expanded
EMBASE/Excerpta Medica
Scopus
CAB Abstracts
See also
Strategies for engineered negligible senescence
Timeline of senescence research
|
https://en.wikipedia.org/wiki/Akshay%20Venkatesh
|
Akshay Venkatesh (born 21 November 1981) is an Australian mathematician and a professor (since 15 August 2018) at the School of Mathematics at the Institute for Advanced Study. His research interests are in the fields of counting, equidistribution problems in automorphic forms and number theory, in particular representation theory, locally symmetric spaces, ergodic theory, and algebraic topology.
He was the first Australian to have won medals at both the International Physics Olympiad and the International Mathematical Olympiad, which he did at the ages of 11 and 12 respectively.
In 2018, he was awarded the Fields Medal for his synthesis of analytic number theory, homogeneous dynamics, topology, and representation theory. He is the second Australian and the second person of Indian descent to win the Fields Medal. He was on the Mathematical Sciences jury for the Infosys Prize in 2020.
Early years
Akshay Venkatesh was born in Delhi, India, and his family emigrated to Perth in Western Australia when he was two years old. He attended Scotch College. His mother, Svetha, is a computer science professor at Deakin University. A child prodigy, Akshay attended extracurricular training classes for gifted students in the state mathematical olympiad program, and in 1993, whilst aged only 11, he competed at the 24th International Physics Olympiad in Williamsburg, Virginia, winning a bronze medal. The following year, he switched his attention to mathematics and, after placing second in the Australian Mathematical Olympiad, he won a silver medal in the 6th Asian Pacific Mathematics Olympiad, before winning a bronze medal at the 1994 International Mathematical Olympiad held in Hong Kong. He completed his secondary education the same year, turning 13 before entering the University of Western Australia as its youngest ever student. Venkatesh completed the four-year course in three years and became, at 16, the youngest person to earn First Class Honours in pure mathematics from the university. He was aw
|
https://en.wikipedia.org/wiki/Cat%20intelligence
|
Cat intelligence is the capacity of the domesticated cat to solve problems and adapt to its environment. Research has shown that feline intelligence includes the ability to acquire new behavior that applies knowledge to new situations, communicating needs and desires within a social group and responding to training cues.
The brain
Brain size
The brain of the domesticated cat accounts for about 0.91% of its total body mass, compared to 2.33% of total body mass in the average human. On the encephalization quotient scale proposed by Jerison in 1973, values above one are classified as big-brained, while values lower than one are small-brained. The domestic cat is attributed a value between 1 and 1.71 (for comparison, human values range between 7.44 and 7.8).
The largest brains in the family Felidae are those of the tigers in Java and Bali. It is debated whether there exists a causal relationship between brain size and intelligence in vertebrates. Most experiments involving the relevance of brain size to intelligence hinge on the assumption that complex behavior requires a complex (and therefore intelligent) brain; however, this connection has not been consistently demonstrated.
A cat's cerebellum accounts for approximately 0.17% of its total body weight.
Brain structures
According to researchers at Tufts University School of Veterinary Medicine, the physical structure of the brains of humans and cats is very similar. The human brain and the cat brain both have cerebral cortices with similar lobes.
The brain of the cat is reported to contain 203 million cortical neurons. Area 17, the primary visual cortex, was found to contain about 51,400 neurons per mm³.
Feline brains are gyrencephalic, i.e. they have a surface folding as human brains do.
Analyse
|
https://en.wikipedia.org/wiki/Flexible%20display
|
A flexible display or rollable display is an electronic visual display which is flexible in nature, as opposed to the traditional flat screen displays used in most electronic devices. In recent years there has been a growing interest from numerous consumer electronics manufacturers to apply this display technology in e-readers, mobile phones and other consumer electronics. Such screens can be rolled up like a scroll without the image or text being distorted. Technologies involved in building a rollable display include electronic ink, Gyricon, Organic LCD, and OLED.
Electronic paper displays which can be rolled up have been developed by E Ink. At CES 2006, Philips showed a rollable display prototype with a screen capable of retaining an image for several months without electricity. In 2007, Philips launched a 5-inch, 320 × 240-pixel rollable display based on E Ink's electrophoretic technology. Some flexible organic light-emitting diode displays have also been demonstrated. The first commercially sold flexible display was an electronic paper wristwatch. A rollable display is an important part of the development of the roll-away computer.
Applications
With flat panel displays having been in wide use for more than 40 years, many changes have been sought in display technology, focusing on developing a lighter, thinner product that is easier to carry and store. Through the development of rollable displays in recent years, scientists and engineers agree that flexible flat panel display technology has huge market potential in the future.
Rollable displays can be used in many places:
Mobile devices.
Laptops and PDAs.
A permanently conformed display that securely fits around the wrists.
A child's mask for Halloween and other uses.
An odd-shaped display integrated in a steering wheel or automobile.
History
Flexible electronic paper based displays
Flexible electronic paper (e-paper) based displays were the first flexible displays conceptualized and prototy
|
https://en.wikipedia.org/wiki/Shallow%20trench%20isolation
|
Shallow trench isolation (STI), also known as box isolation technique, is an integrated circuit feature which prevents electric current leakage between adjacent semiconductor device components. STI is generally used on CMOS process technology nodes of 250 nanometers and smaller. Older CMOS technologies and non-MOS technologies commonly use isolation based on LOCOS.
STI is created early during the semiconductor device fabrication process, before transistors are formed. The key steps of the STI process involve etching a pattern of trenches in the silicon, depositing one or more dielectric materials (such as silicon dioxide) to fill the trenches, and removing the excess dielectric using a technique such as chemical-mechanical planarization.
Certain semiconductor fabrication technologies also include deep trench isolation, a related feature often found in analog integrated circuits.
The effect of the trench edge has given rise to what has recently been termed the "reverse narrow channel effect" or "inverse narrow width effect". Basically, due to the electric field enhancement at the edge, it is easier to form a conducting channel (by inversion) at a lower voltage. The threshold voltage is effectively reduced for a narrower transistor width. The main concern for electronic devices is the resulting subthreshold leakage current, which is substantially larger after the threshold voltage reduction.
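The leakage consequence of this threshold-voltage reduction can be illustrated with the textbook exponential subthreshold model, I ∝ exp((V_GS − V_th)/(n·kT/q)). A first-order sketch; the saturation current I0 and ideality factor n below are hypothetical illustration values, not data for any specific process:

```python
import math

def subthreshold_current(vgs, vth, i0=1e-7, n=1.5, vt=0.0259):
    """Textbook subthreshold model: I = I0 * exp((Vgs - Vth) / (n * kT/q)).
    i0 (amps) and n are hypothetical device parameters; vt is the
    thermal voltage kT/q at room temperature (~25.9 mV)."""
    return i0 * math.exp((vgs - vth) / (n * vt))

# Suppose the trench-edge field enhancement lowers the effective
# threshold of a narrow transistor from 0.45 V to 0.40 V.  The
# off-state (Vgs = 0) leakage then grows by exp(0.05 / (n * kT/q)),
# roughly 3.6x for n = 1.5 at room temperature.
ratio = subthreshold_current(0.0, 0.40) / subthreshold_current(0.0, 0.45)
print(f"off-state leakage increase: {ratio:.1f}x")
```

The point of the sketch is the exponential sensitivity: even a few tens of millivolts of threshold reduction at the trench edge multiplies subthreshold leakage severalfold.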
Process flow
Stack deposition (oxide + protective nitride)
Lithography print
Dry etch (Reactive-ion etching)
Trench fill with oxide
Chemical-mechanical polishing of the oxide
Removal of the protective nitride
Adjusting the oxide height to Si
See also
FEOL
|
https://en.wikipedia.org/wiki/NLRP3
|
NLR family pyrin domain containing 3 (NLRP3) (previously known as NACHT, LRR and PYD domains-containing protein 3 [NALP3] and cryopyrin), is a protein that in humans is encoded by the NLRP3 gene located on the long arm of chromosome 1.
NLRP3 is expressed predominantly in macrophages and, as a component of the inflammasome, detects products of damaged cells such as extracellular ATP and crystalline uric acid. Activated NLRP3 in turn triggers an immune response. Mutations in the NLRP3 gene are associated with a number of organ-specific autoimmune diseases.
Nomenclature
NACHT, LRR, and PYD are respectively acronyms for:
NACHT – NAIP (neuronal apoptosis inhibitor protein), C2TA (class 2 transcription activator of the MHC), HET-E (heterokaryon incompatibility) and TP1 (telomerase-associated protein 1)
LRR – "leucine-rich repeat"; synonymous with NLR, for "nucleotide-binding domain, leucine-rich repeat"
PYD – "PYRIN domain," after the pyrin proteins
The NLRP3 gene name abbreviates "NLR family, pyrin domain containing 3," where NLR refers to "nucleotide-binding domain, leucine-rich repeat."
The NACHT, LRR and PYD domains-containing protein 3 is also called:
cold induced autoinflammatory syndrome 1 (CIAS1),
caterpiller-like receptor 1.1 (CLR1.1), and
PYRIN-containing APAF1-like protein 1 (PYPAF1).
Structure
This gene encodes a pyrin-like protein which contains a pyrin domain, a nucleotide-binding site (NBS) domain, and a leucine-rich repeat (LRR) motif. This protein interacts with pyrin domain (PYD) of apoptosis-associated speck-like protein containing a CARD (ASC). Proteins which contain the caspase recruitment domain, CARD, have been shown to be involved in inflammation and immune response.
Function
NLRP3 is a component of the innate immune system that functions as a pattern recognition receptor (PRR) that recognizes pathogen-associated molecular patterns (PAMPs). NLRP3 belongs to the NOD-like receptor (NLR) subfamily of PRRs and NLRP3 together w
|
https://en.wikipedia.org/wiki/Dragon%27s%20Eye%20%28symbol%29
|
According to Rudolf Koch, the Dragon's Eye is an ancient Germanic symbol. The Dragon's Eye is an isosceles or equilateral triangle pointing downward, with a "Y" in the middle connecting the three points of the triangle together. According to Carl G. Liungman's Dictionary of Symbols, it combines the triangle meaning "threat" and the "Y" meaning a choice between good and evil.
The dragon's eye resembles a two-dimensional projection of a tetrahedron viewed from directly above one of its vertices. Such a 2-D representation has been part of the logo of the Citgo Petroleum Company ever since 1965, when it was spun off from Cities Service Company.
The shape has been incorporated in the logo for the video game Ingress since its original public release in 2013. According to the in-universe mythology, the triangle represents humanity, while the hexagon represents the "Shapers".
The Dragon's Eye is also found in the Destiny series of video games, associated with the character Ikora Rey.
Gallery
See also
Junkers, the logo for which resembles the dragon's eye
|
https://en.wikipedia.org/wiki/Quasinorm
|
In linear algebra, functional analysis and related areas of mathematics, a quasinorm is similar to a norm in that it satisfies the norm axioms, except that the triangle inequality is replaced by
‖x + y‖ ≤ K(‖x‖ + ‖y‖)
for some K > 1.
Definition
A quasi-seminorm on a vector space X is a real-valued map p on X that satisfies the following conditions:
Non-negativity: p(x) ≥ 0;
Absolute homogeneity: p(sx) = |s| p(x) for all x ∈ X and all scalars s;
there exists a real K ≥ 1 such that p(x + y) ≤ K(p(x) + p(y)) for all x, y ∈ X.
If K = 1 then this inequality reduces to the triangle inequality. It is in this sense that this condition generalizes the usual triangle inequality.
A quasinorm is a quasi-seminorm that also satisfies:
Positive definiteness: if x satisfies p(x) = 0, then x = 0.
A pair (X, p) consisting of a vector space X and an associated quasi-seminorm p is called a quasi-seminormed vector space.
If the quasi-seminorm p is a quasinorm then the pair is also called a quasinormed vector space.
Multiplier
The infimum of all values of K that satisfy condition (3) is called the multiplier of p.
The multiplier itself also satisfies condition (3), and so it is the unique smallest real number that satisfies this condition.
The term k-quasi-seminorm is sometimes used to describe a quasi-seminorm whose multiplier is equal to k.
A norm (respectively, a seminorm) is just a quasinorm (respectively, a quasi-seminorm) whose multiplier is 1.
Thus every seminorm is a quasi-seminorm and every norm is a quasinorm (and a quasi-seminorm).
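A standard example of a quasinorm that is not a norm (stated here as a sketch; the example is not drawn from this article) is the ℓ^p "norm" for exponents below 1:

```latex
% For 0 < p < 1, on the sequence space \ell^p define
\|x\|_p = \Big( \sum_{k} |x_k|^p \Big)^{1/p}.
% The triangle inequality fails, but one still has
\|x + y\|_p \le 2^{1/p - 1} \big( \|x\|_p + \|y\|_p \big),
% so \|\cdot\|_p is a quasinorm with constant K = 2^{1/p-1} > 1.
```

The inequality follows from the elementary bounds ‖x + y‖_p^p ≤ ‖x‖_p^p + ‖y‖_p^p and (a + b)^{1/p} ≤ 2^{1/p−1}(a^{1/p} + b^{1/p}) for a, b ≥ 0.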
Topology
If p is a quasinorm on X then p induces a vector topology on X whose neighborhood basis at the origin is given by the sets
{x ∈ X : p(x) < 1/n}
as n ranges over the positive integers.
A topological vector space with such a topology is called a quasinormed topological vector space or just a quasinormed space.
Every quasinormed topological vector space is pseudometrizable.
A complete quasinormed space is called a quasi-Banach space. Every Banach space is a quasi-Banach space, although not conversely.
Related definitions
A quasinormed space (A, ‖·‖) is called a quasinormed algebra if the vector space A is an algebra and there is a constant K > 0 such that
‖xy‖ ≤ K ‖x‖ ‖y‖
for all x, y ∈ A.
A complete quasinormed algebra is called a quasi-Banach algebra.
Characterizations
A topological vector space (TVS) is a quasinormed space if and only if it has a bounded neighborhood of the origin.
|
https://en.wikipedia.org/wiki/Dorn%20method
|
The Dorn method is a form of manual, holistic alternative therapy used to correct misalignments in the spinal column and other joints.
During a treatment, the practitioner palpates the patient's spine. If any 'unbalanced' areas are found, possible underlying misalignments are palpated with gentle pressure using the thumb or hand against the spinous processes, while the patient enacts guided movements such as swinging the leg or arms to distract the muscles' inertia; this is similar to the principle of mechanics known as 'counter pressure'. In case of pain, the patient is advised to stop the procedure in order to avoid any damage to the body.
|
https://en.wikipedia.org/wiki/Electrologica%20X8
|
The Electrologica X8 (or EL X8) was a digital computer designed as a successor to the Electrologica X1 and manufactured in the Netherlands by Electrologica NV between 1964 and 1968.
Like its predecessor, the X1, the X8 system included core memory, 27-bit word length, and drum memory as secondary storage (not as primary storage). The memory address was increased from 15 to 18 bits, for a theoretical maximum memory size of 256k words. The X8 included an independent peripheral processor called CHARON (Centraal Hulporgaan Autonome Regeling Overdracht Nevenapparatuur, or Central Coprocessor Autonomous Regulation Transfer Peripherals) which handled I/O. Other features included up to 48 input/output channels designed for low speed devices such as paper tape, plotters and printers. Unlike the X1, the arithmetic unit of the X8 included floating point arithmetic, with a 41-bit mantissa and 12-bit exponent (which adds up to 53 bits rather than 54; the reason is that there are two copies of the mantissa sign bit).
The system is most notable as the target processor for Edsger Dijkstra's implementation of the THE multiprogramming system. This includes the invention of semaphores, enabled by a specific instruction in the X8 instruction set. Semaphores were used not only as a synchronization mechanism within the THE operating system, but also in the request and response data structures for I/O requests processed by the CHARON coprocessor.
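The P (acquire) and V (release) semaphore operations that the THE system pioneered survive essentially unchanged in modern threading libraries. A minimal Python sketch of the idea, a producer/consumer buffer coordinated by two counting semaphores (this illustrates the concept only; it is not X8 or THE code):

```python
import threading

items = []                        # shared bounded buffer (capacity 3)
slots = threading.Semaphore(3)    # counts free buffer slots
filled = threading.Semaphore(0)   # counts filled buffer slots
lock = threading.Lock()           # protects the list itself
consumed = []

def producer():
    for i in range(5):
        slots.acquire()           # P(slots): wait for a free slot
        with lock:
            items.append(i)
        filled.release()          # V(filled): signal one filled slot

def consumer():
    for _ in range(5):
        filled.acquire()          # P(filled): wait for data
        with lock:
            consumed.append(items.pop(0))
        slots.release()           # V(slots): return the slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The same request/response pattern, with semaphores counting free and filled entries, is how the article describes I/O requests being handed to the CHARON coprocessor.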
|
https://en.wikipedia.org/wiki/Rio%20de%20Janeiro%20Botanical%20Garden
|
The Rio de Janeiro Botanical Garden or Jardim Botânico is located at the Jardim Botânico district in the South Zone of Rio de Janeiro.
The Botanical Garden shows the diversity of Brazilian and foreign flora. There are around 6,500 species (some endangered) distributed throughout the grounds, as well as numerous greenhouses. The garden also houses monuments of historical, artistic, and archaeological significance. There is an important research center, which includes the most complete library in the country specializing in botany, with over 32,000 volumes.
It was founded in 1808 by King John VI of Portugal. Originally intended for the acclimatization of spices like nutmeg, pepper and cinnamon imported from the West Indies, the garden was opened to the public in 1822, and is now open during daylight hours every day except 25 December and 1 January.
The park lies at the foot of the Corcovado Mountain, far below the right arm of the statue of Christ the Redeemer and contains more than 6,000 different species of tropical and subtropical plants and trees, including 900 varieties of palm trees. A line of 134 palms forms the Avenue of Royal Palms leading from the entrance into the gardens. These palms all descended from a single tree, the Palma Mater, long since destroyed by lightning. Only about 40% of the park is cultivated, the remainder being Atlantic Forest rising up the slopes of Corcovado. The park is protected by the Patrimônio Histórico e Artístico Nacional and was designated as a biosphere reserve by UNESCO in 1992.
The Botanical Garden has an important research institute, which develops a wide range of botanical studies in Brazil. The institute has taxonomists who specialize in the identification and conservation of the neotropical flora.
The gardens house collections that include bromeliads, orchids, carnivorous plants, and cacti. These include Brazil’s largest botanical library and collections of dried fruits, rare Brazilian plants, and many photographs. T
|
https://en.wikipedia.org/wiki/Gromov%27s%20compactness%20theorem%20%28geometry%29
|
In the mathematical field of metric geometry, Mikhael Gromov proved a fundamental compactness theorem for sequences of metric spaces. In the special case of Riemannian manifolds, the key assumption of his compactness theorem is automatically satisfied under an assumption on Ricci curvature. These theorems have been widely used in the fields of geometric group theory and Riemannian geometry.
Metric compactness theorem
The Gromov–Hausdorff distance defines a notion of distance between any two metric spaces, thereby setting up the concept of a sequence of metric spaces which converges to another metric space. This is known as Gromov–Hausdorff convergence. Gromov found a condition on a sequence of compact metric spaces which ensures that a subsequence converges to some metric space relative to the Gromov–Hausdorff distance:
Let (X_n) be a sequence of compact metric spaces with uniformly bounded diameter. Suppose that for every positive number ε there is a natural number N(ε) such that, for every n, the set X_n can be covered by N(ε) metric balls of radius ε. Then the sequence has a subsequence which converges relative to the Gromov–Hausdorff distance.
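In symbols, the covering hypothesis is uniform total boundedness of the sequence; writing the spaces as X_1, X_2, … (notation assumed here, since the article's inline formulas were lost in extraction):

```latex
\forall \varepsilon > 0 \;\; \exists N(\varepsilon) \in \mathbb{N} \;\; \forall n : \quad
X_n \text{ can be covered by } N(\varepsilon) \text{ balls of radius } \varepsilon .
```

Together with the uniform diameter bound, this is exactly the precompactness criterion relative to the Gromov–Hausdorff distance.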
The role of this theorem in the theory of Gromov–Hausdorff convergence may be considered as analogous to the role of the Arzelà–Ascoli theorem in the theory of uniform convergence. Gromov first formally introduced it in his 1981 resolution of the Milnor–Wolf conjecture in the field of geometric group theory, where he applied it to define the asymptotic cone of certain metric spaces. These techniques were later extended by Gromov and others, using the theory of ultrafilters.
Riemannian compactness theorem
Specializing to the setting of geodesically complete Riemannian manifolds with a fixed lower bound on the Ricci curvature, the crucial covering condition in Gromov's metric compactness theorem is automatically satisfied as a corollary of the Bishop–Gromov volume comparison theorem. As such, it follows that:
Consider a sequence of closed Riemann
|
https://en.wikipedia.org/wiki/Gromov%27s%20compactness%20theorem%20%28topology%29
|
In the mathematical field of symplectic topology, Gromov's compactness theorem states that a sequence of pseudoholomorphic curves in an almost complex manifold with a uniform energy bound must have a subsequence which limits to a pseudoholomorphic curve which may have nodes or (a finite tree of) "bubbles". A bubble is a holomorphic sphere which has a transverse intersection with the rest of the curve. This theorem, and its generalizations to punctured pseudoholomorphic curves, underlies the compactness results for flow lines in Floer homology and symplectic field theory.
If the complex structures on the curves in the sequence do not vary, only bubbles can occur; nodes can occur only if the complex structures on the domain are allowed to vary. Usually, the energy bound is achieved by considering a symplectic manifold with compatible almost-complex structure as the target, and assuming that the curves lie in a fixed homology class in the target. This is because the energy of such a pseudoholomorphic curve is given by the integral of the target symplectic form over the curve, and thus by evaluating the cohomology class of that symplectic form on the homology class of the curve. The finiteness of the bubble tree follows from (positive) lower bounds on the energy contributed by a holomorphic sphere.
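The energy identity invoked above can be written out explicitly. For a pseudoholomorphic curve u : Σ → M in a symplectic manifold (M, ω) with compatible almost complex structure (notation assumed, since the article's inline formulas were lost):

```latex
E(u) \;=\; \int_\Sigma u^*\omega \;=\; \bigl\langle [\omega],\, u_*[\Sigma] \bigr\rangle ,
```

so the energy depends only on the cohomology class of ω and the homology class u_*[Σ] of the curve, which is what makes a fixed homology class yield a uniform energy bound.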
|
https://en.wikipedia.org/wiki/Design%20computing
|
The terms design computing and other relevant terms including design and computation and computational design refer to the study and practice of design activities through the application and development of novel ideas and techniques in computing. One of the early groups to coin this term was the Key Centre of Design Computing and Cognition at the University of Sydney in Australia, which for nearly fifty years (late 1960s to today) pioneered the research, teaching, and consulting of design and computational technologies. This group organised the academic conference series "Artificial Intelligence in Design (AID)" published by Springer during that period. AID was later renamed "Design Computing and Cognition (DCC)" and is currently a leading biennial conference in the field. Other notable groups in this area are the Design and Computation group at Massachusetts Institute of Technology's School of Architecture + Planning and the Computational Design group at Georgia Tech.
Whilst these terms share in general an interest in computational technologies and design activity, there are important differences in the various approaches, theories, and applications. For example, while in some circles the term "computational design" refers in general to the creation of new computational tools and methods in the context of computational thinking, design computing is concerned with bridging these two fields in order to build an increased understanding of design.
The Bachelor of Design Computing (BDesComp) was created in 2003 at the University of Sydney and continues to be a leading programme in interaction design and creative technologies, now hosted by the Design Lab. In that context, design computing is defined to be the use and development of computational models of design processes and digital media to assist and/or automate various aspects of the design process with the goal of producing higher quality and new design forms.
Areas
In recent years a number of research and educa
|
https://en.wikipedia.org/wiki/Built%20to%20Rule
|
Built to Rule is a building blocks toyline from Hasbro that is compatible with such leading brands as Lego. These sets were released from 2003 to 2005. Sets are usually based upon existing toys and characters from the Hasbro brand, such as Tonka, G.I. Joe and Transformers: Armada.
G.I. Joe
Built To Rule was marketed as "Action Building Sets". All sets came with one set of building blocks that could be built into a full-sized vehicle, and one specially designed 3¾-inch G.I. Joe figure. The forearms and the calves of the figures sport places where blocks could be attached.
2003
The 2003 Built To Rule followed the G.I. Joe: Spy Troops story line.
Armadillo Assault with Duke
Depth Ray with Wet Suit
Forest Fox with Frostbite
Locust with Hollow Point
Raging Typhoon with Blowtorch
Rock Crusher with Gung-Ho
Cobra Moccasin with Cobra Moray
Cobra Raven with Wild Weasel
2004
Some of the figures in 2004 featured additional articulation with a mid-thigh cut joint.
Ground Striker with Flint
Patriot Grizzly with Hi-Tech
Rapid Runner with Chief Torpedo
Rising Tide with Barrel Roll
Sledgehammer with Heavy Duty
Headquarters Attack with Snake Eyes - Includes Cobra Firebat with A.V.A.C.
Cobra H.I.S.S. with Cobra Commander
Cobra Night Prowler with Shadow Viper
Cobra Sand Snake with Firefly
Cobra Venom Striker with Firefly
2005
Freedom Defense Outpost with Duke
Transformers
Transformers Built to Rule toys were predicted by some groups to be big sellers in 2003. A small number of Transformers: Energon Built to Rule sets had a limited, test market release, but the entire line performed poorly, so it was dropped in its entirety in 2004.
There are some significant differences between the Armada and Energon sets, though both are based on the same basic premise. Each Transformers kit is centered on a "Trans-Skeleton", a very simple humanoid body that folds up for vehicle mode without dis-assembly. From there, extra parts are added to the Trans-Skeleton for either mode. For A
|
https://en.wikipedia.org/wiki/Comanche%20%28horse%29
|
Comanche was a mixed-breed horse who survived George Armstrong Custer's detachment of the United States 7th Cavalry at the Battle of the Little Bighorn (June 25, 1876).
Biography
The horse was bought by the U.S. Army in 1868 in St. Louis, Missouri and sent to Fort Leavenworth, Kansas. His ancestry and date of birth were both uncertain. Captain Myles Keogh of the 7th Cavalry liked the gelding and bought him for his personal mount, to be ridden only in battle. He has alternatively been described as bay or bay dun. In 1868, while the army was fighting the Comanche in Kansas, the horse was wounded in the hindquarters by an arrow but continued to carry Keogh in the fight. He named the horse “Comanche” to honor his bravery. Comanche was wounded many more times but always exhibited the same toughness.
On June 25, 1876, Captain Keogh rode Comanche at the Battle of the Little Bighorn, led by Lt. Col. George Armstrong Custer. The battle was notable as their entire detachment was killed. US soldiers found Comanche, badly wounded, two days after the battle. After being transported to Fort Lincoln, he was slowly nursed back to health. After a lengthy convalescence, Comanche was retired. In April 1878, Colonel Samuel D. Sturgis issued the following order:
The ceremonial order inspired a reporter for the Bismarck Tribune to go to Fort Abraham Lincoln to interview Comanche. He "asked the usual questions which his subject acknowledged with a toss of his head, a stamp of his foot and a flourish of his beautiful tail."
His official keeper, the farrier John Rivers of Company I, Keogh's old troop, saved "Comanche's reputation" by answering more fully. Here is the gist of what the reporter learned (Bismarck Tribune, May 10, 1878): Comanche was a veteran, 21 years old, and had been with the 7th Cavalry since its Organization in '66.... He was found by Sergeant [Milton J.] DeLacey [Co. I] in a ravine where he had crawled, there to die and feed the Crows. He was raised up an
|
https://en.wikipedia.org/wiki/Agro-terrorism
|
Agroterrorism, also known as agriterrorism and agricultural terrorism, is a malicious attempt to disrupt or destroy the agricultural industry and/or food supply system of a population through "the malicious use of plant or animal pathogens to cause devastating disease in the agricultural sectors". It is closely related to the concepts of biological warfare, chemical warfare and entomological warfare, except carried out by non-state parties.
A hostile attack, towards an agricultural environment, including infrastructures and processes, in order to significantly damage national or international political interests.
Nomenclature
The term agroterrorism, along with agroterror and agrosecurity, was coined by veterinary pathologist Corrie Brown and writer Esmond Choueke in September 1999 as a means to spread the importance of this topic. The first public use of agroterrorism was in a report by Dr. Brown which was then reprinted in a front-page article of The New York Times on September 22, 1999, by reporter Judith Miller. Dr. Brown's article in 2000 for Emerging Diseases of Animals (American Society for Microbiology) made these words a permanent fixture, and they soon ended up in everyday use. The Oxford Dictionary now recognizes the word agroterrorism and its derivatives. An initial debate between Dr. Brown and Mr. Choueke involved the spellings agriterror vs. agroterror. The spelling with the "o" won, as it was closest to bioterrorism and thus would be easier to remember.
Theory
Clemson University's Regulatory and Public Service Program listed "diseases vectored by insects" among the bioterrorism scenarios considered "most likely". Because invasive species are already a problem worldwide, one University of Nebraska entomologist considered it likely that the source of any sudden appearance of a new agricultural pest would be difficult, if not impossible, to determine. Lockwood considers insects a more effective means of transmitting biological agents for acts of bio
|
https://en.wikipedia.org/wiki/Fluorescence%20anisotropy
|
Fluorescence anisotropy or fluorescence polarization is the phenomenon where the light emitted by a fluorophore has unequal intensities along different axes of polarization. Early pioneers in the field include Aleksander Jablonski, Gregorio Weber, and Andreas Albrecht. The principles of fluorescence polarization and some applications of the method are presented in Lakowicz's book.
Definition of fluorescence anisotropy
The anisotropy (r) of a light source is defined as the ratio of the polarized component to the total intensity (I_T):
r = (I_z − I_y) / (I_x + I_y + I_z)
When the excitation is polarized along the z-axis, emission from the fluorophore is symmetric around the z-axis (Figure). Hence statistically we have I_x = I_y. As I_T = I_x + I_y + I_z, and writing I_z = I_∥ and I_x = I_y = I_⊥, we have
r = (I_∥ − I_⊥) / (I_∥ + 2I_⊥).
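With the parallel and perpendicular intensity components, the anisotropy is r = (I∥ − I⊥)/(I∥ + 2I⊥). A small numeric check (the function name is my own):

```python
def anisotropy(i_par: float, i_perp: float) -> float:
    """Fluorescence anisotropy r = (I_par - I_perp) / (I_par + 2*I_perp).

    The denominator is the total intensity for z-polarized excitation,
    since the two components perpendicular to z are statistically equal.
    """
    return (i_par - i_perp) / (i_par + 2 * i_perp)

# Fully parallel emission (a fixed fluorophore aligned with the excitation):
print(anisotropy(1.0, 0.0))   # -> 1.0

# The classic one-photon photoselection limit, I_par / I_perp = 3:
print(anisotropy(3.0, 1.0))   # -> 0.4
```

The second case reproduces the well-known fundamental anisotropy of 0.4 for randomly oriented, immobile fluorophores with parallel absorption and emission dipoles.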
Principle – Brownian motion and photoselection
In fluorescence, a molecule absorbs a photon and gets excited to a higher energy state. After a short delay (the average represented as the fluorescence lifetime ), it comes down to a lower state by losing some of the energy as heat and emitting the rest of the energy as another photon. The excitation and de-excitation involve the redistribution of electrons about the molecule. Hence, excitation by a photon can occur only if the electric field of the light is oriented in a particular axis about the molecule. Also, the emitted photon will have a specific polarization with respect to the molecule.
The first concept to understand for anisotropy measurements is the concept of Brownian motion. Although water at room temperature in a glass may look very still to the eye, on the molecular level each water molecule has kinetic energy and thus there are many collisions between water molecules in any amount of time. A nanoparticle (yellow dot in the figure) suspended in solution will undergo a random walk due to the summation of these underlying collisions. The rotational correlation time (Φr), the time it takes for the molecule to rotate 1 radian, depends on the viscosity (η), temperature (T), Boltzmann constant
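The sentence above is cut off, but the relation it introduces is the standard Stokes–Einstein–Debye formula, Φr = ηV/(k_B·T), where V is the hydrodynamic volume of the rotating molecule. A rough numeric sketch (the parameter values are illustrative assumptions, not measurements):

```python
import math

# Order-of-magnitude Stokes-Einstein-Debye estimate: phi_r = eta * V / (k_B * T).
k_B = 1.380649e-23        # Boltzmann constant, J/K
eta = 1.0e-3              # viscosity of water near 20 degrees C, Pa*s
T = 293.15                # temperature, K

r = 2.0e-9                            # assumed hydrodynamic radius of a small protein, m
V = (4.0 / 3.0) * math.pi * r**3      # hydrated molecular volume, m^3

phi_r = eta * V / (k_B * T)           # rotational correlation time, s
print(f"{phi_r * 1e9:.1f} ns")        # -> 8.3 ns
```

A few nanoseconds is comparable to typical fluorescence lifetimes, which is exactly why rotational diffusion depolarizes the emission and makes anisotropy a useful probe of molecular size and binding.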
|
https://en.wikipedia.org/wiki/Laccase
|
Laccases () are multicopper oxidases found in plants, fungi, and bacteria. Laccases oxidize a variety of phenolic substrates, performing one-electron oxidations, leading to crosslinking. For example, laccases play a role in the formation of lignin by promoting the oxidative coupling of monolignols, a family of naturally occurring phenols. Other laccases, such as those produced by the fungus Pleurotus ostreatus, play a role in the degradation of lignin, and can therefore be classed as lignin-modifying enzymes. Other laccases produced by fungi can facilitate the biosynthesis of melanin pigments. Laccases catalyze ring cleavage of aromatic compounds.
Laccase was first studied by Hikorokuro Yoshida in 1883 and then by Gabriel Bertrand in 1894 in the sap of the Japanese lacquer tree, where it helps to form lacquer, hence the name laccase.
Active site
The active site consists of four copper centers, which adopt structures classified as type I, type II, and type III. A tricopper ensemble contains types II and III copper (see figure). It is this center that binds O2 and reduces it to water. Each Cu(I,II) couple delivers one electron required for this conversion. The type I copper does not bind O2, but functions solely as an electron transfer site. The type I copper center consists of a single copper atom that is ligated to a minimum of two histidine residues and a single cysteine residue, but in some laccases produced by certain plants and bacteria, the type I copper center contains an additional methionine ligand. The type III copper center consists of two copper atoms that each possess three histidine ligands and are linked to one another via a hydroxide bridging ligand. The final copper center is the type II copper center, which has two histidine ligands and a hydroxide ligand. The type II together with the type III copper center forms the tricopper ensemble, which is where dioxygen reduction takes place. The type III copper can be replaced by Hg(II), which causes
|
https://en.wikipedia.org/wiki/Shifting%20baseline
|
A shifting baseline (also known as a sliding baseline) is a type of change to how a system is measured, usually against previous reference points (baselines), which themselves may represent significant changes from an even earlier state of the system.
The concept arose in landscape architect Ian McHarg's 1969 manifesto Design With Nature in which the modern landscape is compared to that on which ancient people once lived. The concept was then considered by the fisheries scientist Daniel Pauly in his paper "Anecdotes and the shifting baseline syndrome of fisheries". Pauly developed the concept in reference to fisheries management where fisheries scientists sometimes fail to identify the correct "baseline" population size (e.g. how abundant a fish species population was before human exploitation) and thus work with a shifted baseline. He describes the way that radically depleted fisheries were evaluated by experts who used the state of the fishery at the start of their careers as the baseline, rather than the fishery in its untouched state. Areas that swarmed with a particular species hundreds of years ago, may have experienced long term decline, but it is the level of decades previously that is considered the appropriate reference point for current populations. In this way large declines in ecosystems or species over long periods of time were, and are, masked. There is a loss of perception of change that occurs when each generation redefines what is "natural".
Most modern fisheries stock assessments do not ignore historical fishing, and account for it either by including the historical catch or by using other techniques to reconstruct the depletion level of the population at the start of the period for which adequate data are available. Anecdotes about historical population levels can be highly unreliable and result in severe mismanagement of the fishery.
The concept was further refined and applied to the ecology of kelp forests by Paul Dayton and others from the Scri
|
https://en.wikipedia.org/wiki/Clearing%20the%20neighbourhood
|
"Clearing the neighbourhood" (or dynamical dominance) around a celestial body's orbit describes the body becoming gravitationally dominant such that there are no other bodies of comparable size other than its natural satellites or those otherwise under its gravitational influence.
"Clearing the neighbourhood" is one of three necessary criteria for a celestial body to be considered a planet in the Solar System, according to the definition adopted in 2006 by the International Astronomical Union (IAU). In 2015, a proposal was made to extend the definition to exoplanets.
In the end stages of planet formation, a planet, as so defined, will have "cleared the neighbourhood" of its own orbital zone, i.e. removed other bodies of comparable size. A large body that meets the other criteria for a planet but has not cleared its neighbourhood is classified as a dwarf planet. This includes Pluto, whose orbit intersects with Neptune's orbit and shares its orbital neighbourhood with many Kuiper belt objects. The IAU's definition does not attach specific numbers or equations to this term, but all IAU-recognised planets have cleared their neighbourhoods to a much greater extent (by orders of magnitude) than any dwarf planet or candidate for dwarf planet.
The phrase stems from a paper presented to the 2000 IAU general assembly by the planetary scientists Alan Stern and Harold F. Levison. The authors used several similar phrases as they developed a theoretical basis for determining if an object orbiting a star is likely to "clear its neighboring region" of planetesimals based on the object's mass and its orbital period. Steven Soter prefers to use the term "dynamical dominance", and Jean-Luc Margot notes that such language "seems less prone to misinterpretation".
Prior to 2006, the IAU had no specific rules for naming planets, as no new planets had been discovered for decades, whereas there were well-established rules for naming an abundance of newly discovered small bodies such as
|
https://en.wikipedia.org/wiki/Blue%20Pill%20%28software%29
|
Blue Pill is the codename for a rootkit based on x86 virtualization. Blue Pill originally required AMD-V (Pacifica) virtualization support, but was later ported to support Intel VT-x (Vanderpool) as well. It was designed by Joanna Rutkowska and originally demonstrated at the Black Hat Briefings on August 3, 2006, with a reference implementation for the Microsoft Windows Vista kernel.
The name is a reference to the red pill and blue pill concept from the 1999 film The Matrix.
Overview
The Blue Pill concept is to trap a running instance of the operating system by starting a thin hypervisor and virtualizing the rest of the machine under it. The previous operating system would still maintain its existing references to all devices and files, but nearly anything, including hardware interrupts, requests for data and even the system time could be intercepted (and a fake response sent) by the hypervisor. The original concept of Blue Pill was published by another researcher at IEEE Oakland in May 2006, under the name VMBR (virtual-machine based rootkit).
Rutkowska claims that, since any detection program could be fooled by the hypervisor, such a system could be "100% undetectable". Since AMD virtualization is seamless by design, a virtualized guest is not supposed to be able to query whether it is a guest or not. Therefore, the only way Blue Pill could be detected is if the virtualization implementation were not functioning as specified.
This assessment, repeated in numerous press articles, is disputed: AMD issued a statement dismissing the claim of full undetectability. Some other security researchers and journalists also dismissed the concept as implausible. Virtualization could be detected by a timing attack relying on external sources of time.
In 2007, a group of researchers challenged Rutkowska to put Blue Pill against their rootkit detector software at that year's Black Hat conference, but the deal was deemed a no-go following Rutkowska's request for $384,000 in
|
https://en.wikipedia.org/wiki/Extensor%20retinaculum%20of%20the%20hand
|
The extensor retinaculum (dorsal carpal ligament, or posterior annular ligament) is a thickened portion of the antebrachial fascia that holds the tendons of the extensor muscles in place. It is located on the back of the forearm, just proximal to the hand. It is continuous with the palmar carpal ligament (which is located on the anterior side of the forearm).
Structure
The extensor retinaculum is a strong, fibrous band, extending obliquely downward and medialward across the back of the wrist. It consists of part of the deep fascia of the back of the forearm, strengthened by the addition of some transverse fibers.
Relations
Six separate synovial sheaths run beneath the extensor retinaculum: (1st) abductor pollicis longus and extensor pollicis brevis tendons, (2nd) extensor carpi radialis longus and brevis tendons, (3rd) extensor pollicis longus tendon, (4th) extensor digitorum communis and extensor indicis proprius tendons, (5th) extensor digiti minimi tendon, and (6th) extensor carpi ulnaris tendon.
On the palmar side of the wrist, the palmar carpal ligament corresponds in location and structure to the extensor retinaculum, both being formations of the antebrachial fascia and therefore continuous. To avoid confusion with the palmar carpal ligament, the flexor retinaculum is commonly referred to as the transverse carpal ligament.
Histology
Structurally, the retinaculum consists of three layers. The deepest layer, the gliding layer, consists of hyaluronic acid-secreting cells. The thick middle layer consists of interspersed elastin fibers, collagen bundles, and fibroblasts. The most superficial layer is made up of loose connective tissue which contains vascular channels. Combined these three layers create a smooth gliding surface as well as mechanically strong tissue which prevents tendon bowstringing. The extensor retinaculum of the foot has similar structure.
Clinical significance
Studies conducted on the retinaculum have shown it to have several possible surgic
|
https://en.wikipedia.org/wiki/T.38
|
T.38 is an ITU recommendation for allowing transmission of fax over IP networks (FoIP) in real time.
History
The T.38 fax relay standard was devised in 1998 as a way to permit faxes to be transported across IP networks between existing Group 3 (G3) fax terminals. T.4 and related fax standards were published by the ITU in 1980, before the rise of the Internet. In the late 1990s, VoIP, or Voice over IP, began to gain ground as an alternative to the conventional Public Switched Telephone Network (PSTN). However, because most VoIP systems are optimized (through their use of aggressive lossy bandwidth-saving compression) for voice rather than data calls, conventional fax machines worked poorly or not at all on them due to network impairments such as delay, jitter, and packet loss. Thus, some way of transmitting fax over IP was needed.
Overview
In practical scenarios, a T.38 fax call has at least part of the call being carried over PSTN, although this is not required by the T.38 definition, and two T.38 devices can send faxes to each other. This particular type of device is called Internet-Aware Fax device, or IAF, and it is capable of initiating or completing a fax call towards the IP network.
The typical scenario where T.38 is used is T.38 fax relay: a T.30 fax device sends a fax over the PSTN to a T.38 fax gateway, which converts or encapsulates the T.30 protocol into a T.38 data stream. This is then sent either to a T.38-enabled endpoint, such as a fax machine or fax server, or to another T.38 gateway that converts it back to a PSTN PCM or analog signal and terminates the fax on a T.30 device.
The T.38 recommendation defines the use of both TCP and UDP to transport T.38 packets. Implementations tend to use UDP, due to TCP's requirement for acknowledgement packets and resulting retransmission during packet loss, which introduces delays. When using UDP, T.38 copes with packet loss by using redundant data packets.
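The redundancy scheme can be sketched as follows: each UDP datagram carries the newest payload plus copies of the preceding few, so an isolated loss can be repaired from a later datagram. A toy Python illustration (my own simplification, not a real UDPTL implementation):

```python
# Toy sketch of redundant packetization in the spirit of T.38's UDPTL
# error correction: each datagram carries the current message plus
# copies of the previous REDUNDANCY messages, so isolated losses
# can be recovered from a later datagram. Not a real UDPTL codec.
REDUNDANCY = 2

def packetize(messages):
    datagrams = []
    for i, msg in enumerate(messages):
        prior = messages[max(0, i - REDUNDANCY):i]  # redundant copies
        datagrams.append({"seq": i, "primary": msg, "redundant": prior})
    return datagrams

def depacketize(datagrams):
    """Reassemble the stream, filling gaps from redundant copies."""
    recovered = {}
    for d in datagrams:
        recovered[d["seq"]] = d["primary"]
        for offset, msg in enumerate(d["redundant"]):
            seq = d["seq"] - len(d["redundant"]) + offset
            recovered.setdefault(seq, msg)
    return [recovered[i] for i in sorted(recovered)]

stream = ["T30-A", "T30-B", "T30-C", "T30-D"]
datagrams = packetize(stream)
del datagrams[2]                       # simulate one lost datagram
print(depacketize(datagrams))          # full stream recovered
```

The trade-off is bandwidth for latency: redundancy inflates every datagram, but unlike TCP retransmission it never stalls the real-time stream waiting for an acknowledgement.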
T.38 is not a call setup protocol, thus th
|
https://en.wikipedia.org/wiki/Deep%20circumflex%20iliac%20artery
|
The deep circumflex iliac artery (or deep iliac circumflex artery) is an artery in the pelvis that travels along the iliac crest of the pelvic bone.
Course
The deep circumflex iliac artery arises from the lateral aspect of the external iliac artery nearly opposite the origin of the inferior epigastric artery.
It ascends obliquely and laterally, posterior to the inguinal ligament, contained in a fibrous sheath formed by the junction of the transversalis fascia and iliac fascia. It travels to the anterior superior iliac spine, where it anastomoses with the ascending branch of the lateral femoral circumflex artery.
It then pierces the transversalis fascia and passes medially along the inner lip of the crest of the ilium to a point where it perforates the transversus abdominis muscle. From there, it travels posteriorly between the transversus abdominis muscle and the internal oblique muscle to anastomose with the iliolumbar artery and the superior gluteal artery.
Opposite the anterior superior iliac spine of the ilium, it gives off a large ascending branch. This branch ascends between the internal oblique muscle and the transversus abdominis muscle, supplying them, and anastomosing with the lumbar arteries and inferior epigastric artery.
The deep circumflex iliac artery serves as the primary blood supply to the anterior iliac crest bone flap.
Additional images
|
https://en.wikipedia.org/wiki/Perforating%20arteries
|
The perforating arteries are branches of the deep artery of the thigh, usually three in number, so named because they perforate the tendon of the adductor magnus to reach the back of the thigh. They pass backward near the linea aspera of the femur underneath the small tendinous arches of the adductor magnus muscle.
The first perforating artery arises from the deep artery of the thigh above the adductor brevis, the second in front of this muscle, and the third immediately below it.
First
The first perforating artery (a. perforans prima) passes posteriorly between the pectineus and adductor brevis (sometimes it perforates the latter); it then pierces the adductor magnus close to the linea aspera.
It gives branches to the adductores brevis and magnus, biceps femoris, and gluteus maximus, and anastomoses with the inferior gluteal, medial and lateral femoral circumflex and second perforating arteries.
Second
The second perforating artery (a. perforans secunda), larger than the first, pierces the tendons of the adductores brevis and magnus, and divides into ascending and descending branches, which supply the posterior femoral muscles, anastomosing with the first and third perforating.
The second artery frequently arises in common with the first.
The nutrient artery of the femur is usually given off from the second perforating artery; when two nutrient arteries exist, they usually spring from the first and third perforating vessels.
Third/fourth
The third perforating artery (a. perforans tertia) is given off below the adductor brevis; it pierces the adductor magnus, and divides into branches which supply the posterior femoral muscles; anastomosing above with the higher perforating arteries, and below with the terminal branches of the profunda and the muscular branches of the popliteal.
The nutrient artery of the femur may arise from this branch.
The termination of the profunda artery, already described, is sometimes termed the fourth perforating artery of Elliott
|
https://en.wikipedia.org/wiki/Intercostal%20arteries
|
The intercostal arteries are a group of arteries passing within an intercostal space (the space between two adjacent ribs). There are 9 anterior and 11 posterior intercostal arteries on each side of the body. The anterior intercostal arteries are branches of the internal thoracic artery and its terminal branch - the musculophrenic artery. The posterior intercostal arteries are branches of the supreme intercostal artery and thoracic aorta.
Each anterior intercostal artery anastomoses with the corresponding posterior intercostal artery arising from the thoracic aorta.
Anterior intercostal arteries
Origin
The upper five or six anterior intercostal arteries are branches of the internal thoracic artery (anterior intercostal branches of internal thoracic artery). The internal thoracic artery then divides into its two terminal branches, one of which - the musculophrenic artery - proceeds to issue anterior intercostal arteries to the remaining 7th, 8th, and 9th intercostal spaces; these diminish in size as the spaces decrease in length.
Course and relations
They are at first situated between the pleurae and the intercostales interni, and then between the mm. intercostales interni et intimi.
Distribution
They supply the intercostal muscles and, by branches which perforate the intercostales externi, the pectoral muscles and the mamma.
Posterior intercostal arteries
There are eleven posterior intercostal arteries on each side. Each artery divides into an anterior and a posterior ramus.
Origin
The 1st and 2nd posterior intercostal arteries arise from the supreme intercostal artery (also called the superior intercostal artery or highest intercostal artery; usually a branch of the costocervical trunk of the subclavian artery).
The remaining nine arteries arise from (the posterior aspect of) the thoracic aorta.
Course and relations
Each posterior intercostal artery travels along the bottom of the rib alongside its corresponding posterior intercostal vein and intercost
|
https://en.wikipedia.org/wiki/Superficial%20circumflex%20iliac%20artery
|
The superficial iliac circumflex artery (or superficial circumflex iliac), the smallest of the cutaneous branches of the femoral artery, arises close to the superficial epigastric artery, and, piercing the fascia lata, runs lateralward, parallel with the inguinal ligament, as far as the crest of the ilium.
It divides into branches which supply the integument of the groin, the superficial fascia, and the superficial subinguinal lymph glands, anastomosing with the deep iliac circumflex, the superior gluteal and lateral femoral circumflex arteries.
In 45% to 50% of persons the superficial circumflex iliac artery and superficial inferior epigastric artery arise from a common trunk. In contrast, 40% to 45% of persons have a superficial circumflex iliac artery and superficial inferior epigastric artery that arise from separate origins.
Additional images
|
https://en.wikipedia.org/wiki/Gnomon%20%28figure%29
|
In geometry, a gnomon is a plane figure formed by removing a similar parallelogram from a corner of a larger parallelogram; or, more generally, a figure that, added to a given figure, makes a larger figure of the same shape.
Building figurate numbers
Figurate numbers were a concern of Pythagorean mathematics, and Pythagoras is credited with the notion that these numbers are generated from a gnomon or basic unit. The gnomon is the piece which needs to be added to a figurate number to transform it to the next bigger one.
For example, the gnomon of the square number is the odd number, of the general form 2n + 1, n = 1, 2, 3, ... . The square of size 8 composed of gnomons looks like this:
To transform from the n-square (the square of size n) to the (n + 1)-square, one adjoins 2n + 1 elements: one to the end of each row (n elements), one to the end of each column (n elements), and a single one to the corner. For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions are the 8s in the above figure.
This gnomonic technique also provides a proof that the sum of the first n odd numbers is n²; the figure illustrates 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 8². Applying the same technique to a multiplication table proves that each squared triangular number is a sum of cubes.
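The counting argument above can be sketched in a few lines of Python (my illustration, not part of the article): summing the gnomons 1, 3, 5, ..., 2n − 1 reproduces the n-square.

```python
def square_via_gnomons(n):
    """Build n**2 by summing the gnomons 1, 3, 5, ..., 2n - 1."""
    total = 0
    for k in range(1, n + 1):
        # The gnomon 2k - 1 turns the (k-1)-square into the k-square.
        total += 2 * k - 1
    return total

# The 8-square of the article: 1 + 3 + 5 + ... + 15 = 64.
print(square_via_gnomons(8))  # 64

# The identity holds for every n.
assert all(square_via_gnomons(n) == n ** 2 for n in range(1, 50))
```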
Isosceles triangles
In an acute isosceles triangle, it is possible to draw a similar but smaller triangle, one of whose sides is the base of the original triangle. The gnomon of these two similar triangles is the triangle remaining when the smaller of the two similar isosceles triangles is removed from the larger one. The gnomon is itself isosceles if and only if the ratio of the sides to the base of the original isosceles triangle, and the ratio of the base to the sides of the gnomon, is the golden ratio, in which case the acute isosceles triangle is the golden triangle and its gnomon is the golden gnomon.
Conversely, the acute golden triangle can be the gnomon of the obtuse golden triangle in an excepti
|
https://en.wikipedia.org/wiki/Deep%20external%20pudendal%20artery
|
The deep external pudendal artery (deep external pudic artery) is one of the pudendal arteries that is more deeply seated than the superficial external pudendal artery, passes medially across the pectineus and the adductor longus muscles; it is covered by the fascia lata, which it pierces at the medial side of the thigh, and is distributed, in the male, to the integument of the scrotum and perineum, in the female to the labia majora; its branches anastomose with the scrotal or labial branches of the perineal artery.
Additional images
See also
Internal pudendal artery
|
https://en.wikipedia.org/wiki/Superficial%20external%20pudendal%20artery
|
The superficial external pudendal artery (superficial external pudic artery) is one of the three pudendal arteries. It arises from the medial side of the femoral artery, close to the superficial epigastric artery and superficial iliac circumflex artery.
Course and target
After piercing the femoral sheath and fascia cribrosa, it courses medialward, across the spermatic cord (or round ligament in the female), to be distributed to the integument on the lower part of the abdomen, the penis and scrotum in the male, and the labium majus in the female, anastomosing with branches of the internal pudendal artery.
It crosses superficial to the inguinal ligament.
See also
Deep external pudendal artery
Internal pudendal artery
Additional images
|
https://en.wikipedia.org/wiki/Anterior%20lateral%20malleolar%20artery
|
The anterior lateral malleolar artery (lateral anterior malleolar artery, external malleolar artery) is an artery in the ankle.
The anterior lateral malleolar artery is a branch of the anterior tibial artery. It passes beneath the tendons of the extensor digitorum longus and fibularis tertius and supplies the lateral side of the ankle. It forms anastomoses with the perforating branch of the fibular artery, and with ascending twigs from the lateral tarsal artery.
|
https://en.wikipedia.org/wiki/Lateral%20tarsal%20artery
|
The lateral tarsal artery (tarsal artery) arises from the dorsalis pedis, as that vessel crosses the navicular bone; it passes in an arched direction lateralward, lying upon the tarsal bones, and covered by extensor hallucis brevis and extensor digitorum brevis; it supplies these muscles and the articulations of the tarsus, and receives the arcuate artery over the base of the fifth metatarsal. It may receive contributions from branches of the anterior lateral malleolar artery and the perforating branch of the peroneal artery directed towards the joint capsule, and from the lateral plantar arteries through perforating arteries of the foot.
|
https://en.wikipedia.org/wiki/Medial%20tarsal%20arteries
|
The medial tarsal arteries are two or three small branches which ramify on the medial border of the foot and join the medial malleolar network.
|
https://en.wikipedia.org/wiki/Posterior%20tibial%20recurrent%20artery
|
The posterior tibial recurrent artery, an inconstant branch, is given off from the anterior tibial artery before that vessel passes through the gap between the superior tibiofibular joint and the upper border of the interosseous membrane.
It ascends in front of the Popliteus, which it supplies, and anastomoses with the inferior genicular branches of the popliteal artery, giving an offset to the tibiofibular joint.
Notes
Arteries of the lower limb
|
https://en.wikipedia.org/wiki/Descending%20genicular%20artery
|
The descending genicular artery (also known as the highest genicular artery) arises from the femoral artery just before its passage through the adductor hiatus.
The descending genicular artery immediately divides into two branches: a saphenous branch (which classically joins with the medial inferior genicular artery), and muscular and articular branches.
Structure
Branches
Saphenous branch
The saphenous branch pierces the aponeurotic covering of the adductor canal, and accompanies the saphenous nerve to the medial side of the knee. It passes between the sartorius muscle and the gracilis muscle, and, piercing the fascia lata, is distributed to the integument of the upper and medial part of the leg, anastomosing with the medial inferior genicular artery.
Articular branches
The articular branches descend within the vastus medialis muscle, and in front of the tendon of the adductor magnus muscle, to the medial side of the knee, where they join with the medial superior genicular and anterior recurrent tibial artery.
A branch from this vessel crosses above the patellar surface of the femur, forming an anastomotic arch with the lateral superior genicular artery, and supplying branches to the knee-joint.
|
https://en.wikipedia.org/wiki/Arcuate%20artery%20of%20the%20foot
|
The arcuate artery of the foot (metatarsal artery) arises from dorsalis pedis slightly anterior to the lateral tarsal artery, specifically over the naviculocuneiform joint; it passes lateralward, over the bases of the lateral four metatarsal bones, beneath the tendons of the extensor digitorum brevis, its direction being influenced by its point of origin; and it terminates in the lateral tarsal artery. It communicates with the plantar arteries through the perforating arteries of the foot.
It runs with the lateral terminal branch of deep fibular nerve. This vessel gives off the second, third, and fourth dorsal metatarsal arteries.
It is not present in all individuals.
|
https://en.wikipedia.org/wiki/John%20H.%20Hubbard
|
John Hamal Hubbard (born October 6 or 7, 1945) is an American mathematician and professor at Cornell University. He is known for the mathematical contributions he made with Adrien Douady in the field of complex dynamics, including a study of the Mandelbrot set. One of their most important results is that the Mandelbrot set is connected.
Education
Hubbard graduated with a Doctorat d'État from Université de Paris-Sud in 1973 under the direction of Adrien Douady; his thesis was entitled Sur Les Sections Analytiques de La Courbe Universelle de Teichmüller and was published by the American Mathematical Society.
Writing
Hubbard and his wife Barbara Burke Hubbard wrote the book Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach.
He has also published three volumes of a book on Teichmüller theory and its applications to four revolutionary theorems of William Thurston.
Personal life
Hubbard is married to Barbara Burke Hubbard, a science writer. Together they have a son and three younger daughters.
|
https://en.wikipedia.org/wiki/Sural%20arteries
|
The sural arteries (inferior muscular arteries) are two large branches, lateral and medial, which are distributed to the gastrocnemius, soleus, and plantaris muscles. Sural means related to the calf. The term applies to any of four or five arteries arising from the popliteal artery, with distribution to the muscles and integument of the calf, and with anastomoses to the posterior tibial, medial and lateral inferior genicular arteries.
|
https://en.wikipedia.org/wiki/Accompanying%20artery%20of%20ischiadic%20nerve
|
The accompanying artery of ischiadic nerve is a long, slender artery in the thigh. It is a branch of the inferior gluteal artery. It accompanies the sciatic nerve for a short distance, then penetrates it and runs in its substance to the lower part of the thigh.
|
https://en.wikipedia.org/wiki/A%20Madman%20Dreams%20of%20Turing%20Machines
|
A Madman Dreams of Turing Machines is a book by Janna Levin which contrasts fictionalized accounts of the lives and ideas of Kurt Gödel and Alan Turing (who never met). First published in 2006, the book won several awards, including the prestigious PEN/Bingham Fellowship Prize for Writers and the MEA Mary Shelley Award for Outstanding Fictional Work. It was also a runner-up for the Hemingway Foundation/PEN Award.
Description
In an interview with Sylvie Myerson in The Brooklyn Rail, Levin said of her book: "There was a lot that made me want to write it as a novel, one being this whole idea that sometimes truth cannot come out as a theorem even in mathematics, let alone in a retelling of two people's lives. Sometimes you have to step outside of the perfect linear logic of biographical facts."
|
https://en.wikipedia.org/wiki/Disk%20diffusion%20test
|
The disk diffusion test (also known as the agar diffusion test, Kirby–Bauer test, disc-diffusion antibiotic susceptibility test, disc-diffusion antibiotic sensitivity test and KB test) is a culture-based microbiology assay used in diagnostic and drug discovery laboratories. In diagnostic labs, the assay is used to determine the susceptibility of bacteria isolated from a patient's infection to clinically approved antibiotics. This allows physicians to prescribe the most appropriate antibiotic treatment. In drug discovery labs, especially bioprospecting labs, the assay is used to screen biological material (e.g. plant extracts, bacterial fermentation broths) and drug candidates for antibacterial activity. When bioprospecting, the assay can be performed with paired strains of bacteria to achieve dereplication and provisionally identify antibacterial mechanism of action.
In diagnostic laboratories, the test is performed by inoculating the surface of an agar plate with bacteria isolated from a patient's infection. Antibiotic-containing paper disks are then applied to the agar and the plate is incubated. If an antibiotic stops the bacteria from growing or kills the bacteria, there will be an area around the disk where the bacteria have not grown enough to be visible. This is called a zone of inhibition. The susceptibility of the bacterial isolate to each antibiotic can then be semi-quantified by comparing the size of these zones of inhibition to databases of information on known antibiotic-susceptible, moderately susceptible and resistant bacteria. In this way, it is possible to identify the most appropriate antibiotic for treating a patient's infection. Although the disk diffusion test cannot be used to differentiate bacteriostatic and bactericidal activity, it is less cumbersome than other susceptibility test methods such as broth dilution.
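The semi-quantification step described above can be sketched as a simple lookup against breakpoint diameters. The breakpoint values below are invented for illustration; real interpretation uses the published CLSI or EUCAST tables for each specific drug/organism pair.

```python
# Hypothetical breakpoint diameters in mm: (resistant_below, susceptible_from).
# These numbers are illustrative only, NOT clinical values.
BREAKPOINTS_MM = {
    "ampicillin": (14, 17),
    "gentamicin": (13, 15),
}

def interpret_zone(antibiotic, zone_mm):
    """Classify a zone of inhibition as resistant / intermediate / susceptible."""
    resistant_below, susceptible_from = BREAKPOINTS_MM[antibiotic]
    if zone_mm < resistant_below:
        return "resistant"
    if zone_mm >= susceptible_from:
        return "susceptible"
    return "intermediate"

print(interpret_zone("ampicillin", 20))  # susceptible
print(interpret_zone("gentamicin", 14))  # intermediate
```

A larger zone means the bacteria failed to grow closer to the disk, i.e. greater susceptibility, which is why the classification thresholds increase from resistant to susceptible.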
In drug discovery labs, the disk diffusion test is performed slightly differently than in diagnostic labs. In this set
|
https://en.wikipedia.org/wiki/Sublime%20number
|
In number theory, a sublime number is a positive integer which has a perfect number of positive factors (including itself), and whose positive factors add up to another perfect number.
The number 12, for example, is a sublime number. It has a perfect number of positive factors (6): 1, 2, 3, 4, 6, and 12, and the sum of these is again a perfect number: 1 + 2 + 3 + 4 + 6 + 12 = 28.
There are only two known sublime numbers: 12 and (2^126)(2^61 − 1)(2^31 − 1)(2^19 − 1)(2^7 − 1)(2^5 − 1)(2^3 − 1). The second of these has 76 decimal digits:
6,086,555,670,238,378,989,670,371,734,243,169,622,657,830,773,351,885,970,528,324,860,512,791,691,264.
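The definition can be checked directly for 12 with a short sketch (mine, not from the article) using trial division:

```python
def divisor_sum_and_count(n):
    """Return (number of divisors, sum of divisors) of n by trial division."""
    count, total = 0, 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 1
            total += d
            if d != n // d:      # count the paired divisor n/d once
                count += 1
                total += n // d
        d += 1
    return count, total

def is_perfect(n):
    """A perfect number equals the sum of its proper divisors,
    i.e. its full divisor sum is 2n."""
    return n > 0 and divisor_sum_and_count(n)[1] == 2 * n

def is_sublime(n):
    count, total = divisor_sum_and_count(n)
    return is_perfect(count) and is_perfect(total)

print(is_sublime(12))  # True: 6 divisors (perfect), summing to 28 (perfect)
```

Trial division is fine for 12; verifying the 76-digit sublime number this way would instead use its known factorization into Mersenne primes.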
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Biogeochemistry
|
The Max Planck Institute for Biogeochemistry is located in Jena, Germany. It was created in 1997, and moved into new buildings in 2002. It is one of 80 institutes in the Max Planck Society (Max Planck Gesellschaft).
Departments and research groups
Biogeochemical Processes (Susan E. Trumbore)
Molecular Biogeochemistry (Gerd Gleixner)
Theoretical Ecosystem Ecology (Carlos A. Sierra)
Soil Biogeochemistry (Marion Schrumpf)
Plant Allocation (Henrik Hartmann)
Landscape Processes (Shaun Levick)
Emeritus Group (Ernst Detlef Schulze)
Tanguro Flux (Susan E. Trumbore)
Biogeochemical Integration (Markus Reichstein)
Biosphere-Atmosphere Interactions and Experimentation (Mirco Migliavacca)
Terrestrial Biosphere Modelling (Sönke Zaehle)
Model-Data Integration (Nuno Carvalhais)
Global Diagnostic Modelling (Miguel D. Mahecha)
Empirical Inference of the Earth System (Martin Jung)
Flora Incognita (Jana Wäldchen)
Hydrology-Biosphere-Climate Interactions (René Orth)
Biogeochemical Systems (Martin Heimann, emeritus)
Atmospheric Remote Sensing (ARS) (Dietrich Feist)
Airborne trace gas measurements and mesoscale modelling (ATM) (Christoph Gerbig)
Inverse data-driven estimation (IDE) (Christian Rödenbeck)
Integrating surface-atmosphere Exchange Processes Across Scales - Modeling and Monitoring (IPAS) (Mathias Goeckede)
Tall Tower Atmospheric Gas Measurements (TAG) (Jošt Valentin Lavrič)
Carbon Cycle Data Assimilation (CCDAS) (Sönke Zaehle)
Satellite-based remote sensing of greenhouse gases (SRS) (Julia Marshall)
Independent research groups
Organic Paleobiogeochemistry (Christian Hallmann)
Biospheric Theory and Modelling (Axel Kleidon)
Carbon Balance and Ecosystem Research (Ernst-Detlef Schulze)
Functional Biogeography (Christian Wirth & Jens Kattge)
External links
Homepage of the Max Planck Institute for Biogeochemistry
Laboratories in Germany
Biogeochemistry
Earth science research institutes
Research institutes established in 1997
|
https://en.wikipedia.org/wiki/Field-programmable%20analog%20array
|
A field-programmable analog array (FPAA) is an integrated circuit device containing computational analog blocks (CAB) and interconnects between these blocks offering field-programmability. Unlike their digital cousin, the FPGA, the devices tend to be more application driven than general purpose as they may be current mode or voltage mode devices. For voltage mode devices, each block usually contains an operational amplifier in combination with programmable configuration of passive components. The blocks can, for example, act as summers or integrators.
FPAAs usually operate in one of two modes: continuous time and discrete time.
Discrete-time devices possess a system sample clock. In a switched capacitor design, all blocks sample their input signals with a sample and hold circuit composed of a semiconductor switch and a capacitor. This feeds a programmable op amp section which can be routed to a number of other blocks. This design requires more complex semiconductor construction. An alternative, switched-current design, offers simpler construction and does not require the input capacitor, but can be less accurate, and has lower fan-out - it can drive only one following block. Both discrete-time device types must compensate for switching noise, aliasing at the system sample rate, and sample-rate limited bandwidth, during the design phase.
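The switched-capacitor behaviour described above can be sketched numerically. The capacitor values and the ideal per-clock charge-transfer model below are illustrative assumptions, not taken from any particular FPAA device:

```python
def sc_integrator(samples, c_sample=1e-12, c_feedback=10e-12):
    """Ideal switched-capacitor integrator: each system clock cycle, the
    charge sampled onto c_sample is dumped onto c_feedback, so the output
    steps by (c_sample / c_feedback) * v_in per sample."""
    out = 0.0
    outputs = []
    for v_in in samples:
        out += (c_sample / c_feedback) * v_in  # charge transfer per cycle
        outputs.append(out)
    return outputs

# A constant 1 V input produces a linear ramp, as an integrator should.
print([round(v, 3) for v in sc_integrator([1.0] * 5)])  # [0.1, 0.2, 0.3, 0.4, 0.5]
```

The model is deliberately ideal: it omits the switching noise, aliasing, and sample-rate-limited bandwidth that, as noted above, a real discrete-time design must compensate for.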
Continuous-time devices work more like an array of transistors or op amps which can operate at their full bandwidth. The components are connected in a particular arrangement through a configurable array of switches. During circuit design, the switch matrix's parasitic inductance, capacitance and noise contributions must be taken into account.
Currently there are very few manufacturers of FPAAs. On-chip resources are still very limited when compared to those of an FPGA. This resource deficit is often cited by researchers as a limiting factor in their research.
History
The term FPAA was first used in 1991 by Lee and Gulak. Th
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Biology
|
The Max Planck Institute for Biology is a research institute located in Tübingen, Germany, and was formerly known as the Max Planck Institute for Developmental Biology. A predecessor institution operated under the same name from 1948 to 2004.
The Kaiser Wilhelm Society, the forerunner to the Max Planck Society, established various natural science research institutes in the Berlin district of Dahlem in the beginning of the 20th century. Among them was the Kaiser Wilhelm Institute for Biology. The main aim of the newly established institutes was to supplement the universities and academies with research in the natural sciences and thus also to keep Germany internationally competitive.
In the following decades, scientists there and at the Institute of Biochemistry realized the importance of viruses as model organisms for understanding biological processes. Thus, they established a working group in the field of virus research. In 1941, Nobel Prize winner Adolf Butenandt, together with his colleagues Alfred Kühn and Fritz von Wettstein, set up their own working group for virus research. Two years later, parts of the Kaiser Wilhelm Institute for Biology moved to the safer city of Tübingen. After the foundation of the Max Planck Society in 1948, the institute was renamed as the Max Planck Institute for Biology, which closed in 2004 as part of consolidation measures.
The aforementioned subsidiary institute for virus research had already broadened its base with a new focus on developmental biology, and was renamed as the Max Planck Institute for Developmental Biology in 1985. Christiane Nüsslein-Volhard, who was appointed as Director of the Department for Genetics in that year, later won the Nobel Prize in Physiology or Medicine in 1995.
The Max Planck Institute for Developmental Biology further broadened its research fields, which now range from biochemistry and cell biology to genome research in an evolutionary and ecological context, and was renamed the Max Planck Institute for Biology.
|
https://en.wikipedia.org/wiki/Cubic%20form
|
In mathematics, a cubic form is a homogeneous polynomial of degree 3, and a cubic hypersurface is the zero set of a cubic form. In the case of a cubic form in three variables, the zero set is a cubic plane curve.
Boris Delone and Dmitry Faddeev showed that binary cubic forms with integer coefficients can be used to parametrize orders in cubic fields. Their work was later generalized to include all cubic rings (a cubic ring is a ring that is isomorphic to Z³ as a Z-module), giving a discriminant-preserving bijection between orbits of a GL(2, Z)-action on the space of integral binary cubic forms and cubic rings up to isomorphism.
The classification of real cubic forms is linked to the classification of umbilical points of surfaces. The equivalence classes of such cubics form a three-dimensional real projective space and the subset of parabolic forms define a surface – the umbilic torus.
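Since the parametrization above is discriminant-preserving, it helps to see the discriminant of a binary cubic form computed explicitly. The sketch below (my illustration, not from the article) uses the standard closed form for f(x, y) = ax³ + bx²y + cxy² + dy³:

```python
def binary_cubic_discriminant(a, b, c, d):
    """Discriminant of the binary cubic form
    f(x, y) = a*x**3 + b*x**2*y + c*x*y**2 + d*y**3."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

# x**3 - x*y**2 = x(x - y)(x + y) has three distinct rational roots,
# so its discriminant is a nonzero square.
print(binary_cubic_discriminant(1, 0, -1, 0))  # 4

# x**3 has a triple root, so its discriminant vanishes.
print(binary_cubic_discriminant(1, 0, 0, 0))   # 0
```

The discriminant is a GL(2, Z)-invariant (up to the action's determinant twist), which is what makes the bijection with cubic rings discriminant-preserving.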
Examples
Cubic plane curve
Elliptic curve
Fermat cubic
Cubic 3-fold
Koras–Russell cubic threefold
Klein cubic threefold
Segre cubic
Notes
|
https://en.wikipedia.org/wiki/Gallium%20phosphate
|
Gallium phosphate (GaPO4 or gallium orthophosphate) is a colorless trigonal crystal with a hardness of 5.5 on the Mohs scale. GaPO4 is isotypic with quartz, possessing very similar properties, but the silicon atoms are alternately substituted with gallium and phosphorus, thereby doubling the piezoelectric effect. GaPO4 has many advantages over quartz for technical applications, like a higher electromechanical coupling coefficient in resonators, due to this doubling.
Contrary to quartz, GaPO4 is not found in nature. Therefore, a hydrothermal process must be used to synthesize the crystal.
Modifications
GaPO4 possesses, in contrast to quartz, no α-β phase transition, thus the low-temperature structure (like that of α-quartz) of GaPO4 is stable up to 970°C, as are most of its other physical properties. Around 970°C another phase transition occurs, which changes the low-quartz structure into a structure similar to that of cristobalite.
Structure
The specific structure of GaPO4 shows the arrangement of tetrahedrons consisting of GaO4 and PO4 that are slightly tilted. Because of the helical arrangement of these tetrahedrons, two modifications of GaPO4 exist with different optical rotation (left and right).
Sources
GaPO4 does not occur in nature; thus it must be grown synthetically. Presently, only one company in Austria produces these crystals commercially.
History and technical importance
Pressure sensors based on quartz have to be cooled with water for applications at higher temperatures (above 300°C). Starting in 1994 it became possible to substitute these big sensors with miniaturized, non-cooled ones based on GaPO4.
Further exceptional properties of GaPO4 for applications at high temperatures include its nearly temperature independent piezo effect and excellent electrical insulation up to 900°C. For bulk resonator applications, this crystal exhibits temperature compensated cuts of up to 500°C while having Q factors comparable with quartz. Due to these mat
|
https://en.wikipedia.org/wiki/Gene%20orders
|
Gene order is the arrangement (permutation) of genes on a genome. A fair amount of research has been done trying to determine whether gene orders evolve according to a molecular clock (the molecular clock hypothesis) or in jumps (punctuated equilibrium).
Some research on gene orders in animal mitochondrial genomes reveals that the mutation rate of gene orders is not constant.
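One simple way to quantify how far two gene orders have diverged is the breakpoint count: the number of adjacent gene pairs in one genome that are no longer adjacent (in either orientation) in the other. The sketch below is my illustration of this standard measure, not a method from the article:

```python
def breakpoint_count(order_a, order_b):
    """Count adjacencies of order_a that are absent from order_b,
    ignoring orientation."""
    adjacencies_b = set()
    for x, y in zip(order_b, order_b[1:]):
        adjacencies_b.add((x, y))
        adjacencies_b.add((y, x))   # adjacency survives a reversal
    return sum(
        1 for x, y in zip(order_a, order_a[1:])
        if (x, y) not in adjacencies_b
    )

# Identical orders have no breakpoints; reversing an internal block
# creates breakpoints only at the block's two ends.
print(breakpoint_count([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 0
print(breakpoint_count([1, 2, 3, 4, 5], [1, 4, 3, 2, 5]))  # 2
```

Counting breakpoints over many genome pairs is one way such studies test whether rearrangement rates behave like a clock or occur in bursts.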
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20of%20Biophysics
|
The Max Planck Institute of Biophysics () is located in Frankfurt, Germany. It was founded as the Kaiser Wilhelm Institute of Biophysics in 1937, and moved into a new building in 2003. It is an institute of the Max Planck Society.
Since March 2003, the MPI of Biophysics has resided in a new building on the Riedberg campus of the Goethe University Frankfurt in the north of the city. At the end of 2016, a total of 178 employees were working at the institute, including 48 scientists and 50 junior researchers. The Nobel Prize winner Hartmut Michel was the director of the institute from 1987 until he was succeeded by Ana J. García-Sáez in October 2023. Scientific links to fellow researchers at Goethe University have been strengthened further, as the institute is now situated next to the university's biology, chemistry and physics laboratories. Together with the Max Planck Institute for Brain Research and the Goethe University, the institute ran the International Max Planck Research School (IMPRS) for Structure and Function of Biological Membranes, a graduate program offering a Ph.D., from 2000 until 2012.
Departments
Molecular Membrane Biology (Hartmut Michel, since 1987)
Structural Biology (Werner Kühlbrandt, since 1997)
Biophysical Chemistry (Ernst Bamberg, since 1993, em. since 2016)
Theoretical Biophysics (Gerhard Hummer, since 2013)
Molecular Sociology (Martin Beck, since 2019)
Molecular Neurogenetics (Peter Mombaerts, from 2006 until 2010)
A prerequisite for the understanding of the fundamental processes of life is the knowledge of the structure of the participating macromolecules. Two of the four departments are devoted to the challenging task of determining the structure of membrane proteins. Under the direction of Hartmut Michel (Nobel Prize in Chemistry of 1988 for the first structure determination of a membrane protein), the Department of Molecular Membrane Biology approaches this problem primarily by x-ray crystallography, whereas the Dep
|
https://en.wikipedia.org/wiki/Artery%20of%20bulb%20of%20penis
|
The artery of bulb of penis (artery of the urethral bulb or bulbourethral artery) is a short artery of large caliber which arises from the internal pudendal artery between the two layers of fascia (the superior and inferior) of the urogenital diaphragm. It passes medialward, pierces the inferior fascia of the urogenital diaphragm and gives off branches which ramify in the bulb of the urethra and in the posterior part of the corpus spongiosum.
Additional images
|
https://en.wikipedia.org/wiki/Deep%20artery%20of%20the%20penis
|
The deep artery of the penis (artery to the corpus cavernosum) is a branch of the internal pudendal artery that supplies the corpus cavernosum penis. The artery enters the crus of penis at its anterior extremity.
Anatomy
Origin
The deep artery of the penis is one of the terminal branches of the internal pudendal artery. It arises from the internal pudendal artery while the latter is situated between the two fasciae of the urogenital diaphragm (deep perineal pouch).
Course
It pierces the inferior fascia, and, entering the crus penis obliquely, runs anterior-ward in the center of the corpus cavernosum penis, to which its branches are distributed.
Additional images
|