https://en.wikipedia.org/wiki/Atwater%20system
|
The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but few alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation.
Derivation
Available energy (as used by Atwater) is equivalent to the modern usage of the term metabolisable energy (ME).
In most studies on humans, losses in secretions and gases are ignored. The gross energy (GE) of a food, as measured by bomb calorimetry, is equal to the sum of the heats of combustion of its components – protein (GEp), fat (GEf) and carbohydrate (GEcho, by difference) in the proximate system:
GE = GEp + GEf + GEcho
Atwater considered the energy value of feces in the same way.
By measuring coefficients of availability – in modern terminology, apparent digestibility – Atwater derived a system for calculating faecal energy losses:
faecal energy = GEp(1 − Dp) + GEf(1 − Df) + GEcho(1 − Dcho)
where Dp, Df and Dcho are respectively the digestibility coefficients of protein, fat and carbohydrate, calculated as
D = (intake − faecal loss) / intake
for the constituent in question.
Urinary losses were calculated from the energy-to-nitrogen ratio in urine. Experimentally this was 7.9 kcal/g (33 kJ/g) of urinary nitrogen, and thus his equation for metabolisable energy became
ME = Dp GEp + Df GEf + Dcho GEcho − 7.9 × urinary nitrogen (g)
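The derivation above can be sketched as a short calculation. The gross-energy figures, digestibility coefficients and urinary-nitrogen value below are illustrative assumptions, not Atwater's published factors; only the 7.9 kcal/g urinary-nitrogen correction comes from the text.

```python
# Sketch of an Atwater-style metabolisable-energy (ME) calculation:
# digestible energy minus urinary energy losses. All input numbers
# are hypothetical examples.

def metabolisable_energy(ge_p, ge_f, ge_cho, d_p, d_f, d_cho, urinary_n_g):
    """Return ME in kcal: digestible energy minus 7.9 kcal per gram
    of urinary nitrogen."""
    digestible = d_p * ge_p + d_f * ge_f + d_cho * ge_cho
    return digestible - 7.9 * urinary_n_g

# Hypothetical daily intake.
me = metabolisable_energy(
    ge_p=400, ge_f=800, ge_cho=1200,   # gross energy per component, kcal
    d_p=0.92, d_f=0.95, d_cho=0.97,    # apparent digestibility coefficients
    urinary_n_g=12.0,                  # grams of urinary nitrogen
)
print(round(me, 1))
```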
Gross energy values
Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets who
|
https://en.wikipedia.org/wiki/Ellipsoid%20method
|
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.
History
The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: The standard algorithm for solving linear problems at the time was the simplex algorithm, which has a run time that typically is linear in the size of the problem, but for which examples exist for which it is exponential in the size of the problem. As such, having an algorithm that is guaranteed to be polynomial for all cases seemed like a theoretical breakthrough.
Khachiyan's work showed, for the first time, that there can be algorithms for solving linear programs whose runtime can be proven to be polynomial. In practice, however, the algorithm is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice. Karmarkar's algorithm is also faster in the worst case.
The ellipsoidal algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of
|
https://en.wikipedia.org/wiki/InterPro
|
InterPro is a database of protein families, protein domains and functional sites in which identifiable features found in known proteins can be applied to new protein sequences in order to functionally characterise them.
The contents of InterPro consist of diagnostic signatures and the proteins that they significantly match. The signatures consist of models (simple types, such as regular expressions or more complex ones, such as Hidden Markov models) which describe protein families, domains or sites. Models are built from the amino acid sequences of known families or domains and they are subsequently used to search unknown sequences (such as those arising from novel genome sequencing) in order to classify them. Each of the member databases of InterPro contributes towards a different niche, from very high-level, structure-based classifications (SUPERFAMILY and CATH-Gene3D) through to quite specific sub-family classifications (PRINTS and PANTHER).
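The simplest signature type mentioned above is a regular-expression motif. As a toy illustration of how such a model classifies an unknown sequence, the pattern below is the classic PROSITE N-glycosylation motif N-{P}-[ST]-{P}; the protein sequence is an invented example, not a real entry.

```python
# Scanning a sequence with a regular-expression signature, the
# simplest model type used by InterPro member databases.
import re

# PROSITE-style motif: Asn, anything but Pro, Ser/Thr, anything but Pro.
GLYCOSYLATION = re.compile(r"N[^P][ST][^P]")

def scan(sequence):
    """Return (start, matched_fragment) for every motif hit."""
    return [(m.start(), m.group()) for m in GLYCOSYLATION.finditer(sequence)]

hits = scan("MKNVSAGPNPSTWLNQTA")  # invented sequence
print(hits)
```

Real member databases use far richer models (profiles, Hidden Markov models), but the classification step is the same in spirit: a model, a sequence, and a list of significant matches.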
InterPro's intention is to provide a one-stop-shop for protein classification, where all the signatures produced by the different member databases are placed into entries within the InterPro database. Signatures which represent equivalent domains, sites or families are put into the same entry and entries can also be related to one another. Additional information such as a description, consistent names and Gene Ontology (GO) terms are associated with each entry, where possible.
Data contained in InterPro
InterPro contains three main entities: proteins, signatures (also referred to as "methods" or "models") and entries. The proteins in UniProtKB are also the central protein entities in InterPro. Information regarding which signatures significantly match these proteins is calculated as the sequences are released by UniProtKB and these results are made available to the public (see below). The matches of signatures to proteins are what determine how signatures are integrated together into InterPro entries: comparativ
|
https://en.wikipedia.org/wiki/Information%20integration%20theory
|
Information integration theory was proposed by Norman H. Anderson to describe and model how a person integrates information from a number of sources in order to make an overall judgment. The theory proposes three functions.
The valuation function is an empirically derived mapping of stimuli onto an interval scale. It is unique up to a positive linear transformation (y = ax + b, with a > 0), the defining property of an interval scale.
The integration function is an algebraic function combining the subjective values of the information. "Cognitive algebra" refers to the class of functions that are used to model the integration process. They may be adding, averaging, weighted averaging, multiplying, etc.
The response production function is the process by which the internal impression is translated into an overt response.
Information integration theory differs from other theories in that it is not erected on a consistency principle such as balance or congruity but rather relies on algebraic models.
The theory is also referred to as functional measurement, because it can provide validated scale values of the stimuli.
An elementary treatment of the theory, along with a Microsoft Windows program for carrying out functional measurement analysis, is provided in the textbook by David J. Weiss.
Integration models
There are three main types of algebraic models used in information integration theory: adding, averaging, and multiplying.
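The three rule types can be written as simple functions over subjective scale values. The stimulus values and weights below are illustrative numbers on an assumed interval scale, not data from any experiment.

```python
# Toy versions of the algebraic integration rules of "cognitive
# algebra": adding, averaging, weighted averaging and multiplying.

def adding(values):
    return sum(values)

def averaging(values):
    return sum(values) / len(values)

def weighted_averaging(values, weights):
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def multiplying(values):
    out = 1.0
    for v in values:
        out *= v
    return out

vals = [4.0, 6.0]                              # hypothetical scale values
print(adding(vals))                            # 10.0
print(averaging(vals))                         # 5.0
print(weighted_averaging(vals, [2.0, 1.0]))    # (2*4 + 1*6)/3
print(multiplying(vals))                       # 24.0
```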
Adding models
G = reaction/overt behavior
A, B, C = contributing factors
G1 = A + B (Condition 1)
G2 = A + B + C (Condition 2)
Typically an experiment is designed so that A and B are held constant, and only C differs between conditions, so that
G2 − G1 = C.
There are two special cases known as discounting and augmentation.
Discounting: The value of any factor is reduced if other factors that produce the same effect are added.
Example: C is not present or has a value of zero. If C is positive, then G1 must be less than G2.
Augmentation: An inverse version of the typical model.
Example: If C is negative, then G1 must be greater than G2.
Two advantages of adding models:
Participants are not required to hav
|
https://en.wikipedia.org/wiki/Certified%20email
|
Certified email (known as Posta elettronica certificata in Italy, or PEC in short) is a special type of email in use in Italy, Switzerland, Hong Kong and Germany. Certified email is meant to provide a legal equivalent of the traditional registered mail, where users are able to legally prove that a given email has been sent and received by paying a small fee.
Certified email is mainly used in Italy, but there are ongoing efforts to extend its legal validity under the framework of the European Union.
Description
A certified email can only be sent using a special Certified Email Account provided by a registered provider.
When a certified email is sent, the sender's provider will release a receipt of the successful (or failed) transaction. This receipt has legal value and it includes precise information about the time the certified email was sent.
Similarly, the receiver's provider will deliver the message in the appropriate certified email account and will then release to the sender a receipt of successful (or failed) delivery, indicating on this receipt the exact time of delivery.
If either of these two receipts is lost by the sender, providers are required to issue a proof of the transaction with equal legal validity, provided this proof is requested within 30 months of delivery.
In terms of user experience, a certified email account is very similar to a normal email account.
The only additional features are the receipts, received as attachments, providing details and timestamps for all transactions.
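The receipts described above essentially bind a message digest to a timestamp under a key only the provider holds. The sketch below illustrates that idea; real certified-email systems use S/MIME signatures issued by accredited providers, so the HMAC scheme, key and field names here are simplified stand-ins.

```python
# Simplified model of a provider-issued delivery receipt: a keyed
# tag over (event, message digest, timestamp). Hypothetical key and
# message; not the actual PEC wire format.
import hashlib
import hmac
import json

PROVIDER_KEY = b"demo-provider-secret"  # assumed provider-held key

def issue_receipt(raw_message: bytes, timestamp: str, event: str) -> dict:
    digest = hashlib.sha256(raw_message).hexdigest()
    payload = json.dumps(
        {"event": event, "sha256": digest, "time": timestamp}, sort_keys=True
    )
    tag = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_receipt(receipt: dict) -> bool:
    expected = hmac.new(
        PROVIDER_KEY, receipt["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, receipt["tag"])

r = issue_receipt(b"contract text", "2024-05-01T10:00:00Z", "delivered")
print(verify_receipt(r))  # True
```

Because the tag covers both the digest and the timestamp, neither the message content nor the claimed delivery time can be altered without invalidating the receipt.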
A certified email account can only handle certified email and can't be used to send regular email.
Technical process
The development of this email service has conceptual variations that are dominated by two-party scenarios with only one sender and one receiver as well as a trusted third party (TTP) serving as a mediator. As in traditional registered mail, many certified email technologies call for the parties involved to trust the TTP, or the "postman", because it h
|
https://en.wikipedia.org/wiki/Radio%20Battalion
|
Radio Battalions are tactical signals intelligence units of Marine Corps Intelligence. There are currently three operational Radio Battalions in the Marine Corps organization: 1st, 2nd, and 3rd. In fleet operations, teams from Radio Battalions are most often attached to the command element of Marine Expeditionary Units.
Concept
A Radio Battalion consists mainly of signals intelligence and electronic intelligence operators organized into smaller tactical units with different roles. Basic collection teams consist of 4–6 operators using specialized equipment based in HMMWVs. A variation on this is the MEWSS (Mobile Electronic Warfare Support System), which is an amphibious light armored vehicle equipped with similar electronic warfare equipment. MEWSS crews serve dual roles as electronic warfare operators and LAV crewmen. Radio Reconnaissance Platoons serve in a special operations role where the use of standard collection teams is not possible, such as covert infiltrations or tactical recovery of aircraft and personnel (TRAP).
History
In June 1943, 2nd Radio Intelligence Platoon was activated at Camp Elliott, California. The unit took part in the Battle of Guadalcanal and the Battle of Peleliu. The 3rd Radio Intelligence Platoon was also formed in June 1943 and took part in the Battles of the Kwajalein Atoll and Okinawa.
General Alfred M. Gray Jr., who served as the 29th Commandant of the Marine Corps from 1 July 1987 until his retirement on 30 June 1991, is considered the founding father of post-war Marine Corps signals intelligence (SIGINT). In 1955 then Captain Gray was tasked with forming two SIGINT units, one to be assigned to Europe and the other to the Pacific area, chosen from Marines undergoing Manual Morse intercept training. Captain Gray established the Pacific team at NSG Kamiseya, Japan in May 1956.
In 1958 then-Captain Gray was assigned to Hawaii to form and activate the 1st Radio Company, a tactical signals intelligence (SIGINT) unit, where he
|
https://en.wikipedia.org/wiki/EUROCAT%20%28medicine%29
|
EUROCAT, founded in 1979, is a network of population-based congenital anomaly registries across Europe for the monitoring, surveillance and research of congenital anomalies.
As of January 2023, the network has 43 member registries from 23 countries, covering more than 25% of European births per year. Detailed registry descriptions can be found on the EUROCAT website.
Objectives
EUROCAT’s objectives are to:
Provide essential epidemiologic information on congenital anomalies in Europe.
Facilitate the early warning of teratogenic exposures.
Evaluate the effectiveness of primary prevention.
Assess the impact of developments in prenatal screening.
Act as an information and resource centre regarding clusters, exposures or risk factors of concern.
Provide a ready collaborative network and infrastructure for research related to the causes and prevention of congenital anomalies and the treatment and care of affected children.
Act as a catalyst for the setting up of registries throughout Europe collecting comparable, standardised data.
History
EUROCAT was founded in 1979 as the European Concerted Action on Congenital Anomalies and Twins. The EUROCAT Central Registry was based in Brussels from 1979 to 1999 and at the University of Ulster from 2000 to 2014.
In 2015 the Central Registry was transferred to the European Commission’s Joint Research Centre (JRC) in Ispra, Italy, and it is now an integral part of the European Platform on Rare Disease Registration.
Leadership is provided by the JRC-EUROCAT Management Committee, comprising elected members from the EUROCAT congenital anomaly registries and representatives from the JRC.
Methodology
All EUROCAT member registries use multiple sources of information to ascertain cases in live births, late fetal deaths (>20 weeks gestational age) and terminations of pregnancy for fetal anomaly at any gestational age.
EUROCAT has achieved a high level of data harmonisation and interoperability between registri
|
https://en.wikipedia.org/wiki/Robinson%20oscillator
|
The Robinson oscillator is an electronic oscillator circuit originally devised for use in the field of continuous wave (CW) nuclear magnetic resonance (NMR). It was a development of the marginal oscillator. Strictly one should distinguish between the marginal oscillator and the Robinson oscillator, although sometimes they are conflated and referred to as a Robinson marginal oscillator. Modern magnetic resonance imaging (MRI) systems are based on pulsed (or Fourier transform) NMR; they do not rely on the use of such oscillators.
The key feature of a Robinson oscillator is a limiter in the feedback loop. This means that a square wave current, of accurately-fixed amplitude, is fed back to the tank circuit. The tank selects the fundamental of the square wave, which is amplified and fed back. This results in an oscillation with well-defined amplitude; the voltage across the tank circuit is proportional to its Q-factor.
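The claim that the tank selects the fundamental of a fixed-amplitude square wave can be checked numerically: the first Fourier component of a ±I0 square wave has amplitude (4/π)·I0. The sketch below verifies this by direct integration; it is an illustration of the principle, not a circuit simulation.

```python
# Numeric check: fundamental Fourier sine coefficient of a unit
# square wave (the limiter's output) is 4/pi. The tank circuit
# selects this component, so the oscillation amplitude is fixed.
import math

def fundamental_amplitude(i0=1.0, samples=100_000):
    """First Fourier sine coefficient b1 of a +/-i0 square wave,
    computed by midpoint-rule integration over one period."""
    total = 0.0
    for k in range(samples):
        t = (k + 0.5) / samples            # t in (0, 1), one period
        square = i0 if t < 0.5 else -i0    # ideal limiter output
        total += square * math.sin(2 * math.pi * t)
    return 2.0 * total / samples           # b1 = (2/T) * integral

b1 = fundamental_amplitude()
print(b1)  # close to 4/pi ~= 1.2732
```

The voltage across the tank is this fundamental current times the tank's resonant impedance, which is why it scales with the Q-factor as stated above.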
The marginal oscillator has no limiter. It is arranged for the working point of one of the amplifier elements to operate at a nonlinear part of its characteristic and this determines the amplitude of oscillation. This is not as stable as the Robinson arrangement.
The Robinson oscillator was invented by British physicist Neville Robinson.
|
https://en.wikipedia.org/wiki/Semantic%20Interoperability%20Community%20of%20Practice
|
Semantic Interoperability Community of Practice (SICoP) is a group of people who seek to make the Semantic Web operational in their respective settings by achieving "semantic interoperability" and "semantic data integration".
SICoP seeks to enable Semantic Interoperability, specifically the "operationalizing" of relevant technologies and approaches, through online conversation, meetings, tutorials, conferences, pilot projects, and other activities aimed at developing and disseminating best practices.
The individuals making up this Community of Practice come from various settings; however, SICoP claims neither formal nor implied endorsement by any organization.
See also
Semantic Wiki Information Management
Semantics
Interoperability
|
https://en.wikipedia.org/wiki/Sawney
|
Sawney (sometimes Sandie/y, or Sanders, or Sannock) was an English nickname for a Scotsman, now obsolete, and playing much the same linguistic role that "Jock" does now. The name is a Lowland Scots diminutive of the favourite Scottish first name Alexander (also Alasdair in Scottish Gaelic form, anglicised into Alistair) from the last two syllables. The English commonly abbreviate the first two syllables into "Alec".
From the days after the accession of James VI to the English throne under the title of James I, to the time of George III and the Bute administration, when Scotsmen were exceedingly unpopular and Dr. Samuel Johnson - the great Scotophobe, and son of a Scottish bookseller at Lichfield - thought it prudent to disguise his origin, and overdid his prudence by maligning his father's countrymen, it was customary to designate a Scotsman a "Sawney". This vulgar epithet, however, was dying out fast by the 1880s, and was obsolete by the 20th century.
Sawney was a common figure of fun in English cartoons. A particularly stereotypical example, Sawney in the Bog House, shows a stereotypical Scottish Highlander using a communal bench toilet by sticking one of his legs down each of the holes. This was originally published in London in June 1745, just over a month before Charles Edward Stuart landed in Scotland to begin the Jacobite rising of 1745. In this version Sawney's excreta emerge from below his kilt and flow across the bench. The idea was revived in a different and slightly more decorous version of 1779, which is attributed to the young James Gillray. An inscription reads:
'Tis a bra' bonny seat, o' my saul, Sawney cries,
I never beheld sic before with me Eyes,
Such a place in aw' Scotland I never could meet,
For the High and the Low ease themselves in the Street.
It has also been suggested that the Galloway cannibal Sawney Bean may have been a fabrication to emphasise the alleged savagery of the Scots.
The name was sometimes also used in the term "Sawney Ha'peth".
|
https://en.wikipedia.org/wiki/Bulbus%20cordis
|
The bulbus cordis (the bulb of the heart) is a part of the developing heart that lies ventral to the primitive ventricle after the heart assumes its S-shaped form. The superior end of the bulbus cordis is also called the conotruncus.
Structure
In the early tubular heart, the bulbus cordis is the major outflow pathway. It receives blood from the primitive ventricle, and passes it to the truncus arteriosus. After heart looping, it is located slightly to the left of the ventricle.
Development
The early bulbus cordis is formed by the fifth week of development. The truncus arteriosus is derived from it later.
The adjacent walls of the bulbus cordis and ventricle approximate, fuse, and finally disappear, and the bulbus cordis now communicates freely with the right ventricle, while the junction of the bulbus with the truncus arteriosus is brought directly ventral to and applied to the atrial canal.
By the upgrowth of the ventricular septum the bulbus cordis is separated from the left ventricle, but remains an integral part of the right ventricle, of which it forms the infundibulum.
Together, the bulbus cordis and the primitive ventricle give rise to the ventricles of the formed heart.
Other animals
The bulbus cordis is shared in the development of many animals, including frogs and fish.
Additional images
|
https://en.wikipedia.org/wiki/Discharger
|
A discharger in electronics is a device or circuit that releases stored energy or electric charge from a battery, capacitor or other source.
Discharger types include:
metal probe with insulated handle & ground wire, and sometimes resistor (for capacitors)
resistor (for batteries)
parasitic discharge (for batteries arranged in parallel)
more complex electronic circuits (for batteries)
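A resistor discharger for capacitors relies on simple RC decay: V(t) = V0·exp(−t/RC). The sketch below computes how long such a discharge takes; the component values are invented for illustration.

```python
# RC discharge arithmetic behind a probe-and-resistor discharger.
# Component values are hypothetical examples.
import math

def discharge_time(v0, v_target, r_ohm, c_farad):
    """Seconds for a capacitor to decay from v0 to v_target
    through r_ohm: t = RC * ln(v0 / v_target)."""
    return r_ohm * c_farad * math.log(v0 / v_target)

def voltage_after(v0, t, r_ohm, c_farad):
    """Capacitor voltage after t seconds of discharge."""
    return v0 * math.exp(-t / (r_ohm * c_farad))

# Hypothetical example: 470 uF charged to 400 V, bled through 100 kOhm.
t = discharge_time(400.0, 50.0, 100e3, 470e-6)
print(round(t, 1))  # ~97.7 s
```

The slow decay for large R is why high-energy capacitors are often discharged through a low-value power resistor rather than left to a bleeder alone.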
See also Bleeder resistor
|
https://en.wikipedia.org/wiki/Cohesin
|
Cohesin is a protein complex that mediates sister chromatid cohesion, homologous recombination, and DNA looping. Cohesin is formed of SMC3, SMC1, SCC1 and SCC3 (SA1 or SA2 in humans). Cohesin holds sister chromatids together after DNA replication until anaphase when removal of cohesin leads to separation of sister chromatids. The complex forms a ring-like structure and it is believed that sister chromatids are held together by entrapment inside the cohesin ring. Cohesin is a member of the SMC family of protein complexes which includes Condensin, MukBEF and SMC-ScpAB.
Cohesin was separately discovered in budding yeast (Saccharomyces cerevisiae) both by Douglas Koshland and Kim Nasmyth in 1997.
Structure
Cohesin is a multi-subunit protein complex, made up of SMC1, SMC3, RAD21 and SCC3 (SA1 or SA2). SMC1 and SMC3 are members of the Structural Maintenance of Chromosomes (SMC) family. SMC proteins have two main structural characteristics: an ATP-binding cassette-like 'head' domain with ATPase activity (formed by the interaction of the N- and C- terminals) and a hinge domain that allows dimerization of SMCs. The head and the hinge domains are connected to each other via long anti-parallel coiled coils. The dimer is present in a V-shaped form, connected by the hinges.
The N-terminal domain of RAD21 contains two α-helices which form a three-helix bundle with the coiled coil of SMC3. The central region of RAD21 is thought to be largely unstructured but contains several binding sites for regulators of cohesin. This includes a binding site for SA1 or SA2, recognition motifs for separase cleavage and a region that is competitively bound by PDS5A, PDS5B or NIPBL. The C-terminal domain of RAD21 forms a winged helix that binds two β-sheets in the Smc1 head domain.
Once RAD21 binds the SMC proteins, SCC3 can also associate with RAD21. When RAD21 binds on both SMC1 and SMC3, the cohesin complex forms a closed ring structure. The interfaces between the SMC subunits and RAD21 c
|
https://en.wikipedia.org/wiki/Rail%20transport%20modelling%20scales
|
Rail transport modelling uses a variety of scales (ratio between the real world and the model) to ensure scale models look correct when placed next to each other. Model railway scales are standardized worldwide by many organizations and hobbyist groups. Some of the scales are recognized globally, while others are less widespread and, in many cases, virtually unknown outside their circle of origin. Scales may be expressed as a numeric ratio (e.g. 1/87 or 1:87) or as letters defined in rail transport modelling standards (e.g. HO, OO, N, O, G, TT and Z.) The majority of commercial model railway equipment manufacturers base their offerings on Normen Europäischer Modellbahnen (NEM) or National Model Railroad Association (NMRA) standards in most popular scales.
Terminology
Although scale and gauge are often confused, scale means the ratio between a unit of measurement on a model compared with a unit of measurement in the corresponding full-size prototype, while gauge is the distance between the two running rails of the track. About 60% of the world's railways have a track gauge of 1,435 mm (4 ft 8+1/2 in), known as "standard gauge", but there are also narrow-gauge railways where the track gauge is less than standard and broad-gauge railways where the gauge is wider. In a similar manner, a scale model railway may have several track gauges in one scale.
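The scale/gauge relationship above is just a division: the prototype gauge divided by the scale ratio gives the model track gauge. The sketch below shows this for a few common scales; note that published standards round these figures (HO 16.5 mm, N 9 mm, Z 6.5 mm).

```python
# Model track gauge from prototype gauge and scale ratio.

STANDARD_GAUGE_MM = 1435  # prototype standard gauge

def model_gauge_mm(scale_ratio, prototype_gauge_mm=STANDARD_GAUGE_MM):
    """Track gauge, in mm, for a model built at 1:scale_ratio."""
    return prototype_gauge_mm / scale_ratio

for name, ratio in [("HO", 87), ("N", 160), ("Z", 220)]:
    print(name, round(model_gauge_mm(ratio), 1))
```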
In addition to the scale and gauge issue, rail transport modelling standards are also applied to other attributes such as catenary, rolling stock wheel profile, loading gauge, curve radii and grades for slopes, to ensure interoperation of scale models produced by different manufacturers. Globally, the two dominating standard organizations are NMRA in North America and MOROP in Europe with their NEM standard.
History of scale standards
The first model railways were not built to any particular scale and were more like toys than miniature representations. Eventually, models became more accurate, and benefits of standardization became more obvious. The
|
https://en.wikipedia.org/wiki/Intertidal%20ecology
|
Intertidal ecology is the study of intertidal ecosystems, where organisms live between the low and high tide lines. At low tide, the intertidal is exposed whereas at high tide, the intertidal is underwater. Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as between different species of intertidal organisms within a particular intertidal community. The most important environmental and species interactions may vary based on the type of intertidal community being studied, the broadest of classifications being based on substrates—rocky shore and soft bottom communities.
Organisms living in this zone have a highly variable and often hostile environment, and have evolved various adaptations to cope with and even exploit these conditions. One easily visible feature of intertidal communities is vertical zonation, where the community is divided into distinct vertical bands of specific species going up the shore. A species' ability to cope with abiotic factors associated with emersion stress, such as desiccation, determines its upper limit, while biotic interactions, e.g. competition with other species, set its lower limit.
Intertidal regions are utilized by humans for food and recreation, but anthropogenic actions also have major impacts, with overexploitation, invasive species and climate change being among the problems faced by intertidal communities. In some places Marine Protected Areas have been established to protect these areas and aid in scientific research.
Types of intertidal communities
Intertidal habitats can be characterized as having either hard or soft bottoms substrates. Rocky intertidal communities occur on rocky shores, such as headlands, cobble beaches, or human-made jetties. Their degree of exposure may be calculated using the Ballantine Scale. Soft-sediment habitats include sandy beaches, and intertidal wetlands (e.g., mudflats and salt marshes). These habitats differ in levels of abio
|
https://en.wikipedia.org/wiki/The%20Complexity%20of%20Songs
|
"The Complexity of Songs" is a scholarly article published by computer scientist Donald Knuth in 1977, as an in-joke about computational complexity theory. The article capitalizes on the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts with little or no meaningful content. The article notes that a song of length N words may be produced by remembering, e.g., only O(√N) words (the "space complexity" of the song) or even less.
Article summary
Knuth writes that "our ancient ancestors invented the concept of refrain" to reduce the space complexity of songs, which becomes crucial when a large number of songs is to be committed to one's memory. Knuth's Lemma 1 states that if N is the length of a song, then the refrain decreases the song complexity to cN, where the factor c < 1.
Knuth further demonstrates a way of producing songs with O(√N) complexity, an approach "further improved by a Scottish farmer named O. MacDonald".
More ingenious approaches yield songs of complexity O(log N), a class known as "m bottles of beer on the wall".
Finally, the progress during the 20th century—stimulated by the fact that "the advent of modern drugs has led to demands for still less memory"—leads to the ultimate improvement: arbitrarily long songs with space complexity O(1) exist, e.g. a song defined by the recurrence relation
S_k = V_k S_{k−1},  V_k = 'That's the way,' U 'I like it,' U,  for all k ≥ 1
U = 'uh huh,' 'uh huh'
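The O(1)-space construction can be made concrete with a generator: it emits an arbitrarily long song while holding only a fixed amount of state, however many verses are requested. The verse wording follows the recurrence's refrain; punctuation details are an assumption.

```python
# Constant-space song generation: memory use does not grow with the
# number of verses, only the loop counter changes.
def thats_the_way(verses):
    U = "uh huh, uh huh"
    for _ in range(verses):
        yield f"That's the way, {U}, I like it, {U}"

song = list(thats_the_way(3))
print(len(song))   # 3
print(song[0])
```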
Further developments
Prof. Kurt Eisemann of San Diego State University, in his letter to the Communications of the ACM, further improves the latter seemingly unbeatable estimate. He begins with an observation that for practical applications the value of the "hidden constant" c in the Big Oh notation may be crucial in making the difference between feasibility and unfeasibility: for example, a constant value of 10^80 would exceed the capacity of any known device. He further notices that a technique has already been known in Mediaeval Europe whereby textual content of an arbitra
|
https://en.wikipedia.org/wiki/Aplasia
|
Aplasia (from Greek a, "not", "no" + plasis, "formation") is a birth defect where an organ or tissue is wholly or largely absent. It is caused by a defect in a developmental process.
Aplastic anemia is the failure of the body to produce blood cells. It may occur at any time, and has multiple causes.
Examples
Acquired pure red cell aplasia
Aplasia cutis congenita
Aplastic anemia
Germ cell aplasia, also known as Sertoli cell-only syndrome
Radial aplasia
Thymic aplasia, which is found in DiGeorge syndrome and also occurs naturally as part of the gradual loss of function of the immune system later in life
See also
Atrophy
Hyperplasia
Hypoplasia
Neoplasia
List of biological development disorders
|
https://en.wikipedia.org/wiki/Distributed%20lock%20manager
|
Operating systems use lock managers to organise and serialise the access to resources. A distributed lock manager (DLM) runs in every machine in a cluster, with an identical copy of a cluster-wide lock database. In this way a DLM provides software applications which are distributed across a cluster on multiple machines with a means to synchronize their accesses to shared resources.
DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access. VMScluster, the first clustering system to come into widespread use, relied on the OpenVMS DLM in just this way.
Resources
The DLM uses a generalized concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows:
Database
Table
Record
Field
A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked.
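The parent-before-child rule described above can be sketched in a few lines. The resource names and the simplified "one lock per resource" model are assumptions for illustration; a real DLM also distinguishes the six lock modes discussed below.

```python
# Sketch of hierarchical resource locking: a resource such as
# "Database/Table/Record" can only be locked once every ancestor
# ("Database", then "Database/Table") is already held.
class LockHierarchyError(Exception):
    pass

class SimpleDLM:
    def __init__(self):
        self.held = set()

    def lock(self, path):
        parts = path.split("/")
        for depth in range(1, len(parts)):
            parent = "/".join(parts[:depth])
            if parent not in self.held:
                raise LockHierarchyError(f"must lock {parent!r} first")
        self.held.add(path)

    def unlock(self, path):
        self.held.discard(path)

dlm = SimpleDLM()
dlm.lock("Database")
dlm.lock("Database/Table")
dlm.lock("Database/Table/Record")
print(sorted(dlm.held))
```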
Lock modes
A process running within a VMSCluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity granted; it is possible to convert a lock to a higher or lower lock mode. When all processes have unlocked a resource, the system's information about the resource is destroyed.
Null (NL). Indicates interes
|
https://en.wikipedia.org/wiki/Amiga%20software
|
Amiga software is computer software engineered to run on the Amiga personal computer. Amiga software covers many applications, including productivity, digital art, games, commercial, freeware and hobbyist products. The market was active in the late 1980s and early 1990s but then dwindled. Most Amiga products were originally created directly for the Amiga computer (most taking advantage of the platform's unique attributes and capabilities), and were not ported from other platforms.
During its lifetime, thousands of applications were produced with over 10,000 utilities (collected into the Aminet repository). However, it was perceived as a games machine from outside its community of experienced and professional users. More than 12,000 games were available. New applications for the three existing Amiga-like operating systems are generally ported from the open source (mainly from Linux) software base.
Many noteworthy Amiga software products were ported to other platforms or inspired new programs, such as those aimed at 3D rendering or audio creation, e.g. LightWave 3D, Cinema 4D, and Blender (whose development began on the Amiga platform). The first multimedia word processors for Amiga, such as TextCraft, Scribble!, Rashumon, and Wordworth, were the first on the market to implement full-color WYSIWYG (other platforms then offered only black-and-white previews) and to allow the embedding of audio files.
History and characteristics
From the origins to 1988
1985
Amiga software started its history with the 1985 Amiga 1000. Commodore International released the programming specifications and development computers to various software houses, prominently Electronic Arts, a software house that then offered Deluxe Paint, Deluxe Music and others. Electronic Arts also developed the Interchange File Format (IFF) file container, to store project files realized by Deluxe Paint and Deluxe Music. IFF became the de facto standard in
|
https://en.wikipedia.org/wiki/Rabbi%20Nehemiah
|
Rabbi Nehemiah was a rabbi who lived circa 150 AD (fourth generation of tannaim).
He was one of the great students of Rabbi Akiva, and one of the rabbis who received semicha from R' Judah ben Baba.
The Talmud equated R' Nechemiah with Rabbi Nehorai: "His name was not Rabbi Nehorai, but Rabbi Nechemiah."
His son, R' Yehudah BeRabi Nechemiah, studied before Rabbi Tarfon, but died at a young age after slighting R' Tarfon's honor; R' Akiva had predicted his death.
Teachings
In the Talmud, all anonymous sayings in the Tosefta are attributed to R' Nechemiah. However, Sherira Gaon said that this does not mean they were said by R' Nechemiah, but that the laws in question were transmitted by R' Nechemiah.
In the Talmud, many times he disagrees with R' Judah bar Ilai on matters of halacha.
He is attributed as the author of the Mishnat ha-Middot (ca. AD 150), making it the earliest known Hebrew text on geometry, although some historians assign the text to a later period by an unknown author.
The Mishnat ha-Middot argues against the common belief that the Bible defines the geometric ratio (pi) as being exactly equal to 3, based on the description in 1 Kings 7:23 (and 2 Chronicles 4:2) of the great bowl situated outside the Temple of Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits. He maintained that the diameter of the bowl was measured from the outside brim, while the circumference was measured along the inner brim, which with a brim that is one handbreadth wide (as described in the subsequent verses 1 Kings 7:24 and 2 Chronicles 4:3) yields a ratio from the circular rim closer to the actual value of π.
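The arithmetic of this argument can be checked directly, assuming the common rabbinic convention that one cubit equals six handbreadths (an assumption made here for illustration):

```python
from fractions import Fraction

outer_diameter = Fraction(10)    # cubits, measured at the outer brim
circumference  = Fraction(30)    # cubits, measured along the inner brim
brim           = Fraction(1, 6)  # one handbreadth, expressed in cubits

# Subtract the brim on both sides to get the inner diameter.
inner_diameter = outer_diameter - 2 * brim   # 29/3 cubits
ratio = circumference / inner_diameter       # 90/29
print(float(ratio))                          # ≈ 3.103, closer to pi than 3
```

Under this assumption the implied ratio is 90/29 ≈ 3.103, which is indeed closer to π than the naive value of 3.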
Quotes
"Due to the sin of baseless hatred, great strife is found in a man's home, and his wife miscarries, and his sons and daughters die at a young age."
See also
History of numerical approximations of π
|
https://en.wikipedia.org/wiki/Append
|
In computer programming, append is the operation for concatenating linked lists or arrays in some high-level programming languages.
Lisp
Append originates in the programming language Lisp. The append procedure takes zero or more (linked) lists as arguments, and returns the concatenation of these lists.
(append '(1 2 3) '(a b) '() '(6))
;Output: (1 2 3 a b 6)
Since the append procedure must completely copy all of its arguments except the last, both its time and space complexity are O(n) for a list of n elements. It may thus be a source of inefficiency if used injudiciously in code.
The nconc procedure (called append! in Scheme) performs the same function as append, but destructively: it alters the cdr of each argument (save the last), pointing it to the next list.
Implementation
Append can easily be defined recursively in terms of cons. The following is a simple implementation in Scheme, for two arguments only:
(define append
(lambda (ls1 ls2)
(if (null? ls1)
ls2
(cons (car ls1) (append (cdr ls1) ls2)))))
Append can also be implemented using fold-right:
(define append
(lambda (a b)
(fold-right cons b a)))
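The same recursion can be sketched in Python, with 2-tuples standing in for cons cells (an illustrative translation only, not part of any standard library):

```python
# Cons cells as 2-tuples: (head, rest); the empty list is None.
def cons(head, rest):
    return (head, rest)

def append(ls1, ls2):
    # Mirrors the Scheme definition: copy ls1's cells, then share ls2.
    if ls1 is None:
        return ls2
    head, rest = ls1
    return cons(head, append(rest, ls2))

a = cons(1, cons(2, None))
b = cons(3, None)
print(append(a, b))   # (1, (2, (3, None)))
```

Note that the result shares structure with `b` but not with `a`, which is exactly why append's cost is proportional to the length of all arguments except the last.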
Other languages
Following Lisp, other high-level programming languages which feature linked lists as primitive data structures have adopted an append operation. As a list-append operator, Haskell uses ++ and OCaml uses @.
Other languages use the + or ++ symbols to nondestructively concatenate a string, list, or array.
Prolog
The logic programming language Prolog features a built-in append predicate, which can be implemented as follows:
append([],Ys,Ys).
append([X|Xs],Ys,[X|Zs]) :-
append(Xs,Ys,Zs).
This predicate can be used for appending, but also for picking lists apart. Calling
?- append(L,R,[1,2,3]).
yields the solutions:
L = [], R = [1, 2, 3] ;
L = [1], R = [2, 3] ;
L = [1, 2], R = [3] ;
L = [1, 2, 3], R = []
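This relational behavior — one predicate used both to build and to take apart lists — can be mimicked in Python by enumerating every split of a list (a sketch for comparison, not an equivalent of Prolog's unification):

```python
def splits(xs):
    """Yield every (L, R) pair with L + R == xs,
    mirroring the Prolog query ?- append(L, R, Xs)."""
    for i in range(len(xs) + 1):
        yield xs[:i], xs[i:]

for left, right in splits([1, 2, 3]):
    print(left, right)
# [] [1, 2, 3]  ...  [1, 2, 3] []
```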
Miranda
In Miranda, this right-fold, from Hughes (1989:5-6), has the same semantics (by example) as the Scheme imple
|
https://en.wikipedia.org/wiki/Recovery%20%28metallurgy%29
|
In metallurgy, recovery is a process by which a metal or alloy's deformed grains can reduce their stored energy by the removal or rearrangement of defects in their crystal structure. These defects, primarily dislocations, are introduced by plastic deformation of the material and act to increase the yield strength of a material. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in the ductility. As a result, recovery may be considered beneficial or detrimental depending on the circumstances.
Recovery is related to the similar processes of recrystallization and grain growth, each of them being stages of annealing. Recovery competes with recrystallization, as both are driven by the stored energy, but is also thought to be a necessary prerequisite for the nucleation of recrystallized grains. It is so called because there is a recovery of the electrical conductivity due to a reduction in dislocations. This creates defect-free channels, giving electrons an increased mean free path.
Definition
The physical processes that fall under the designations of recovery, recrystallization and grain growth are often difficult to distinguish in a precise manner. Doherty et al. (1998) stated:
"The authors have agreed that ... recovery can be defined as all annealing processes occurring in deformed materials that occur without the migration of a high-angle grain boundary"
Thus the process can be differentiated from recrystallization and grain growth as both feature extensive movement of high-angle grain boundaries.
If recovery occurs during deformation (a situation that is common in high-temperature processing) then it is referred to as 'dynamic' while recovery that occurs after processing is termed 'static'. The principal difference is that during dynamic recovery, stored energy continues to be introduced even as it is decreased by the recovery process - resulting in a form of dyna
|
https://en.wikipedia.org/wiki/Great%20Chardonnay%20Showdown
|
The Great Chardonnay Showdown, held in the spring of 1980, was organized by Craig Goldwyn, the wine columnist for the Chicago Tribune and the founder of the Beverage Testing Institute, with help from three Chicago wine stores. A total of 221 Chardonnays from around the world were selected for the blind wine competition. France and California were heavily represented, but entries from many countries around the world were included.
Competition
Five panels of five judges each first selected 19 finalists. Then ten of the original judges reviewed the finalists a second time. The winning wine was the Grgich Hills Wine Cellar Sonoma County Chardonnay 1977, which was the new winery's very first vintage. The winemaker was Mike Grgich, who had earlier made the Chateau Montelena Chardonnay that won first place among white wines at the historic Judgment of Paris wine competition.
See also
Globalization of wine
California wine
Grand European Jury Wine Tasting of 1997
|
https://en.wikipedia.org/wiki/Superior%20parietal%20lobule
|
The superior parietal lobule is bounded in front by the upper part of the postcentral sulcus, but is usually connected with the postcentral gyrus above the end of the sulcus. The superior parietal lobule contains Brodmann's areas 5 and 7.
Behind it is the lateral part of the parietooccipital fissure, around the end of which it is joined to the occipital lobe by a curved gyrus, the arcus parietooccipitalis. Below, it is separated from the inferior parietal lobule by the horizontal portion of the intraparietal sulcus.
The superior parietal lobule is involved with spatial orientation, and receives a great deal of visual input as well as sensory input from one's hand. In addition to spatial cognition and visual perception, it has also been associated with reasoning, working memory, and attention.
It is also involved with other functions of the parietal lobe in general.
There are major white matter pathway connections with the superior parietal lobule, such as the cingulum, SLF I, the superior parietal lobule connections of the medial longitudinal fasciculus, and other newly described superior parietal white matter connections.
Damage to the superior parietal lobule can cause contralateral astereognosis and hemispatial neglect. It is also associated with deficits on tests involving the manipulation and rearrangement of information in working memory, but not on working memory tests requiring only rehearsal and retrieval processes.
Additional images
|
https://en.wikipedia.org/wiki/Inferior%20parietal%20lobule
|
The inferior parietal lobule (subparietal district) lies below the horizontal portion of the intraparietal sulcus, and behind the lower part of the postcentral sulcus. It is also known as Geschwind's territory, after Norman Geschwind, the American neurologist who recognised its importance in the early 1960s. It is a part of the parietal lobe.
Structure
It is divided from rostral to caudal into two gyri:
One, the supramarginal gyrus (BA 40), arches over the upturned end of the lateral fissure; it is continuous in front with the postcentral gyrus, and behind with the superior temporal gyrus.
The second, the angular gyrus (BA 39), arches over the posterior end of the superior temporal sulcus, behind which it is continuous with the middle temporal gyrus.
In males, the inferior parietal lobule is significantly more voluminous in the left hemisphere compared to the right. This extreme asymmetry is not present in females, and may contribute to slight cognitive variations between the sexes.
In macaque neuroanatomy, this region is often divided into caudal and rostral portions, cIPL and rIPL, respectively. The cIPL is further divided into areas Opt and PG whereas rIPL is divided into PFG and PF areas.
Function
The inferior parietal lobule is involved in the perception of emotions in facial stimuli and the interpretation of sensory information. It is concerned with language, mathematical operations, and body image, particularly the supramarginal gyrus and the angular gyrus.
Clinical significance
Damage to the inferior parietal lobule of the dominant hemisphere results in Gerstmann's syndrome: right-to-left confusion, finger agnosia, dysgraphia and dyslexia, dyscalculia, contralateral hemianopia, or lower quadrantanopia. Damage to the inferior parietal lobule of the non-dominant hemisphere results in topographic memory loss, anosognosia, construction apraxia, dressing apraxia, contralateral sensory neglect, contralateral hemianopia, or lower quadr
|
https://en.wikipedia.org/wiki/Rudolf%20Jaenisch
|
Rudolf Jaenisch (born on April 22, 1942) is a Professor of Biology at MIT and a founding member of the Whitehead Institute for Biomedical Research. He is a pioneer of transgenic science, in which an animal’s genetic makeup is altered. Jaenisch has focused on creating genetically modified mice to study cancer, epigenetic reprogramming and neurological diseases.
Research
Jaenisch’s first breakthrough occurred in 1974, when he and Beatrice Mintz showed that foreign DNA could be integrated into the DNA of early mouse embryos. They injected retrovirus DNA into early mouse embryos and showed that leukemia DNA sequences had integrated into the mouse genome and also into that of its offspring. These mice were the first transgenic mammals in history.
His current research focuses on the epigenetic regulation of gene expression, which has led to major advances in creating embryonic stem cells and “induced pluripotent stem" (IPS) cells, as well as their therapeutic applications. In 2007, Jaenisch’s laboratory was one of the first three laboratories worldwide to report reprogramming cells taken from a mouse's tail into IPS cells. Jaenisch has since shown therapeutic benefits of IPS cell-based treatment for sickle-cell anemia and Parkinson's disease in mice. Additional research focuses on the epigenetic mechanisms involved in cancer and brain development.
Jaenisch’s therapeutic cloning research deals exclusively with mice, but he is an advocate for using the same techniques with human cells in order to advance embryonic stem cell research. However, in 2001, Jaenisch made a public case against human reproductive cloning, testifying before a U.S. House of Representatives subcommittee and writing an editorial in Science magazine.
Career
Jaenisch received his doctorate in medicine from the University of Munich in 1967, preferring the laboratory to the clinic. He became a postdoc at the Max Planck Institute in Munich, studying bacteriophages. He left Germany in 1970 for research
|
https://en.wikipedia.org/wiki/Equipotentiality
|
Equipotentiality refers to a psychological theory in both neuropsychology and behaviorism. Karl Spencer Lashley defined equipotentiality as "The apparent capacity of any intact part of a functional brain to carry out… the [memory] functions which are lost by the destruction of [other parts]". In other words, the brain can co-opt other areas to take over the role of the damaged part. Equipotentiality is tied to another term Lashley coined, the law of mass action. The law of mass action says that the efficiency of any complex function of the brain is reduced proportionately to how much damage the brain as a whole has sustained, but not to the damage of any particular area of the brain. In this context, "brain" refers to the cortex.
Historical context
In the 1800s, localization theories dominated views of how the brain functioned. Broca's speech area was discovered in 1861; in 1870 the cerebral cortex was identified as the motor center of the brain, and the general visual and auditory areas were defined in the cerebral cortex. Behaviorism at the time also held that learned responses were series of specific connections in the cerebral cortex. Lashley argued that one should then be able to locate these connections in a part of the brain, and he systematically looked for where learning was localized.
Experiments
While working on his PhD in genetics, Lashley began a number of tests on brain tissue and the idea of localization. Lashley wanted to focus mainly on behaviors that could be observed and an easy way to do that was to study white rats in a controlled setting. A fellow researcher, Shepherd Ivory Franz, also shared the common interest of studying localization and studying only things that could be observed. Franz had already done previous work with lesions in cat brains and puzzle boxes so Lashley and Franz decided to team up and work with rats.
In their first experiments, Lashley was in charge of building different
|
https://en.wikipedia.org/wiki/Torula
|
Torula (Cyberlindnera jadinii) is a species of yeast.
Use
Torula, in its inactive form (usually labeled as torula yeast), is widely used as a flavoring in processed foods and pet foods. It is often grown on wood liquor, a byproduct of paper production, which is rich in wood sugars (xylose). It is pasteurized and spray-dried to produce a fine, light grayish-brown powder with a slightly yeasty odor and gentle, slightly meaty taste.
Cyberlindnera jadinii (which in these contexts is often still labelled with its synonym Candida utilis) can be used, in a blend of various other yeasts, as secondary cheese starter culture "... to inoculate pasteurised milk, which mimic the natural yeast flora of raw milk and improve cheese flavour. Other functions of the added yeast organisms are the neutralisation of the curd (lactate degradation) and galactose consumption."
Like the flavor enhancer monosodium glutamate (MSG), torula is rich in glutamic acid. Therefore, it has become a popular replacement among manufacturers wishing to eliminate MSG or hide flavor enhancer usage in an ingredients list. It also enables the marketing of "all-natural" ingredients.
Torula finds accepted use in Europe and California for the organic control of olive flies. When dissolved in water, it serves as a food attractant, with or without additional pheromone lures, in McPhail and OLIPE traps, which drown the insects. In field trials in Sonoma County, California, mass trappings reduced crop damage to an average of 30% compared to almost 90% in untreated controls.
See also
Nutritional yeast
|
https://en.wikipedia.org/wiki/Magneto-optical%20trap
|
In condensed matter physics, a magneto-optical trap (MOT) is an apparatus which uses laser cooling and a spatially-varying magnetic field to create a trap which can produce samples of cold, neutral atoms. Temperatures achieved in a MOT can be as low as several microkelvin, depending on the atomic species, which is two or three times the photon recoil limit. However, for atoms with an unresolved hyperfine structure, such as , the temperature achieved in a MOT will be higher than the Doppler cooling limit.
A MOT is formed from the intersection of a weak, quadrupolar, spatially-varying magnetic field and six circularly-polarized, red-detuned, optical molasses beams. As atoms travel away from the field zero at the center of the trap (halfway between the coils), the spatially-varying Zeeman shift brings an atomic transition into resonance which gives rise to a scattering force that pushes the atoms back towards the center of the trap. This is why a MOT traps atoms, and because this force arises from photon scattering in which atoms receive momentum "kicks" in the direction opposite their motion, it also slows the atoms (i.e. cools them), on average, over repeated absorption and spontaneous emission cycles. In this way, a MOT is able to trap and cool atoms with initial velocities of hundreds of meters per second down to tens of centimeters per second (again, depending upon the atomic species).
Although charged particles can be trapped using a Penning trap or a Paul trap using a combination of electric and magnetic fields, those traps are ineffective for neutral atoms.
Theoretical description of a MOT
Two coils in an anti-Helmholtz configuration are used to generate a weak quadrupolar magnetic field; here, we will consider the coils as being separated along the -axis. In the proximity of the field zero, located halfway between the two coils along the -direction, the field gradient is uniform and the field itself varies linearly with position. For this discussion,
|
https://en.wikipedia.org/wiki/Carry-save%20adder
|
A carry-save adder is a type of digital adder, used to efficiently compute the sum of three or more binary numbers. It differs from other digital adders in that it outputs two (or more) numbers, and the answer of the original summation can be achieved by adding these outputs together. A carry save adder is typically used in a binary multiplier, since a binary multiplier involves addition of more than two binary numbers after multiplication. A big adder implemented using this technique will usually be much faster than conventional addition of those numbers.
Motivation
Consider the sum:
12345678
+ 87654322
= 100000000
Using basic arithmetic, we calculate right to left, "8 + 2 = 0, carry 1", "7 + 2 + 1 = 0, carry 1", "6 + 3 + 1 = 0, carry 1", and so on to the end of the sum. Although we know the last digit of the result at once, we cannot know the first digit until we have gone through every digit in the calculation, passing the carry from each digit to the one on its left. Thus adding two n-digit numbers has to take a time proportional to n, even if the machinery we are using would otherwise be capable of performing many calculations simultaneously.
In electronic terms, using bits (binary digits), this means that even if we have n one-bit adders at our disposal, we still have to allow a time proportional to n to allow a possible carry to propagate from one end of the number to the other. Until we have done this,
We do not know the result of the addition.
We do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative).
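The carry-save idea sidesteps this wait: one carry-save step reduces three addends to a pair (partial sums, shifted carries), computing every bit position independently with no carry chain, and only a final conventional addition propagates carries. A small Python illustration (a bitwise model, not a hardware description):

```python
def carry_save_step(a, b, c):
    """Reduce three addends to a (partial-sum, shifted-carry) pair.
    Each bit position is computed independently, so no carry
    propagates during this step."""
    partial_sum = a ^ b ^ c                      # full-adder sum bit per position
    carry = ((a & b) | (b & c) | (a & c)) << 1   # full-adder carry bit per position
    return partial_sum, carry

s, c = carry_save_step(12345678, 87654322, 5)
assert s + c == 12345678 + 87654322 + 5   # one ordinary add finishes the job
```

The identity holds because at each bit position the sum of three bits equals the XOR plus twice the majority, which is exactly what the shift encodes.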
A carry look-ahead adder can reduce the delay. In principle the delay can be reduced so that it is proportional to log n, but for large numbers this is no longer the case, because even when carry look-ahead is implemented, the distances that signals have to travel on the chip increase in proportion to n, and propagation delays increase at the same
|
https://en.wikipedia.org/wiki/QSO%20B0839%2B187
|
QSO B0839+187 (PKS 0839+187) is a quasar that was used for a VLBI experiment conducted by Edward Fomalont and Sergei Kopeikin in September 2002. They claimed to measure the speed of gravity, but this is disputed.
|
https://en.wikipedia.org/wiki/PortAudio
|
PortAudio is an open-source computer library for audio playback and recording. It is a cross-platform library, so programs using it can run on many different computer operating systems, including Windows, Mac OS X and Linux. PortAudio supports Core Audio on Mac OS X, ALSA on Linux, and MME, DirectSound, ASIO and WASAPI on Windows. Like other libraries whose primary goal is portability, PortAudio is written in the C programming language. It has also been implemented in the languages PureBasic and Lazarus/Free Pascal. PortAudio is based on a callback paradigm, similar to JACK and ASIO.
PortAudio is part of the PortMedia project, which aims to provide a set of platform-independent libraries for music software. The free audio editor Audacity uses the PortAudio library, and so does JACK on the Windows platform.
See also
List of free software for audio
Notes
|
https://en.wikipedia.org/wiki/PortMedia
|
PortMedia, formerly PortMusic, is a set of open source computer libraries for dealing with sound and MIDI. Currently the project has two main libraries: PortAudio, for digital audio input and output, and PortMidi, a library for MIDI input and output. A library for dealing with different audio file formats, PortSoundFile, is being planned, although another library, libsndfile, already exists and is licensed under the copyleft GNU Lesser General Public License. A standard MIDI file I/O library, PortSMF, is under construction.
PortMusic has become PortMedia and is hosted on SourceForge.
See also
List of free software for audio
External links
PortMusic website
Audio libraries
Computer libraries
Free audio software
|
https://en.wikipedia.org/wiki/PortMidi
|
PortMidi is a computer library for real time input and output of MIDI data. It is designed to be portable to many different operating systems. PortMidi is part of the PortMusic project.
See also
PortAudio
External links
portmidi.h – definition of the API and contains the documentation for PortMidi
Audio libraries
Computer libraries
|
https://en.wikipedia.org/wiki/Attack%20tree
|
Attack trees are conceptual diagrams showing how an asset, or target, might be attacked. Attack trees have been used in a variety of applications. In the field of information technology, they have been used to describe threats on computer systems and possible attacks to realize those threats. However, their use is not restricted to the analysis of conventional information systems. They are widely used in the fields of defense and aerospace for the analysis of threats against tamper resistant electronics systems (e.g., avionics on military aircraft). Attack trees are increasingly being applied to computer control systems (especially relating to the electric power grid). Attack trees have also been used to understand threats to physical systems.
Some of the earliest descriptions of attack trees are found in papers and articles by Bruce Schneier, when he was CTO of Counterpane Internet Security. Schneier was clearly involved in the development of attack tree concepts and was instrumental in publicizing them. However, the attributions in some of the early publicly available papers on attack trees also suggest the involvement of the National Security Agency in the initial development.
Attack trees are very similar, if not identical, to threat trees. Threat trees were developed by Jonathan Weiss of Bell Laboratories to comply with guidance in MIL STD 1785 for AT&T's work on Command and Control for federal applications, and were first described in his paper in 1982. This work was later discussed in 1994 by Edward Amoroso.
Basic
Attack trees are multi-leveled diagrams consisting of one root, leaves, and children. From the bottom up, child nodes are conditions which must be satisfied to make the direct parent node true; when the root is satisfied, the attack is complete. Each node may be satisfied only by its direct child nodes.
A node may be the child of another node; in such a case, multiple steps must be taken to carry out the attack.
|
https://en.wikipedia.org/wiki/Featherstone%27s%20algorithm
|
Featherstone's algorithm is a technique used for computing the effects of forces applied to a structure of joints and links (an "open kinematic chain") such as a skeleton used in ragdoll physics.
Featherstone's algorithm uses a reduced coordinate representation. This is in contrast to the more popular Lagrange multiplier method, which uses maximal coordinates. Brian Mirtich's PhD thesis has a very clear and detailed description of the algorithm. Baraff's paper "Linear-time dynamics using Lagrange multipliers" has a discussion and comparison of both algorithms.
|
https://en.wikipedia.org/wiki/List%20of%20adiabatic%20concepts
|
Adiabatic (from Gr. ἀ negative + διάβασις passage; transference) refers to any process that occurs without heat transfer. This concept is used in many areas of physics and engineering. Notable examples are listed below.
Automobiles
Engine braking, a feature of some diesel engines, uses adiabatic expansion to diminish the vehicle's forward momentum.
Meteorology
Adiabatic lapse rate, the change in air temperature with changing height, resulting from pressure change.
Quantum chemistry
Adiabatic invariant
Born–Oppenheimer approximation
Thermodynamics
Adiabatic process
Adiabatic ionization
Adiabatic index
Adiabatic accessibility
Quantum mechanics
Adiabatic theorem
Adiabatic quantum motor
Electronics
Adiabatic circuit
Adiabatic logic
Carbohydrate chemistry
Adiabatic map
Extraction
Adiabatic extraction
|
https://en.wikipedia.org/wiki/Antarctic%20bottom%20water
|
The Antarctic bottom water (AABW) is a type of water mass in the Southern Ocean surrounding Antarctica with temperatures ranging from −0.8 to 2 °C (31 to 36 °F) and absolute salinities from 34.6 to 35.0 g/kg. As the densest water mass of the oceans, AABW is found to occupy the depth range below 4000 m of all ocean basins that have a connection to the Southern Ocean at that level.
The major significance of Antarctic bottom water is that it is the coldest bottom water, giving it a significant influence on large-scale movement in the world's oceans through thermohaline circulation.
Initially, AABW has a high oxygen content relative to the rest of the oceans' deep waters but this depletes over time. This early oxygen abundance comes from the precursor water mass of the AABW, which is cold, relatively salty and oxygen-rich dense shelf water (DSW) formed above Antarctica’s continental shelf by wintertime cooling and brine rejection. This water sinks at four distinct regions around the margins of the continent and forms the AABW; this process leads to ventilation of the deep ocean, or abyssal ventilation.
Formation and circulation
Antarctic bottom water is created in part due to the major overturning of ocean water.
Antarctic bottom water is formed in the Weddell and Ross Seas, off the Adélie Coast and by Cape Darnley from surface water cooling in polynyas and below the ice shelf. A unique feature of Antarctic bottom water is the cold surface wind blowing off the Antarctic continent. The surface wind creates the polynyas which opens up the water surface to more wind. This Antarctic wind is stronger during the winter months and thus the Antarctic bottom water formation is more pronounced during the Antarctic winter season. Surface water is enriched in salt from sea ice formation. Due to its increased density, it flows down the Antarctic continental margin and continues north along the bottom. It is the densest water in the free ocean, and underlies other bottom and intermedi
|
https://en.wikipedia.org/wiki/Shannon%E2%80%93Weaver%20model
|
The Shannon–Weaver model is one of the first and most influential models of communication. It was initially published in the 1948 paper A Mathematical Theory of Communication and explains communication in terms of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source produces the original message. The transmitter translates the message into a signal, which is sent using a channel. The receiver translates the signal back into the original message and makes it available to the destination. For a landline phone call, the person calling is the source. They use the telephone as a transmitter, which produces an electric signal that is sent through the wire as a channel. The person receiving the call is the destination and their telephone is the receiver.
Shannon and Weaver distinguish three types of problems of communication: technical, semantic, and effectiveness problems. They focus on the technical level, which concerns the problem of how to use a signal to accurately reproduce a message from one location to another location. The difficulty in this regard is that noise may distort the signal. They discuss redundancy as a solution to this problem: if the original message is redundant then the distortions can be detected, which makes it possible to reconstruct the source's original intention.
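One simple illustration of redundancy as error protection is a triple-repetition code (an example chosen here for concreteness, not one drawn from Shannon and Weaver's paper): each bit is transmitted three times, and the receiver takes a majority vote, so any single distorted copy can be detected and corrected.

```python
def encode(bits):
    # Redundancy: send three copies of every bit.
    return [b for b in bits for _ in range(3)]

def decode(signal):
    # Majority vote over each group of three received bits.
    return [int(sum(signal[i:i + 3]) >= 2)
            for i in range(0, len(signal), 3)]

message = [1, 0, 1]
signal = encode(message)
signal[1] ^= 1                # noise flips one copy of the first bit
assert decode(signal) == message
```

The cost of this protection is a tripled transmission length, which reflects the general trade-off between redundancy and channel efficiency that Shannon formalized.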
The Shannon–Weaver model of communication has been very influential in various fields, including communication theory and information theory. Many later theorists have built their own models on its insights. However, it is often criticized based on the claim that it oversimplifies communication. One common objection is that communication should not be understood as a one-way process but as a dynamic interaction of messages going back and forth between both participants. Another criticism rejects the idea that the message exists prior to the communication and argues instead that the encoding is itself a creative process that creates th
|
https://en.wikipedia.org/wiki/Hemiazygos%20vein
|
The hemiazygos vein (vena azygos minor inferior) is a vein running superiorly in the lower thoracic region, just to the left side of the vertebral column.
Structure
The hemiazygos vein and the accessory hemiazygos vein, when taken together, essentially serve as the left-sided equivalent of the azygos vein. That is, the azygos vein serves to drain most of the posterior intercostal veins on the right side of the body, and the hemiazygos vein and the accessory hemiazygos vein drain most of the posterior intercostal veins on the left side of the body. Specifically, the hemiazygos vein mirrors the bottom part of the azygos vein.
The structure of the hemiazygos vein is often variable. It usually begins in the left ascending lumbar vein or renal vein, and passes upward through the left crus of the diaphragm to enter the thorax. It continues ascending on the left side of the vertebral column, and around the level of the ninth thoracic vertebra, it passes rightward across the column, behind the aorta, esophagus, and thoracic duct, to end in the azygos vein.
It receives the 9th, 10th, and 11th posterior intercostal veins and the subcostal vein of the left side, and some esophageal and mediastinal veins.
Variations
The hemiazygos may or may not be continuous superiorly with the accessory hemiazygos vein.
Clinical significance
A dilated hemiazygos system displayed on chest or abdominal X-ray films can be misdiagnosed as a mediastinal or retroperitoneal neoplasm, lymphadenopathy or aortic dissection. Venostasis, a consequence of pathological conditions such as acquired obstruction of the IVC or SVC, right heart failure, portal hypertension or pregnancy, can have the same clinical presentation as hemiazygos continuation of the IVC. In the case of hemiazygos continuation of the IVC, the hepatic veins can drain directly into the right atrium. An incidental finding of this condition during venous cannulation for cardio
|
https://en.wikipedia.org/wiki/MIKEY
|
Multimedia Internet KEYing (MIKEY) is a key management protocol that is intended for use with real-time applications. It can specifically be used to set up encryption keys for multimedia sessions that are secured using SRTP, the security protocol commonly used for securing real-time communications such as VoIP.
MIKEY was first defined in RFC 3830. Additional MIKEY modes have been defined in subsequent RFCs.
Purpose of MIKEY
As described in RFC 3830, the MIKEY protocol is intended to provide end-to-end security between users to support a communication. To do this, it shares a session key, known as the Traffic Encryption Key (TEK), between the participants of a communication session. The MIKEY protocol may also authenticate the participants of the communication.
MIKEY provides many methods to share the session key and authenticate participants.
Using MIKEY in practice
MIKEY is used to perform key management for securing a multimedia communication protocol. As such, MIKEY exchanges generally occur within the signalling protocol which supports the communication.
A common setup is for MIKEY to support Secure VoIP by providing the key management mechanism for the VoIP protocol (SRTP). Key management is performed by including MIKEY messages within the SDP content of SIP signalling messages.
Use cases
MIKEY considers how to secure the following use cases:
One-to-one communications
Conference communications
Group Broadcast
Call Divert
Call Forking
Delayed delivery (Voicemail)
Not all MIKEY methods support each use case. Each MIKEY method also has its own advantages and disadvantages in terms of feature support, computational complexity and latency of communication setup.
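The pre-shared-key idea can be sketched as follows. This is a hypothetical illustration only, not the actual MIKEY PRF from RFC 3830: both endpoints hold the pre-shared secret and the random values exchanged during setup, so each can derive the same TEK locally with an HMAC-based counter-mode expansion (the function and argument names are invented for the sketch):

```python
import hashlib
import hmac

# Hypothetical sketch only; NOT the MIKEY PRF defined in RFC 3830.
# Both ends hold psk and the exchanged random values, so both can
# derive the same Traffic Encryption Key (TEK) without sending it.
def derive_tek(psk: bytes, rand_i: bytes, rand_r: bytes,
               length: int = 16) -> bytes:
    out, block, counter = b"", b"", 0
    while len(out) < length:
        counter += 1
        # chain each HMAC block into the next, counter-mode style
        block = hmac.new(psk, block + rand_i + rand_r + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
    return out[:length]

tek = derive_tek(b"pre-shared-key", b"initiator-rand", b"responder-rand")
assert len(tek) == 16
```

Because only symmetric primitives are involved, no public-key operations are needed, which is the efficiency advantage the text attributes to MIKEY-PSK.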
Key transport and exchange methods
MIKEY supports eight different methods to set up a common secret (to be used as e.g. a session key or a session KEK):
Pre-Shared Key (MIKEY-PSK): This is the most efficient way to handle the transport of the Common Secret, since only symmetric encryption is used and
|
https://en.wikipedia.org/wiki/Rambutan%20%28cryptography%29
|
Rambutan is a family of encryption technologies designed by the Communications-Electronics Security Group (CESG), the technical division of the United Kingdom government's secret communications agency, GCHQ.
It includes a range of encryption products designed by CESG for use in handling confidential (not secret) communications between parts of the British government, government agencies, and related bodies such as NHS Trusts. Unlike CESG's Red Pike system, Rambutan is not available as software: it is distributed only as a self-contained electronic device (an ASIC) which implements the entire cryptosystem and handles the related key distribution and storage tasks. Rambutan is not sold outside the government sector.
Technical details of the Rambutan algorithm are secret. Security researcher Bruce Schneier describes it as a stream cipher based on linear-feedback shift registers, with 5 shift registers each of around 80 bits and a key size of 112 bits. RAMBUTAN-I communications chips (which implement a secure X.25 based communications system) are made by approved contractors Racal and Baltimore Technologies/Zergo Ltd. CESG later specified RAMBUTAN-II, an enhanced system with backward compatibility with existing RAMBUTAN-I infrastructure. The RAMBUTAN-II chip is a 64-pin quad ceramic pack chip, which implements the electronic codebook (ECB), cipher block chaining (CBC), and output feedback (OFB) operating modes (each on 64-bit blocks) and the cipher feedback (CFB) mode in 1 or 8 bits. Schneier suggests that these modes may indicate Rambutan is a block cipher rather than a stream cipher. The three 64-bit modes operate at 88 megabits/second.
|
https://en.wikipedia.org/wiki/Torsion%20tensor
|
In differential geometry, the notion of torsion is a manner of characterizing a twist or screw of a moving frame around a curve. The torsion of a curve, as it appears in the Frenet–Serret formulas, for instance, quantifies the twist of a curve about its tangent vector as the curve evolves (or rather the rotation of the Frenet–Serret frame about the tangent vector). In the geometry of surfaces, the geodesic torsion describes how a surface twists about a curve on the surface. The companion notion of curvature measures how moving frames "roll" along a curve "without twisting".
More generally, on a differentiable manifold equipped with an affine connection (that is, a connection in the tangent bundle), torsion and curvature form the two fundamental invariants of the connection. In this context, torsion gives an intrinsic characterization of how tangent spaces twist about a curve when they are parallel transported; whereas curvature describes how the tangent spaces roll along the curve. Torsion may be described concretely as a tensor, or as a vector-valued 2-form on the manifold. If ∇ is an affine connection on a differential manifold, then the torsion tensor is defined, in terms of vector fields X and Y, by
T(X, Y) = ∇X Y − ∇Y X − [X, Y]
where [X, Y] is the Lie bracket of vector fields.
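In local coordinates, writing the connection coefficients (Christoffel symbols) as Γ, the torsion components reduce to the antisymmetric part of those coefficients; this is a standard identity, stated here for reference:

```latex
T^{k}{}_{ij} = \Gamma^{k}{}_{ij} - \Gamma^{k}{}_{ji},
\qquad\text{so } T = 0 \iff \Gamma^{k}{}_{ij} = \Gamma^{k}{}_{ji}.
```

In particular, a connection is torsion-free exactly when its coefficients are symmetric in the lower indices, which is the condition singled out for the Levi-Civita connection.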
Torsion is particularly useful in the study of the geometry of geodesics. Given a system of parametrized geodesics, one can specify a class of affine connections having those geodesics, but differing by their torsions. There is a unique connection which absorbs the torsion, generalizing the Levi-Civita connection to other, possibly non-metric situations (such as Finsler geometry). The difference between a connection with torsion, and a corresponding connection without torsion is a tensor, called the contorsion tensor. Absorption of torsion also plays a fundamental role in the study of G-structures and Cartan's equivalence method. Torsion is also useful in the study of unparametrized families of geodesics, via
|
https://en.wikipedia.org/wiki/Flags%20of%20Norwegian%20subdivisions
|
Most of the Norwegian counties and municipalities have their own flag, based on the respective coat of arms of the subdivision. They are, however, seldom used; most public buildings and private homes fly the national flag. As of 2020, many municipalities and counties have been merged, and many of the new regions do not yet have a flag; the coat of arms is used for these regions until a flag is made.
=== Flags of the former counties ===
=== Flags and Coat of Arms of the current regions ===
Although Finnmark and Troms are officially merged, they still use their old county flags, as the merger is disputed by locals and officials in Finnmark. The region of Trøndelag uses the old flag of Nord-Trøndelag, as it has historical value and a connection to King Olav II of Norway.
=== Flags of the municipalities ===
This is only a selection of the municipalities with their own flags; many more municipalities also use a flag.
==== Agder ====
Flags of municipalities in Agder county.
==== Innlandet ====
Flags of municipalities in Innlandet county.
==== Møre og Romsdal ====
Flags of municipalities in Møre og Romsdal county.
==== Nordland ====
Flags of municipalities in Nordland county.
==== Oslo ====
==== Rogaland ====
Flags of municipalities in Rogaland county.
==== Troms og Finnmark ====
Flags of municipalities in Troms og Finnmark county.
==== Trøndelag ====
Flags of municipalities in Trøndelag county.
==== Vestfold og Telemark ====
Flags of municipalities in Vestfold og Telemark county.
==== Vestland ====
Flags of municipalities in Vestland county.
==== Viken ====
Flags of municipalities in Viken county.
|
https://en.wikipedia.org/wiki/Surface%20%28mathematics%29
|
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line.
There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
Definitions
Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface. If the defining function of three variables is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation x² + y² + z² − 1 = 0.
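As a quick numerical illustration of the implicit and parametric viewpoints for the unit sphere (the function name is our own), every point of the standard spherical parametrization satisfies the sphere's implicit equation:

```python
import math

# Sketch: points of the spherical parametrization
# (θ, φ) ↦ (cos θ sin φ, sin θ sin φ, cos φ)
# all satisfy the implicit equation x² + y² + z² − 1 = 0
# of the unit sphere.
def sphere_point(theta, phi):
    return (math.cos(theta) * math.sin(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(phi))

x, y, z = sphere_point(0.3, 1.1)
assert abs(x * x + y * y + z * z - 1.0) < 1e-12
```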
A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Eu
|
https://en.wikipedia.org/wiki/Antidromic
|
An antidromic impulse in an axon refers to conduction opposite of the normal (orthodromic) direction: it travels along the axon away from the axon terminal(s) and towards the soma. In most neurons, an action potential arises near the cell body and propagates along the axon towards the terminals (orthodromic conduction). Antidromic activation is usually induced experimentally by direct electrical stimulation of a presumed target structure, and is often used in a laboratory setting to confirm that a neuron being recorded from projects to the structure of interest.
See also
Orthodromic
Neuron
Dendrite
Axon
Action potential
|
https://en.wikipedia.org/wiki/Orthodromic
|
An orthodromic impulse runs along an axon in its anterograde direction, away from the soma.
In the heart, orthodromic may also refer to an impulse travelling in the normal direction of conduction, from the atria to the ventricles, in contrast to some impulses in re-entry.
See also
Antidromic
Action potential
Anterograde Tracing
|
https://en.wikipedia.org/wiki/Helmut%20Gr%C3%B6ttrup
|
Helmut Gröttrup (12 February 1916 – 4 July 1981) was a German engineer, rocket scientist and inventor of the smart card. During World War II, he worked in the German V-2 rocket program under Wernher von Braun. From 1946 to 1950 he headed a group of 170 German scientists who were forced to work for the Soviet rocketry program under Sergei Korolev. After returning to West Germany in December 1953, he developed data processing systems, contributed to early commercial applications of computer science and coined the German term "Informatik". In 1967 Gröttrup invented the smart card as a "forgery-proof key" for secure identification and access control (ID card) or storage of a secure key, also including inductive coupling for near-field communication (NFC). From 1970 he headed a start-up division of Giesecke+Devrient for the development of banknote processing systems and machine-readable security features.
Education
Helmut Gröttrup's father Johann Gröttrup (1881 – 1940) was a mechanical engineer. He worked full-time at the Bund der technischen Angestellten und Beamten (Butab), a federation for technical staff and officials of the social democratic trade union in Berlin. His mother Thérèse Gröttrup (1894 – 1981), born Elsen, was active in the peace movement. Johann Gröttrup lost his job in 1933 when the Nazi Party came into power.
From 1935 to 1939 Helmut Gröttrup studied applied physics at the Technical University of Berlin and wrote his thesis under Professor Hans Geiger, the co-inventor of the Geiger counter. He also worked for Manfred von Ardenne's research laboratory Forschungslaboratorium für Elektronenphysik.
German rocketry program
From December 1939, Helmut Gröttrup worked in the German V-2 rocket program at the Peenemünde Army Research Center with Walter Dornberger and Wernher von Braun. In December 1940, he was made department head under Ernst Steinhoff for developing remote guidance and control systems.
Since October 1943 Gröttrup had been under SD surveillan
|
https://en.wikipedia.org/wiki/OPLS
|
The OPLS (Optimized Potentials for Liquid Simulations) force field was developed by Prof. William L. Jorgensen at Purdue University and later at Yale University, and is being further developed commercially by Schrödinger, Inc.
Functional form
The functional form of the OPLS force field is very similar to that of AMBER: harmonic bond-stretching and angle-bending terms, a Fourier series for each torsional angle, and nonbonded interactions consisting of Coulomb and Lennard-Jones terms,
with the geometric combining rules σij = (σii σjj)^(1/2) and εij = (εii εjj)^(1/2).
Intramolecular nonbonded interactions are counted only for atoms three or more bonds apart; 1,4 interactions are scaled down by the "fudge factor" fij = 0.5, otherwise fij = 1. All the interaction sites are centered on the atoms; there are no "lone pairs".
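A nonbonded pair energy of this form can be sketched as follows. The Coulomb constant is in kcal·Å/(mol·e²), a common MD unit convention; the function name and all parameter values in the example are illustrative, not taken from any published OPLS parameter set:

```python
import math

# Sketch of an OPLS-style nonbonded pair energy with geometric
# combining rules; parameter values are illustrative only.
COULOMB = 332.06371  # kcal·Å/(mol·e²)

def opls_pair_energy(qi, qj, sig_i, sig_j, eps_i, eps_j, r, fudge=1.0):
    sig = math.sqrt(sig_i * sig_j)   # σij = (σii σjj)^(1/2)
    eps = math.sqrt(eps_i * eps_j)   # εij = (εii εjj)^(1/2)
    lj = 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
    coulomb = COULOMB * qi * qj / r
    return fudge * (lj + coulomb)

# 1,4 pairs are scaled by fudge = 0.5; more distant pairs use 1.0
e14 = opls_pair_energy(-0.18, 0.06, 3.5, 2.5, 0.066, 0.030, 4.0,
                       fudge=0.5)
```

Since the fudge factor multiplies the whole pair energy, a scaled 1,4 interaction is exactly half of the corresponding unscaled one.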
Parameterization
Several sets of OPLS parameters have been published. There is OPLS-ua (united atom), which includes hydrogen atoms next to carbon implicitly in the carbon parameters, and can be used to save simulation time. OPLS-aa (all atom) includes every atom explicitly. Later publications include parameters for other specific functional groups and types of molecules such as carbohydrates. OPLS simulations in aqueous solution typically use the TIP4P or TIP3P water model.
A distinctive feature of the OPLS parameters is that they were optimized to fit experimental properties of liquids, such as density and heat of vaporization, in addition to fitting gas-phase torsional profiles.
Implementation
The reference implementations of the OPLS force field are the BOSS and MCPRO programs developed by Jorgensen. Other packages such as TINKER, GROMACS, PCMODEL, Abalone, LAMMPS, Desmond and NAMD also implement OPLS force fields.
|
https://en.wikipedia.org/wiki/TreeFam
|
TreeFam (Tree families database) is a database of phylogenetic trees of animal genes. It aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and evolutionary history of various gene families.
TreeFam defines a gene family as a group of genes that descended from a single gene in the last common ancestor of metazoan animals. It also tries to include outgroup genes from yeast (S. cerevisiae and S. pombe) and plant (A. thaliana) to reveal distant members.
TreeFam is also an ortholog database. Unlike databases based on pairwise alignment, TreeFam infers orthologs by means of gene trees. It fits a gene tree into the universal species tree and identifies historical duplication, speciation and loss events. TreeFam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations.
The basic elements of TreeFam are gene families that can be divided into two parts: TreeFam-A and TreeFam-B families. TreeFam-B families are automatically created. They might contain errors given complex phylogenies. TreeFam-A families are manually curated from TreeFam-B ones. Family names and node names are assigned at the same time. The ultimate goal of TreeFam is to present a curated resource for all the families.
TreeFam is being run as a project at the Wellcome Trust Sanger Institute, and its software is housed on SourceForge as "TreeSoft".
See also
Homology (biology)
HomoloGene
Phylogenetics
OrthoDB
Orthologous MAtrix (OMA)
Inparanoid
|
https://en.wikipedia.org/wiki/Hybrid%20kernel
|
A hybrid kernel is an operating system kernel architecture that attempts to combine aspects and benefits of microkernel and monolithic kernel architectures used in operating systems.
Overview
The traditional kernel categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels). The "hybrid" category is controversial, due to the similarity of hybrid kernels and ordinary monolithic kernels; the term has been dismissed by Linus Torvalds as simple marketing.
The idea behind a hybrid kernel is to have a kernel structure similar to that of a microkernel, but to implement that structure in the manner of a monolithic kernel. In contrast to a microkernel, all (or nearly all) operating system services in a hybrid kernel are still in kernel space. There are none of the reliability benefits of having services in user space, as with a microkernel. However, just as with an ordinary monolithic kernel, there is none of the performance overhead for message passing and context switching between kernel and user mode that normally comes with a microkernel.
Examples
NT kernel
One prominent example of a hybrid kernel is the Microsoft Windows NT kernel that powers all operating systems in the Windows NT family, up to and including Windows 11 and Windows Server 2022, and powers Windows Phone 8, Windows Phone 8.1, and Xbox One.
Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. NT-based Windows is classified as a hybrid kernel (or a macrokernel) rather than a monolithic kernel because the emulation subsystems run in user-mode server processes, rather than in kernel mode as on a monolithic kernel, and further because of the large number of design goals which resemble design goals of
|
https://en.wikipedia.org/wiki/Area%20codes%20in%20the%20Caribbean
|
The integration of the Caribbean telephone networks into the North American Numbering Plan (NANP) began with the assignment of area codes in the Caribbean in 1958, when area code 809 was designated for Bermuda and any other potential participant island countries.
From 1958 to 1999, most of the British West Indies in the Caribbean Basin, Bermuda, the U.S. Virgin Islands, the Dominican Republic and Puerto Rico shared area code 809. By the mid-1990s, with the proliferation of fax machines, mobile phones, computers, and pagers in the region, the pool of available central office codes was nearing exhaustion. Beginning with Bermuda in November 1994, and The Bahamas, Puerto Rico, and Barbados in 1995, several countries in the Caribbean received individual area code assignments from the NANPA, effectively splitting area code 809. By 1999, after Saint Vincent and the Grenadines departed, the area code was retained only by the Dominican Republic.
Assignments
Sint Maarten was part of the Netherlands Antilles until its dissolution in 2010. It is a constituent country within the Kingdom of the Netherlands. Sint Maarten used the country code +599 of the Netherlands Antilles until joining the NANP on September 30, 2011, with area code 721.
Former assignments within 809
The following was the 1958–1995 numbering plan for 809. Starting in the 1980s, Puerto Rico, Bermuda and the Dominican Republic began to use prefixes from unused ranges throughout the 2xx to 9xx range. Historic (1960s to mid-1980s) ranges are shown in parentheses.
The number pool of the area code was divided between the regions by the national number, which was from two to four digits long, leaving five to three digits, respectively, of the total of 10 digits of a complete telephone number for local telephone number assignments. The national number appeared in local telephone directories.
Caribbean nations with a larger numbering resource requirement used seven-digit dialing, and had no need fo
|
https://en.wikipedia.org/wiki/The%20Dam%20Busters%20%28video%20game%29
|
The Dam Busters is a combat flight simulator set in World War II, published by U.S. Gold in 1984. It is loosely based on the real life Operation Chastise and the 1955 film. The game was released in 1984 for the ColecoVision and Commodore 64; in 1985 for Apple II, DOS, MSX and ZX Spectrum; then in 1986 for the Amstrad CPC and NEC PC-9801.
Gameplay
The player chooses from three different night missions, each of which is increasingly difficult. In all three, the goal is to successfully bomb a dam. On the practice run, the player can approach and bomb the dam without any other obstacles. The two other missions feature various enemies to overcome, and the flight starts from either the French coast or a British airfield.
During the flight, the player controls every aspect of the bomber from each of the seven crew positions: Pilot, Front Gunner, Tail Gunner, Bomb Aimer, Navigator, Engineer, and Squadron Leader. Leaving any of these positions unattended during an event could spell the death of the person in that position, rendering that station useless during further encounters. The player must evade enemies, plan their approach, and set all of the variables (speed, height, timing, etc.) to execute a successful bombing. Sometimes, it becomes necessary to deal with emergencies, such as engine fires.
While en route to the target the player can expect to encounter attacks by enemy aircraft, barrage balloons, flak and enemy searchlights. Events like this will flash along the border of the screen, while indicating the key to press to take the player to the station in need of assistance. For example, when flying through enemy search lights, the player will need to man the gunner's station and shoot out the lights on the ground. If left unattended, the player can expect flak and enemy aircraft to start damaging the bomber.
Once the player begins the final run to their target, they are presented with the custom bombing sights, as made famous by the story. When the player toggles t
|
https://en.wikipedia.org/wiki/IPrint
|
iPrint is a print server developed by Novell, now owned by Micro Focus. iPrint enabled users to install a device driver for a printer directly from a web browser, and to submit print jobs over a computer network. It could process print jobs routed through the internet using the Internet Printing Protocol (IPP).
The iPrint server ran on Novell's NetWare operating system. Windows, Linux, and Mac OS clients needed Novell's iPrint client software to make use of iPrint services.
Although iPrint is bound to Novell Distributed Print Services (NDPS), it does not require a Novell client but only an iPrint client.
See also
Novell Embedded Systems Technology (NEST)
|
https://en.wikipedia.org/wiki/Histogram%20equalization
|
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.
Overview
This method usually increases the global contrast of many images, especially when the image is represented by a narrow range of intensity values. Through this adjustment, the intensities can be better distributed on the histogram, utilizing the full range of intensities evenly. This allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most populated intensity values, whose crowding otherwise degrades image contrast.
The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are either over or under-exposed. A key advantage of the method is that it is a fairly straightforward technique adaptive to the input image and an invertible operator. So in theory, if the histogram equalization function is known, then the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate. It may increase the contrast of background noise, while decreasing the usable signal.
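A minimal sketch of the remapping for an 8-bit image, using the common formulation that passes each gray level through the normalized cumulative histogram (CDF); function and variable names are our own:

```python
# Minimal 8-bit histogram equalization sketch: remap each gray level
# through the normalized cumulative histogram (CDF).
def equalize(pixels):
    # pixels: flat list of intensity values in 0..255
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    n = len(pixels)
    if n == cdf_min:          # constant image: nothing to spread out
        return pixels[:]
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) for c in cdf]
    return [lut[p] for p in pixels]

out = equalize([50, 50, 100, 100, 100, 200])
assert min(out) == 0 and max(out) == 255   # full range is now used
```

Because the lookup table is monotone, the ordering of intensities is preserved; only their spacing changes, which is why the operation is (in theory) invertible when the original histogram is known.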
In scientific imaging where spatial correlation is more important than intensity of signal (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detections.
Histogram equalization often produces unrealistic effects in photographs; however it is very useful for scientific images like thermal, satellite or x-ray images, often the same class of images to which one would apply false-color. Also histogram equalization can produce undesirable effects (like visible image gradient) when applied to images with low color depth. For example, if applied to 8-bit image displayed with 8-bit gray-scale palette it will further red
|
https://en.wikipedia.org/wiki/Toy%20block
|
Toy blocks (also building bricks, building blocks, or simply blocks) are wooden, plastic, or foam pieces of various shapes (cube, cylinder, arch etc.) and colors that are used as construction toys. Sometimes, toy blocks depict letters of the alphabet.
History
There are mentions of blocks or "dice" with letters inscribed on them used as entertaining educational tools in the works of English writer and inventor Hugh Plat (his 1594 book The Jewel House of Art and Nature) and English philosopher John Locke (his 1693 essay Thoughts Concerning Education). Plat described them as "the child using to play much with them, and being always told what letter chanceth, will soon gain his Alphabet" and Locke noted "Thus Children may be cozen’d into a Knowledge of the Letters; be taught to read, without perceiving it to be anything but a Sport".
University of Pennsylvania professor of Urbanism Witold Rybczynski has found that the earliest mention of building bricks for children appears in Maria and R.L. Edgeworth's Practical Education (1798). Called "rational toys", blocks were intended to teach children about gravity and physics, as well as spatial relationships that allow them to see how many different parts become a whole. In 1837 Friedrich Fröbel invented a preschool educational institution Kindergarten. For that, he designed ten Froebel Gifts based on building blocks principles. During the mid-nineteenth century, Henry Cole (under the pseudonym of Felix Summerly) wrote a series of children’s books. Cole's A book of stories from The Home Treasury included a box of terracotta toy blocks and, in the accompanying pamphlet "Architectural Pastime", actual blueprints.
In 2003 the National Toy Hall of Fame at The Strong museum in Rochester, New York inducted ABC blocks into their collection, granting it the title of one of America's toys of national significance.
Educational benefits
Physical benefits: toy blocks build strength in a child's fingers and hands, and improve eye-han
|
https://en.wikipedia.org/wiki/Force%20of%20mortality
|
In actuarial science, force of mortality represents the instantaneous rate of mortality at a certain age measured on an annualized basis. It is identical in concept to failure rate, also called hazard function, in reliability theory.
Motivation and definition
In a life table, we consider the probability of a person dying from age x to x + 1, called qx. In the continuous case, we could also consider the conditional probability of a person who has attained age x dying between ages x and x + Δx, which is
P(x < X ≤ x + Δx | X > x) = (FX(x + Δx) − FX(x)) / (1 − FX(x))
where FX(x) is the cumulative distribution function of the continuous age-at-death random variable, X. As Δx tends to zero, so does this probability in the continuous case. The approximate force of mortality is this probability divided by Δx. If we let Δx tend to zero, we get the function for force of mortality, denoted by μ(x):
μ(x) = F 'X(x) / (1 − FX(x))
Since fX(x) = F 'X(x) is the probability density function of X, and S(x) = 1 − FX(x) is the survival function, the force of mortality can also be expressed variously as:
μ(x) = fX(x) / S(x) = −S '(x) / S(x) = −(d/dx) ln S(x)
To understand conceptually how the force of mortality operates within a population, consider that the ages, x, where the probability density function fX(x) is zero, there is no chance of dying. Thus the force of mortality at these ages is zero. The force of mortality μ(x) uniquely defines a probability density function fX(x).
The force of mortality can be interpreted as the conditional density of failure at age x, while f(x) is the unconditional density of failure at age x. The unconditional density of failure at age x is the product of the probability of survival to age x, and the conditional density of failure at age x, given survival to age x.
This is expressed in symbols as
fX(x) = S(x) μ(x)
or equivalently
μ(x) = fX(x) / S(x).
In many instances, it is also desirable to determine the survival probability function when the force of mortality is known. To do this, integrate the force of mortality over the interval x to x + t:
∫[x, x+t] μ(y) dy.
By the fundamental theorem of calculus, this is simply
∫[x, x+t] μ(y) dy = −(ln S(x + t) − ln S(x)) = ln( S(x) / S(x + t) ).
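A numerical sketch of the relationship S(x) = exp(−∫ μ), using an illustrative Gompertz-type force of mortality μ(x) = B·c^x; the parameter values and function names are arbitrary choices for the example:

```python
import math

# Illustrative Gompertz-type force of mortality μ(x) = B * c**x;
# parameter values below are arbitrary.
B, C = 0.0003, 1.07

def mu(x):
    return B * C ** x

def survival(x, steps=100_000):
    # S(x) = exp(-∫_0^x μ(y) dy), via the trapezoidal rule
    h = x / steps
    integral = (mu(0) + mu(x)) / 2 * h
    integral += sum(mu(i * h) for i in range(1, steps)) * h
    return math.exp(-integral)

# Gompertz admits a closed form: ∫_0^x μ(y) dy = B (c**x − 1) / ln c
closed = math.exp(-B * (C ** 40 - 1) / math.log(C))
assert abs(survival(40) - closed) < 1e-6
```

The same code works for any nonnegative μ(x); the Gompertz choice is used only because its integral has a closed form to check against.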
Let us denote
then tak
|
https://en.wikipedia.org/wiki/Aspergillus%20nidulans
|
Aspergillus nidulans (also called Emericella nidulans when referring to its sexual form, or teleomorph) is one of many species of filamentous fungi in the phylum Ascomycota. It has been an important research organism for studying eukaryotic cell biology for over 50 years, being used to study a wide range of subjects including recombination, DNA repair, mutation, cell cycle control, tubulin, chromatin, nucleokinesis, pathogenesis, metabolism, and experimental evolution.
It is one of the few species in its genus able to form sexual spores through meiosis, allowing crossing of strains in the laboratory. A. nidulans is a homothallic fungus, meaning it is able to self-fertilize and form fruiting bodies in the absence of a mating partner. It has septate hyphae with a woolly colony texture and white mycelia. The green colour of wild-type colonies is due to pigmentation of the spores, while mutations in the pigmentation pathway can produce other spore colours.
Genome
The A. nidulans genome was sequenced in a collaboration between Monsanto and the Broad Institute. A sequence with 13-fold coverage was publicly released in March 2003; analysis of the annotated genome was published in Nature in December 2005. It is 30 million base pairs in size and is predicted to contain around 9,500 protein-coding genes on eight chromosomes.
Recently, several caspase-like proteases were isolated from A. nidulans samples under which programmed cell death had been induced. Findings such as these play a key role in determining the evolutionary conservation of the mitochondrion within the eukaryotic cell, and its role as an ancient alphaproteobacterium capable of inducing cell death.
Sexual reproduction
Sexual reproduction occurs in two fundamentally different ways. This is by outcrossing (heterothallic sex), in which two distinct individuals contribute nuclei, or by homothallic sex or self-fertilization (selfing) in which both nuclei are derived from the same individual. Selfing in A. ni
|
https://en.wikipedia.org/wiki/Uniform%20memory%20access
|
Uniform memory access (UMA) is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with non-uniform memory access (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general purpose and time sharing applications by multiple users. It can be used to speed up the execution of a single large program in time-critical applications.
Types of architectures
There are three types of UMA architectures:
UMA using bus-based symmetric multiprocessing (SMP) architectures;
UMA using crossbar switches;
UMA using multistage interconnection networks.
hUMA
In April 2013, the term hUMA (heterogeneous uniform memory access) began to appear in AMD promotional material to refer to CPU and GPU sharing the same system memory via cache coherent views. Advantages include an easier programming model and less copying of data between separate memory pools.
See also
Non-uniform memory access
Cache-only memory architecture
Heterogeneous System Architecture
|
https://en.wikipedia.org/wiki/Anterior%20commissure
|
The anterior commissure (also known as the precommissure) is a white matter tract (a bundle of axons) connecting the two temporal lobes of the cerebral hemispheres across the midline, and placed in front of the columns of the fornix. In most existing mammals, the great majority of fibers connecting the two hemispheres travel through the corpus callosum, which is over 10 times larger than the anterior commissure, and other routes of communication pass through the hippocampal commissure or, indirectly, via subcortical connections. Nevertheless, the anterior commissure is a significant pathway that can be clearly distinguished in the brains of all mammals.
The anterior commissure plays a key role in pain sensation, more specifically sharp, acute pain. It also contains decussating fibers from the olfactory tracts, vital for the sense of smell and chemoreception. The anterior commissure works with the posterior commissure to link the two cerebral hemispheres of the brain and also interconnects the amygdalae and temporal lobes, contributing to memory, emotion, speech and hearing. It is also involved in olfaction, instinct, and sexual behavior.
In a sagittal section, the anterior commissure is oval in shape, having a long vertical axis that measures about 5 mm.
Structure
It interconnects multiple cortical regions of the temporal lobes, the amygdalae, and olfactory bulbs. It is a part of the neospinothalamic tract for pain.
Function
The functionality of the anterior commissure is still not completely understood. Researchers have implicated it in functions ranging from colour perception to attention. One such study reported preserved colour perception in callosal agenesis (those born without a corpus callosum; Barr & Corballis, 2002). Other studies have built on this to suggest that the anterior commissure can act as a compensatory pathway in those without a corpus callosum, presenting diffusion tensor imaging (DTI) techniques to better elucidate the anterior commissur
|
https://en.wikipedia.org/wiki/Horizontal%20fissure%20of%20cerebellum
|
The largest and deepest fissure in the cerebellum is named the horizontal fissure (or horizontal sulcus).
It commences in front of the pons, and passes horizontally around the free margin of the hemisphere to the middle line behind, and divides the cerebellum into an upper and a lower portion.
Additional images
|
https://en.wikipedia.org/wiki/171%20%28number%29
|
171 (one hundred [and] seventy-one) is the natural number following 170 and preceding 172.
In mathematics
171 is a triangular number and a Jacobsthal number.
There are 171 transitive relations on three labeled elements, and 171 combinatorially distinct ways of subdividing a cuboid by flat cuts into a mesh of tetrahedra, without adding extra vertices.
The diagonals of a regular decagon meet at 171 points, including both crossings and the vertices of the decagon.
There are 171 faces and edges in the 57-cell, an abstract 4-polytope with hemi-dodecahedral cells that is its own dual polytope.
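Several of these counts are small enough to verify directly. A quick sketch, not part of the article, checking the triangular, Jacobsthal, and transitive-relation claims by direct computation:

```python
from itertools import product

def triangular(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

def jacobsthal(n):
    """Jacobsthal numbers: J(0)=0, J(1)=1, J(n)=J(n-1)+2*J(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, b + 2 * a
    return a

def transitive_count(n):
    """Brute-force count of transitive relations on n labeled elements
    (checks all 2**(n*n) subsets of the n*n ordered pairs)."""
    pairs = [(i, j) for i in range(n) for j in range(n)]
    count = 0
    for bits in product((0, 1), repeat=n * n):
        R = {p for p, bit in zip(pairs, bits) if bit}
        if all((a, d) in R for (a, b) in R for (c, d) in R if b == c):
            count += 1
    return count

assert triangular(18) == 171       # 171 is the 18th triangular number
assert jacobsthal(9) == 171        # ... and a Jacobsthal number
assert transitive_count(3) == 171  # 171 transitive relations on 3 elements
```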
Within the moonshine theory of sporadic groups, the friendly giant F₁ is defined as having cyclic groups ⟨g⟩ that are linked with the McKay–Thompson series Tg(τ), g ∈ F₁, where Tr(g|Vn) is the character of g at the graded component Vn of the moonshine module.
This generates 171 moonshine groups within F₁ associated with Tg(τ) that are principal moduli for different genus zero congruence groups commensurable with the projective linear group PSL(2, ℤ).
See also
The year AD 171 or 171 BC
List of highways numbered 171
|
https://en.wikipedia.org/wiki/174%20%28number%29
|
174 (one hundred [and] seventy-four) is the natural number following 173 and preceding 175.
In mathematics
There are 174 7-crossing semi-meanders, ways of arranging a semi-infinite curve in the plane so that it crosses a straight line seven times. There are 174 invertible 3 × 3 (0,1)-matrices. There are also 174 combinatorially distinct ways of subdividing a topological cuboid into a mesh of tetrahedra, without adding extra vertices, although not all can be represented geometrically by flat-sided polyhedra.
The Mordell curve y² = x³ − 174 has rank three, and 174 is the smallest positive integer for which y² = x³ − k has this rank. The corresponding number for curves y² = x³ + k is 113.
In other fields
In English draughts or checkers, a common variation is the "three-move restriction", in which the first three moves by both players are chosen at random. There are 174 different choices for these moves, although some systems for choosing these moves further restrict them to a subset that is believed to lead to an even position.
See also
The year AD 174 or 174 BC
List of highways numbered 174
|
https://en.wikipedia.org/wiki/Falciform%20ligament
|
In human anatomy, the falciform ligament () is a ligament that attaches the liver to the front body wall and divides the liver into the left lobe and right lobe. The falciform ligament is a broad and thin fold of peritoneum, its base being directed downward and backward and its apex upward and forward. It droops down from the hilum of the liver.
Structure
The falciform ligament stretches obliquely from the front to the back of the abdomen, with one surface in contact with the peritoneum behind the right rectus abdominis muscle and the diaphragm, and the other in contact with the left lobe of the liver.
The ligament stretches from the underside of the diaphragm to the posterior surface of the sheath of the right rectus abdominis muscle, as low down as the umbilicus; by its right margin it extends from the notch on the anterior margin of the liver, as far back as the posterior surface.
It is composed of two layers of peritoneum closely united together.
Its base or free edge contains, between its layers, the round ligament and the paraumbilical veins.
Development
It is a remnant of the embryonic ventral mesentery. The umbilical vein of the fetus gives rise to the round ligament of liver in the adult, which is found in the free border of the falciform ligament.
Clinical significance
The falciform ligament can become canalized if an individual is suffering from portal hypertension. Due to the increased venous congestion, blood is pushed down from the liver towards the anterior abdominal wall; if blood pools here, it results in dilatation of the veins around the umbilicus. If these veins radiate out from the umbilicus, they can give the appearance of a head (the umbilicus) with hair of snakes (the veins) – this is referred to as caput medusae.
Additional images
|
https://en.wikipedia.org/wiki/Loop%20theorem
|
In mathematics, in the topology of 3-manifolds, the loop theorem is a generalization of Dehn's lemma. The loop theorem was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the Sphere theorem.
A simple and useful version of the loop theorem states that if for some 3-dimensional manifold M with boundary ∂M there is a map
f: (D², ∂D²) → (M, ∂M)
with f|∂D² not nullhomotopic in ∂M, then there is an embedding with the same property.
The following version of the loop theorem, due to John Stallings, is given in the standard 3-manifold treatises (such as Hempel or Jaco):
Let M be a 3-manifold and let
S be a connected surface in ∂M. Let N ⊆ π₁(S) be a normal subgroup with N ≠ π₁(S).
Let f: D² → M be a continuous map such that f(∂D²) ⊆ S and [f|∂D²] ∉ N. Then there exists an embedding g: D² → M such that g(∂D²) ⊆ S and [g|∂D²] ∉ N.
Furthermore if one starts with a map f in general position, then for any neighborhood U of the singularity set of f, we can find such a g with image lying inside the union of image of f and U.
Stallings's proof utilizes an adaptation, due to Whitehead and Shapiro, of Papakyriakopoulos' "tower construction". The "tower" refers to a special sequence of coverings designed to simplify lifts of the given map. The same tower construction was used by Papakyriakopoulos to prove the sphere theorem for 3-manifolds, which states that a nontrivial map of a sphere into a 3-manifold implies the existence of a nontrivial embedding of a sphere. There is also a version of Dehn's lemma for minimal discs due to Meeks and S.-T. Yau, which also crucially relies on the tower construction.
A proof of the first version of the loop theorem that does not use the tower construction also exists. This was essentially done 30 years ago by Friedhelm Waldhausen as part of his solution to the word problem for Haken manifolds; although he recognized this gave a proof of the loop theorem, he did not write up a detailed proof. The essential ingredient of this proof is the concept of a Haken hierarchy. Proofs were later written up, by Klaus Johannson, Marc Lack
|
https://en.wikipedia.org/wiki/Human%E2%80%93animal%20marriage
|
Human–animal marriage is a marriage between an animal and a human. This topic has appeared in mythology and magical fiction. In the 21st century, there have been numerous reports from around the world of humans marrying their pets and other animals. Human–animal marriage is often associated with zoophilia, although the two are not necessarily linked. Although animal–human marriage is not mentioned specifically in national laws, the act of engaging in sexual acts with an animal is illegal in many countries under animal abuse laws.
Animal–human marriage in folklore
The practice of animal-human marriage has made appearances in folklore and several mythological stories where it is often understood to mean a deity-human marriage involving gods or heroes. Many tribes of the Native Americans in the United States trace the origin of humanity to marriages between other animals and humans. The indigenous Cheyenne have a story of animal-human marriage in "The Girl who Married a Dog". In other Native American myths, animal spirits frequently assume human form. In many cases they are not seen as literal animals, but representatives from the animal kingdom. The Chinese folktale "The Goddess of the Silkworm" is an example of a tale where a woman marries a horse. A similar Irish legend tells of a king who marries a horse, symbolizing a divine union between the king and the goddess of the land.
The most famous animal-as-bridegroom story that has survived in modern times is Beauty and the Beast by Gabrielle-Suzanne de Villeneuve. According to Bernard Sergent, "human–animal marriage is a union that is too remote, as incest is one that is too close. Compared to a balanced marriage between humans from another clan or another village – that is to say, depending on the society, within the framework of a well measured endogamy or exogamy – incest transgresses the norm because it is an exaggerated endogamy, and animal marriage transgresses it because it is an exaggerated exogamy."
An
|
https://en.wikipedia.org/wiki/Radiation%20Protection%20Convention%2C%201960
|
Radiation Protection Convention, 1960 is an International Labour Organization Convention to protect workers against exposure to ionising radiation and to prohibit persons under 16 from engaging in work that causes such exposure. (Article 6)
It was established in 1960, with the preamble stating:
Ratifications
As of January 2023, the convention has been ratified by 50 states.
External links
Text.
Ratifications.
Health treaties
Nuclear technology treaties
International Labour Organization conventions
Occupational safety and health treaties
Radiation
Treaties concluded in 1960
Treaties entered into force in 1962
Treaties of Argentina
Treaties of Azerbaijan
Treaties of Barbados
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Belgium
Treaties of the military dictatorship in Brazil
Treaties of Chile
Treaties of Czechoslovakia
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Djibouti
Treaties of Ecuador
Treaties of Egypt
Treaties of Finland
Treaties of France
Treaties of West Germany
Treaties of Ghana
Treaties of Greece
Treaties of Guinea
Treaties of Guyana
Treaties of the Hungarian People's Republic
Treaties of India
Treaties of the Iraqi Republic (1958–1968)
Treaties of Italy
Treaties of Japan
Treaties of South Korea
Treaties of Kyrgyzstan
Treaties of Latvia
Treaties of Lebanon
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Mexico
Treaties of the Netherlands
Treaties of Nicaragua
Treaties of Norway
Treaties of Paraguay
Treaties of the Polish People's Republic
Treaties of the Soviet Union
Treaties of Portugal
Treaties of Slovakia
Treaties of Francoist Spain
Treaties of Sri Lanka
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Turkey
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of the United Kingdom
Treaties of Uruguay
1960 in labor relations
Radiation protection
|
https://en.wikipedia.org/wiki/Ubicom
|
Ubicom was a company which developed communications and media processor (CMP) and software platforms for real-time interactive applications and multimedia content delivery in the digital home. The company provided optimized system-level solutions to OEMs for a wide range of products including wireless routers, access points, VoIP gateways, streaming media devices, print servers and other network devices. Ubicom was a venture-backed, privately held company with corporate headquarters in San Jose, California.
History
Ubicom was founded as Scenix Semiconductor in 1996 and operated under that name until November 2000, when Scenix became "Ubicom," a word derived from "ubiquitous communications".
April 1999: Mayfield Fund leads $10 million equity investment in Scenix.
November 2000: Scenix changes its name to Ubicom.
November 2002: Intersil and Ubicom demonstrate world's first 802.11g wireless access point.
March 2006: Ubicom secures $20 million in Series 3 funding, led by Investcorp Technology Ventures.
March 2012: Ubicom is taken over by Qualcomm Atheros.
Products
As Scenix and Ubicom, the company designed several families of microcontrollers, including:
The SX Series of 8-bit microcontrollers, a product line which was partially compatible with Arizona Microchip devices and ran at up to 100 MHz, single cycle. This product was eventually sold to Parallax, who continued its production.
The IP series of high performance media and Internet processors. These devices were designed to act as gateways for streaming media and data over wired and wireless links.
The Scenix/Ubicom processors relied on very high speed and low latency processing to emulate hardware interfaces in software, such as interrupt-polled soft-UARTs. This reduced the size of the silicon chip and therefore the cost, but increased the complexity of the software required on the chip.
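The soft-peripheral idea can be illustrated with the framing logic of a bit-banged UART: the CPU itself generates the start bit, data bits, and stop bit at precise intervals instead of dedicating silicon to them. A behavioural Python sketch — the real Scenix/Ubicom virtual peripherals were hand-timed assembly, and everything here (names, the simple 8-N-1 framing) is illustrative:

```python
def uart_tx_frame(byte):
    """8-N-1 framing: start bit (0), 8 data bits LSB-first, stop bit (1).
    On real hardware each bit would be written to a GPIO pin once per baud
    interval; here we just collect the bit sequence."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit
    return bits

def uart_rx_frame(bits):
    """Inverse: recover the byte from a 10-bit frame."""
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = uart_tx_frame(0x41)  # ASCII 'A'
assert frame == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert uart_rx_frame(frame) == 0x41
```

On the actual chips, the cost of this approach is exactly what the text describes: the framing logic is trivial, but meeting the per-bit timing in software is what makes the firmware complex.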
Ubicom developed its own architecture, the Ubicom32, and a real-time operating system (RTOS) for it. For exam
|
https://en.wikipedia.org/wiki/Mating%20in%20fungi
|
Fungi are a diverse group of organisms that employ a huge variety of reproductive strategies, ranging from fully asexual to almost exclusively sexual species. Most species can reproduce both sexually and asexually, alternating between haploid and diploid forms. This contrasts with many eukaryotes such as mammals, where the adults are always diploid and produce haploid gametes which combine to form the next generation. In fungi, both haploid and diploid forms can reproduce – haploid individuals can undergo asexual reproduction while diploid forms can produce gametes that combine to give rise to the next generation.
Mating in fungi is a complex process governed by mating types. Research on fungal mating has focused on several model species with different behaviour. Not all fungi reproduce sexually and many that do are isogamous; thus, for many members of the fungal kingdom, the terms "male" and "female" do not apply. Homothallic species are able to mate with themselves, while in heterothallic species only isolates of opposite mating types can mate.
Mating between isogamous fungi may consist only of a transfer of a nucleus from one cell to another. Vegetative incompatibility within species often prevents a fungal isolate from mating with another isolate. Isolates of the same incompatibility group do not mate or mating does not lead to successful offspring. High variation has been reported including same-chemotype mating, sporophyte to gametophyte mating and biparental transfer of mitochondria.
Mating in Zygomycota
A zygomycete hypha grows towards a compatible mate, and the two join at their hyphal tips via plasmogamy to form a bridge of progametangia. A pair of septa forms around the merged tips, enclosing nuclei from both isolates. A second pair of septa forms two adjacent cells, one on each side. These adjacent cells, called suspensors, provide structural support. The central cell, called the zygosporangium, is destined to become a spore. The zygosporang
|
https://en.wikipedia.org/wiki/Conjugate%20index
|
In mathematics, two real numbers p, q > 1 are called conjugate indices (or Hölder conjugates) if
1/p + 1/q = 1.
Formally, we also define q = ∞ as conjugate to p = 1 and vice versa.
Conjugate indices are used in Hölder's inequality, as well as Young's inequality for products; the latter can be used to prove the former. If p and q are conjugate indices, the spaces Lp and Lq are dual to each other (see Lp space).
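Young's inequality for products states that a·b ≤ a^p/p + b^q/q for non-negative a, b and conjugate indices p, q, with equality when a^p = b^q. It can be spot-checked numerically; a small illustrative sketch, not from the source:

```python
import math

def conjugate(p):
    """Return the Hölder conjugate q of p > 1, so that 1/p + 1/q = 1."""
    return p / (p - 1)

# Spot-check Young's inequality for products: a*b <= a**p/p + b**q/q.
for p in (1.5, 2.0, 3.0, 10.0):
    q = conjugate(p)
    assert math.isclose(1 / p + 1 / q, 1.0)
    for a in (0.1, 1.0, 2.0, 7.0):
        for b in (0.1, 1.0, 2.0, 7.0):
            assert a * b <= a ** p / p + b ** q / q + 1e-12
```

The p = q = 2 case with a = b shows the equality condition: a·a = a²/2 + a²/2.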
See also
Beatty's theorem
|
https://en.wikipedia.org/wiki/Bacteroides
|
Bacteroides is a genus of Gram-negative, obligate anaerobic bacteria. Bacteroides species are non-endospore-forming bacilli, and may be either motile or nonmotile, depending on the species. The DNA base composition is 40–48% GC. Unusual in bacterial organisms, Bacteroides membranes contain sphingolipids. They also contain meso-diaminopimelic acid in their peptidoglycan layer.
Bacteroides species are normally mutualistic, making up the most substantial portion of the mammalian gastrointestinal microbiota, where they play a fundamental role in processing of complex molecules to simpler ones in the host intestine. As many as 10¹⁰–10¹¹ cells per gram of human feces have been reported. They can use simple sugars when available; however, the main sources of energy for Bacteroides species in the gut are complex host-derived and plant glycans. Studies indicate that long-term diet is strongly associated with the gut microbiome composition—those who eat plenty of protein and animal fats have predominantly Bacteroides bacteria, while for those who consume more carbohydrates the Prevotella species dominate.
One of the most important clinically is Bacteroides fragilis.
Bacteroides melaninogenicus has recently been reclassified and split into Prevotella melaninogenica and Prevotella intermedia.
Pathogenesis
Bacteroides species also benefit their host by excluding potential pathogens from colonizing the gut. Some species (B. fragilis, for example) are opportunistic human pathogens, causing infections of the peritoneal cavity and infections following gastrointestinal surgery or appendicitis via abscess formation, inhibiting phagocytosis, and inactivating beta-lactam antibiotics. Although Bacteroides species are anaerobic, they are transiently aerotolerant and thus can survive in the abdominal cavity.
In general, Bacteroides are resistant to a wide variety of antibiotics—β-lactams, aminoglycosides, and recently many species have acquired resistance to erythromycin and tetracycline. This high level
|
https://en.wikipedia.org/wiki/C-slowing
|
C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same.
Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency.
Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs.
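The effect of C-slowing can be modelled behaviourally: replacing the single register in a feedback loop with C registers in series turns the circuit into a round-robin processor of C independent streams, each seeing exactly the original circuit's behaviour. A toy Python sketch — the accumulator "circuit" and all names are illustrative assumptions, not from the source:

```python
from collections import deque

def run_original(xs, f, init=0):
    """One register in the feedback loop: one stream, one update per clock."""
    state = init
    out = []
    for x in xs:
        state = f(state, x)   # combinational logic + register update
        out.append(state)
    return out

def run_c_slowed(streams, f, init=0):
    """C registers in series in the feedback loop: C interleaved threads.
    Each original computation now takes C clock cycles, but the C threads
    together keep throughput unchanged (or improved after retiming)."""
    C = len(streams)
    regs = deque([init] * C)          # the C-deep register chain
    outs = [[] for _ in range(C)]
    for step in range(len(streams[0])):
        for t in range(C):            # one clock tick per thread slot
            state = regs.popleft()    # oldest register value exits the chain
            state = f(state, streams[t][step])
            regs.append(state)        # updated value re-enters the chain
            outs[t].append(state)
    return outs

f = lambda s, x: s + x                # toy accumulator "circuit"
outs = run_c_slowed([[1, 2, 3], [10, 20, 30]], f)
# Each thread computes exactly what the original circuit would alone.
assert outs[0] == run_original([1, 2, 3], f)
assert outs[1] == run_original([10, 20, 30], f)
```

The model shows why the workload must be multi-threadable: the two streams never interact, they merely share the (now deeper-pipelined) logic.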
See also
Pipelining
Barrel processor
Resources
PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures
Simple Symmetric Multithreading in Xilinx FPGAs
Post Placement C-Slow Retiming for Xilinx Virtex (.ppt)
Post Placement C-Slow Retiming for Xilinx Virtex (.pdf)
Exploration of RaPiD-style Pipelined FPGA Interconnects
Time and Area Efficient Pattern Matching on FPGAs
Gate arrays
|
https://en.wikipedia.org/wiki/Magic%20graph
|
A magic graph is a graph whose edges are labelled by the first q positive integers, where q is the number of edges, so that the sum over the edges incident with any vertex is the same, independent of the choice of vertex; or it is a graph that has such a labelling. The name "magic" sometimes means that the integers are any positive integers; then the graph and the labelling using the first q positive integers are called supermagic.
A graph is vertex-magic if its vertices can be labelled so that the sum on any edge is the same. It is total magic if its edges and vertices can be labelled so that the vertex label plus the sum of labels on edges incident with that vertex is a constant.
There are a great many variations on the concept of magic labelling of a graph. There is much variation in terminology as well. The definitions here are perhaps the most common.
Comprehensive references for magic labellings and magic graphs are Gallian (1998), Wallis (2001), and Marr and Wallis (2013).
Magic squares
A semimagic square is an n × n square with the numbers 1 to n² in its cells, in which the sum of each row and column is the same. A semimagic square is equivalent to a magic labelling of the complete bipartite graph Kn,n. The two vertex sets of Kn,n correspond to the rows and the columns of the square, respectively, and the label on an edge risj is the value in row i, column j of the semimagic square.
The definition of semimagic squares differs from the definition of magic squares in the treatment of the diagonals of the square. Magic squares are required to have diagonals with the same sum as the row and column sums, but for semimagic squares this is not required. Thus, every magic square is semimagic, but not vice versa.
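The correspondence can be made concrete with the classical 3 × 3 Lo Shu magic square (which is in particular semimagic): read each entry as the label on edge risj of K3,3, and every vertex sum equals the magic constant 15. A quick illustrative sketch:

```python
# Lo Shu magic square: rows, columns (and diagonals) all sum to 15.
square = [
    [2, 7, 6],
    [9, 5, 1],
    [4, 3, 8],
]
n = len(square)

# Edge labelling of K_{n,n}: edge (r_i, s_j) gets label square[i][j].
# The sum at a row vertex r_i is the i-th row sum; at a column vertex
# s_j it is the j-th column sum.
row_vertex_sums = [sum(square[i][j] for j in range(n)) for i in range(n)]
col_vertex_sums = [sum(square[i][j] for i in range(n)) for j in range(n)]

# A magic labelling uses each of 1..q exactly once (q = n*n edges here)...
labels = sorted(v for row in square for v in row)
assert labels == list(range(1, n * n + 1))
# ...and gives every vertex the same incident-edge sum.
assert set(row_vertex_sums) == set(col_vertex_sums) == {15}
```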
|
https://en.wikipedia.org/wiki/Biosatellite
|
A biosatellite is an artificial satellite designed to carry plants or animals in outer space. Biosatellites are used to research the effects of space (cosmic radiation, weightlessness, etc.) on biological matter while in orbit around a celestial body. The first satellite carrying an animal (a dog, "Laika") was the Soviet Sputnik 2 on November 3, 1957. On August 20, 1960, the Soviet Sputnik 5 launched and recovered dogs from Earth orbit.
NASA launched 3 satellites between 1966 and 1969 for the Biosatellite program.
The most famous biosatellites include:
Biosatellite program launched by NASA between 1966 and 1969.
Bion space program by Soviet Union
The Mars Gravity Biosatellite
Orbiting Frog Otolith (OFO-A)
See also
Animals in space
Biosatellite (NASA)
|
https://en.wikipedia.org/wiki/Roman%20Jackiw
|
Roman Wladimir Jackiw (8 November 1939 – 14 June 2023) was a Polish-born American theoretical physicist and Dirac Medallist.
Biography
Jackiw was born in Lubliniec, Poland, in 1939 to a Ukrainian family; the family later moved to Austria and Germany before settling in New York City when he was about 10.
Jackiw earned his undergraduate degree from Swarthmore College and his PhD from Cornell University in 1966 under Hans Bethe and Kenneth Wilson. He was a professor at the Massachusetts Institute of Technology Center for Theoretical Physics from 1969 until his retirement. He retained his affiliation in emeritus status in 2019.
Jackiw co-discovered the chiral anomaly, which is also known as the Adler–Bell–Jackiw anomaly. In 1969, he and John Stewart Bell published their explanation, which was later expanded and clarified by Stephen L. Adler, of the observed decay of a neutral pion into two photons. This decay is forbidden by a symmetry of classical electrodynamics, but Bell and Jackiw showed that this symmetry cannot be preserved at the quantum level. Their introduction of an "anomalous" term from quantum field theory required that the sum of the charges of the elementary fermions had to be zero. This work also gave important support to the colour theory of quarks.
Jackiw is also known for Jackiw–Teitelboim gravity, a theory of gravity with one dimension each of space and time that includes a dilaton field. Sometimes known as the R = T model or as JT gravity, it is used to model some aspects of near-extremal black holes.
Jackiw married fellow physicist So-Young Pi, daughter of Korean writer Pi Chun-deuk. One of Jackiw's sons is Stefan Jackiw, an American violinist. The other is Nicholas Jackiw, a software designer known for inventing The Geometer's Sketchpad. His daughter, Simone Ahlborn, is an educator at Moses Brown School in Providence, Rhode Island.
Jackiw died 14 June 2023, at the age of 83.
Awards
Heineman Prize, 1995
On 26 May 2000, Jackiw received an honorary
|
https://en.wikipedia.org/wiki/MIT%20Center%20for%20Theoretical%20Physics
|
The MIT Center for Theoretical Physics (CTP) is the hub of theoretical nuclear physics, particle physics, and quantum information research at MIT. It is a subdivision of MIT Laboratory for Nuclear Science and Department of Physics.
Research
CTP activities range from string theory and cosmology at the highest energies down through unification and beyond-the-standard-model physics, through the standard model, to QCD, hadrons, quark matter, and nuclei at the low energy scale.
Members of the CTP are also currently working on quantum computation and on energy policy. The breadth and depth of research in nuclear, particle, string, and gravitational physics at the CTP makes it a unique environment for researchers in these fields.
Members
In addition to the 15 MIT faculty members working in the CTP, at any one time there are roughly a dozen postdoctoral fellows, and as many, or more, long-term visitors working at the postdoctoral or faculty level. The CTP supports 25-35 MIT graduate students, who work with the faculty and postdocs on problems across the energy spectrum.
Current research areas in the center include particle physics, cosmology, string theory, phenomenology in and beyond the standard model, quantum field theory, lattice QCD, condensed matter physics, quantum computing, and energy research.
Notable current faculty include Nobel Laureate Frank Wilczek, Jeffrey Goldstone, inflationary cosmologist Alan Guth, cosmologist Max Tegmark, and quantum information scientist Peter Shor. Past CTP faculty members include US Secretary of Energy Ernest Moniz, Breakthrough Prize winner Daniel Freedman, particle theorist and author Lisa Randall, Abel Prize winner Isadore Singer, Nobel Laureate Steven Weinberg, and many others.
Directors
Herman Feshbach, 1967–73
Francis Low, 1973–76
Arthur Kerman, 1976–83
Jeffrey Goldstone, 1983–89
John Negele, 1989–98
Robert Jaffe, 1998–2004
Eddie Farhi, 2004–16
Washington Taylor IV, 2016–19
Iain Stewart, 2019–present
Faculty
|
https://en.wikipedia.org/wiki/NCR%20VRX
|
VRX is an acronym for Virtual Resource eXecutive, a proprietary operating system on the NCR Criterion series, and later the V-8000 series of mainframe computers manufactured by NCR Corporation during the 1970s and 1980s. It replaced the B3 Operating System originally distributed with the Century series, and inherited many of the features of the B4 Operating System from the high-end of the NCR Century series of computers. VRX was upgraded in the late 1980s and 1990s to become VRX/E for use on the NCR 9800 (Criterion) series of computers. Edward D. Scott managed the development team of 150 software engineers who developed VRX and James J "JJ" Whelan was the software architect responsible for technical oversight and the overall architecture of VRX. Tom Tang was the Director of Engineering at NCR responsible for development of the entire Criterion family of computers. This product line achieved over $1B in revenue and $300M in profits for NCR.
VRX was shipped to its first customers, on a trial basis, in 1977. Customer sites included the United Farm Workers labor union, who in 1982 were running an NCR 8555 mainframe running VRX.
VRX was NCR's response to IBM's MVS virtual storage operating system and was NCR's first virtual storage system. It was based on a segmented page architecture provided in the Criterion architecture.
The Criterion series provided a virtual machine architecture which allowed different machine architectures running under the same operating system. The initial offering provided a Century virtual machine which was instruction compatible with the Century series and a COBOL virtual machine designed to optimize programs written in COBOL. Switching between virtual machines was provided by a virtual machine indicator in the subroutine call mechanism. This allowed programs written in one virtual machine to use subroutines written for another. The same mechanism was used to enter an "executive" state used for operating system functions and a "privile
|
https://en.wikipedia.org/wiki/Channel%2037
|
Channel 37 is an ultra-high frequency (UHF) television broadcasting channel that is intentionally left unused by countries in most of ITU Region 2, such as the United States, Canada, Mexico and Brazil. The frequency range allocated to this channel is important for radio astronomy, so all broadcasting is prohibited within a window of frequencies centred typically on 611 MHz. Similar reservations exist in portions of the Eurasian and Asian regions, although the channel numbering varies.
History
Channel 37 in System M and N countries occupied a band of UHF frequencies from 608 to 614 MHz. This band is particularly important to radio astronomy because it allows observation in a region of the spectrum in between the dedicated frequency allocations near 410 MHz and 1.4 GHz. The area reserved or unused differs from nation to nation and region to region (as for example the EU and British Isles have slightly different reserved frequency areas).
One radio astronomy application in this band is for very-long-baseline interferometry.
When UHF channels were being allocated in the United States in 1952, channel 37 was assigned to 18 communities across the country. One of them, Valdosta, Georgia, featured the only construction permit ever issued for channel 37: WGOV-TV, owned by Eurith Dickenson "Dee" Rivers Jr., son of the former governor of Georgia (hence the call letters). Rivers received the CP on February 26, 1953, but WGOV-TV never made it to the air; on October 28, 1955, they requested an allocation on channel 8, but the petition was denied.
In 1963, the Federal Communications Commission (FCC) adopted a 10-year moratorium on any allocation of stations to Channel 37. A new ban on such stations took effect at the beginning of 1974, and was made permanent by a number of later FCC actions. As a result of this, and similar actions by the Canadian Radio-television and Telecommunications Commission, Channel 37 has never been used by any over-the-air television station in Canada or the United States.
The 2016-20
|
https://en.wikipedia.org/wiki/Emergent%20virus
|
An emergent virus (or emerging virus) is a virus that is either newly appeared, notably increasing in incidence/geographic range or has the potential to increase in the near future. Emergent viruses are a leading cause of emerging infectious diseases and raise public health challenges globally, given their potential to cause outbreaks of disease which can lead to epidemics and pandemics. As well as causing disease, emergent viruses can also have severe economic implications. Recent examples include the SARS-related coronaviruses, which have caused the 2002-2004 outbreak of SARS (SARS-CoV-1) and the 2019–21 pandemic of COVID-19 (SARS-CoV-2). Other examples include the human immunodeficiency virus which causes HIV/AIDS; the viruses responsible for Ebola; the H5N1 influenza virus responsible for avian flu; and H1N1/09, which caused the 2009 swine flu pandemic (an earlier emergent strain of H1N1 caused the 1918 Spanish flu pandemic). Viral emergence in humans is often a consequence of zoonosis, which involves a cross-species jump of a viral disease into humans from other animals. As zoonotic viruses exist in animal reservoirs, they are much more difficult to eradicate and can therefore establish persistent infections in human populations.
Emergent viruses should not be confused with re-emerging viruses or newly detected viruses. A re-emerging virus is generally considered to be a previously appeared virus that is experiencing a resurgence, for example measles. A newly detected virus is a previously unrecognized virus that had been circulating in the species as endemic or epidemic infections. Newly detected viruses may have escaped classification because they left no distinctive clues, and/or could not be isolated or propagated in cell culture. Examples include human rhinovirus (a leading cause of common colds which was first identified in 1956), hepatitis C (eventually identified in 1989), and human metapneumovirus (first described in 2001, but thought to have been ci
|
https://en.wikipedia.org/wiki/Caffeine%20citrate
|
Caffeine citrate, sold under the brand name Cafcit among others, is a medication used to treat a lack of breathing in premature babies. Specifically, it is given to babies born at less than 35 weeks of gestational age or with low birth weight, once other causes have been ruled out. It is given by mouth or by slow injection into a vein.
Side effects can include problems feeding, increased heart rate, low blood sugar, necrotizing enterocolitis, and kidney problems. Testing blood caffeine levels is occasionally recommended. Although it is often referred to as a citric acid salt of caffeine, as implied by its name, caffeine citrate in fact consists of cocrystals of the two components. Caffeine citrate is in the xanthine family of medication. It works by stimulating the respiratory centers in the brain.
Caffeine was discovered in 1819. It is on the World Health Organization's List of Essential Medicines. The intravenous form may also be taken by mouth.
In June 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) recommended the approval of Gencebok. It was approved for use in the European Union in August 2020.
Medical uses
Caffeine citrate is generally the preferred treatment for apnea of prematurity for infants born 28 to 32 weeks or earlier than 28 weeks. It has fewer side effects as compared to theophylline.
Caffeine improves airway function in asthma, increasing forced expiratory volume (FEV1) by 5% to 18%, with this effect lasting for up to four hours.
Mechanism
In mechanism of action, the preparation is identical to caffeine base, as the citrate counter-ion dissociates in water. Because of the added weight of the citrate moiety, doses of caffeine citrate are necessarily higher than doses of caffeine base; that is, it takes a larger dose of the salt to deliver the same amount of caffeine. The ratio of therapeutic doses of caffeine base to its citrate salt is typically 1:2, so the two dosing conventions should be clearly distinguished.
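As a sketch of the dosing arithmetic only (not dosing guidance; the function names are made up for this example, and the 2:1 mass ratio is the conventional figure), the base/citrate conversion looks like this:

```python
# Illustrative sketch of the 1:2 base-to-citrate dose relationship:
# 2 mg of caffeine citrate delivers roughly 1 mg of caffeine base.

def citrate_to_base(citrate_mg: float) -> float:
    """Approximate caffeine-base equivalent of a caffeine citrate dose."""
    return citrate_mg / 2.0

def base_to_citrate(base_mg: float) -> float:
    """Approximate caffeine citrate dose delivering a given base amount."""
    return base_mg * 2.0

print(citrate_to_base(20.0))  # -> 10.0
```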
Manufacture
The drug is prepared
|
https://en.wikipedia.org/wiki/Antibody-dependent%20cellular%20cytotoxicity
|
Antibody-dependent cellular cytotoxicity (ADCC), also referred to as antibody-dependent cell-mediated cytotoxicity, is a mechanism of cell-mediated immune defense whereby an effector cell of the immune system kills a target cell, whose membrane-surface antigens have been bound by specific antibodies. It is one of the mechanisms through which antibodies, as part of the humoral immune response, can act to limit and contain infection.
ADCC is independent of the immune complement system, which also lyses targets but does not require any other cell. ADCC requires an effector cell, classically a natural killer (NK) cell that typically interacts with immunoglobulin G (IgG) antibodies. However, macrophages, neutrophils and eosinophils can also mediate ADCC; for example, eosinophils can kill certain parasitic worms (helminths) via IgE antibodies.
In general, ADCC has typically been described as the immune response to antibody-coated cells leading ultimately to the lysing of the infected or non-host cell. In recent literature, its importance in regards to treatment of cancerous cells and deeper insight into its deceptively complex pathways have been topics of increasing interest to medical researchers.
NK cells
The typical ADCC involves activation of NK cells by antibodies in a multi-tiered progression of immune control. A NK cell expresses Fcγ receptors. These receptors recognize and bind to the reciprocal portion of an antibody, such as IgG, which binds to the surface of a pathogen-infected target cell. The most common of these Fc receptors on the surface of an NK cell is CD16 or FcγRIII. Once the Fc receptor binds to the Fc region of the antibody, the NK cell releases cytotoxic factors that cause the death of the target cell.
During replication of a virus, some of the viral proteins are expressed on the cell surface membrane of the infected cell. Antibodies can then bind to these viral proteins. Next, the NK cells which have reciprocal Fcγ receptors wi
|
https://en.wikipedia.org/wiki/Fredholm%20theory
|
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm.
Overview
The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details.
Fredholm equation of the first kind
Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given:
g(x) = ∫ K(x, y) f(y) dy.
This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation
L f(x) = g(x),
where the function g is given and f is unknown. Here, L stands for a linear differential operator.
For example, one might take L to be an elliptic operator, such as the Laplacian,
in which case the equation to be solved becomes the Poisson equation.
A general method of solving such equations is by means of Green's functions: rather than a direct attack, one first finds the function G(x, y) such that for a given pair (x, y),
L G(x, y) = δ(x − y),
where δ(x − y) is the Dirac delta function.
The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation,
f(x) = ∫ G(x, y) g(y) dy.
The function G(x, y) is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises.
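As an illustrative numerical sketch (the kernel and right-hand side below are arbitrary smooth examples, not taken from the text), a Fredholm equation of the first kind can be discretized with a quadrature rule and solved as a linear system:

```python
import numpy as np

# Discretize g(x) = ∫ K(x, y) f(y) dy on [0, 1] with a uniform
# quadrature rule, turning it into the linear system (K * w) f = g.
n = 50
x = np.linspace(0.0, 1.0, n)
w = 1.0 / n                                   # uniform quadrature weight

K = np.exp(-np.abs(x[:, None] - x[None, :]))  # K(x, y) = exp(-|x - y|)
A = K * w

f_true = np.sin(np.pi * x)                    # choose an f, synthesize g
g = A @ f_true

# With exact synthetic data a direct solve recovers f; real first-kind
# problems are ill-posed and generally need regularization (e.g. Tikhonov).
f_rec = np.linalg.solve(A, g)

print(np.max(np.abs(f_rec - f_true)))         # small reconstruction error
```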
In the general theory, and may be points on any manifold; the real number line or -dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, t
|
https://en.wikipedia.org/wiki/Apolipoprotein%20E
|
Apolipoprotein E (Apo-E) is a protein involved in the metabolism of fats in the body of mammals. A subtype is implicated in Alzheimer's disease and in cardiovascular disease. It is encoded in humans by the gene APOE.
Apo-E belongs to a family of fat-binding proteins called apolipoproteins. In the circulation, it is present as part of several classes of lipoprotein particles, including chylomicron remnants, VLDL, IDL, and some HDL. APOE interacts significantly with the low-density lipoprotein receptor (LDLR), which is essential for the normal processing (catabolism) of triglyceride-rich lipoproteins. In peripheral tissues, APOE is primarily produced by the liver and macrophages, and mediates cholesterol metabolism. In the central nervous system, Apo-E is mainly produced by astrocytes and transports cholesterol to neurons via APOE receptors, which are members of the low density lipoprotein receptor gene family. Apo-E is the principal cholesterol carrier in the brain. Apo-E is required for cholesterol transportation from astrocytes to neurons. APOE qualifies as a checkpoint inhibitor of the classical complement pathway by complex formation with activated C1q.
Evolution
Apolipoproteins are not unique to mammals. Many terrestrial and marine vertebrates have versions of them. It is believed that APOE arose via gene duplications of APOC1 before the fish-tetrapod split c. 400 million years ago. Proteins similar in function have been found in choanoflagellates, suggesting that they are a very old class of proteins predating the dawn of all living animals.
The three major human alleles (E4, E3, E2) arose after the primate-human split around 7.5 million years ago. These alleles are the by-product of non-synonymous mutations which led to changes in functionality. The first allele to emerge was E4. After the primate-human split, there were four amino acid changes in the human lineage, three of which had no effect on protein function (V174L, A18T, A135V). The fourth substit
|
https://en.wikipedia.org/wiki/176%20%28number%29
|
176 (one hundred [and] seventy-six) is the natural number following 175 and preceding 177.
In mathematics
176 is an even number and an abundant number. It is an odious number, a self number, a semiperfect number, and a practical number.
176 is a cake number, a happy number, a pentagonal number, and an octagonal number. 15 can be partitioned in 176 ways.
The Higman–Sims group can be constructed as a doubly transitive permutation group acting on a geometry containing 176 points, and it is also the symmetry group of the largest possible set of equiangular lines in 22 dimensions, which contains 176 lines.
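Several of the properties above are easy to verify computationally; the following sketch checks that 176 is abundant and happy, and that 15 can be partitioned in 176 ways:

```python
from functools import lru_cache

def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

@lru_cache(maxsize=None)
def partitions(n, k=1):
    """Number of partitions of n into parts >= k."""
    if n == 0:
        return 1
    return sum(partitions(n - j, j) for j in range(k, n + 1))

def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(c) ** 2 for c in str(n))
    return n == 1

print(sum(proper_divisors(176)) > 176)  # abundant -> True
print(partitions(15))                   # 176 partitions of 15
print(is_happy(176))                    # True (176 -> 86 -> 100 -> 1)
```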
In astronomy
176 Iduna is a large main belt asteroid with a composition similar to that of the largest main belt asteroid, 1 Ceres
Gliese 176 is a red dwarf star in the constellation of Taurus
Gliese 176 b is a super-Earth exoplanet in the constellation of Taurus. This planet orbits close to its parent star Gliese 176
In the Bible
Minuscule 176 (in the Gregory-Aland numbering), a Greek minuscule manuscript of the New Testament
176 is the highest verse number in the Bible, found in Psalm 119.
In the military
Attack Squadron 176 United States Navy squadron during the Vietnam War
was a United States Navy troop transport during World War II, the Korean War and Vietnam War
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy Porpoise-class submarine during World War II
was a United States Navy during World War II
was a United States Navy following World War I
was a United States Navy Sonoma-class fleet tug during World War II
176th Wing is the largest unit of the Alaska Air National Guard
In transportation
Heinkel He 176 was a German rocket-powered aircraft
London Buses route 176
176th Street, Bronx elevated station on the IRT Jerome Avenue Line of the New York City Subway
In other fields
176 is also:
The year AD 176 or 176 BC
176 AH is a year in the Islamic calendar that co
|
https://en.wikipedia.org/wiki/Host%20system
|
A host system is any networked computer that provides services, such as printer, web or database access, to other systems or users on that network. A host usually runs a multi-user operating system such as Unix, MVS or VMS, or at least an operating system with network services, such as Windows.
|
https://en.wikipedia.org/wiki/Zearalenone
|
Zearalenone (ZEN), also known as RAL and F-2 mycotoxin, is a potent estrogenic metabolite produced by some Fusarium and Gibberella species. Specifically, Gibberella zeae, the fungal species in which zearalenone was initially detected, is known in its asexual (anamorph) stage as Fusarium graminearum. Several Fusarium species produce toxic substances of considerable concern to livestock and poultry producers, namely deoxynivalenol, T-2 toxin, HT-2 toxin, diacetoxyscirpenol (DAS) and zearalenone. In particular, ZEN is produced by Fusarium graminearum, Fusarium culmorum, Fusarium cerealis, Fusarium equiseti, Fusarium verticillioides, and Fusarium incarnatum. Zearalenone is the primary toxin that binds to estrogen receptors, causing infertility, abortion or other breeding problems, especially in swine. ZEN is often detected together with deoxynivalenol in contaminated samples, and its toxicity needs to be considered in combination with the presence of other toxins.
Zearalenone is heat-stable and is found worldwide in a number of cereal crops, such as maize, barley, oats, wheat, rice, and sorghum. Its production increases when the climate is warm with air humidity at or above twenty percent. The environmental pH also plays a role in the toxin's production: when temperatures fall to 15 °C, alkaline soils still support ZEN production, while at the preferred Fusarium temperature range of 25 °C to 30 °C, neutral pH results in the greatest toxin production.
In addition to its actions on the classical estrogen receptors, zearalenone has been found to act as an agonist of the GPER (GPR30).
Chemical and physical properties
Zearalenone is a white crystalline solid, with molecular formula C18H22O5 and 318.364 g/mol molecular weight. It is a resorcyclic acid lactone. It exhibits blue-green fluorescence when excited by long wavelength ultraviolet (UV) light (360 nm) and a more intense green fluorescence when excited with short wavelength UV light (260 nm). In methanol, UV a
|
https://en.wikipedia.org/wiki/177%20%28number%29
|
177 (one hundred [and] seventy-seven) is the natural number following 176 and preceding 178.
In mathematics
It is a Leyland number, since 177 = 2⁷ + 7².
It is a 60-gonal number, and an arithmetic number, since the mean of its divisors (1, 3, 59 and 177) is equal to 60, an integer.
177 is a Leonardo number, part of a sequence of numbers closely related to the Fibonacci numbers. In graph enumeration, there are 177 undirected graphs (not necessarily connected) that have seven edges and no isolated vertices, and 177 rooted trees with ten nodes and height at most three. There are 177 ways of re-connecting the (labeled) vertices of a regular octagon into a star polygon that does not use any of the octagon edges.
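Two of the claims above can be checked directly: 177 = 2⁷ + 7² makes it a Leyland number, and the mean of its divisors 1, 3, 59 and 177 is 60:

```python
# Verify that 177 is a Leyland number (x**y + y**x with x, y > 1)
# and an arithmetic number (integer mean of divisors).

def leyland_pairs(n, limit=20):
    """All (x, y) with x >= y >= 2 and x**y + y**x == n, x, y < limit."""
    return [(x, y) for x in range(2, limit) for y in range(2, x + 1)
            if x ** y + y ** x == n]

divisors = [d for d in range(1, 178) if 177 % d == 0]

print(leyland_pairs(177))                       # [(7, 2)]
print(divisors, sum(divisors) / len(divisors))  # [1, 3, 59, 177] 60.0
```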
In other fields
177 is the second highest score for a flight of three darts, below the highest score of 180.
See also
The year AD 177 or 177 BC
List of highways numbered 177
|
https://en.wikipedia.org/wiki/ACM%20SIGACT
|
ACM SIGACT or SIGACT is the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory, whose purpose is support of research in theoretical computer science. It was founded in 1968 by Patrick C. Fischer.
Publications
SIGACT publishes a quarterly print newsletter, SIGACT News. Its online version, SIGACT News Online, has been available since 1996 for SIGACT members, with unrestricted access to some features.
Conferences
SIGACT sponsors or has sponsored several annual conferences.
COLT: Conference on Learning Theory, until 1999
PODC: ACM Symposium on Principles of Distributed Computing (jointly sponsored by SIGOPS)
PODS: ACM Symposium on Principles of Database Systems
POPL: ACM Symposium on Principles of Programming Languages
SOCG: ACM Symposium on Computational Geometry (jointly sponsored by SIGGRAPH), until 2014
SODA: ACM/SIAM Symposium on Discrete Algorithms (jointly sponsored by the Society for Industrial and Applied Mathematics). Two annual workshops held in conjunction with SODA also have the same joint sponsorship:
ALENEX: Workshop on Algorithms and Experiments
ANALCO: Workshop on Analytic Algorithms and Combinatorics
SPAA: ACM Symposium on Parallelism in Algorithms and Architectures
STOC: ACM Symposium on the Theory of Computing
COLT, PODC, PODS, POPL, SODA, and STOC are all listed as highly cited venues by both citeseerx and libra.
Awards and prizes
Gödel Prize, for outstanding papers in theoretical computer science (sponsored jointly with EATCS)
Donald E. Knuth Prize, for outstanding contributions to the foundations of computer science (sponsored jointly with IEEE Computer Society's Technical Committee on the Mathematical Foundations of Computing)
Edsger W. Dijkstra Prize in distributed computing (sponsored jointly with SIGOPS, EATCS, and companies)
Paris Kanellakis Theory and Practice Award, for theoretical accomplishments of significant and demonstrable effect on the practice of computing (ACM Award co-sponsored by SIGAC
|
https://en.wikipedia.org/wiki/Eduardo%20Reck%20Miranda
|
Eduardo Reck Miranda (born 1963) is a Brazilian composer of chamber and electroacoustic pieces but is most notable in the United Kingdom for his scientific research into computer music, particularly in the field of human-machine interfaces where brain waves will replace keyboards and voice commands to permit the disabled to express themselves musically.
Biography
Early life
Miranda was born in Porto Alegre, Brazil. As one of the largest cities in southern Brazil and a cultural, political and economic center, Porto Alegre had a significant influence on Miranda's music.
Education
Miranda attended the University of Vale do Rio dos Sinos (UNISINOS) in Brazil, where he received a degree in Data Processing Technology in 1985. He then attended the Federal University of Rio Grande do Sul (UFRGS), where he studied music composition. Desiring to learn more about music technology and experience more of the world, Miranda moved to the United Kingdom, where he began his postgraduate research studies at the University of York. At York, he developed an in-depth study of musical composition using cellular automata, and in 1991 he received his MSc in Music Technology. After receiving his MSc, Miranda went briefly to Germany to study algorithmic composition at the Zentrum für Kunst und Medientechnologie in Karlsruhe.
In 1992, Miranda gained admittance to the Faculty of Music of the University of Edinburgh in Scotland where he obtained his PhD in the combined fields of music and artificial intelligence in 1995. For his doctoral thesis, he focused on musical knowledge representation, machine learning of music and software sound synthesis.
Experiences
After receiving his PhD, Miranda worked at the Edinburgh Parallel Computing Centre (EPCC). At EPCC, he developed Chaosynth, an innovative granular synthesis software that uses cellular automata to generate complex sound spectra.
In the mid-1990s, Miranda joined the Department of Music at the
|
https://en.wikipedia.org/wiki/Ribbon%20of%20Saint%20George
|
The ribbon of Saint George (also known as Saint George's ribbon or the Georgian ribbon, and as the Guards ribbon in the Soviet context: see Terminology for further information) is a Russian military symbol consisting of a black and orange bicolour pattern, with three black and two orange stripes. It appears as a component of many high military decorations awarded by the Russian Empire, the Soviet Union and the current Russian Federation.
In the early 21st century, the ribbon of Saint George has come to be used as an awareness ribbon for commemorating the veterans of the Eastern Front of the Second World War (known in Russia and some post-Soviet countries as the Great Patriotic War). It is the primary symbol used in association with Victory Day. It enjoys wide popularity in Russia as a patriotic symbol, as well as a way to show public support to the Russian government. Since 2014, the symbol has become much more controversial in certain post-Soviet states such as Ukraine and the Baltic states, due to its association with pro-Russian and separatist sentiment, especially following the start of the 2022 Russian invasion of Ukraine where it has been associated with Russian nationalism and militarism.
Terminology
As the ribbon of Saint George has been used by different Russian governments, multiple terms exist for it in the Russian language. The ribbon first received a formal name in the Russian Empire, in documents prescribing its usage as an award: the Georgian ribbon (, georgiyevskaya lenta). The old Tsarist term was used in the Soviet Union to describe the black-orange ribbon in the Soviet award system, but only in non-official contexts, such as the Military History Journal published by the Soviet Ministry of Defense. Formally, the black-orange ribbon on the badges, flags and cap tallies of Guards units was called the Guards ribbon (, gvardeyskaya lenta), while the same ribbon as it was used in other Soviet awards had no official name. In the military terminology of the
|
https://en.wikipedia.org/wiki/Neutron%20backscattering
|
Neutron backscattering is one of several inelastic neutron scattering techniques. Backscattering from monochromator and analyzer crystals is used to achieve an energy resolution in the order of μeV. Neutron backscattering experiments are performed to study atomic or molecular motion on a nanosecond time scale.
History
Neutron backscattering was proposed by Heinz Maier-Leibnitz in 1966, and realized by some of his students in a test setup at the research reactor FRM I in Garching bei München, Germany. Following this successful demonstration of principle, permanent spectrometers were built at Forschungszentrum Jülich and at the Institut Laue-Langevin (ILL). Later instruments brought an extension of the accessible momentum transfer range (IN13 at ILL), the introduction of focussing optics (IN16 at ILL), and a further increase of intensity by a compact design with a phase-space transform chopper (HFBS at NIST, SPHERES at FRM II, IN16B at the Institut Laue-Langevin).
Backscattering spectrometers
Operational backscattering spectrometers at reactors include IN10, IN13, and IN16B at the Institut Laue-Langevin, the High Flux Backscattering Spectrometer (HFBS) at the NIST Center for Neutron Research, the SPHERES instrument of Forschungszentrum Jülich at FRM II, and EMU at ANSTO.
Inverse geometry spectrometers
Inverse geometry spectrometers at spallation sources include IRIS and OSIRIS at the ISIS neutron source at Rutherford-Appleton, BASIS at the Spallation Neutron Source, and MARS at the Paul Scherrer Institute
Historic instruments
Historic instruments are the first backscattering spectrometer that was a temporary setup at FRM I
and the backscattering spectrometer BSS (also called PI) at the DIDO reactor of the Forschungszentrum Jülich (decommissioned).
|
https://en.wikipedia.org/wiki/Bandwidth%20expansion
|
Bandwidth expansion is a technique for widening the bandwidth of the resonances in an LPC filter. This is done by moving all the poles towards the origin by a constant factor γ (with 0 < γ < 1). The bandwidth-expanded filter A′(z) can be derived from the original filter A(z) by A′(z) = A(z/γ).
Let A(z) be expressed as A(z) = Σ_{k=0..N} a_k z^{−k}.
The bandwidth-expanded filter can then be expressed as A′(z) = Σ_{k=0..N} γ^k a_k z^{−k}.
In other words, each coefficient a_k in the original filter is simply multiplied by γ^k in the bandwidth-expanded filter. The simplicity of this transformation makes it attractive, especially in CELP coding of speech, where it is often used for perceptual noise weighting and/or to stabilize the LPC analysis. However, when it comes to stabilizing the LPC analysis, lag windowing is often preferred to bandwidth expansion.
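The scaling a_k → γ^k a_k can be sketched as follows (illustrative coefficient values; the pole-radius check uses NumPy's root finder):

```python
import numpy as np

# Bandwidth expansion: multiplying each LPC coefficient a_k by
# gamma**k moves every pole of 1/A(z) toward the origin by gamma.

def bandwidth_expand(a, gamma):
    """a = [a_0, a_1, ..., a_N] with a_0 = 1; returns [gamma**k * a_k]."""
    return [gk * ak for gk, ak in zip(gamma ** np.arange(len(a)), a)]

gamma = 0.9
a = [1.0, -0.9, 0.81]              # conjugate pole pair at radius 0.9
a_bw = bandwidth_expand(a, gamma)

# The poles of 1/A(z) are the roots of z**N * A(z), whose coefficients
# (highest power first) are exactly a_0 ... a_N.
r_orig = np.abs(np.roots(a))       # both radii 0.9
r_bw = np.abs(np.roots(a_bw))      # both radii 0.81 = gamma * 0.9
print(r_orig, r_bw)
```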
|
https://en.wikipedia.org/wiki/Halpern%E2%80%93L%C3%A4uchli%20theorem
|
In mathematics, the Halpern–Läuchli theorem is a partition result about finite products of infinite trees. Its original purpose was to give a model for set theory in which the Boolean prime ideal theorem is true but the axiom of choice is false. It is often called the Halpern–Läuchli theorem, but the proper attribution for the theorem as it is formulated below is to Halpern–Läuchli–Laver–Pincus or HLLP (named after James D. Halpern, Hans Läuchli, Richard Laver, and David Pincus), following .
Let d, r < ω, and let ⟨T_i : i < d⟩ be a sequence of finitely splitting trees of height ω. Let
then there exists a sequence of subtrees strongly embedded in such that
Alternatively, let
and
.
The HLLP theorem says that not only is the collection partition regular for each d < ω, but that the homogeneous subtree guaranteed by the theorem is strongly embedded in
|
https://en.wikipedia.org/wiki/Civil%20ensign
|
A civil ensign is an ensign (maritime flag) used by civilian vessels to denote their nationality. It can be the same or different from the state ensign and the naval ensign (or war ensign). It is also known as the merchant ensign or merchant flag. Some countries have special civil ensigns for yachts, and even for specific yacht clubs, known as yacht ensigns.
Most countries have only one national flag and ensign for all purposes. In other countries, a distinction is made between the land flag and the civil, state and naval ensigns. The British ensigns, for example, differ from the flag used on land (the Union Flag) and have different versions of plain and defaced Red and Blue ensigns for civilian and state use, as well as the naval ensign (White Ensign) that can also be used by yachts of the Royal Yacht Squadron.
Countries having specific civil ensigns
The civil ensigns that are different from the general national flag can be grouped into a number of categories.
Civil ensigns with the national flag in the canton
Several countries use red flags with, in most cases, either the respective national flag or the Union Flag in the canton, patterned after the Red Ensign. British overseas territories fly the plain Red Ensign or a Red Ensign with the respective colonial arms in the fly. Saudi Arabia puts its national flag in the canton of an otherwise-green flag (the Saudi Arabian flag is hoisted with the flagpole to its right so the canton is in the upper right corner of the flag). Ghana stopped using its Red Ensign in 2003 with the adoption of a new merchant shipping act, which made the Ghanaian flag the proper national colors for Ghanaian ships. Similarly, Sri Lanka stopped using its Red Ensign in 1969 and uses the Sri Lankan flag as the civil ensign. Under the relevant shipping law for the Solomon Islands, the Shipping Act 1998, (No. 5 of 1998), the national flag of the Solomon Islands and not a Red Ensign is the appropriate flag: "The National Flag of Solomon Islands
|
https://en.wikipedia.org/wiki/65%2C535
|
65535 is the integer after 65534 and before 65536.
It is the maximum value of an unsigned 16-bit integer.
In mathematics
65535 is the sum of 2⁰ through 2¹⁵ (2⁰ + 2¹ + 2² + ... + 2¹⁵) and is therefore a repdigit in base 2 (1111111111111111), in base 4 (33333333), and in base 16 (FFFF).
It is the ninth number whose Euler totient has an aliquot sum that is : , and the twenty-eighth perfect totient number equal to the sum of its iterated totients.
65535 is the fifteenth 626-gonal number, the fifth 6555-gonal number, and the third 21846-gonal number.
65535 is the product of the first four Fermat primes: 65535 = (2 + 1)(4 + 1)(16 + 1)(256 + 1) = 3 × 5 × 17 × 257. Because of this property, it is possible to construct with compass and straightedge a regular polygon with 65535 sides (see constructible polygon).
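A quick sketch verifying the Fermat-prime factorization and the repdigit claims above:

```python
# 65535 = 3 * 5 * 17 * 257, and it is a repdigit in bases 2, 4 and 16.

n = 65535
fermat_primes = [3, 5, 17, 257]    # 2**(2**k) + 1 for k = 0..3
prod = 1
for p in fermat_primes:
    prod *= p
print(prod == n)                   # True

def digits(m, base):
    """Digits of m in the given base, most significant first."""
    out = []
    while m:
        out.append(m % base)
        m //= base
    return out[::-1]

print(digits(n, 2))    # sixteen 1s
print(digits(n, 4))    # eight 3s
print(digits(n, 16))   # four 15s, i.e. FFFF
```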
In computing
65535 occurs frequently in the field of computing because it is 2¹⁶ − 1, one less than 2 to the 16th power: the highest number that can be represented by an unsigned 16-bit binary number. Some computer programming environments have predefined constants representing 65535.
In older computers with processors having a 16-bit address bus such as the MOS Technology 6502 popular in the 1970s and the Zilog Z80, 65535 (FFFF16) is the highest addressable memory location, with 0 (000016) being the lowest. Such processors thus support at most 64 KiB of total byte-addressable memory.
In Internet protocols, 65535 is also the number of TCP and UDP ports available for use, since port 0 is reserved.
In some implementations of Tiny BASIC, entering a command that divides any number by zero will return 65535.
In Microsoft Word 2011 for Mac, 65535 is the highest line number that will be displayed.
In HTML, 65535 is the decimal value of the web color Aqua (#00FFFF).
See also
4,294,967,295
255 (number)
16-bit computing
|
https://en.wikipedia.org/wiki/Group%20%28computing%29
|
In computing, the term group generally refers to a grouping of users. In principle, users may belong to none, one, or many groups (although in practice some systems place limits on this.) The primary purpose of user groups is to simplify access control to computer systems.
Suppose a computer science department has a network which is shared by students and academics. The department has made a list of directories which the students are permitted to access and another list of directories which the staff are permitted to access. Without groups, administrators would give each student permission to every student directory, and each staff member permission to every staff directory. In practice, that would be very unworkable – every time a student or staff member arrived, administrators would have to allocate permissions on every directory.
With groups, the task is much simpler: create a student group and a staff group, placing each user in the proper group. The entire group can then be granted access to the appropriate directory. To add or remove an account, one need only do it in one place (in the definition of the group), rather than on every directory. This workflow provides a clear separation of concerns: to change access policies, alter the directory permissions; to change which individuals fall under a policy, alter the group definitions.
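The workflow above can be sketched with a toy in-memory model (all names are illustrative, not any real system's API):

```python
# Group-based access control: permissions attach to groups, so a
# membership change is the only edit needed when people join or leave.

groups = {
    "students": {"alice", "bob"},
    "staff": {"carol"},
}
dir_access = {
    "/home/coursework": {"students", "staff"},
    "/home/exams": {"staff"},
}

def can_access(user, directory):
    """True if the user belongs to any group granted the directory."""
    return any(user in groups[g] for g in dir_access.get(directory, ()))

print(can_access("alice", "/home/coursework"))  # True
print(can_access("alice", "/home/exams"))       # False

# Enrolling a new student is a single membership change:
groups["students"].add("dave")
print(can_access("dave", "/home/coursework"))   # True
```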
Uses of groups
The primary uses of groups are:
Access control
Accounting - allocating shared resources like disk space and network bandwidth
Default per-user configuration profiles - e.g., by default, every staff account could have a specific directory in their PATH
Content selection - only display content relevant to group members - e.g. this portal channel is intended for students, this mailing list is for the chess club
Delegable group administration
Many systems provide facilities for delegation of group administration. In these systems, when a group is created, one or more users may be named as group administrato
|
https://en.wikipedia.org/wiki/Free%20water%20clearance
|
In the physiology of the kidney, free water clearance (CH2O) is the volume of blood plasma that is cleared of solute-free water per unit time. An example of its use is in the determination of an individual's state of hydration. Conceptually, free water clearance should be thought of relative to the production of isoosmotic urine, which would be equal to the osmolarity of the plasma. If an individual is producing urine more dilute than the plasma, free water clearance is positive, meaning pure water is lost in the urine in addition to a theoretical isoosmotic filtrate. If the urine is more concentrated than the plasma, then free water is being extracted from the urine, giving a negative value for free water clearance. A negative value is typical, since the kidney usually produces urine more concentrated than plasma except when the individual is volume overloaded.
Overview
At its simplest, the kidney produces urine composed of solute and pure (solute-free) water. How rapidly the kidney clears the blood plasma of a substance (be it water or solute) is the renal clearance, which is related to the rate of urine production. The rate at which plasma is cleared of solute is the osmolal clearance; the rate at which plasma is cleared of solute-free water is the free water clearance.
Calculation
Since urine flow is determined by the rate at which plasma is cleared of solutes and water (as discussed above), urine flow (V) is given as the sum of osmolar clearance (Cosm) and free water clearance (CH2O):

V = Cosm + CH2O

Rearranging yields CH2O:

CH2O = V − Cosm

Since osmolar clearance is given as the product of urine flow rate and the ratio of urine to plasma osmolality (Cosm = V × Uosm / Posm), this is commonly represented as

CH2O = V × (1 − Uosm / Posm)

For example, for an individual with a urine osmolality of 140 mOsm/L, plasma osmolality of 280 mOsm/L, and a urine production of 4 ml/min, the free water clearance is 2 ml/min, obtained from

CH2O = 4 ml/min × (1 − 140/280) = 2 ml/min
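The formula and the worked example can be checked directly. The function name is ours, and consistent units are assumed (flow in ml/min, osmolality in mOsm/L):

```python
def free_water_clearance(urine_flow: float, urine_osm: float, plasma_osm: float) -> float:
    """C_H2O = V - C_osm, where osmolar clearance C_osm = V * U_osm / P_osm."""
    osmolar_clearance = urine_flow * urine_osm / plasma_osm
    return urine_flow - osmolar_clearance

# The example from the text: urine more dilute than plasma
# gives a positive free water clearance.
print(free_water_clearance(4, 140, 280))  # 2.0 (ml/min)

# Urine more concentrated than plasma gives a negative value.
print(free_water_clearance(1, 560, 280))  # -1.0 (ml/min)
```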
Interpretation
Free water clearance can be used as an indicator of how the body is regulating wat
|
https://en.wikipedia.org/wiki/Bursting
|
Bursting, or burst firing, is an extremely diverse general phenomenon of the activation patterns of neurons in the central nervous system and spinal cord where periods of rapid action potential spiking are followed by quiescent periods much longer than typical inter-spike intervals. Bursting is thought to be important in the operation of robust central pattern generators, the transmission of neural codes, and some neuropathologies such as epilepsy. The study of bursting both directly and in how it takes part in other neural phenomena has been very popular since the beginnings of cellular neuroscience and is closely tied to the fields of neural synchronization, neural coding, plasticity, and attention.
Observed bursts are named by the number of discrete action potentials they are composed of: a doublet is a two-spike burst, a triplet three and a quadruplet four. Neurons that are intrinsically prone to bursting behavior are referred to as bursters and this tendency to burst may be a product of the environment or the phenotype of the cell.
Physiological context
Overview
Neurons typically operate by firing single action potential spikes in relative isolation as discrete input postsynaptic potentials combine and drive the membrane potential across the threshold. Bursting can instead occur for many reasons, but neurons can be generally grouped as exhibiting input-driven or intrinsic bursting. Most cells will exhibit bursting if they are driven by a constant, subthreshold input and particular cells which are genotypically prone to bursting (called bursters) have complex feedback systems which will produce bursting patterns with less dependence on input and sometimes even in isolation.
In each case, the physiological system is often thought as being the action of two linked subsystems. The fast subsystem is responsible for each spike the neuron produces. The slow subsystem modulates the shape and intensity of these spikes before eventually triggering quiescence.
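The classification of bursts by spike count described above (doublet, triplet, quadruplet) can be mimicked by grouping spike times with a simple inter-spike-interval rule. The threshold and spike times below are arbitrary illustrations, not physiological constants:

```python
def detect_bursts(spike_times, max_isi):
    """Group sorted spike times (ms) into bursts: consecutive spikes
    separated by at most max_isi belong to the same burst."""
    bursts = [[spike_times[0]]]
    for t in spike_times[1:]:
        if t - bursts[-1][-1] <= max_isi:
            bursts[-1].append(t)   # continues the current burst
        else:
            bursts.append([t])     # quiescent gap: start a new burst
    return bursts

names = {1: "single spike", 2: "doublet", 3: "triplet", 4: "quadruplet"}
spikes = [0, 5, 10, 100, 104, 108, 112, 300]
for burst in detect_bursts(spikes, max_isi=20):
    print(names.get(len(burst), f"{len(burst)}-spike burst"), burst)
# triplet [0, 5, 10]
# quadruplet [100, 104, 108, 112]
# single spike [300]
```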
Inpu
|
https://en.wikipedia.org/wiki/178%20%28number%29
|
178 (one hundred [and] seventy-eight) is the natural number following 177 and preceding 179.
In mathematics
There are 178 biconnected graphs with six vertices, among which one is designated as the root and the rest are unlabeled. There are also 178 median graphs on nine vertices.
178 is one of the indexes of the smallest triple of dodecahedral numbers where one is the sum of the other two: the sum of the 46th and the 178th dodecahedral numbers is the 179th.
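The dodecahedral-number claim is easy to verify with the closed form D(n) = n(3n − 1)(3n − 2)/2:

```python
def dodecahedral(n: int) -> int:
    """n-th dodecahedral number: n(3n - 1)(3n - 2) / 2."""
    return n * (3 * n - 1) * (3 * n - 2) // 2

# The 46th and 178th dodecahedral numbers sum to the 179th.
print(dodecahedral(46))    # 428536
print(dodecahedral(178))   # 25236484
print(dodecahedral(46) + dodecahedral(178) == dodecahedral(179))  # True
```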
See also
The year 178 AD or 178 BC
List of highways numbered 178
|
https://en.wikipedia.org/wiki/Kr%C3%B6ger%E2%80%93Vink%20notation
|
Kröger–Vink notation is a set of conventions that are used to describe electric charges and lattice positions of point defect species in crystals. It is primarily used for ionic crystals and is particularly useful for describing various defect reactions. It was proposed by F. A. Kröger and H. J. Vink.
Notation
The notation follows the scheme M_S^C (the species M, with the site S as a subscript and the charge C as a superscript), where:
M corresponds to the species. These can be
atoms – e.g., Si, Ni, O, Cl,
vacancies – V or v (since V is also the symbol for vanadium)
interstitials – i (although this is usually used to describe lattice site, not species)
electrons – e
electron holes – h
S indicates the lattice site that the species occupies. For instance, Ni might occupy a Cu site. In this case, M would be replaced by Ni and S would be replaced by Cu. The site may also be a lattice interstice, in this case, the symbol "i" is used. A cation site can be represented by the symbols C or M (for metal), and an anion site can be represented by either an A or X.
C corresponds to the electronic charge of the species relative to the site that it occupies. The charge of the species is calculated by the charge on the current site minus the charge on the original site. To continue the previous example, Ni often has the same valency as Cu, so the relative charge is zero. To indicate a null charge, × is used. A single • indicates a net single positive charge, while two (••) would represent two net positive charges. Finally, a single prime ′ signifies a net single negative charge, so two primes ″ would indicate a net double negative charge.
Examples
Al_Al^× — an aluminum ion sitting on an aluminum lattice site, with a neutral charge.
Ni_Cu^× — a nickel ion sitting on a copper lattice site, with neutral charge.
v_Cl^• — a chlorine vacancy, with single positive charge.
Ca_i^•• — a calcium interstitial ion, with double positive charge.
Cl_i^′ — a chlorine anion on an interstitial site, with single negative charge.
O_i^″ — an oxygen anion on an interstitial site, with double negative charge.
e^′ — an electron. No site is normally specified.
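Since the sub- and superscripts are hard to render in plain text, a small helper can emit the M_S^C pattern. The function and its plain-text rendering (underscore for the site subscript, caret for the charge superscript, repeated ′ for double primes) are our own conventions for illustration, not part of the notation itself:

```python
def kroger_vink(species: str, site: str, relative_charge: int) -> str:
    """Plain-text Kröger–Vink symbol: species, site subscript, charge superscript.
    relative_charge is the charge of the species minus that of the original site."""
    if relative_charge == 0:
        superscript = "x"                        # null relative charge
    elif relative_charge > 0:
        superscript = "•" * relative_charge      # one dot per net positive charge
    else:
        superscript = "′" * (-relative_charge)   # one prime per net negative charge
    return f"{species}_{site}^{superscript}"

print(kroger_vink("Ni", "Cu", 0))   # Ni_Cu^x
print(kroger_vink("v", "Cl", +1))   # v_Cl^•
print(kroger_vink("O", "i", -2))    # O_i^′′
```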
Procedure
When
|