https://en.wikipedia.org/wiki/List%20of%20limits
This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x. Limits for general functions Definitions of limits and related concepts $\lim_{x \to c} f(x) = L$ if and only if for every $\varepsilon > 0$ there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - L| < \varepsilon$. This is the (ε, δ)-definition of limit. The limit superior and limit inferior of a sequence are defined as $\limsup_{n \to \infty} x_n = \lim_{n \to \infty} \left( \sup_{m \ge n} x_m \right)$ and $\liminf_{n \to \infty} x_n = \lim_{n \to \infty} \left( \inf_{m \ge n} x_m \right)$. A function, $f(x)$, is said to be continuous at a point, c, if $\lim_{x \to c} f(x) = f(c)$. Operations on a single known limit If $\lim_{x \to c} f(x) = L$ then: $\lim_{x \to c} \frac{1}{f(x)} = \frac{1}{L}$ if L is not equal to 0; $\lim_{x \to c} [f(x)]^n = L^n$ if n is a positive integer; $\lim_{x \to c} \sqrt[n]{f(x)} = \sqrt[n]{L}$ if n is a positive integer, and if n is even, then L > 0. In general, if g(x) is continuous at L and $\lim_{x \to c} f(x) = L$ then $\lim_{x \to c} g(f(x)) = g(L)$. Operations on two known limits If $\lim_{x \to c} f(x) = L_1$ and $\lim_{x \to c} g(x) = L_2$ then: $\lim_{x \to c} [f(x) \pm g(x)] = L_1 \pm L_2$; $\lim_{x \to c} [f(x) g(x)] = L_1 L_2$; $\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{L_1}{L_2}$ provided $L_2 \neq 0$. Limits involving derivatives or infinitesimal changes In these limits, the infinitesimal change is often denoted $\Delta x$ or $h$. If $f(x)$ is differentiable at $x$, $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x)$. This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x, $\lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} = f'(g(x))\, g'(x)$. This is the chain rule. $\lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h} = f'(x) g(x) + f(x) g'(x)$. This is the product rule. If $f(x)$ and $g(x)$ are differentiable on an open interval containing c, except possibly c itself, and $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0$ or $\pm\infty$, L'Hôpital's rule can be used: $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$. Inequalities If $f(x) \le g(x)$ for all x in an interval that contains c, except possibly c itself, and the limits of $f(x)$ and $g(x)$ both exist at c, then $\lim_{x \to c} f(x) \le \lim_{x \to c} g(x)$. If $f(x) \le h(x) \le g(x)$ for all x in an open interval that contains c, except possibly c itself, and $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = L$, then $\lim_{x \to c} h(x) = L$. This is known as the squeeze theorem. This applies even in the cases that f(x) and g(x) take on different values at c, or are discontinuous at c. Polynomials and functions of the form x^a Polynomials in x $\lim_{x \to c} x^n = c^n$ if n is a positive integer. In general, if $p(x)$ is a polynomial then, by the continuity of polynomials, $\lim_{x \to c} p(x) = p(c)$. This is also true for rational functions, as they are continuous on their domains. Functions of the form x^a In particular, . In particular, Exponential functions Functions of the form a^g(x) $\lim_{x \to c} a^{g(x)} = a^{\lim_{x \to c} g(x)}$ (for a > 0), due to the continuity of the exponential function. Functions of the form x^g(x) Functions of the form f(x)^g(x) . This limit can be deriv
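As a worked illustration of L'Hôpital's rule stated above (this example is added for illustration and is not part of the original list), consider a limit of the indeterminate form 0/0:

```latex
% (x^2 - 1)/(x - 1) has the indeterminate form 0/0 as x -> 1,
% so L'Hopital's rule applies:
\lim_{x \to 1} \frac{x^2 - 1}{x - 1}
  = \lim_{x \to 1} \frac{2x}{1}
  = 2
% which agrees with direct factoring: (x^2 - 1)/(x - 1) = x + 1 -> 2.
```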
https://en.wikipedia.org/wiki/Ali%20Baba%20and%2040%20Thieves%20%28video%20game%29
Ali Baba and 40 Thieves is a maze arcade video game released by Sega in 1982. Players take the role of the famous Arabian hero who must fend off and kill the forty thieves who are trying to steal his money. The game is based on the folk tale of the same name. It was ported to the MSX platform, and then a Vector-06C port was made based on the MSX version. Legacy A clone for the ZX Spectrum was published by Suzy Soft in 1985 under the name Ali Baba.
https://en.wikipedia.org/wiki/Karl%20K%C3%BCpfm%C3%BCller
Karl Küpfmüller (6 October 1897 – 26 December 1977) was a German electrical engineer who was prolific in the areas of communications technology, measurement and control engineering, acoustics, communication theory, and theoretical electro-technology. Biography Küpfmüller was born in Nuremberg, where he studied at the Ohm-Polytechnikum. After returning from military service in World War I, he worked at the telegraph research division of the German Post in Berlin as a co-worker of Karl Willy Wagner, and, from 1921, he was lead engineer at the central laboratory of Siemens & Halske AG in the same city. In 1928 he became full professor of general and theoretical electrical engineering at the Technische Hochschule in Danzig, and later held the same position in Berlin. Küpfmüller joined the National Socialist Motor Corps in 1933. In the following year he also joined the SA. In 1937 Küpfmüller joined the NSDAP and became a member of the SS, where he reached the rank of Obersturmbannführer. In the same year he was appointed director of communication technology research and development at the Siemens-Wernerwerk for telegraphy, and from 1941 to 1945 he was director of the central R&D division at Siemens & Halske. From 1952 until his retirement in 1963, he held the chair for general communications engineering at Technische Hochschule Darmstadt. Later he was honorary professor at the Technische Hochschule Berlin. In 1968, he received the Werner von Siemens Ring for his contributions to the theory of telecommunications and other electro-technology. He died in Darmstadt. Studies in communication theory About 1928, he did the same analysis that Harry Nyquist did, to show that not more than 2B independent pulses per second could be put through a channel of bandwidth B. He did this by quantifying the time-bandwidth product k of various communication signal types, and showing that k could never be less than 1/2. From his 1931 paper (rough translation from German): "The time law al
https://en.wikipedia.org/wiki/Latimer%E2%80%93MacDuffee%20theorem
The Latimer–MacDuffee theorem is a theorem in abstract algebra, a branch of mathematics. It is named after Claiborne Latimer and Cyrus Colton MacDuffee, who published it in 1933. Significant contributions to its theory were made later by Olga Taussky-Todd. Let $f(x)$ be a monic, irreducible polynomial of degree $n$ with integer coefficients. The Latimer–MacDuffee theorem gives a one-to-one correspondence between $\mathbb{Z}$-similarity classes of $n \times n$ integer matrices with characteristic polynomial $f(x)$ and the ideal classes in the order $\mathbb{Z}[x]/(f(x))$, where ideals are considered equivalent if they are equal up to an overall (nonzero) rational scalar multiple. (Note that this order need not be the full ring of integers, so nonzero ideals need not be invertible.) Since an order in a number field has only finitely many ideal classes (even if it is not the maximal order, and we mean here ideal classes for all nonzero ideals, not just the invertible ones), it follows that there are only finitely many conjugacy classes of matrices over the integers with characteristic polynomial $f(x)$.
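Stated formally (a standard formulation consistent with the prose above; the symbols $f$, $n$ and $\theta$ are reconstructions, since the article's own notation was lost in extraction):

```latex
\textbf{Theorem (Latimer--MacDuffee).}
Let $f(x) \in \mathbb{Z}[x]$ be monic and irreducible of degree $n$, and let $\theta$ be a root of $f$.
Then there is a one-to-one correspondence
\[
  \{ A \in M_n(\mathbb{Z}) : \text{char.\ poly.\ of } A = f \} \,/\, \text{conjugation by } \mathrm{GL}_n(\mathbb{Z})
  \;\longleftrightarrow\;
  \text{ideal classes of the order } \mathbb{Z}[\theta] \cong \mathbb{Z}[x]/(f(x)).
\]
```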
https://en.wikipedia.org/wiki/Cognitive%20imitation
Cognitive imitation is a form of social learning, and a subtype of imitation. Cognitive imitation is contrasted with motor and vocal or oral imitation. As with all forms of imitation, cognitive imitation involves learning and copying specific rules or responses done by another. The principal difference between motor and cognitive imitation is the type of rule (and stimulus) that is learned and copied by the observer. So, whereas in the typical imitation learning experiment subjects must copy novel actions on objects or novel sequences of specific actions (novel motor imitation), in a novel cognitive imitation paradigm subjects have to copy novel rules, independently of specific actions or movement patterns. The following example illustrates the difference between cognitive and motor-spatial imitation: Imagine someone looking over another person's shoulder and stealing their automated teller machine (ATM) password. As with all forms of imitation, the individual learns and successfully reproduces the observed sequence. The observer in our example, like most of us, presumably knows how to operate an ATM (namely, that you have to push X number of buttons on the ATM screen in a specific sequence), so the specific motor responses of touching the screen aren't what the thief is learning. Instead, the thief could learn two types of abstract rules. On the one hand, the thief can learn a spatial rule: touch the item in the top right, followed by the item on the top left, then the item in the middle of the screen, and finally the one on the lower right. This would be an example of motor-spatial imitation because the thief's response is guided by an abstract motor-spatial rule. On the other hand, the thief could ignore the spatial patterning of the observed responses and instead focus on the particular items that were touched, generating an abstract numerical rule, independently of where they are in space: 3-1-5-9. This would constitute an example of cognitive imitation because the individual is copy
https://en.wikipedia.org/wiki/Conditional%20operator
The conditional operator is supported in many programming languages. This term usually refers to ?: as in C, C++, C#, and JavaScript. However, in Java, this term can also refer to && and ||. && and || In some programming languages, e.g. Java, the term conditional operator refers to the short circuit boolean operators && and ||. The second expression is evaluated only when the first expression is not sufficient to determine the value of the whole expression. Difference from bitwise operator & and | are bitwise operators that occur in many programming languages. The major difference is that bitwise operations operate on the individual bits of a binary numeral, whereas conditional operators operate on boolean operands. Additionally, expressions before and after a bitwise operator are always evaluated. if (expression1 || expression2 || expression3) If expression 1 is true, expressions 2 and 3 are NOT checked. if (expression1 | expression2 | expression3) This checks expressions 2 and 3, even if expression 1 is true. Short circuit operators can reduce run times by avoiding unnecessary calculations. They can also avoid null pointer exceptions when expression 1 checks whether an object is valid. Usage in Java class ConditionalDemo1 { public static void main(String[] args) { int value1 = 1; int value2 = 2; if ((value1 == 1) && (value2 == 2)) System.out.println("value1 is 1 AND value2 is 2"); if ((value1 == 1) || (value2 == 1)) System.out.println("value1 is 1 OR value2 is 1"); } } "?:" In most programming languages, ?: is called the conditional operator. It is a type of ternary operator. However, the term ternary operator in most situations refers specifically to ?: because it is the only operator that takes three operands. Regular usage of "?:" ?: is used in conditional expressions. Programmers can rewrite an if-then-else expression in a more concise way by using the conditional operator. Syntax condition ? expressio
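A minimal, self-contained Java sketch of the ?: operator described above (the class, variable names, and string values are illustrative, not taken from the article):

```java
public class TernaryDemo {
    public static void main(String[] args) {
        int score = 72;

        // if-then-else form:
        String verdict;
        if (score >= 60) {
            verdict = "pass";
        } else {
            verdict = "fail";
        }

        // Same logic rewritten with the conditional (ternary) operator:
        String verdictTernary = (score >= 60) ? "pass" : "fail";

        System.out.println(verdict);        // pass
        System.out.println(verdictTernary); // pass
    }
}
```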
https://en.wikipedia.org/wiki/Polyglycerol%20polyricinoleate
Polyglycerol polyricinoleate (PGPR), E476, is an emulsifier made from glycerol and fatty acids (usually from castor bean, but also from soybean oil). In chocolate, compound chocolate and similar coatings, PGPR is mainly used with another substance like lecithin to reduce viscosity. It is used at low levels (below 0.5%), and works by decreasing the friction between the solid particles (e.g. cacao, sugar, milk) in molten chocolate, reducing the yield stress so that it flows more easily, approaching the behaviour of a Newtonian fluid. It can also be used as an emulsifier in spreads and in salad dressings, or to improve the texture of baked goods. It is made up of a short chain of glycerol molecules connected by ether bonds, with ricinoleic acid side chains connected by ester bonds. PGPR is a yellowish, viscous liquid, and is strongly lipophilic: it is soluble in fats and oils and insoluble in water and ethanol. Manufacture Glycerol is heated to above 200 °C in a reactor in the presence of an alkaline catalyst to create polyglycerol. Castor oil fatty acids are separately heated to above 200 °C, to create interesterified ricinoleic fatty acids. The polyglycerol and the interesterified ricinoleic fatty acids are then mixed to create PGPR. Use in chocolate Because PGPR improves the flow characteristics of chocolate and compound chocolate, especially near the melting point, it can improve the efficiency of chocolate coating processes: chocolate coatings with PGPR flow better around shapes of enrobed and dipped products, and it also improves the performance of equipment used to produce solid molded products: the chocolate flows better into the mold, and surrounds inclusions and releases trapped air more easily. PGPR can also be used to reduce the quantity of cocoa butter needed in chocolate formulations: the solid particles in chocolate are suspended in the cocoa butter, and by reducing the viscosity of the chocolate, less cocoa butter is required, which saves costs, be
https://en.wikipedia.org/wiki/Expansion%20joint
An expansion joint, or movement joint, is an assembly designed to hold parts together while safely absorbing temperature-induced expansion and contraction of building materials. They are commonly found between sections of buildings, bridges, sidewalks, railway tracks, piping systems, ships, and other structures. Building faces, concrete slabs, and pipelines expand and contract due to warming and cooling from seasonal variation, or due to other heat sources. Before expansion joint gaps were built into these structures, they would crack under the stress induced. Bridge expansion joints Bridge expansion joints are designed to allow for continuous traffic between structures while accommodating movement, shrinkage, and temperature variations on reinforced and prestressed concrete, composite, and steel structures. They stop the bridge from bending out of place in extreme conditions, and also allow enough vertical movement to permit bearing replacement without the need to dismantle the bridge expansion joint. There are various types, which can accommodate a range of movements, including joints for small movement (EMSEAL BEJS, XJS, JEP, WR, WOSd, and Granor AC-AR), medium movement (ETIC EJ, Wd), and large movement (WP, ETIC EJF/Granor SFEJ). Modular expansion joints are used when the movements of a bridge exceed the capacity of a single gap joint or a finger type joint. Modular multiple-gap expansion joints can accommodate movements in all directions and rotations about every axis. They can be used for longitudinal movements of as little as 160 mm, or for very large movements of over 3000 mm. The total movement of the bridge deck is divided among a number of individual gaps which are created by horizontal surface beams. The individual gaps are sealed by watertight elastomeric profiles, and surface beam movements are regulated by an elastic control system. The drainage of the joint is via the drainage system of the bridge deck. Certain joints feature so-called “sinus plates”
https://en.wikipedia.org/wiki/Prosigns%20for%20Morse%20code
Procedural signs or prosigns are shorthand signals used in Morse code telegraphy, for the purpose of simplifying and standardizing procedural protocols for land-line and radio communication. The procedural signs are distinct from conventional Morse code abbreviations, which consist mainly of brevity codes that convey messages to other parties with greater speed and accuracy. However, some codes are used both as prosigns and as single letters or punctuation marks, and for those, the distinction between a prosign and abbreviation is ambiguous, even in context. Overview In the broader sense prosigns are just standardised parts of short form radio protocol, and can include any abbreviation. Examples would be for "okay, heard you, continue" or for "message, received". In a more restricted sense, "prosign" refers to something analogous to the nonprinting control characters in teleprinter and computer character sets, such as Baudot and ASCII. Different from abbreviations, those are universally recognizable across language barriers as distinct and well-defined symbols. At the coding level, prosigns admit any form the Morse code can take, unlike abbreviations which have to be sent as a sequence of individual letters, like ordinary text. On the other hand, most prosigns codes are much longer than typical codes for letters and numbers. They are individual and indivisible code points within the broader Morse code, fully at par with basic letters and numbers. The development of prosigns began in the 1860s for wired telegraphy. Since telegraphy preceded voice communications by several decades, many of the much older Morse prosigns have acquired precisely equivalent prowords for use in more recent voice protocols. Not all prosigns used by telegraphers are standard: There are regional and community-specific variations of the coding convention used in certain radio networks to manage transmission and formatting of messages, and many unofficial prosign conventions exist; some o
https://en.wikipedia.org/wiki/Morse%20code%20abbreviations
Morse code abbreviations are used to speed up Morse communications by foreshortening textual words and phrases. Morse abbreviations are short forms, representing normal textual words and phrases formed from some (fewer) characters taken from the word or phrase being abbreviated. Many are typical English abbreviations, or short acronyms for often-used phrases. Distinct from prosigns and commercial codes Morse code abbreviations are not the same as prosigns. Morse abbreviations are composed of (normal) textual alpha-numeric character symbols with normal Morse code inter-character spacing; the character symbols in abbreviations, unlike the delineated character groups representing Morse code prosigns, are not "run together" or concatenated in the way most prosigns are formed. Although a few abbreviations (such as for "dollar") are carried over from former commercial telegraph codes, almost all Morse abbreviations are not commercial codes. From 1845 until well into the second half of the 20th century, commercial telegraphic code books were used to shorten telegrams, e.g. = "Locals have plundered everything from the wreck." However, these cyphers are typically "fake" words six characters long, or more, used for replacing commonly used whole phrases, and are distinct from single-word abbreviations. Word and phrase abbreviations The following Table of Morse code abbreviations and further references to Brevity codes such as 92 Code, Q code, Z code, and R-S-T system serve to facilitate fast and efficient Morse code communications. {| class="wikitable" |+Table of selected Morse code abbreviations |- ! Abbreviation ! Meaning ! Defined in ! Type of abbreviation |- | | All after (used after question mark to request a repetition) | ITU-R M.1172 | operating signal |- | | All before (similarly) | ITU-R M.1172 | operating signal |- | | Address | ITU-T Rec. F.1 | operating signal |- | | Address | ITU-R M.1172 | operating signal |- | | Again | | operating signal |- | | Ante
https://en.wikipedia.org/wiki/HBV%20hydrology%20model
The HBV hydrology model, or Hydrologiska Byråns Vattenbalansavdelning model, is a computer simulation used to analyze river discharge and water pollution. Developed originally for use in Scandinavia, this hydrological transport model has also been applied in a large number of catchments on most continents. Discharge modelling This is the major application of HBV, and has gone through much refinement. It comprises the following routines: Snow routine Soil moisture routine Response function Routing routine The HBV model is a lumped (or semi-distributed) bucket-type (also called 'conceptual') catchment model that has relatively few model parameters and minimal forcing input requirements, usually the daily temperature and the daily precipitation. First, snowmelt is calculated after defining a threshold melting temperature (TT, usually 0 °C) and a degree-day parameter CMELT that gives the equivalent snowmelt per degree of temperature above TT. The result is divided into a liquid part that is the surface runoff and a second part that infiltrates. Second, the soil moisture is calculated after defining an initial value and the field capacity (FC). Third, the actual evapotranspiration (ETPa) is calculated, first by using an external model (e.g. Penman) to find the potential ETP and then fitting the result to the temperatures and the permanent wilting point (PWP) of the catchment in question. A parameter C reflects the increase in the ETP with the difference in temperatures (actual temperature and monthly mean temperature). The model considers the catchment as two reservoirs (S1 and S2) connected by a percolation flow; the inflow to the first reservoir is calculated as the surface runoff, which is what remains from the initial precipitation after accounting for the infiltration and the evapotranspiration. The outflow from the first reservoir is divided into two separate flows (Q1 and Q2) where Q1 represents the fast flow which is triggered after a certain threshold L
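A minimal sketch of the snow routine's degree-day step described above, assuming the common formulation melt = CMELT · (T − TT) for T > TT, capped by the available snow storage (the class and method names are illustrative; the parameter names follow the article):

```java
public class HbvSnowRoutine {
    private final double tt;    // threshold melting temperature TT (°C), usually about 0 °C
    private final double cmelt; // degree-day factor CMELT (mm of melt per °C above TT per day)

    public HbvSnowRoutine(double tt, double cmelt) {
        this.tt = tt;
        this.cmelt = cmelt;
    }

    /**
     * Daily snowmelt, limited by the snow actually stored.
     *
     * @param snowPackMm current snow storage (mm water equivalent)
     * @param tempC      mean daily air temperature (°C)
     * @return melt water released to the soil moisture routine (mm/day)
     */
    public double dailyMelt(double snowPackMm, double tempC) {
        if (tempC <= tt) {
            return 0.0; // no melt at or below the threshold temperature
        }
        double potentialMelt = cmelt * (tempC - tt);
        return Math.min(potentialMelt, snowPackMm);
    }

    public static void main(String[] args) {
        HbvSnowRoutine snow = new HbvSnowRoutine(0.0, 3.5);
        System.out.println(snow.dailyMelt(40.0, 2.0)); // 7.0 mm of melt
    }
}
```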
https://en.wikipedia.org/wiki/Multicopy%20single-stranded%20DNA
Multicopy single-stranded DNA (msDNA) is a type of extrachromosomal satellite DNA that consists of a single-stranded DNA molecule covalently linked via a 2'-5'phosphodiester bond to an internal guanosine of an RNA molecule. The resultant DNA/RNA chimera possesses two stem-loops joined by a branch similar to the branches found in RNA splicing intermediates. The coding region for msDNA, called a "retron", also encodes a type of reverse transcriptase, which is essential for msDNA synthesis. Discovery Before the discovery of msDNA in myxobacteria, a group of swarming, soil-dwelling bacteria, it was thought that the enzymes known as reverse transcriptases (RT) existed only in eukaryotes and viruses. The discovery led to an increase in research of the area. As a result, msDNA has been found to be widely distributed among bacteria, including various strains of Escherichia coli and pathogenic bacteria. Further research discovered similarities between HIV-encoded reverse transcriptase and an open reading frame (ORF) found in the msDNA coding region. Tests confirmed the presence of reverse transcriptase activity in crude lysates of retron-containing strains. Although an RNase H domain was tentatively identified in the retron ORF, it was later found that the RNase H activity required for msDNA synthesis is actually supplied by the host. Retrons The discovery of msDNA has led to broader questions regarding where reverse transcriptase originated, as genes encoding for reverse transcriptase (not necessarily associated with msDNA) have been found in prokaryotes, eukaryotes, viruses and even archaea. After a DNA fragment coding for the production of msDNA in E. coli was discovered, it was conjectured that bacteriophages might have been responsible for the introduction of the RT gene into E. coli. These discoveries suggest that reverse transcriptase played a role in the evolution of viruses from bacteria, with one hypothesis stating that, with the help of reverse transcriptas
https://en.wikipedia.org/wiki/Electron%20emission
In physics, electron emission is the ejection of an electron from the surface of matter or, in beta decay (β− decay), the emission of a beta particle (a fast energetic electron or positron) from an atomic nucleus, transforming the original nuclide into an isobar. Radioactive decay In beta decay (β− decay), radioactive decay results in a beta particle (a fast energetic electron, or a positron in β+ decay) being emitted from the nucleus. Surface emission Thermionic emission, the liberation of electrons from an electrode by virtue of its temperature Schottky emission, due to the Schottky effect or field-enhanced thermionic emission Field electron emission, emission of electrons induced by an electrostatic field Devices An electron gun, or electron emitter, is an electrical component in some vacuum tubes that uses surface emission Others Exoelectron emission, a weak electron emission, appearing only from pretreated objects Photoelectric effect, the emission of electrons when electromagnetic radiation, such as light, hits a material See also Positron emission (of a positron or "antielectron"), one aspect of β+ decay Electron excitation, the transfer of an electron to a higher atomic orbital
https://en.wikipedia.org/wiki/Organization%20for%20Human%20Brain%20Mapping
The Organization for Human Brain Mapping (OHBM) is an organization of scientists with the main aim of organizing an annual meeting ("Annual Meeting of the Organization for Human Brain Mapping"). The organization was established in 1995 at the first conference, which was a Paris satellite meeting of the meeting of the International Society for Cerebral Blood Flow and Metabolism (ISCBFM) in Cologne. Although the 1999 meetings of the two societies were coordinated, ISCBFM and OHBM are now completely split. The organizers of the Paris meeting were Bernard Mazoyer, Rüdiger Seitz and Per Roland. The stated mission of the organization is "to advance the understanding of the anatomical and functional organization of the human brain" by bringing "together scientists of various backgrounds who are engaged in investigations relevant to human brain organization" and engaging "in other activities to facilitate communication among these scientists and promote education in human brain organization." Among the past and present council members of the organization have been David Van Essen, Russell Poldrack, Leslie Ungerleider, Alan Evans, Albert Gjedde, Richard Frackowiak, Marcus Raichle and Karl Friston. Since the human brain mapping field is cross-disciplinary, the members range from neurologists, psychiatrists and psychologists to physicists, engineers, software developers and statisticians. The OHBM meetings now draw 2500–3000 attendees each year. Besides organizing meetings, the organization has also advocated for data sharing in its field and established a task force on neuroinformatics. This was at a time when Michael Gazzaniga set up the fMRI Data Center, which required researchers to submit scans from functional magnetic resonance imaging when publishing in the Journal of Cognitive Neuroscience. In 2014, the organization established the Glass Brain Award, a lifetime achievement award. The Young Investigator Award (formerly Wiley Young Investigator Award) is also awarded by
https://en.wikipedia.org/wiki/Baekje%20smile
In Korean art history, the Baekje smile is the common smile motif found in Baekje sculpture and bas-relief. Baekje figures express a unique smile that has been described as both enigmatic and subtle. The smile has also been characterized in many different ways, from "genuinely glowing" to "thin and mild" to "unfathomable and benevolent". Among the Three Kingdoms of Korea, Baekje art was stylistically the most realistic and technically sophisticated. While Goguryeo sculpture was highly rigid and Silla sculpture was formalized, Baekje sculpture exhibited distinct characteristics of warmth, softness, and relaxed poses. Sometimes, the Baekje style has been attributed to influence from the southern Chinese dynasties. The smile gives the Baekje statues a sense of friendliness and an air of pleasantness that is rarely found in other traditions of Buddhist sculpture. The smile is considered to be unique and distinctive. See also Archaic smile Baekje Korean art Gilt-bronze Maitreya in Meditation
https://en.wikipedia.org/wiki/Irregular%20bone
The irregular bones are bones which, from their peculiar form, cannot be grouped as long, short, flat or sesamoid bones. Irregular bones serve various purposes in the body, such as protection of nervous tissue (such as the vertebrae protect the spinal cord), affording multiple anchor points for skeletal muscle attachment (as with the sacrum), and maintaining pharynx and trachea support, and tongue attachment (such as the hyoid bone). They consist of cancellous tissue enclosed within a thin layer of compact bone. Irregular bones can also be used for joining all parts of the spinal column together. The spine is the place in the human body where the most irregular bones can be found. There are, in all, 33 irregular bones found here. The irregular bones are: the vertebrae, sacrum, coccyx, temporal, sphenoid, ethmoid, zygomatic, maxilla, mandible, palatine, inferior nasal concha, and hyoid.
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%201
Bone morphogenetic protein 1, also known as BMP1, is a protein which in humans is encoded by the BMP1 gene. There are seven isoforms of the protein created by alternative splicing. Function BMP1 belongs to the peptidase M12A family of bone morphogenetic proteins (BMPs). It induces bone and cartilage development. Unlike other BMPs, it does not belong to the TGFβ superfamily. It was initially discovered to work like other BMPs by inducing bone and cartilage development. It is, however, a metalloprotease that cleaves the C-terminus of procollagen I, II and III. It has an astacin-like protease domain. It has been shown to cleave laminin 5 and is localized in the basal epithelial layer of bovine skin. The BMP1 locus encodes a protein that is capable of inducing formation of cartilage in vivo. Although other bone morphogenetic proteins are members of the TGF-beta superfamily, BMP1 encodes a protein that is not closely related to other known growth factors. BMP1 protein and procollagen C proteinase (PCP), a secreted metalloprotease requiring calcium and needed for cartilage and bone formation, are identical. PCP or BMP1 protein cleaves the C-terminal propeptides of procollagen I, II, and III, and its activity is increased by the procollagen C-endopeptidase enhancer protein. The BMP1 gene is expressed as alternatively spliced variants that share an N-terminal protease domain but differ in their C-terminal region. Structure A partial structure of BMP1 was determined through X-ray diffraction with a resolution of 1.27 Å. Crystallization experiments were done by vapor diffusion at a pH of 7.5. This is important because it is close to the pH of the human body, where BMP1 resides in vivo. This BMP1 fragment is 202 residues in length. Its secondary structure is made up of about 30% helices (10 helices spanning 61 residues) and about 15% beta sheets (11 strands spanning 32 residues). It contains ligands of an acetyl group and a zinc ion. A Ramachandran plot was constructed for
https://en.wikipedia.org/wiki/Hunter%20syndrome
Hunter syndrome, or mucopolysaccharidosis type II (MPS II), is a rare genetic disorder in which large sugar molecules called glycosaminoglycans (or GAGs or mucopolysaccharides) build up in body tissues. It is a form of lysosomal storage disease. Hunter syndrome is caused by a deficiency of the lysosomal enzyme iduronate-2-sulfatase (I2S). The lack of this enzyme causes heparan sulfate and dermatan sulfate to accumulate in all body tissues. Hunter syndrome is the only MPS syndrome to exhibit X-linked recessive inheritance. The symptoms of Hunter syndrome are comparable to those of MPS I. It causes abnormalities in many organs, including the skeleton, heart, and respiratory system. In severe cases, this leads to death during the teenaged years. Unlike MPS I, corneal clouding is not associated with this disease. Signs and symptoms Hunter syndrome may present with a wide variety of phenotypes. It has traditionally been categorized as either "mild" or "severe" depending on the presence of central nervous system symptoms, but this is an oversimplification. Patients with "attenuated" or "mild" forms of the disease may still have significant health issues. For severely affected patients, the clinical course is relatively predictable; patients will normally die at an early age. For those with milder forms of the disease, a wider variety of outcomes exist. Many live into their 20s and 30s, but some may have near-normal life expectancies. Cardiac and respiratory abnormalities are the usual cause of death for patients with milder forms of the disease. The symptoms of Hunter syndrome (MPS II) are generally not apparent at birth. Often, the first symptoms may include abdominal hernias, ear infections, runny noses, and colds. As the buildup of GAGs continues throughout the cells of the body, signs of MPS II become more visible. The physical appearance of many children with the syndrome includes a distinctive coarseness in their facial features, including a prominent forehead, a
https://en.wikipedia.org/wiki/Crystal%20Ball%20%28detector%29
The Crystal Ball was a hermetic particle detector used initially with the SPEAR particle accelerator at the Stanford Linear Accelerator Center beginning in 1979. It was designed to detect neutral particles and was used to discover the ηc meson. Its central section was a spark chamber surrounded by a nearly-complete sphere of scintillating crystals (NaI(Tl)), for which it was named. With the addition of endcaps of similar construction, the detector covered 98% of the solid angle around the interaction point. After its decommissioning at SLAC, the detector was carried to DESY, where it was used for b-physics experiments. In 1996, it was moved to the Alternating Gradient Synchrotron (AGS) at Brookhaven National Laboratory, where it was used in a series of pion- and kaon-induced experiments on the proton. Currently it is located at the Mainz Microtron facility, where it is being used by the A2 Collaboration for a diverse program of measurements using energy-tagged bremsstrahlung photons.
https://en.wikipedia.org/wiki/N-Acetylgalactosamine
N-Acetylgalactosamine (GalNAc) is an amino sugar derivative of galactose. Function In humans it is the terminal carbohydrate forming the antigen of blood group A. It is typically the first monosaccharide that connects serine or threonine in particular forms of protein O-glycosylation. N-Acetylgalactosamine is necessary for intercellular communication, and is concentrated in sensory nerve structures of both humans and animals. GalNAc is also used as a targeting ligand in investigational antisense oligonucleotides and siRNA therapies targeted to the liver, where it binds to the asialoglycoprotein receptors on hepatocytes. See also Galactosamine Globoside N-Acetylglucosamine (GlcNAc)
https://en.wikipedia.org/wiki/The%20Regenerative%20Medicine%20Institute
The Regenerative Medicine Institute (REMEDI) was established in 2003 as a Centre for Science, Technology & Engineering in collaboration with the National University of Ireland, Galway. It obtained an award of €14.9 million from Science Foundation Ireland over five years. It conducts basic research and applied research in regenerative medicine, an emerging field that combines the technologies of gene therapy and adult stem cell therapy. The goal is to use cells and genes to regenerate healthy tissues that can be used to repair or replace other tissues and organs in a minimally invasive approach. Centres for Science, Engineering & Technology help link scientists and engineers in partnerships across academia and industry to address crucial research questions, foster the development of new and existing Irish-based technology companies, attract industry that could make an important contribution to Ireland and its economy, and expand educational and career opportunities in Ireland in science and engineering. CSETs must exhibit outstanding research quality, intellectual breadth, active collaboration, flexibility in responding to new research opportunities, and integration of research and education in the fields that SFI supports.
https://en.wikipedia.org/wiki/Cockade%20of%20Argentina
The Argentine cockade () is one of the national symbols of Argentina, instituted by decree on February 18, 1812 by the First Triumvirate, who determined that "the national cockade of the United Provinces of the Río de la Plata shall be of colours white and light blue [...]". The National Cockade Day is on May 18, the date on which it is assumed that the cockade was first used by the ladies of Buenos Aires during the events of the 1810 May Revolution. Origin The origin of the colours of the cockade and the reasons for their selection cannot be accurately established. Among the several versions, one states that the colours white and light blue were first adopted during the British invasions of the Río de la Plata in 1806 and 1807 by the Regiment of Patricians, the first urban militia regiment of the Río de la Plata. Supposedly, a group of ladies from Buenos Aires first wore the cockade on May 19, 1810, in a visit to then-Colonel Cornelio Saavedra, head of the regiment. Between May 22 and 25 of the same year, it is known that the , or patriots, identified adherents to the May Revolution by giving them ribbons with those colours. An anonymous manuscript quoted by historian Marfany expresses that on May 21, a Monday, revolutionaries presented themselves as such with white ribbons on their clothes and hats. In Juan Manuel Beruti's memoirs, , the use of white ribbons on clothes and of cockades with olive branches on hats is mentioned. It was also documented by Spanish functionary Faustino Ansay that when news of the revolution arrived in Mendoza, its supporters started to wear white stripes. A report attributed to Ramón Manuel de Pazos says that on May 21, 1810, Domingo French and Antonio Beruti distributed said stripes as a sign of peace and unity between patriots and supporters of the Spanish government, but given the hostility of the latter, on May 25 they began spreading red stripes as a reference to the Jacobins. Both colours were later adopted by the members o
https://en.wikipedia.org/wiki/Time%20to%20first%20fix
Time to first fix (TTFF) is a measure of the time required for a GPS navigation device to acquire satellite signals and navigation data, and calculate a position solution (called a fix). Scenarios The TTFF is commonly broken down into three more specific scenarios, as defined in the GPS equipment guide: Cold or factory The receiver is missing, or has inaccurate estimates of, its position, velocity, the time, or the visibility of any of the GPS satellites. As such, the receiver must systematically search for all possible satellites. After acquiring a satellite signal, the receiver can begin to obtain approximate information on all the other satellites, called the almanac. This almanac is transmitted repeatedly over 12.5 minutes. Almanac data can be received from any of the GPS satellites and is considered valid for up to 180 days. Warm or normal The receiver has estimates of the current time within 20 seconds, the current position within 100 kilometers, its velocity within 25 m/s, and it has valid almanac data. It must acquire each satellite signal and obtain that satellite's detailed orbital information, called ephemeris data. Each satellite broadcasts its ephemeris data every 30 seconds, considered valid for up to 4 hours. Hot or standby The receiver has valid time, position, almanac, and ephemeris data, enabling a rapid acquisition of satellite signals. The time required of a receiver in this state to calculate a position fix may also be termed time to subsequent fix (TTSF). Many receivers can use as many as twelve channels simultaneously, allowing quicker fixes (especially in a cold start, for the almanac download). Many cell phones reduce the time to first fix by using assisted GPS (A-GPS): they acquire almanac and ephemeris data over a fast network connection from the cell-phone operator rather than over the slow radio connection from the satellites. The TTFF for a cold start is typically between 2 and 4 minutes, a warm start is 45 seconds (or shorter), and a
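A small Java sketch of how the three start scenarios above could be distinguished from the data a receiver still holds (the class, method, and threshold checks are illustrative assumptions based only on the validity rules quoted above, not on any GPS specification):

```java
public class StartModeClassifier {
    enum StartMode { COLD, WARM, HOT }

    /**
     * Classifies the expected start scenario. Thresholds follow the text above:
     * time within 20 s, position within 100 km, velocity within 25 m/s,
     * almanac valid up to 180 days, ephemeris valid up to 4 hours.
     */
    static StartMode classify(boolean almanacValid,
                              boolean ephemerisValid,
                              double timeErrorSeconds,
                              double positionErrorKm,
                              double velocityErrorMps) {
        boolean roughStateKnown = timeErrorSeconds <= 20.0
                && positionErrorKm <= 100.0
                && velocityErrorMps <= 25.0;

        if (almanacValid && ephemerisValid && roughStateKnown) {
            return StartMode.HOT;  // rapid reacquisition, shortest TTFF
        }
        if (almanacValid && roughStateKnown) {
            return StartMode.WARM; // ephemeris must still be downloaded
        }
        return StartMode.COLD;     // full sky search and almanac download
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false, 5.0, 10.0, 1.0)); // WARM
    }
}
```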
https://en.wikipedia.org/wiki/Warren%20%28burrow%29
A warren is a network of wild rodent or lagomorph burrows, typically rabbit burrows. Domestic warrens are artificial, enclosed establishments of animal husbandry dedicated to the raising of rabbits for meat and fur. The term evolved from the medieval Anglo-Norman concept of free warren, which had been, essentially, the equivalent of a hunting license for a given woodland. Architecture of the domestic warren The cunicularia of the monasteries may have more closely resembled hutches or pens than the open enclosures with specialized structures which the domestic warren eventually became. Such an enclosure or close was called a cony-garth, or sometimes conegar, coneygree or "bury" (from "burrow"). Moat and pale To keep the rabbits from escaping, domestic warrens were usually provided with a fairly substantive moat, or ditch filled with water. Rabbits generally do not swim and avoid water. A pale, or fence, was provided to exclude predators. Pillow mounds The most characteristic structure of the "cony-garth" ("rabbit-yard") is the pillow mound. These were "pillow-like", oblong mounds with flat tops, frequently described as being "cigar-shaped", and sometimes arranged like the letter ⟨E⟩ or into more extensive, interconnected rows. Often these were provided with pre-built, stone-lined tunnels. The preferred orientation was on a gentle slope, with the arms extending downhill, to facilitate drainage. The soil needed to be soft, to accommodate further burrowing. This type of architecture and animal husbandry has become obsolete, but numerous pillow mounds are still to be found in Britain, some of them maintained by English Heritage, with the greatest density being found on Dartmoor. Further evolution of the term Ultimately, the term "warren" was generalized to include wild burrows. According to the 1911 Encyclopædia Britannica: The word thus became used of a piece of ground preserved for these beasts of warren. It is now applied loosely to any piece of ground, whether pres
https://en.wikipedia.org/wiki/Arkanoid%3A%20Revenge%20of%20Doh
Arkanoid: Revenge of Doh (a.k.a. Arkanoid 2) is an arcade game released by Taito in 1987 as a sequel to Arkanoid. Plot The mysterious enemy known as DOH has returned to seek vengeance on the Vaus space vessel. The player must once again take control of the Vaus (paddle) and overcome many challenges in order to destroy DOH once and for all. Revenge of Doh sees the player battle through 34 rounds, taken from a grand total of 64. Gameplay Revenge of Doh differs from its predecessor with the introduction of "Warp Gates". Upon completion of a level or when the Break ("B") pill is caught, two gates appear at the bottom of the play area, on either side. The player can choose to go through either one of the gates - the choice will affect which version of the next level is provided. The fire button is only used when the Laser Cannons ("L") or Catch ("C") pill is caught. The game has new power-ups and enemy types, and two new types of bricks. Notched silver bricks, like normal silver bricks, take several hits to destroy. However, a short period of time after destruction, they regenerate at full strength. These bricks do not need to be destroyed in order to complete a level. In addition, some bricks move left to right as long as their sides are not obstructed by other bricks. The US version has an entirely different layout for Level 1 that features an entire line of notched bricks, with all colored bricks above it moving from side to side. On round 17, the player must defeat a giant brain as a mini-boss. After completing all 33 rounds, the player faces DOH in two forms as a final confrontation: its original, statue-like incarnation, then a creature with waving tentacles that break off and regenerate when struck. Home versions include a level editor, which players can use to create their own levels or edit and replace existing levels. Release Revenge of Doh was initially released in arcades in June 1987. In June 1989, versions for the Tandy, Atari ST, Apple IIGS, and Co
https://en.wikipedia.org/wiki/Logic%20level
In digital circuits, a logic level is one of a finite number of states that a digital signal can inhabit. Logic levels are usually represented by the voltage difference between the signal and ground, although other standards exist. The range of voltage levels that represent each state depends on the logic family being used. A logic-level shifter can be used to allow compatibility between different circuits. 2-level logic In binary logic the two levels are logical high and logical low, which generally correspond to binary numbers 1 and 0 respectively or truth values true and false respectively. Signals with one of these two levels can be used in Boolean algebra for digital circuit design or analysis. Active state The use of either the higher or the lower voltage level to represent either logic state is arbitrary. The two options are active high (positive logic) and active low (negative logic). Active-high and active-low states can be mixed at will: for example, a read only memory integrated circuit may have a chip-select signal that is active-low, but the data and address bits are conventionally active-high. Occasionally a logic design is simplified by inverting the choice of active level (see De Morgan's laws). The name of an active-low signal is historically written with a bar above it to distinguish it from an active-high signal. For example, the name Q, read Q bar or Q not, represents an active-low signal. The conventions commonly used are: a bar above () a leading slash (/Q) a lower-case n prefix or suffix (nQ or Q_n) a trailing # (Q#), or an _B or _L suffix (Q_B or Q_L). Many control signals in electronics are active-low signals (usually reset lines, chip-select lines and so on). Logic families such as TTL can sink more current than they can source, so fanout and noise immunity increase. It also allows for wired-OR logic if the logic gates are open-collector/open-drain with a pull-up resistor. Examples of this are the I²C bus and the Controller Ar
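As a small illustration of the active-high/active-low convention described above, here is a Java sketch of testing status bits (the register layout and mask names are hypothetical, used only for illustration):

```java
public class ActiveLevelDemo {
    // Hypothetical status-register bits.
    static final int DATA_READY_MASK = 0x01; // active-high: asserted when the bit is 1
    static final int CS_N_MASK       = 0x02; // active-low ("CS_n"): asserted when the bit is 0

    public static void main(String[] args) {
        int statusRegister = 0x01; // DATA_READY = 1, CS_n = 0

        // Active-high signal: asserted when the bit reads 1.
        boolean dataReady = (statusRegister & DATA_READY_MASK) != 0;

        // Active-low signal: asserted (chip selected) when the bit reads 0.
        boolean chipSelected = (statusRegister & CS_N_MASK) == 0;

        System.out.println("data ready:    " + dataReady);    // true
        System.out.println("chip selected: " + chipSelected); // true
    }
}
```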
https://en.wikipedia.org/wiki/Biological%20process
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples. Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes. Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature Organization: being structurally composed of one or more cells – the basic units of life Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life. Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter. Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis. Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms. Interaction between organisms. the processes
https://en.wikipedia.org/wiki/Stewart%E2%80%93Walker%20lemma
The Stewart–Walker lemma provides necessary and sufficient conditions for the linear perturbation of a tensor field to be gauge-invariant. A linear perturbation $\delta T$ of a tensor field $T_0$ is gauge-invariant ($\Delta \delta T = 0$) if and only if one of the following holds: 1. $T_0 = 0$ 2. $T_0$ is a constant scalar field 3. $T_0$ is a linear combination of products of delta functions $\delta^a_b$. Derivation A 1-parameter family of manifolds denoted by $\mathcal{M}_\epsilon$ with $\epsilon \in \mathbb{R}$ has metric $g_\epsilon$. These manifolds can be put together to form a 5-manifold $\mathcal{N}$. A smooth curve $\gamma$ can be constructed through $\mathcal{N}$ with tangent 5-vector $X$, transverse to $\mathcal{M}_\epsilon$. If $X$ is defined so that if $h_t$ is the family of 1-parameter maps which map $\mathcal{N} \to \mathcal{N}$ and $p_0 \in \mathcal{M}_0$, then a point $p_\epsilon \in \mathcal{M}_\epsilon$ can be written as $h_\epsilon(p_0)$. This also defines a pull-back $h_\epsilon^*$ that maps a tensor field $T_\epsilon \in \mathcal{M}_\epsilon$ back onto $\mathcal{M}_0$. Given sufficient smoothness a Taylor expansion can be defined: $h_\epsilon^*(T_\epsilon) = T_0 + \epsilon\, h_\epsilon^*(\mathcal{L}_X T_\epsilon) + O(\epsilon^2)$, where $\delta T = \epsilon\, h_\epsilon^*(\mathcal{L}_X T_\epsilon)$ is the linear perturbation of $T$. However, since the choice of $X$ is dependent on the choice of gauge, another gauge $Y$ can be taken. Therefore the difference between the gauges becomes $\Delta \delta T = \epsilon\, h_\epsilon^*(\mathcal{L}_X T_\epsilon) - \epsilon\, h_\epsilon^*(\mathcal{L}_Y T_\epsilon) = \epsilon\, h_\epsilon^*(\mathcal{L}_{X-Y} T_\epsilon)$. Picking a chart where $X^a = (\xi^\mu, 1)$ and $Y^a = (0, 1)$, then $X^a - Y^a = (\xi^\mu, 0)$, which is a well defined vector in any $\mathcal{M}_\epsilon$ and gives the result $\Delta \delta T = \epsilon\, \mathcal{L}_\xi T_0$. The only three possible ways this can be satisfied are those of the lemma.
https://en.wikipedia.org/wiki/SiRF
SiRF Technology, Inc. was a pioneer in the commercial use of GPS technology for consumer applications. The company was founded in 1995 and was headquartered in San Jose, California. Notable and founding members included Sanjai Kohli, Dado Banatao, and Kanwar Chadha. The company was acquired by British firm CSR plc in 2009, who were in turn subsequently acquired by American company Qualcomm on 13 August 2015. SiRF manufactured a range of patented GPS chipsets and software for consumer navigation devices and systems. The chips are based on ARM controllers integrated with low-noise radio receivers to decode GPS signals at very low signal levels (typically −160 dBm). SiRF chips also support SBAS to allow for differentially corrected positions. SiRFstarIII SiRFstarIII architecture is designed to be useful in wireless and handheld location-based services (LBS) applications, for 2G, 2.5G, 3G asynchronous networks. The SiRFstarIII family comprises the GRF3w RF IC, the GSP3f digital section, and the GSW3 software that is API compatible with GSW2 and SiRFLoc. The chips have been adopted by major GPS manufacturers, including Sony, Micro Technologies, Garmin, TomTom and Magellan. SiRFatlas IV SiRFatlas IV is a multifunction location system processor and is meant for entry-level Personal Navigation Devices (PNDs). The SiRFatlas IV is a cheaper version of the very popular but rather expensive SiRFPrima platform. It has a GPS/Galileo baseband, an LCD touch-screen controller, video input, a 10-bit ADC and high-speed USB 2.0. SiRFstarV SiRFstarV chips, launched in 2012, are capable of tracking NAVSTAR, GLONASS, Galileo, Compass, SBAS, and future GNSS signals. The SiRFusion platform integrates positioning from GNSS, terrestrial radio solutions such as Wi-Fi and cellular, and MEMS sensors including accelerometers, gyroscopes, and compasses. SiRFusion can then combine this real-time information with cellular base station and Wi-Fi access point location data, ephemeris-based aiding inf
https://en.wikipedia.org/wiki/Wireless%20application%20service%20provider
A wireless application service provider (WASP) is the generic name for a firm that provides remote services, typically to handheld devices, such as cellphones or PDAs, that connect to wireless data networks. WASPs are a specific category of application service providers (ASPs), though the latter term may more often be associated with standard web services. They can also be used for wireless bridging between different types of network topologies.
https://en.wikipedia.org/wiki/Biracks%20and%20biquandles
In mathematics, biquandles and biracks are sets with binary operations that generalize quandles and racks. Biquandles take, in the theory of virtual knots, the place that quandles occupy in the theory of classical knots. Biracks and racks have the same relation, while a biquandle is a birack which satisfies some additional conditions. Definitions Biquandles and biracks have two binary operations on a set written and . These satisfy the following three axioms: 1. 2. 3. These identities appeared in 1992 in reference [FRS] where the object was called a species. The superscript and subscript notation is useful here because it dispenses with the need for brackets. For example, if we write for and for then the three axioms above become 1. 2. 3. If in addition the two operations are invertible, that is given in the set there are unique in the set such that and then the set together with the two operations define a birack. For example, if , with the operation , is a rack then it is a birack if we define the other operation to be the identity, . For a birack the function can be defined by Then 1. is a bijection 2. In the second condition, and are defined by and . This condition is sometimes known as the set-theoretic Yang-Baxter equation. To see that 1. is true note that defined by is the inverse to To see that 2. is true let us follow the progress of the triple under . So On the other hand, . Its progress under is Any satisfying 1. 2. is said to be a switch (precursor of biquandles and biracks). Examples of switches are the identity, the twist and where is the operation of a rack. A switch will define a birack if the operations are invertible. Note that the identity switch does not do this. Biquandles A biquandle is a birack which satisfies some additional structure, as described by Nelson and Rische. The axioms of a biquandle are "minimal" in the sense that they are the weakest restrictions that can
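As a concrete check of the set-theoretic Yang-Baxter condition mentioned above, here is the computation for the twist map τ(a, b) = (b, a), written in the standard braid-style form of the equation (this worked example is added for illustration and is not from the article):

```latex
% Set-theoretic Yang-Baxter equation for S : X^2 -> X^2:
%   (S \times \mathrm{id})(\mathrm{id} \times S)(S \times \mathrm{id})
%     = (\mathrm{id} \times S)(S \times \mathrm{id})(\mathrm{id} \times S).
% For the twist \tau(a,b) = (b,a):
(\tau \times \mathrm{id})(\mathrm{id} \times \tau)(\tau \times \mathrm{id})(a,b,c)
  = (\tau \times \mathrm{id})(\mathrm{id} \times \tau)(b,a,c)
  = (\tau \times \mathrm{id})(b,c,a)
  = (c,b,a),
\qquad
(\mathrm{id} \times \tau)(\tau \times \mathrm{id})(\mathrm{id} \times \tau)(a,b,c)
  = (\mathrm{id} \times \tau)(\tau \times \mathrm{id})(a,c,b)
  = (\mathrm{id} \times \tau)(c,a,b)
  = (c,b,a).
% Both sides agree, so the twist satisfies the equation.
```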
https://en.wikipedia.org/wiki/Urea-to-creatinine%20ratio
In medicine, the urea-to-creatinine ratio (UCR), known in the United States as BUN-to-creatinine ratio, is the ratio of the blood levels of urea (BUN) (mmol/L) and creatinine (Cr) (μmol/L). Because BUN reflects only the nitrogen content of urea (MW 28) while the urea measurement reflects the whole of the molecule (MW 60), urea is just over twice BUN (60/28 = 2.14). In the United States, both quantities are given in mg/dL. The ratio may be used to determine the cause of acute kidney injury or dehydration. The principle behind this ratio is the fact that both urea (BUN) and creatinine are freely filtered by the glomerulus; however, urea reabsorption by the renal tubules can be regulated (increased or decreased) whereas creatinine reabsorption remains the same (minimal reabsorption). Definition Urea and creatinine are nitrogenous end products of metabolism. Urea is the primary metabolite derived from dietary protein and tissue protein turnover. Creatinine is the product of muscle creatine catabolism. Both are relatively small molecules (60 and 113 daltons, respectively) that distribute throughout total body water. In Europe, the whole urea molecule is assayed, whereas in the United States only the nitrogen component of urea (the blood or serum urea nitrogen, i.e., BUN or SUN) is measured. The BUN, then, is roughly one-half (7/15 or 0.466) of the blood urea. The normal range of urea nitrogen in blood or serum is 5 to 20 mg/dl, or 1.8 to 7.1 mmol urea per liter. The range is wide because of normal variations due to protein intake, endogenous protein catabolism, state of hydration, hepatic urea synthesis, and renal urea excretion. A BUN of 15 mg/dl would represent significantly impaired function for a woman in the thirtieth week of gestation. Her higher glomerular filtration rate (GFR), expanded extracellular fluid volume, and anabolism in the developing fetus contribute to her relatively low BUN of 5 to 7 mg/dl. In contrast, the rugged rancher who eats in excess of 125 g protein each
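A worked numerical example of the BUN/urea relationship described above (the arithmetic is mine, using the molecular weights given in the text; the BUN value of 14 mg/dL is chosen purely for illustration):

```latex
% Converting a BUN of 14 mg/dL into blood urea:
\text{urea (mg/dL)} = \text{BUN} \times \tfrac{60}{28} = 14 \times 2.14 \approx 30\ \text{mg/dL},
\qquad
\text{urea (mmol/L)} = \frac{14\ \text{mg/dL} \times 10\ \text{dL/L}}{28\ \text{mg urea nitrogen per mmol}} = 5\ \text{mmol/L}.
```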
https://en.wikipedia.org/wiki/Synapse
In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron or to the target effector cell. Synapses are essential to the transmission of nervous impulses from one neuron to another. Neurons are specialized to pass signals to individual target cells, and synapses are the means by which they do so. At a synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on an axon and the postsynaptic part is located on a dendrite or soma. Astrocytes also exchange information with the synaptic neurons, responding to synaptic activity and, in turn, regulating neurotransmission. Synapses (at least chemical synapses) are stabilized in position by synaptic adhesion molecules (SAMs) projecting from both the pre- and post-synaptic neuron and sticking together where they overlap; SAMs may also assist in the generation and functioning of synapses. History Santiago Ramón y Cajal proposed that neurons are not continuous throughout the body, yet still communicate with each other, an idea known as the neuron doctrine. The word "synapse" was introduced in 1897 by the English neurophysiologist Charles Sherrington in Michael Foster's Textbook of Physiology. Sherrington struggled to find a good term that emphasized a union between two separate elements, and the actual term "synapse" was suggested by the English classical scholar Arthur Woollgar Verrall, a friend of Foster. The word was derived from the Greek synapsis (), meaning "conjunction", which in turn derives from synaptein (), from syn () "together" and haptein () "to fasten". However, while the synaptic gap remained a theoretical co
https://en.wikipedia.org/wiki/Immunological%20synapse
In immunology, an immunological synapse (or immune synapse) is the interface between an antigen-presenting cell or target cell and a lymphocyte such as a T/B cell or Natural Killer cell. The interface was originally named after the neuronal synapse, with which it shares the main structural pattern. An immunological synapse consists of molecules involved in T cell activation, which compose typical patterns—activation clusters. Immunological synapses are the subject of much ongoing research. Structure and function The immune synapse is also known as the supramolecular activation cluster or SMAC. This structure is composed of concentric rings each containing segregated clusters of proteins—often referred to as the bull’s-eye model of the immunological synapse: c-SMAC (central-SMAC) composed of the θ isoform of protein kinase C, CD2, CD4, CD8, CD28, Lck, and Fyn. p-SMAC (peripheral-SMAC) within which the lymphocyte function-associated antigen-1 (LFA-1) and the cytoskeletal protein talin are clustered. d-SMAC (distal-SMAC) enriched in CD43 and CD45 molecules. New investigations, however, have shown that a "bull’s eye" is not present in all immunological synapses. For example, different patterns appear in the synapse between a T-cell and a dendritic cell. This complex as a whole is postulated to have several functions including but not limited to: Regulation of lymphocyte activation Transfer of peptide-MHC complexes from APCs to lymphocytes Directing secretion of cytokines or lytic granules Recent research has proposed a striking parallel between the immunological synapse and the primary cilium based mainly on similar actin rearrangement, orientation of the centrosome towards the structure and involvement of similar transport molecules (such as IFT20, Rab8, Rab11). This structural and functional homology is the topic of ongoing research. Formation The initial interaction occurs between LFA-1 present in the p-SMAC of a T-cell, and non-specific adhesion molecules (
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%203
Bone morphogenetic protein 3, also known as osteogenin, is a protein that in humans is encoded by the BMP3 gene. The encoded protein is a member of the transforming growth factor beta superfamily. Unlike other bone morphogenetic proteins (BMPs), it inhibits the ability of other BMPs to induce bone and cartilage development. It is a disulfide-linked homodimer. It negatively regulates bone density and acts as an antagonist to other BMPs in the differentiation of osteogenic progenitors. It is highly expressed in fractured tissues. Cancer BMP3 is hypermethylated in many cases of colorectal cancer (CRC) and hence, along with other hypermethylated genes, may be used as a biomarker to detect early-stage CRC.
https://en.wikipedia.org/wiki/Software%20security%20assurance
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security. What is software security assurance? Software Security Assurance (SSA) is the process of ensuring that software is designed to operate at a level of security that is consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects. The software security assurance process begins by identifying and categorizing the information that is to be contained in, or used by, the software. The information should be categorized according to its sensitivity. For example, in the lowest category, the impact of a security violation is minimal (i.e. the impact on the software owner's mission, functions, or reputation is negligible). For a top category, however, the impact may pose a threat to human life; may have an irreparable impact on software owner's missions, functions, image, or reputation; or may result in the loss of significant assets or resources. Once the information is categorized, security requirements can be developed. The security requirements should address access control, including network access and physical access; data management and data access; environmental controls (power, air conditioning, etc.) and off-line storage; human resource security; and audit trails and usage records. What causes software security problems? All security vulnerabilities in software are the result of security bugs, or defects, within the software. In most cases, these defects are created by two primary causes: (1) non-conformance, or a failure to satisfy requirements; and (2) an error or omission in the software requirements. Non-conformance, or a failure to satisfy requirements A non-conformance may be simple–the most common
https://en.wikipedia.org/wiki/Quintaglio%20Ascension%20Trilogy
The Quintaglio Ascension Trilogy is a series of novels written by Canadian science fiction author Robert J. Sawyer. The books depict an Earth-like world on a moon which orbits a gas giant, inhabited by a species of highly evolved, sentient Tyrannosaurs, among various other creatures from the late Cretaceous period, imported to this moon by aliens 65 million years prior to the story. The series consists of three books: Far-Seer, Fossil Hunter, and Foreigner. The trilogy Far-Seer (1992) is the first book in the Quintaglio Ascension. Sixty-five million years ago, aliens transplanted Earth's dinosaurs to another world. Now, intelligent saurians -- the Quintaglio -- have emerged. Afsan, the Quintaglio counterpart of Galileo, must convince his people of the truth about their place in the universe before astronomical forces rip the dinosaurs' new home apart. Fossil Hunter (1993) is the second book in the series. Toroca, a Quintaglio geologist (and son of Afsan, from the previous book), is under attack for his controversial theory of evolution. But the origins of his people turn out to be more complex than he ever imagined, for he soon discovers the wreckage of an ancient starship -- a relic of the aliens who transplanted Earth's dinosaurs to this solar system. Now Toroca must convince Emperor Dybo that evolution is true; otherwise, the territorial violence inherited from their Tyrannosaur ancestors will destroy the last survivors of Earth's prehistoric past. Meanwhile, Emperor Dybo's rule is challenged by his brother Dy-Rodlox, lord of Edz-Toolar. Foreigner (1994) is the final book in the series. In Far-Seer an
https://en.wikipedia.org/wiki/COST%20Hata%20model
The COST Hata model is a radio propagation model (i.e. path loss) that extends the urban Hata model (which in turn is based on the Okumura model) to cover a more elaborated range of frequencies (up to 2 GHz). It is the most often cited of the COST 231 models (EU funded research project ca. April 1986 – April 1996), also called the Hata Model PCS Extension. This model is the combination of empirical and deterministic models for estimating path loss in an urban area over frequency range of 800 MHz to 2000 MHz. COST (COopération européenne dans le domaine de la recherche Scientifique et Technique) is a European Union Forum for cooperative scientific research which has developed this model based on experimental measurements in multiple cities across Europe. Applicable to / under conditions This model is applicable to macro cells in urban areas. To further evaluate Path Loss in suburban or rural (quasi-)open areas, this path loss has to be substituted into Urban to Rural / Urban to Suburban Conversions. (Ray GAO, 09 Sep 2007) Coverage Frequency: 1500–2000 MHz Mobile station antenna height: 1–10 m Base station antenna height: 30–200 m Link distance: 1–20 km Mathematical formulation The COST Hata model is formulated as, where, Limitations This model requires that the base station antenna is higher than all adjacent rooftops. See also Hata model Radio propagation model
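As a rough illustration of how the model is applied, the sketch below evaluates the commonly published form of the COST-231 Hata equation, L = 46.3 + 33.9 log10(f) - 13.82 log10(hB) - a(hR) + (44.9 - 6.55 log10(hB)) log10(d) + C. The coefficient values, the clutter term C, and the medium-city mobile-antenna correction a(hR) are taken from the usual statement of the model rather than from the text above, so treat them as assumptions to be checked against the COST 231 final report.

```python
import math

def cost231_hata_path_loss(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
    """Median path loss (dB) from the commonly published COST-231 Hata formula.

    Assumes the usual validity ranges: f 1500-2000 MHz, d 1-20 km,
    base antenna 30-200 m, mobile antenna 1-10 m.
    """
    # Mobile-antenna correction for medium cities / suburban areas
    # (the standard Hata small/medium-city form).
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    # Clutter term: 0 dB for medium cities and suburban areas, 3 dB for metropolitan centres.
    c = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m)
            - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
            + c)

# Example: 1800 MHz urban macro cell, 5 km link, 50 m base station, 1.5 m handset.
print(round(cost231_hata_path_loss(1800, 5.0, 50.0, 1.5), 1), "dB")
```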
https://en.wikipedia.org/wiki/Successive-approximation%20ADC
A successive-approximation ADC is a type of analog-to-digital converter that converts a continuous analog waveform into a discrete digital representation using a binary search through all possible quantization levels before finally converging upon a digital output for each conversion. Algorithm The successive-approximation analog-to-digital converter circuit typically consists of four chief subcircuits: A sample-and-hold circuit to acquire the input voltage V_in. An analog voltage comparator that compares V_in to the output of the internal DAC and outputs the result of the comparison to the successive-approximation register (SAR). A successive-approximation register subcircuit designed to supply an approximate digital code of V_in to the internal DAC. An internal reference DAC that, for comparison with V_in, supplies the comparator with an analog voltage equal to the digital code output of the SAR. The successive approximation register is initialized so that the most significant bit (MSB) is equal to a digital 1. This code is fed into the DAC, which then supplies the analog equivalent of this digital code into the comparator circuit for comparison with the sampled input voltage. If this analog voltage exceeds V_in, then the comparator causes the SAR to reset this bit; otherwise, the bit is left as 1. Then the next bit is set to 1 and the same test is done, continuing this binary search until every bit in the SAR has been tested. The resulting code is the digital approximation of the sampled input voltage and is finally output by the SAR at the end of the conversion (EOC). Mathematically, let V_in = x V_ref, so x in [-1, 1] is the normalized input voltage. The objective is to approximately digitize x to an accuracy of 1/2^n. The algorithm proceeds as follows: Initial approximation x_0 = 0. i-th approximation x_i = x_{i-1} - s(x_{i-1} - x)/2^i, where s(x) is the signum function (s(x) = +1 for x ≥ 0, s(x) = -1 for x < 0). It follows using mathematical induction that |x_n - x| ≤ 1/2^n. As shown in the above algorithm, a SAR ADC requires: An input voltage source V_in. A reference voltage source V_ref to nor
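The binary search described above is easy to model in software. The following sketch simulates an n-bit SAR conversion with an ideal DAC and a noiseless comparator; the function name, the unipolar 0 to V_ref input range, and the example values are illustrative choices, not taken from any particular device's datasheet.

```python
def sar_adc(v_in, v_ref, n_bits):
    """Ideal n-bit successive-approximation conversion of v_in against v_ref.

    Returns the output code (0 .. 2**n_bits - 1). Assumes a unipolar
    input range [0, v_ref), an ideal internal DAC and a noiseless comparator.
    """
    code = 0
    for bit in reversed(range(n_bits)):            # test bits MSB first
        trial = code | (1 << bit)                  # tentatively set this bit to 1
        dac_out = (trial / (1 << n_bits)) * v_ref  # ideal internal DAC output
        if dac_out <= v_in:                        # comparator: DAC does not exceed input,
            code = trial                           # so the bit stays at 1
        # ...otherwise the SAR clears the bit and moves on to the next one.
    return code

# 8-bit conversion of 1.3 V against a 2.0 V reference.
code = sar_adc(1.3, 2.0, 8)
print(code, bin(code), code / 2**8 * 2.0)   # reconstructed voltage approximates 1.3 V
```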
https://en.wikipedia.org/wiki/Antibody%20microarray
An antibody microarray (also known as antibody array) is a specific form of protein microarray. In this technology, a collection of captured antibodies are spotted and fixed on a solid surface such as glass, plastic, membrane, or silicon chip, and the interaction between the antibody and its target antigen is detected. Antibody microarrays are often used for detecting protein expression from various biofluids including serum, plasma and cell or tissue lysates. Antibody arrays may be used for both basic research and medical and diagnostic applications. Background The concept and methodology of antibody microarrays were first introduced by Tse Wen Chang in 1983 in a scientific publication and a series of patents, when he was working at Centocor in Malvern, Pennsylvania. Chang coined the term “antibody matrix” and discussed “array” arrangement of minute antibody spots on small glass or plastic surfaces. He demonstrated that a 10×10 (100 in total) and 20×20 (400 in total) grid of antibody spots could be placed on a 1×1 cm surface. He also estimated that if an antibody is coated at a 10 μg/mL concentration, which is optimal for most antibodies, 1 mg of antibody can make 2,000,000 dots of 0.25 mm diameter. Chang's invention focused on the employment of antibody microarrays for the detection and quantification of cells bearing certain surface antigens, such as CD antigens and HLA allotypic antigens, particulate antigens, such as viruses and bacteria, and soluble antigens. The principle of "one sample application, multiple determinations", assay configuration, and mechanics for placing absorbent dots described in the paper and patents should be generally applicable to different kinds of microarrays. When Tse Wen Chang and Nancy T. Chang were setting up Tanox, Inc. in Houston, Texas in 1986, they purchased the rights on the antibody matrix patents from Centocor as part of the technology base to build their new startup. Their first product in development was an assay, te
https://en.wikipedia.org/wiki/Java%20Analysis%20Studio
Java Analysis Studio (JAS) is an object oriented data analysis package developed for the analysis of particle physics data. The latest major version is JAS3. JAS3 is a fully AIDA-compliant data analysis system. It is popular for data analysis in areas of particle physics which are familiar with the Java programming language. The Studio uses many other libraries from the FreeHEP project. External links Java Analysis Studio 3 website AIDA: Abstract Interfaces for Data Analysis — open interfaces and formats for particle physics data processing Data analysis software Experimental particle physics Free software programmed in Java (programming language) Free statistical software Numerical software Physics software
https://en.wikipedia.org/wiki/Expasy
Expasy is an online bioinformatics resource operated by the SIB Swiss Institute of Bioinformatics. It is an extensible and integrative portal which provides access to over 160 databases and software tools and supports a range of life science and clinical research areas, from genomics, proteomics and structural biology, to evolution and phylogeny, systems biology and medical chemistry. The individual resources (databases, web-based and downloadable software tools) are hosted in a decentralized way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Search engine Queries of Expasy allow parallel searches of SIB databases through a single query, with search results aggregated from the complete set of >160 resources accessible from the portal. Expasy provides up-to-date information from the most recent release of each resource. The terms used in Expasy are based on the EDAM comprehensive ontology. History Expasy was created in August 1993. Originally, it was called ExPASy (Expert Protein Analysis System) and acted as a proteomics server to analyze protein sequences and structures and two-dimensional gel electrophoresis (2-D PAGE). Among others, ExPASy hosted the protein sequence knowledge base, UniProtKB/Swiss-Prot, and its computer-annotated supplement, UniProtKB/TrEMBL, before these moved to the UniProt website. ExPASy was the first website of the life sciences and among the first 150 websites in the world. ExPASy had been consulted 1 billion times since its installation on 1 August 1993. In June 2011, it became the SIB ExPASy Bioinformatics Resources Portal: a diverse catalogue of bioinformatics resources developed by SIB groups. The current version of Expasy was released in October 2020. Notes and references External links Official website Bioinformatics Science and technology in Switzerland
https://en.wikipedia.org/wiki/Nairi%20%28computer%29
The first Nairi (, ) computer was developed and launched into production in 1964, at the Yerevan Research Institute of Mathematical Machines (Yerevan, Armenia), and were chiefly designed by Hrachya Ye. Hovsepyan. In 1965, a modified version called Nairi-M, and in 1967 versions called Nairi-S and Nairi-2, were developed. Nairi-3 and Nairi-3-1, which used integrated hybrid chips, were developed in 1970. These computers were used for a wide class of tasks in a variety of areas, including Mechanical Engineering and the Economics. In 1971, the developers of the Nairi computer were awarded the State Prize of the USSR. Nairi-1 The development of the machine began in 1962, completed in 1964. The chief designer is Hrachya Yesaevich Hovsepyan, the leading design engineer is Mikhail Artavazdovich Khachatryan. The architectural solution used in this machine has been patented in England, Japan, France and Italy. Specification The processor is 36-bit. The clock frequency is 50 kHz. ROM (in the original documentation - DZU (long-term memory) of a cassette type, the volume of the cassette is 2048 words of 36 bits each; was used to store firmware (2048 72-bit cells) and firmware (12288 36-bit cells). Part of the ROM it was delivered "empty", with the ability for users to flash their most frequently used programs, thus getting rid of entering programs from the remote control or punched tape. The amount of RAM is 1024 words (8 cassettes of 128 cells), plus 5 registers. Operations speed on addition on fixed-point numbers - 2-3 thousand ops / s, multiplication - 100 ops / s, operations on floating point numbers - 100 ops / s. Since 1964, the machine has been produced at two factories in Armenia, as well as at the Kazan computer plant (from 1964 to 1970, about 500 machines were produced in total). In the spring of 1965, the computer was presented at a fair in Leipzig (Germany). There were a number machine's modifications: "Nairi-M" (1965) - the photoreader FS-1501 and the
https://en.wikipedia.org/wiki/Amylolytic%20process
Amylolytic process or amylolysis is the conversion of starch into sugar by the action of acids or enzymes such as amylase. Starch begins to pile up inside the leaves of plants during times of light when starch is able to be produced by photosynthetic processes. This ability to make starch disappears in the dark due to the lack of illumination; there is insufficient amount of light produced during the dark needed to carry this reaction forward. Turning starch into sugar is done by the enzyme amylase. Different pathways of amylase & location of amylase activity The process in which amylase breaks down starch for sugar consumption is not consistent with all organisms that use amylase to breakdown stored starch. There are different amylase pathways that are involved in starch degradation. The occurrence of starch degradation into sugar by the enzyme amylase was most commonly known to take place in the Chloroplast, but that has been proven wrong. One example is the spinach plant, in which the chloroplast contains both alpha and beta amylase (They are different versions of amylase involved in the breakdown of starch and they differ in their substrate specificity). In spinach leaves, the extrachloroplastic region contains the highest level of amylase degradation of starch. The difference between chloroplast and extrachloroplastic starch degradation is in the amylase pathway they prefer; either beta or alpha amylase. For spinach leaves, Alpha-amylase is preferred but for plants/organisms like wheat, barley, peas, etc. the Beta-amylase is preferred. Usage The amylolytic process is used in the brewing of alcohol from grains. Since grains contain starches but little to no simple sugars, the sugar needed to produce alcohol is derived from starch via the amylolytic process. In beer brewing, this is done through malting. In sake brewing, the mold Aspergillus oryzae provides amylolysis, and in Tapai, Saccharomyces cerevisiae. The amylolytic process can also be used to allow
https://en.wikipedia.org/wiki/Advanced%20Message%20Queuing%20Protocol
The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security. AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous standardizations of middleware have happened at the API level (e.g. JMS) and were focused on standardizing programmer interaction with different middleware implementations, rather than on providing interoperability between multiple implementations. Unlike JMS, which defines an API and a set of behaviors that a messaging implementation must provide, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of bytes. Consequently, any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language. Overview AMQP is a binary, application layer protocol, designed to efficiently support a wide variety of messaging applications and communication patterns. It provides flow controlled, message-oriented communication with message-delivery guarantees such as at-most-once (where each message is delivered once or never), at-least-once (where each message is certain to be delivered, but may do so multiple times) and exactly-once (where the message will always certainly arrive and do so only once), and authentication and/or encryption based on SASL and/or TLS. It assumes an underlying reliable transport layer protocol such as Transmission Control Protocol (TCP). The AMQP specification is defined in several layers: (i) a type system, (ii) a symmetric, asynchronous protocol for the transfer of messages fr
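For a concrete feel of the messaging model, the sketch below publishes a persistent message to a queue on a local broker using pika, a widely used Python client for AMQP 0-9-1. The host name, queue name, and message body are illustrative assumptions, and AMQP 1.0 brokers are typically driven through different client libraries.

```python
import pika

# Connect to an AMQP 0-9-1 broker assumed to be running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue; the broker creates it only if it does not already exist.
channel.queue_declare(queue="task_queue", durable=True)

# Publish via the default exchange, routed directly to the queue, marking the
# message persistent (delivery_mode=2) to support at-least-once style delivery.
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body=b"hello, AMQP",
    properties=pika.BasicProperties(delivery_mode=2),
)

connection.close()
```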
https://en.wikipedia.org/wiki/Exotic%20material
Exotic Materials can include plastics, superalloys, semiconductors, superconductors, and ceramics. Exotic metals and alloys Examples of metals and alloys that can be exotic: Aluminum Nickel Chromium Cobalt Copper Hastelloy Inconel Mercury (element) (aka quicksilver, hydrargyrum) Molybdenum Monel Platinum Stainless steel Tantalum Titanium Tungsten or Wolframite Waspaloy Materials with high alloy content, known as super alloys or exotic alloys, offer enhanced performance properties including excellent strength and durability, and resistance to oxidation, corrosion and deforming at high temperatures or under extreme pressure. Because of these properties, super alloys make the best spring materials for demanding working conditions, which can be encountered across various industry sectors, including the automotive, marine and aerospace sectors as well as oil and gas extraction, thermal processing, petrochemical processing and power generation. Notes Materials
https://en.wikipedia.org/wiki/Apeirogon
In geometry, an apeirogon () or infinite polygon is a polygon with an infinite number of sides. Apeirogons are the two-dimensional case of infinite polytopes. In some literature, the term "apeirogon" may refer only to the regular apeirogon, with an infinite dihedral group of symmetries. Definitions Classical constructive definition Given a point A0 in a Euclidean space and a translation S, define the point Ai to be the point obtained from i applications of the translation S to A0, so Ai = Si(A0). The set of vertices Ai with i any integer, together with edges connecting adjacent vertices, is a sequence of equal-length segments of a line, and is called the regular apeirogon as defined by H. S. M. Coxeter. A regular apeirogon can be defined as a partition of the Euclidean line E1 into infinitely many equal-length segments. It generalizes the regular n-gon, which may be defined as a partition of the circle S1 into finitely many equal-length segments. Modern abstract definition An abstract polytope is a partially ordered set P (whose elements are called faces) with properties modeling those of the inclusions of faces of convex polytopes. The rank (or dimension) of an abstract polytope is determined by the length of the maximal ordered chains of its faces, and an abstract polytope of rank n is called an abstract n-polytope. For abstract polytopes of rank 2, this means that: A) the elements of the partially ordered set are sets of vertices with either zero vertex (the empty set), one vertex, two vertices (an edge), or the entire vertex set (a two-dimensional face), ordered by inclusion of sets; B) each vertex belongs to exactly two edges; C) the undirected graph formed by the vertices and edges is connected. An abstract polytope is called an abstract apeirotope if it has infinitely many elements; an abstract 2-apeirotope is called an abstract apeirogon. In an abstract polytope, a flag is a collection of one face of each dimension, all incident to each other (that is
https://en.wikipedia.org/wiki/WAN%20optimization
WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated to be $1 billion, and was to grow to $4.4 billion by 2014 according to Gartner, a technology research firm. In 2015 Gartner estimated the WAN optimization market to be a $1.1 billion market. The most common measures of TCP data-transfer efficiencies (i.e., optimization) are throughput, bandwidth requirements, latency, protocol optimization, and congestion, as manifested in dropped packets. In addition, the WAN itself can be classified with regards to the distance between endpoints and the amounts of data transferred. Two common business WAN topologies are Branch to Headquarters and Data Center to Data Center (DC2DC). In general, "Branch" WAN links are closer, use less bandwidth, support more simultaneous connections, support smaller connections and more short-lived connections, and handle a greater variety of protocols. They are used for business applications such as email, content management systems, database application, and Web delivery. In comparison, "DC2DC" WAN links tend to require more bandwidth, are more distant, and involve fewer connections, but those connections are bigger (100 Mbit/s to 1 Gbit/s flows) and of longer duration. Traffic on a "DC2DC" WAN may include replication, back up, data migration, virtualization, and other Business Continuity/Disaster Recovery (BC/DR) flows. WAN optimization has been the subject of extensive academic research almost since the advent of the WAN. In the early 2000s, research in both the private and public sectors turned to improving the end-to-end throughput of TCP, and the target of the first proprietary WAN optimization solutions was the Branch WAN. In recent years, however, the rapid growth of digital data, and the concomitant needs to store and protect it, has presented a need for DC2DC WAN optimization. For example, such optimizations can be performed t
https://en.wikipedia.org/wiki/Demand%20characteristics
In social research, particularly in psychology, the term demand characteristic refers to an experimental artifact where participants form an interpretation of the experiment's purpose and subconsciously change their behavior to fit that interpretation. Typically, demand characteristics are considered an extraneous variable, exerting an effect on behavior other than that intended by the experimenter. Pioneering research on demand characteristics was conducted by Martin Orne. A possible cause of demand characteristics is participants' expectation that they will somehow be evaluated, leading them to figure out a way to 'beat' the experiment and attain good scores in the alleged evaluation. Rather than giving honest answers, participants may change some or all of their answers to match what they believe the experimenter expects; in this way, demand characteristics can alter participants' behaviour so that they appear more socially or morally responsible. Demand characteristics cannot be eliminated from experiments, but they can be studied to see their effect on such experiments. Examples of common demand characteristics Common demand characteristics include: Rumors of the study – any information, true or false, circulated about the experiment outside of the experiment itself. Setting of the laboratory – the location where the experiment is being performed, if it is significant. Explicit or implicit communication – any communication between the participant and experimenter, whether it be verbal or non-verbal, that may influence their perception of the experiment. Weber and Cook have described some demand characteristics as involving the participant taking on a role in the experiment. These roles include: The good-participant role (also known as the please-you effect) in which the participant attempts to discern the experimenter's hypotheses and to confirm them. The participant does not want to "ruin" the experiment. The negative-participant role (also known as the screw
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%208A
Bone morphogenetic protein 8A (BMP8A) is a protein that in humans is encoded by the BMP8A gene. BMP8A is a polypeptide member of the TGFβ superfamily of proteins. It, like other bone morphogenetic proteins (BMPs), is involved in the development of bone and cartilage. BMP8A may be involved in epithelial osteogenesis. It also plays a role in bone homeostasis. It is a disulfide-linked homodimer.
https://en.wikipedia.org/wiki/Paleobiology
Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth. Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees. An investigator in this field is known as a paleobiologist. Important research areas Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology. Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology. Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology. Paleovirology examines the evolutionary history of viruses on paleobiological timescales. Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic. Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life. Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism. Paleoichnology analyzes the tracks, bo
https://en.wikipedia.org/wiki/Osborne%20Fire%20Finder
The Osborne Fire Finder is a type of alidade used by fire lookouts to find a directional bearing (azimuth) to smoke in order to alert fire crews to a wildland fire. History and development The forerunner to the device was invented around 1840 by Sir Francis Ronalds to help combat fire in London – he also named his innovation the “Fire Finder”. Ronalds' fire finder comprised a theodolite atop a watchtower. Bearings and vertical angles from the horizon to surrounding features were recorded either on a surrounding cylinder in the form of a panorama or on a circular table at the base of the instrument. The location of any fire could thus be pinpointed even in the dark. The modern version was created by William "W.B." Osborne, a United States Forest Service employee from Portland, Oregon, and has been in service since 1915. Mr. Osborne also designed the photo-recording transit for making panoramic records of forest conditions, as well as a collapsible water-bag knapsack for firefighting (U.S. patented in 1935). Many fire finders were manufactured from 1920 through 1935, but the manufacturer, Leupold & Stevens, Inc., stopped production of replacement parts after 1975. In recent years, with the resurgence and recovery of fire lookout towers, new Osborne devices were needed. The U.S. Forest Service, San Dimas Technology and Development Center (SDTDC) was contacted regarding the deteriorating condition of the Osborne Fire Finders housed in fire lookouts throughout the United States. A pilot program to create new Osbornes was coordinated with manufacturer Palmquist Tooling, Inc., and now Osborne Fire Finders are once again available. Use The system is composed of a topographic map of the area oriented and centered on a horizontal table with a circular rim graduated in degrees (and fractions). Two sighting apertures are mounted above the map on opposite sides of the ring and slide around the arc. The device is used by moving the sights until the observer can peek t
https://en.wikipedia.org/wiki/GeeXboX
GeeXboX (stylized as GEExBox) is a free Linux distribution providing a media center software suite for personal computers. GeeXboX 2.0 and later uses XBMC for media playback and is implemented as Live USB and Live CD options. As such, the system does not need to be permanently installed to a hard drive, as most modern operating systems would. Instead, the computer can be booted with the GeeXboX CD when media playback is desired. It is based on the Debian distribution of Linux. This is a reasonable approach for those who do not need media playback services while performing other tasks with the same computer, for users who wish to repurpose older computers as media centers, and for those seeking a free alternative to Windows XP Media Center Edition. An unofficial port of GeeXboX 1.x also runs on the Wii. History See also List of free television software XBMC Media Center, the cross-platform open source media player software that GeeXboX 2.0 and later uses as a front end GUI.
https://en.wikipedia.org/wiki/Anabasine
Anabasine is a pyridine and piperidine alkaloid found in the Tree Tobacco (Nicotiana glauca) plant, as well as in the close relative of the common tobacco plant (Nicotiana tabacum). It is a structural isomer of, and chemically similar to, nicotine. Its principal (historical) industrial use is as an insecticide. Anabasine is present in trace amounts in tobacco smoke, and can be used as an indicator of a person's exposure to tobacco smoke. Pharmacology Anabasine is a nicotinic acetylcholine receptor agonist. In high doses, it produces a depolarizing block of nerve transmission, which can cause symptoms similar to those of nicotine poisoning and, ultimately, death by asystole. In larger amounts it is thought to be teratogenic in swine. The intravenous LD50 of anabasine ranges from 11 mg/kg to 16 mg/kg in mice, depending on the enantiomer. Analogs B. Bhatti, et al. made some higher potency sterically strained bicyclic analogs of anabasine: 2-(Pyridin-3-yl)-1-azabicyclo[3.2.2]nonane (TC-1698) 2-(Pyridin-3-yl)-1-azabicyclo[2.2.2]octane, and 2-(Pyridin-3-yl)-1-azabicyclo[3.2.1]octane. See also Anatabine
https://en.wikipedia.org/wiki/Signed%20distance%20function
In mathematics and its applications, the signed distance function (or oriented distance function) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space, with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside). Definition If Ω is a subset of a metric space X with metric d, then the signed distance function f is defined by f(x) = d(x, ∂Ω) if x ∈ Ω and f(x) = -d(x, ∂Ω) if x ∈ Ω^c, where ∂Ω denotes the boundary of Ω. For any x ∈ X, d(x, ∂Ω) := inf_{y ∈ ∂Ω} d(x, y), where inf denotes the infimum. Properties in Euclidean space If Ω is a subset of the Euclidean space Rn with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation |∇f| = 1. If the boundary of Ω is Ck for k ≥ 2 (see Differentiability classes) then d is Ck on points sufficiently close to the boundary of Ω. In particular, on the boundary f satisfies ∇f = N, where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map. If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω, μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then ∫_{T(∂Ω, μ)} g(x) dx = ∫_{∂Ω} ∫_{-μ}^{μ} g(u + λN(u)) det(1 - λW_u) dλ dS_u, where det denotes the determinant and dS_u indicates that we are taking the surface integral. Algorithms Algorit
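As a concrete illustration (not tied to any particular library), the sketch below evaluates the signed distance function of a closed disk in the plane with the positive-inside convention used above, and checks numerically that its gradient has unit length away from the boundary, as the eikonal equation requires. The function names and sample points are illustrative.

```python
import math

def signed_distance_disk(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance to the disk of radius r centred at (cx, cy).

    Positive inside the disk, zero on the circle, negative outside,
    matching the convention described above.
    """
    return r - math.hypot(x - cx, y - cy)

def gradient_norm(f, x, y, h=1e-6):
    """Central-difference estimate of |grad f| at (x, y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return math.hypot(fx, fy)

print(signed_distance_disk(0.5, 0.0))                  # 0.5  (inside)
print(signed_distance_disk(2.0, 0.0))                  # -1.0 (outside)
print(gradient_norm(signed_distance_disk, 0.3, 0.4))   # ~1.0, the eikonal property
```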
https://en.wikipedia.org/wiki/Sergey%20Mergelyan
Sergey Mergelyan (; 19 May 1928 – 20 August 2008) was a Soviet and Armenian mathematician, who made major contributions to the Approximation theory. The modern Complex Approximation Theory is based on Mergelyan's classical work. Corresponding Member of the Academy of Sciences of the Soviet Union (since 1953), member of NAS ASSR (since 1956). The surname "Mergelov" given at birth was changed for patriotic reasons to the more Armenian-sounding "Mergelyan" by the mathematician himself before his trip to Moscow. He was a laureate of the Stalin Prize (1952) and the Order of St. Mesrop Mashtots (2008). He was the youngest Doctor of Sciences in the history of the USSR (at the age of 20), and the youngest corresponding member of the Academy of Sciences of the Soviet Union (the title was conferred at the age of 24). During his postgraduate studies, the 20-year-old Mergelyan solved one of the fundamental problems of the mathematical theory of functions, which had not been solved for more than 70 years. His theorem on the possibility of uniform polynomial approximation of functions of a complex variable is recognized by the classical Mergelyan theorem, and is included in the course of the theory of functions. Although he himself was not a computer designer, Mergelyan was a pioneer in Soviet computational mathematics. Biography Early years Sergey Mergelyan was born on 19 May 1928 in Simferopol in an Armenian family. His father Nikita (Mkrtich) Ivanovich Mergelov, a former private entrepreneur (Nepman), his mother Lyudmila Ivanovna Vyrodova, the daughter of the manager of the Azov-Black Sea bank, who was shot in 1918. In 1936 Sergey's father was building a paper mill in Yelets, but soon together with his family was deported to the Siberian settlement of Narym, Tomsk Oblast. In the Siberian frost, Sergey suffered from a serious illness and narrowly survived. In 1937, the mother and son were acquitted by the court's decision and returned to Kerch, and in 1938 Lyudmila Ivanov
https://en.wikipedia.org/wiki/Dietrich%20Stoyan
Dietrich Stoyan (born 1940, Germany) is a German mathematician and statistician who made contributions to queueing theory, stochastic geometry, and spatial statistics. Education and career Stoyan studied mathematics at Technical University Dresden; applied research at Deutsches Brennstoffinstitut Freiberg, 1967 PhD, 1975 Habilitation. Since 1976 at TU Bergakademie Freiberg, Rektor of that university in 1991—1997; he became famous by his statistical research of the diffusion of euro coins in Germany and Europe after the introduction of the euro in 2002. Research Queueing Theory Qualitative theory, in particular inequalities, for queueing systems and related stochastic models. The books D. Stoyan: Comparison Methods for Queues and other Stochastic Models. J. Wiley and Sons, Chichester, 1983 and A. Mueller and D. Stoyan: Comparison Methods for Stochastic Models and Risks, J. Wiley and Sons, Chichester, 2002 report on the results. The work goes back to 1969 when he discovered the monotonicity of the GI/G/1 waiting times with respect to the convex order. Stochastic Geometry Stereological formulae, applications for marked point process, development of stochastic models. Successful joint work with Joseph Mecke led to the first exact proof of the fundamental stereological formulae. The book Stochastic Geometry and its Applications, by D. Stoyan, W.S. Kendall and J. Mecke reports on the results. The book of 1995 is the key reference for applied stochastic geometry. Spatial Statistics Statistical methods for point processes, random sets and many other random geometrical structures such as fibre processes. Results can be found in the 1995 book on stochastic geometry and in the book, Fractals, Random Shapes and Point Fields by D. and H. Stoyan. (J. Wiley and Sons, Chichester, 1994). A particular strength of Stoyan is second-order methods. At the moment Dietrich Stoyan is working (together with three colleagues) for a new book on point process statistics. He used packi
https://en.wikipedia.org/wiki/Progressive%20muscular%20atrophy
Progressive muscular atrophy (PMA), also called Duchenne–Aran disease and Duchenne–Aran muscular atrophy, is a disorder characterised by the degeneration of lower motor neurons, resulting in generalised, progressive loss of muscle function. PMA is classified among motor neuron diseases (MND) where it is thought to account for around 4% of all MND cases. PMA affects only the lower motor neurons, in contrast to amyotrophic lateral sclerosis (ALS), the most common MND, which affects both the upper and lower motor neurons, or primary lateral sclerosis, another MND, which affects only the upper motor neurons. The distinction is important because PMA is associated with a better prognosis than ALS. Signs and symptoms As a result of lower motor neuron degeneration, the symptoms of PMA include: muscle weakness muscle atrophy fasciculations Some patients have symptoms restricted only to the arms or legs (or in some cases just one of either). These cases are referred to as flail limb (either flail arm or flail leg) and are associated with a better prognosis. Diagnosis PMA is a diagnosis of exclusion, there is no specific test which can conclusively establish whether a patient has the condition. Instead, a number of other possibilities have to be ruled out, such as multifocal motor neuropathy or spinal muscular atrophy. Tests used in the diagnostic process include MRI, clinical examination, and EMG. EMG tests in patients who do have PMA usually show denervation (neuron death) in most affected body parts, and in some unaffected parts too. It typically takes longer to be diagnosed with PMA than ALS, an average of 20 months for PMA vs 15 months in ALS. Differential diagnosis In contrast to amyotrophic lateral sclerosis or primary lateral sclerosis, PMA is distinguished by the absence of: brisk reflexes spasticity Babinski's sign emotional lability The importance of correctly recognizing progressive muscular atrophy as opposed to ALS is important for several reasons.
https://en.wikipedia.org/wiki/Immunoglobulin%20superfamily
The immunoglobulin superfamily (IgSF) is a large protein superfamily of cell surface and soluble proteins that are involved in the recognition, binding, or adhesion processes of cells. Molecules are categorized as members of this superfamily based on shared structural features with immunoglobulins (also known as antibodies); they all possess a domain known as an immunoglobulin domain or fold. Members of the IgSF include cell surface antigen receptors, co-receptors and co-stimulatory molecules of the immune system, molecules involved in antigen presentation to lymphocytes, cell adhesion molecules, certain cytokine receptors and intracellular muscle proteins. They are commonly associated with roles in the immune system. Otherwise, the sperm-specific protein IZUMO1, a member of the immunoglobulin superfamily, has also been identified as the only sperm membrane protein essential for sperm-egg fusion. Immunoglobulin domains Proteins of the IgSF possess a structural domain known as an immunoglobulin (Ig) domain. Ig domains are named after the immunoglobulin molecules. They contain about 70-110 amino acids and are categorized according to their size and function. Ig-domains possess a characteristic Ig-fold, which has a sandwich-like structure formed by two sheets of antiparallel beta strands. Interactions between hydrophobic amino acids on the inner side of the sandwich and highly conserved disulfide bonds formed between cysteine residues in the B and F strands, stabilize the Ig-fold. Classification The Ig like domains can be classified as IgV, IgC1, IgC2, or IgI. Most Ig domains are either variable (IgV) or constant (IgC). IgV: IgV domains with 9 beta strands are generally longer than IgC domains with 7 beta strands. IgC1 and IgC2: Ig domains of some members of the IgSF resemble IgV domains in the amino acid sequence, yet are similar in size to IgC domains. These are called IgC2 domains, while standard IgC domains are called IgC1 domains. IgI: Other Ig domains exi
https://en.wikipedia.org/wiki/Henyey%20track
The Henyey track is a path taken by pre-main-sequence stars with masses greater than 0.5 solar masses in the Hertzsprung–Russell diagram after the end of the Hayashi track. The astronomer Louis G. Henyey and his colleagues in the 1950s showed that the pre-main-sequence star can remain in radiative equilibrium throughout some period of its contraction to the main sequence. The Henyey track is characterized by a slow collapse in near hydrostatic equilibrium, approaching the main sequence almost horizontally in the Hertzsprung–Russell diagram (i.e. the luminosity remains almost constant). See also Historical brightest stars List of brightest stars List of most luminous stars List of nearest bright stars List of Solar System objects in hydrostatic equilibrium Stellar evolution Stellar birthline Stellar isochrone
https://en.wikipedia.org/wiki/Dry%20run%20%28testing%29
A dry run (or practice run) is a software testing process used to make sure that a system works correctly and will not result in severe failure. For example, rsync, a utility for transferring and synchronizing data between networked computers or storage drives, has a "dry-run" option users can use to check that their command-line arguments are valid and to simulate what would happen when actually copying the data. In acceptance procedures (such as factory acceptance testing, for example), a "dry run" is when the factory, a subcontractor, performs a complete test of the system it has to deliver before it is actually accepted by the customer. Etymology The term dry run appears to have originated from fire departments in the US. In order to practice, they would carry out dispatches of the fire brigade where water was not pumped. A run with real fire and water was referred to as a wet run. The more general usage of the term seems to have arisen from widespread use by the United States Armed Forces during World War II. See also Code review Pilot experiment Preview (computing)
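The same idea is easy to build into ordinary tooling. The sketch below adds a generic dry-run flag to a hypothetical file-cleanup script: with --dry-run it only reports what it would delete, mirroring the behaviour of utilities such as rsync. The script and its arguments are illustrative, not any specific tool's interface.

```python
import argparse
import pathlib

def cleanup(directory: pathlib.Path, pattern: str, dry_run: bool) -> None:
    """Delete files matching pattern, or merely report them when dry_run is True."""
    for path in sorted(directory.glob(pattern)):
        if dry_run:
            print(f"[dry run] would delete {path}")
        else:
            print(f"deleting {path}")
            path.unlink()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Remove files matching a glob pattern.")
    parser.add_argument("directory", type=pathlib.Path)
    parser.add_argument("pattern")
    parser.add_argument("--dry-run", action="store_true",
                        help="show what would be deleted without touching anything")
    args = parser.parse_args()
    cleanup(args.directory, args.pattern, dry_run=args.dry_run)
```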
https://en.wikipedia.org/wiki/Newport%20Museum
Newport Museum and Art Gallery () (known locally as the City Museum ()) is a museum, library and art gallery in the city of Newport, South Wales. It is located in Newport city centre on John Frost Square and is adjoined to the Kingsway Shopping Centre. The collections Newport Museum opened in 1888. The collections include Archaeology, Social History, Art and Natural History. The most ancient artefacts in the museum are tools made by hunter-gatherers who walked the shores of the Severn estuary hundreds of thousands of years ago. The Roman collections rank amongst the best in Wales, comprising material excavated from the Roman town of Caerwent and the fortress at Caerleon. The Medieval and later collections feature finds from local castles and priories, including an outstanding assemblage from Penhow Castle. The most significant items of Social History are the Chartist collection of weapons, broadsheets, prints and silver from the 1839 Chartist uprising in Newport and the Transporter Bridge archive, which includes all of the original designs for the bridge and photographs of its construction. The Fine Arts collections includes paintings by Sir Stanley Spencer, Dame Laura Knight and L S Lowry, and Welsh artists such as Kyffin Williams, Ceri Richards and Stanley Lewis. The Decorative Art collections feature the John Wait teapot collection and the Iris Fox collection of porcelain and Wemyss ware and sculpture by Sir Jacob Epstein and studio ceramics by Lucy Rie and Ewen Henderson. Art gallery As well as a museum, the building is home to Newport's principal art gallery. The gallery hosts a wide variety of British paintings, watercolours and contemporary artworks. The largest collection is known as the John & Elizabeth Wait Collection. Past exhibitions at the gallery have attracted controversy. In 2008 a painting of a naked woman smoking was removed from display after a complaint from a bishop. When it was put back, 20,000 people queued to see it. In October 2011 the
https://en.wikipedia.org/wiki/Lorcon
lorcon (acronym for Loss Of Radio CONnectivity) is an open source network tool. It is a library for injecting 802.11 (WLAN) frames, capable of injecting via multiple driver frameworks, without the need to change the application code. Lorcon is built by patching the third-party MadWifi-driver for cards based on the Qualcomm Atheros wireless chipset. The project is maintained by Joshua Wright and Michael Kershaw ("dragorn").
https://en.wikipedia.org/wiki/Slip%20melting%20point
The Slip melting point (SMP) or "slip point" is one conventional definition of the melting point of a waxy solid. It is determined by casting a 10 mm column of the solid in a glass tube with an internal diameter of about 1 mm and a length of about 80 mm, and then immersing it in a temperature-controlled water bath. The slip point is the temperature at which the column of the solid begins to rise in the tube due to buoyancy, and because the outside surface of the solid is molten. This is a popular method for fats and waxes, because they tend to be mixtures of compounds with a range of molecular masses, without well-defined melting points.
https://en.wikipedia.org/wiki/Quasi-open%20map
In topology, a branch of mathematics, a quasi-open map or quasi-interior map is a function which has similar properties to continuous maps. However, continuous maps and quasi-open maps are not related. Definition A function f : X → Y between topological spaces X and Y is quasi-open if, for any non-empty open set U ⊆ X, the interior of f(U) in Y is non-empty. Properties Let f : X → Y be a map between topological spaces. If f is continuous, it need not be quasi-open. Conversely, if f is quasi-open, it need not be continuous. If f is open, then f is quasi-open. If f is a local homeomorphism, then f is quasi-open. The composition of two quasi-open maps is again quasi-open. See also Notes
https://en.wikipedia.org/wiki/Gain%20%28projection%20screens%29
Gain is a property of a projection screen, and is one of the specifications quoted by projection screen manufacturers. Interpretation The number that is typically measured is called the peak gain at zero degrees viewing axis, and represents the gain value for a viewer seated along a line perpendicular to the screen's viewing surface. The gain value represents the ratio of brightness of the screen relative to a set standard (in this case, a sheet of magnesium carbonate). Screens with a higher brightness than this standard are rated with a gain higher than 1.0, while screens with lower brightness are rated from 0.0 to 1.0. Since a projection screen is designed to scatter the impinging light back to the viewers, the scattering can either be highly diffuse or highly concentrated. Highly concentrated scatter results in a higher screen gain (a brighter image) at the cost of a more limited viewing angle (as measured by the half-gain viewing angle), whereas highly diffuse scattering results in lower screen gain (a dimmer image) with the benefit of a wider viewing angle. Sources Display technology
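Since gain is simply a luminance ratio, it can be computed directly from measurements. The sketch below derives the on-axis gain and an approximate half-gain viewing angle from a table of luminance readings taken at several viewing angles; the sample numbers and the assumption that a 0-degree reading is present are purely illustrative.

```python
def gain(screen_luminance, reference_luminance):
    """Gain relative to the reference standard (e.g. a magnesium carbonate sample)."""
    return screen_luminance / reference_luminance

def half_gain_angle(readings, reference_luminance):
    """First angle (degrees) at which gain drops to half of the on-axis gain.

    `readings` maps viewing angle in degrees (including 0) to measured luminance.
    Returns None if the gain never falls to half within the data.
    """
    on_axis = gain(readings[0], reference_luminance)
    for angle in sorted(readings):
        if gain(readings[angle], reference_luminance) <= on_axis / 2:
            return angle
    return None

# Illustrative luminance measurements (cd/m^2) at 0, 15, 30, 45 and 60 degrees off-axis.
readings = {0: 260, 15: 240, 30: 190, 45: 120, 60: 95}
print(gain(readings[0], 200))             # on-axis gain of 1.3
print(half_gain_angle(readings, 200))     # first angle where gain is at or below 0.65
```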
https://en.wikipedia.org/wiki/Nanocomposite
Nanocomposite is a multiphase solid material where one of the phases has one, two or three dimensions of less than 100 nanometers (nm) or structures having nano-scale repeat distances between the different phases that make up the material. The idea behind Nanocomposites is to use building blocks with dimensions in nanometre range to design and create new materials with unprecedented flexibility and improvement in their physical properties. In the broadest sense this definition can include porous media, colloids, gels and copolymers, but is more usually taken to mean the solid combination of a bulk matrix and nano-dimensional phase(s) differing in properties due to dissimilarities in structure and chemistry. The mechanical, electrical, thermal, optical, electrochemical, catalytic properties of the nanocomposite will differ markedly from that of the component materials. Size limits for these effects have been proposed: <5 nm for catalytic activity <20 nm for making a hard magnetic material soft <50 nm for refractive index changes <100 nm for achieving superparamagnetism, mechanical strengthening or restricting matrix dislocation movement Nanocomposites are found in nature, for example in the structure of the abalone shell and bone. The use of nanoparticle-rich materials long predates the understanding of the physical and chemical nature of these materials. Jose-Yacaman et al. investigated the origin of the depth of colour and the resistance to acids and bio-corrosion of Maya blue paint, attributing it to a nanoparticle mechanism. From the mid-1950s nanoscale organo-clays have been used to control flow of polymer solutions (e.g. as paint viscosifiers) or the constitution of gels (e.g. as a thickening substance in cosmetics, keeping the preparations in homogeneous form). By the 1970s polymer/clay composites were the topic of textbooks, although the term "nanocomposites" was not in common use. In mechanical terms, nanocomposites differ from conventional composite m
https://en.wikipedia.org/wiki/Fruit%20syrup
Fruit syrups or fruit molasses are concentrated fruit juices used as sweeteners. Fruit syrups have been used in many cuisines: in Arab cuisine, rub, jallab; in Ancient Greek cuisine, epsima; in Greek cuisine, petimezi; in Indian cuisine, drakshasava; in Ottoman cuisine, pekmez; in Persian cuisine, robb-e anâr; in Ancient Roman cuisine, defrutum, carenum, and sapa. Some foods are made using fruit syrups or molasses: Churchkhela, a sausage-shaped candy made from grape must and nuts In modern industrial foods, they are often made from a less expensive fruit (such as apples, pears, or pineapples) and used to sweeten more expensive fruits or products and to extend their quantity. A typical use would be for an "all-fruit" strawberry spread that contains apple juice as well as strawberries. See also Cheong Grape syrup List of syrups Squash (drink)
https://en.wikipedia.org/wiki/Signal-to-noise%20statistic
In mathematics, the signal-to-noise statistic distance between two vectors a and b with mean values μ_a and μ_b and standard deviations σ_a and σ_b respectively is: D_sn(a, b) = (μ_a - μ_b) / (σ_a + σ_b). In the case of Gaussian-distributed data and unbiased class distributions, this statistic can be related to classification accuracy given an ideal linear discrimination, and a decision boundary can be derived. This distance is frequently used to identify vectors that have a significant difference. One usage is in bioinformatics to locate genes that are differentially expressed in microarray experiments. See also Distance Uniform norm Manhattan distance Signal-to-noise ratio Signal to noise ratio (imaging) Notes Statistical distance Statistical ratios
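A minimal sketch of the statistic as defined above, applied to two vectors. The expression values are invented for illustration; in the microarray use case, a and b would be the measurements of one gene across two sample classes, and the choice of sample versus population standard deviation is an assumption left to the analyst.

```python
import statistics

def signal_to_noise_distance(a, b):
    """(mean_a - mean_b) / (stdev_a + stdev_b), as defined above.

    Uses the sample standard deviation; swap in statistics.pstdev if the
    population form is preferred.
    """
    return ((statistics.mean(a) - statistics.mean(b))
            / (statistics.stdev(a) + statistics.stdev(b)))

# Invented expression values for one gene in two sample classes.
class_a = [5.1, 4.8, 5.4, 5.0]
class_b = [3.2, 3.9, 3.5, 3.4]
print(signal_to_noise_distance(class_a, class_b))   # large value: candidate differential gene
```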
https://en.wikipedia.org/wiki/Principal%20meridian
A principal meridian is a meridian used for survey control in a large region. Canada The Dominion Land Survey of Western Canada took its origin at the First (or Principal) Meridian, located at 97°27′28.41″ west of Greenwich, just west of Winnipeg, Manitoba. This line is exactly ten miles west of the Red River at the Canada–United States border. Six other meridians were designated at four-degree intervals westward, with the seventh located in British Columbia; the second and fourth meridians form the general eastern border and the western border of Saskatchewan. United States In the United States Public Land Survey System, a principal meridian is the principal north-south line used for survey control in a large region, and which divides townships between east and west. The meridian meets its corresponding baseline at the point of origin, or initial point, for the land survey. For example, the Mount Diablo Meridian, used for surveys in California and Nevada, runs north-south through the summit of Mount Diablo. Often, meridians are marked with roads, such as the Meridian Avenue in San Jose, California, Meridian Road in Vacaville, California, both on the Mount Diablo Meridian, Meridian Road in Wichita, Kansas on the Sixth Principal Meridian, and Meridian Avenue in several western Washington counties generally following the Willamette Meridian. Baseline Road or Base Line Street extends for about from Highland, California east of San Bernardino to La Verne, California where it meets Foothill Boulevard. See also Cardo Baseline (surveying) List of principal and guide meridians and base lines of the United States External links The Principal Meridian Project (US) History of the Rectangular Survey System Note: this is a large file, approximately 46MB. Searchable PDF prepared by the author, C. A. White. Resources page of the U.S. Department of the Interior, Bureau of Land Management Surveying Meridians (geography)
https://en.wikipedia.org/wiki/Bhabha%20scattering
In quantum electrodynamics, Bhabha scattering is the electron-positron scattering process: e+ e− → e+ e−. There are two leading-order Feynman diagrams contributing to this interaction: an annihilation process and a scattering process. Bhabha scattering is named after the Indian physicist Homi J. Bhabha. The Bhabha scattering rate is used as a luminosity monitor in electron-positron colliders. Differential cross section To leading order, the spin-averaged differential cross section for this process is dσ/dΩ = (α²/2s) [ u² (1/s + 1/t)² + (t/s)² + (s/t)² ], where s, t, and u are the Mandelstam variables, α is the fine-structure constant, and θ is the scattering angle. This cross section is calculated neglecting the electron mass relative to the collision energy and including only the contribution from photon exchange. This is a valid approximation at collision energies small compared to the mass scale of the Z boson, about 91 GeV; at higher energies the contribution from Z boson exchange also becomes important. Mandelstam variables In this article, the Mandelstam variables are defined by s = (k + p)² = (k' + p')² ≈ 2k·p ≈ 2k'·p', t = (k − k')² = (p − p')² ≈ −2k·k' ≈ −2p·p', and u = (k − p')² = (p − k')² ≈ −2k·p' ≈ −2p·k', where the approximations are for the high-energy (relativistic) limit. Deriving unpolarized cross section Matrix elements Both the scattering and annihilation diagrams contribute to the transition matrix element. By letting k and k' represent the four-momentum of the positron, while letting p and p' represent the four-momentum of the electron, and by using Feynman rules one can write down the matrix elements given by the scattering and annihilation diagrams, where γ^μ are the Gamma matrices, u and ū are the four-component spinors for fermions, while v and v̄ are the four-component spinors for anti-fermions (see Four spinors).
https://en.wikipedia.org/wiki/Specific%20speed
Specific speed, Ns, is used to characterize turbomachinery speed. Common commercial and industrial practices use dimensioned versions which are of equal utility. Specific speed is most commonly used in pump applications to define the suction specific speed —a quasi non-dimensional number that categorizes pump impellers as to their type and proportions. In Imperial units it is defined as the speed in revolutions per minute at which a geometrically similar impeller would operate if it were of such a size as to deliver one gallon per minute against one foot of hydraulic head. In metric units flow may be in l/s or m³/s and head in m, and care must be taken to state the units used. Performance is defined as the ratio of the actual performance figure of the pump or turbine to that of a reference pump or turbine, which yields a unitless figure of merit. The resulting figure would more descriptively be called the "ideal-reference-device-specific performance." This resulting unitless ratio may loosely be expressed as a "speed," only because the performance of the reference ideal pump is linearly dependent on its speed, so that the ratio of [device-performance to reference-device-performance] is also the increased speed at which the reference device would need to operate, in order to produce the performance, instead of its reference speed of "1 unit." Specific speed is an index used to predict desired pump or turbine performance; i.e., it predicts the general shape of a pump's impeller. It is this impeller's "shape" that predicts its flow and head characteristics so that the designer can then select a pump or turbine most appropriate for a particular application. Once the desired specific speed is known, basic dimensions of the unit's components can be easily calculated. Several mathematical definitions of specific speed (all of them actually ideal-device-specific) have been created for different devices and applications. Pump specific speed Low-specific speed ra
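As a worked illustration of the dimensioned US-customary pump specific speed described above, the sketch below evaluates Ns = N·sqrt(Q)/H^0.75 with N in rpm, Q in US gallons per minute and H in feet of head; the sample operating point is an arbitrary assumption, not data from the source.

```python
def pump_specific_speed(rpm, flow_gpm, head_ft):
    """Dimensioned US-customary pump specific speed: Ns = N * sqrt(Q) / H**0.75."""
    return rpm * flow_gpm**0.5 / head_ft**0.75

# Example: a pump running at 1750 rpm delivering 500 gpm against 100 ft of head.
ns = pump_specific_speed(1750, 500, 100)
print(round(ns))  # ~1237, a value commonly associated with radial-flow impellers
```

Because the definition is dimensioned, the same physical pump gives a different number in metric units, which is why the units in use must always be stated.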
https://en.wikipedia.org/wiki/De%20novo%20synthesis
In chemistry, de novo synthesis () refers to the synthesis of complex molecules from simple molecules such as sugars or amino acids, as opposed to recycling after partial degradation. For example, nucleotides are not needed in the diet as they can be constructed from small precursor molecules such as formate and aspartate. Methionine, on the other hand, is needed in the diet because while it can be degraded to and then regenerated from homocysteine, it cannot be synthesized de novo. Nucleotide De novo pathways of nucleotides do not use free bases: adenine (abbreviated as A), guanine (G), cytosine (C), thymine (T), or uracil (U). The purine ring is built up one atom or a few atoms at a time and attached to ribose throughout the process. Pyrimidine ring is synthesized as orotate and attached to ribose phosphate and later converted to common pyrimidine nucleotides. Cholesterol Cholesterol is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. In mammals cholesterol is either absorbed from dietary sources or is synthesized de novo. Up to 70-80% of de novo cholesterol synthesis occurs in the liver, and about 10% of de novo cholesterol synthesis occurs in the small intestine. Cancer cells require cholesterol for cell membranes, so cancer cells contain many enzymes for de novo cholesterol synthesis from acetyl-CoA. Fatty-acid (de novo lipogenesis) De novo lipogenesis (DNL) is the process by which carbohydrates (primarily, especially after a high-carbohydrate meal) from the circulation are converted into fatty acids, which can be further converted into triglycerides or other lipids. Acetate and some amino acids (notably leucine and isoleucine) can also be carbon sources for DNL. Normally, de novo lipogenesis occurs primarily in adipose tissue. But in conditions of obesity, insulin resistance, or type 2 diabetes de novo lipogenesis is reduced in adipos
https://en.wikipedia.org/wiki/System%20of%20systems%20engineering
System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges. Overview System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense related problems such as architectural design of problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, industry 4.0 and many other System of Systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations in multiple levels and domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements. System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable the decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, effective SoSE methodology should prepare decision-makers to design informed architectural solutions for System-of-Systems problems. Due to varied methodology and domains of applications in existing literature, there does not exist a single unified consensus for processes involved in System-of-Sys
https://en.wikipedia.org/wiki/Laboratory%20automation
Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible. The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories. The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications. History At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the s
https://en.wikipedia.org/wiki/List%20of%20causes%20of%20death%20by%20rate
The following is a list of the causes of human deaths worldwide for different years arranged by their associated mortality rates. In 2002, there were about 57 million deaths. In 2005, according to the World Health Organization (WHO) using the International Classification of Diseases (ICD), about 58 million people died. In 2010, according to the Institute for Health Metrics and Evaluation, 52.8 million people died. In 2016, the WHO recorded 56.7 million deaths with the leading cause of death as cardiovascular disease causing more than 17 million deaths (about 31% of the total) as shown in the chart to the side. Some causes listed include deaths also included in more specific subordinate causes, and some causes are omitted, so the percentages may only sum approximately to 100%. The causes listed are relatively immediate medical causes, but the ultimate cause of death might be described differently. For example, tobacco smoking often causes lung disease or cancer, and alcohol use disorder can cause liver failure or a motor vehicle accident. For statistics on preventable ultimate causes, see preventable causes of death. Besides frequency, other measures to compare, consider and monitor trends of causes of deaths include disability-adjusted life year (DALY) and years of potential life lost (YPLL). By frequency Age standardized death rate, per 100,000, by cause, in 2017, and percentage change 2007–2017. Overview table This first table gives a convenient overview of the general categories and broad causes. The leading cause is cardiovascular disease at 31.59% of all deaths. Developed vs. developing economies Top causes of death, according to the World Health Organization report for the calendar year 2001: Detailed table This table gives a more detailed and specific breakdown of the causes for the year 2017: By lost years Underlying causes Causes of death can be structured into immediate causes of death or primary causes of death, conditions leading to cause of
https://en.wikipedia.org/wiki/Global%20Environment%20for%20Network%20Innovations
The Global Environment for Network Innovations (GENI) is a facility concept being explored by the United States computing community with support from the National Science Foundation. The goal of GENI is to enhance experimental research in computer networking and distributed systems, and to accelerate the transition of this research into products and services that will improve the economic competitiveness of the United States. GENI planning efforts are organized around several focus areas, including facility architecture, the backbone network, distributed services, wireless/mobile/sensor subnetworks, and research coordination amongst these. See also Internet2 Future Internet AKARI Project in Japan
https://en.wikipedia.org/wiki/Turboexpander
A turboexpander, also referred to as a turbo-expander or an expansion turbine, is a centrifugal or axial-flow turbine, through which a high-pressure gas is expanded to produce work that is often used to drive a compressor or generator. Because work is extracted from the expanding high-pressure gas, the expansion is approximated by an isentropic process (i.e., a constant-entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature, −150 °C or less, depending upon the operating pressure and gas properties. Partial liquefaction of the expanded gas is not uncommon. Turboexpanders are widely used as sources of refrigeration in industrial processes such as the extraction of ethane and natural-gas liquids (NGLs) from natural gas, the liquefaction of gases (such as oxygen, nitrogen, helium, argon and krypton) and other low-temperature processes. Turboexpanders currently in operation range in size from about 750 W to about 7.5 MW (1 hp to about 10,000 hp). Applications Although turboexpanders are commonly used in low-temperature processes, they are used in many other applications. This section discusses one of the low-temperature processes, as well as some of the other applications. Extracting hydrocarbon liquids from natural gas Raw natural gas consists primarily of methane (CH4), the shortest and lightest hydrocarbon molecule, along with various amounts of heavier hydrocarbon gases such as ethane (C2H6), propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10), pentanes and even higher-molecular-mass hydrocarbons. The raw gas also contains various amounts of acid gases such as carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH). When processed into finished by-products (see Natural-gas processing), these heavier hydrocarbons are collectively referred to as NGL (natural-gas liquids). The extraction of the NGL often involves a turboexpander and a low-temperature d
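To make the very low exhaust temperature mentioned above concrete, the sketch below evaluates the ideal-gas isentropic relation T_out = T_in·(P_out/P_in)^((γ−1)/γ) and scales the ideal temperature drop by an assumed isentropic efficiency; the inlet conditions, pressure ratio and efficiency are illustrative assumptions, not data from the source.

```python
def expander_outlet_temperature(t_in_k, p_in, p_out, gamma=1.4, isentropic_eff=0.85):
    """Estimate turboexpander outlet temperature for an ideal gas.

    t_in_k: inlet temperature [K]; p_in, p_out: pressures in any consistent unit;
    gamma: heat-capacity ratio cp/cv; isentropic_eff: fraction of the ideal
    (constant-entropy) temperature drop achieved by a real machine.
    """
    t_out_ideal = t_in_k * (p_out / p_in) ** ((gamma - 1.0) / gamma)
    return t_in_k - isentropic_eff * (t_in_k - t_out_ideal)

# Example: gas at 300 K expanded from 50 bar to 5 bar.
t_out = expander_outlet_temperature(300.0, 50.0, 5.0)
print(f"{t_out:.0f} K ({t_out - 273.15:.0f} degC)")  # roughly 177 K, about -96 degC
```

Even with a modest pressure ratio and a realistic efficiency, the exhaust ends up far below ambient temperature, which is exactly why turboexpanders are used as refrigeration sources in NGL extraction and gas liquefaction.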
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%201
Mothers against decapentaplegic homolog 1 also known as SMAD family member 1 or SMAD1 is a protein that in humans is encoded by the SMAD1 gene. Nomenclature SMAD1 belongs to the SMAD family of proteins, which are similar to the gene products of the Drosophila gene 'mothers against decapentaplegic' (Mad) and the C. elegans gene Sma. The name is a combination of the two, and is based on a tradition of such unusual naming within the gene research community. It was found that a mutation in the Drosophila gene Mad in the mother repressed the gene decapentaplegic in the embryo. Mad mutations can be placed in an allelic series based on the relative severity of the maternal effect enhancement of weak dpp alleles, thus explaining the name Mothers against dpp. Function SMAD proteins are signal transducers and transcriptional modulators that mediate multiple signaling pathways. This protein mediates the signals of the bone morphogenetic proteins (BMPs), which are involved in a range of biological activities including cell growth, apoptosis, morphogenesis, development and immune responses. In response to BMP ligands, this protein can be phosphorylated and activated by the BMP receptor kinase. The phosphorylated form of this protein forms a complex with SMAD4, which is important for its function in transcriptional regulation. This protein is a target for SMAD-specific E3 ubiquitin ligases, such as SMURF1 and SMURF2, and undergoes ubiquitination and proteasome-mediated degradation. Alternatively spliced transcript variants encoding the same protein have been observed. SMAD1 is a receptor-regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase.
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%202
Mothers against decapentaplegic homolog 2 also known as SMAD family member 2 or SMAD2 is a protein that in humans is encoded by the SMAD2 gene. MAD homolog 2 belongs to the SMAD, a family of proteins similar to the gene products of the Drosophila gene 'mothers against decapentaplegic' (Mad) and the C. elegans gene Sma. SMAD proteins are signal transducers and transcriptional modulators that mediate multiple signaling pathways. Function SMAD2 mediates the signal of the transforming growth factor (TGF)-beta, and thus regulates multiple cellular processes, such as cell proliferation, apoptosis, and differentiation. This protein is recruited to the TGF-beta receptors through its interaction with the SMAD anchor for receptor activation (SARA) protein. In response to TGF-beta signal, this protein is phosphorylated by the TGF-beta receptors. The phosphorylation induces the dissociation of this protein with SARA and the association with the family member SMAD4. The association with SMAD4 is important for the translocation of this protein into the cell nucleus, where it binds to target promoters and forms a transcription repressor complex with other cofactors. This protein can also be phosphorylated by activin type 1 receptor kinase, and mediates the signal from the activin. Alternatively spliced transcript variants encoding the same protein have been observed. Like other Smads, Smad2 plays a role in the transmission of extracellular signals from ligands of the Transforming Growth Factor beta (TGFβ) superfamily of growth factors into the cell nucleus. Binding of a subgroup of TGFβ superfamily ligands to extracellular receptors triggers phosphorylation of Smad2 at a Serine-Serine-Methionine-Serine (SSMS) motif at its extreme C-terminus. Phosphorylated Smad2 is then able to form a complex with Smad4. These complexes accumulate in the cell nucleus, where they are directly participating in the regulation of gene expression. Nomenclature The SMAD proteins are homologs of bot
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%203
Mothers against decapentaplegic homolog 3 also known as SMAD family member 3 or SMAD3 is a protein that in humans is encoded by the SMAD3 gene. SMAD3 is a member of the SMAD family of proteins. It acts as a mediator of the signals initiated by the transforming growth factor beta (TGF-β) superfamily of cytokines, which regulate cell proliferation, differentiation and death. Based on its essential role in TGF beta signaling pathway, SMAD3 has been related with tumor growth in cancer development. Gene The human SMAD3 gene is located on chromosome 15 on the cytogenic band at 15q22.33. The gene is composed of 9 exons over 129,339 base pairs. It is one of several human homologues of a gene that was originally discovered in the fruit fly Drosophila melanogaster. The expression of SMAD3 has been related to the mitogen-activated protein kinase (MAPK/ERK pathway), particularly to the activity of mitogen-activated protein kinase kinase-1 (MEK1). Studies have demonstrated that inhibition of MEK1 activity also inhibits SMAD3 expression in epithelial cells and smooth muscle cells, two cell types highly responsive to TGF-β1. Protein SMAD3 is a polypeptide with a molecular weight of 48,080 Da. It belongs to the SMAD family of proteins. SMAD3 is recruited by SARA (SMAD Anchor for Receptor Activation) to the membrane, where the TGF-β receptor is located. The receptors for TGF-β, (including nodal, activin, myostatin and other family members) are membrane serine/threonine kinases that preferentially phosphorylate and activate SMAD2 and SMAD3. Once SMAD3 is phosphorylated at the C-terminus, it dissociates from SARA and forms a heterodimeric complex with SMAD4, which is required for the transcriptional regulation of many target genes. The complex of two SMAD3 (or of two SMAD2) and one SMAD4 binds directly to DNA though interactions of the MH1 domain. These complexes are recruited to sites throughout the genome by cell lineage-defining transcription factors (LDTFs) that determine
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%204
SMAD4, also called SMAD family member 4, Mothers against decapentaplegic homolog 4, or DPC4 (Deleted in Pancreatic Cancer-4) is a highly conserved protein present in all metazoans. It belongs to the SMAD family of transcription factor proteins, which act as mediators of TGF-β signal transduction. The TGFβ family of cytokines regulates critical processes during the lifecycle of metazoans, with important roles during embryo development, tissue homeostasis, regeneration, and immune regulation. SMAD4 belongs to the co-SMAD group (common mediator SMAD), the second class of the SMAD family. SMAD4 is the only known co-SMAD in most metazoans. It also belongs to the Dwarfin family of proteins that modulate members of the TGFβ protein superfamily, a family of proteins that all play a role in the regulation of cellular responses. Mammalian SMAD4 is a homolog of the Drosophila protein "Mothers against decapentaplegic" named Medea. SMAD4 interacts with R-Smads, such as SMAD2, SMAD3, SMAD1, SMAD5 and SMAD8 (also called SMAD9), to form heterotrimeric complexes. Transcriptional coregulators, such as WWTR1 (TAZ), interact with SMADs to promote their function. Once in the nucleus, the complex of SMAD4 and two R-SMADs binds to DNA and regulates the expression of different genes depending on the cellular context. Intracellular reactions involving SMAD4 are triggered by the binding, on the surface of the cells, of growth factors from the TGFβ family. The sequence of intracellular reactions involving SMADs is called the SMAD pathway or the transforming growth factor beta (TGF-β) pathway since the sequence starts with the recognition of TGF-β by cells. Gene In mammals, SMAD4 is coded by a gene located on chromosome 18. In humans, the SMAD4 gene contains 54,829 base pairs and is located from pair n° 51,030,212 to pair 51,085,041 in region 21.1 of chromosome 18. Protein SMAD4 is a 552 amino-acid polypeptide with a molecular weight of 60,439 Da. SMAD4 has two functional domains
https://en.wikipedia.org/wiki/Estimation%20lemma
In mathematics the estimation lemma, also known as the ML inequality, gives an upper bound for a contour integral. If f is a complex-valued, continuous function on the contour Γ and if its absolute value |f(z)| is bounded by a constant M for all z on Γ, then |∫_Γ f(z) dz| ≤ M ℓ(Γ), where ℓ(Γ) is the arc length of Γ. In particular, we may take the maximum M = sup over z in Γ of |f(z)| as upper bound. Intuitively, the lemma is very simple to understand. If a contour is thought of as many smaller contour segments connected together, then there will be a maximum |f(z)| for each segment. Out of all the maximum |f(z)|s for the segments, there will be an overall largest one. Hence, if the overall largest |f(z)| is summed over the entire path then the integral of f(z) over the path must be less than or equal to it. Formally, the inequality can be shown to hold using the definition of contour integral, the absolute value inequality for integrals and the formula for the length of a curve as follows: |∫_Γ f(z) dz| = |∫_a^b f(γ(t)) γ′(t) dt| ≤ ∫_a^b |f(γ(t))| |γ′(t)| dt ≤ M ∫_a^b |γ′(t)| dt = M ℓ(Γ). The estimation lemma is most commonly used as part of the methods of contour integration with the intent to show that the integral over part of a contour goes to zero as the radius R goes to infinity. An example of such a case is shown below. Example Problem. Find an upper bound for |∫_Γ dz/(z² + 1)²|, where Γ is the upper half-circle with radius R > 1 traversed once in the counterclockwise direction. Solution. First observe that the length of the path of integration is half the circumference of a circle with radius R, hence ℓ(Γ) = πR. Next we seek an upper bound M for the integrand when |z| = R. By the triangle inequality we see that R² = |z|² = |z² + 1 − 1| ≤ |z² + 1| + 1, therefore |z² + 1| ≥ R² − 1 > 0 because |z| = R > 1 on Γ. Hence 1/|z² + 1|² ≤ 1/(R² − 1)². Therefore, we apply the estimation lemma with M = 1/(R² − 1)². The resulting bound is |∫_Γ dz/(z² + 1)²| ≤ πR/(R² − 1)². See also Jordan's lemma
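As a quick numerical sanity check of the worked example above (an addition, not part of the source), the sketch below integrates 1/(z² + 1)² over the upper half-circle of radius R by parametrizing z = R·exp(iθ), and compares the modulus of the result with the bound πR/(R² − 1)².

```python
import numpy as np

def half_circle_integral(R, n=20001):
    """Numerically integrate f(z) = 1/(z**2 + 1)**2 over the upper half-circle of
    radius R (counterclockwise), using z = R*exp(i*theta), dz = i*R*exp(i*theta) dtheta."""
    theta = np.linspace(0.0, np.pi, n)
    z = R * np.exp(1j * theta)
    integrand = (1.0 / (z**2 + 1.0)**2) * (1j * R * np.exp(1j * theta))
    # Trapezoid rule written out explicitly to stay version-independent.
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

R = 5.0
value = half_circle_integral(R)
bound = np.pi * R / (R**2 - 1.0)**2
print(abs(value), bound)  # the modulus of the integral stays below the bound
```

Increasing R makes the bound πR/(R² − 1)² shrink like 1/R³, which is exactly the behaviour exploited when closing contours at infinity.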
https://en.wikipedia.org/wiki/First%20Step%20to%20Nobel%20Prize%20in%20Physics
The First Step to Nobel Prize in Physics is an annual international competition in research projects in physics. It originated and is based in Poland. Participants All the secondary high school students regardless of the country, type of the school, sex, nationality etc. are eligible for the competition. The only conditions are that the school cannot be considered as a university college and the age of the participants should not exceed 20 years on March 31 (every year March 31 is the deadline for submitting the competition papers). There are no restrictions concerning the subject matter of the papers, their level, methods applied etc. All these are left to the participants' choice. The papers, however, have to have a research character and deal with physics topics or topics directly related to physics. The papers are evaluated by the Evaluating Committee, which is nominated by the Organizing Committee. It was recently won by David Rosengarten. History In the first two competitions, only Polish physicists participated in the Evaluation Committee. In the third competition, one non-Polish judge took part in evaluation of the papers. In the fourth competition, the number of physicists from other countries was 10, with 14 being present in the fifth competition. Plans are in place to increase the number of physicists involved from other countries in future competitions. An International Advisory Committee (IAC) was also established. At present, it consists of 25 physicists from different countries. Competition and evaluation The materials on the competition are disseminated to all the countries via diplomatic channels. The competition is also advertised in different physics magazines for pupils and teachers. (Every year about 30 articles on the First Step are published in different countries). Also, different private channels are used. In the first eight competitions the pupils from 67 countries participated. The criteria used when evaluating the papers submitted a
https://en.wikipedia.org/wiki/Total%20variation%20diminishing
In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of this method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten. Model equation In systems described by partial differential equations, such as the hyperbolic advection equation ∂u/∂t + a ∂u/∂x = 0, the total variation (TV) is given by TV(u(·, t)) = ∫ |∂u/∂x| dx, and the total variation for the discrete case is TV(uⁿ) = Σ_j |u_{j+1}ⁿ − u_jⁿ|, where u_jⁿ = u(x_j, tⁿ). A numerical method is said to be total variation diminishing (TVD) if TV(u^{n+1}) ≤ TV(uⁿ). Characteristics A numerical scheme is said to be monotonicity preserving if the following property is maintained: if uⁿ is monotonically increasing (or decreasing) in space, then so is u^{n+1}. Harten (1983) proved the following properties for a numerical scheme: a monotone scheme is TVD, and a TVD scheme is monotonicity preserving. Application in CFD In Computational Fluid Dynamics, a TVD scheme is employed to capture sharper shock predictions without any misleading oscillations when the variation of the field variable φ is discontinuous. To capture the variation, fine grids (Δx very small) are needed and the computation becomes heavy and therefore uneconomic. The use of coarse grids with the central difference scheme, upwind scheme, hybrid difference scheme, or power law scheme gives false shock predictions. A TVD scheme enables sharper shock predictions on coarse grids, saving computation time, and as the scheme preserves monotonicity there are no spurious oscillations in the solution. Discretisation Consider the steady state one-dimensional convection-diffusion equation ∇·(ρuφ) = ∇·(Γ ∇φ) + S_φ, where ρ is the density, u is the velocity vector, φ is the property being transported, Γ is the coefficient of diffusion and S_φ is the source term responsible for generation of the property φ. Making the flux balance of this property about a control volume we get ∫_A n·(ρuφ) dA = ∫_A n·(Γ ∇φ) dA + ∫_CV S_φ dV. Here n is the normal to the surface of the control volume. Ignoring the source term, the
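The sketch below (an illustration, not taken from the source) checks the TVD property numerically for linear advection: it computes the discrete total variation TV(uⁿ) = Σ_j |u_{j+1} − u_j| and verifies that the classical first-order upwind update, which is known to be monotone for Courant numbers between 0 and 1, never increases it, even for a discontinuous initial profile.

```python
import numpy as np

def total_variation(u):
    """Discrete total variation TV(u) = sum_j |u_{j+1} - u_j| on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

def upwind_step(u, c):
    """One step of first-order upwind for u_t + a u_x = 0, Courant number c = a*dt/dx, 0 < c <= 1."""
    return u - c * (u - np.roll(u, 1))

# Discontinuous (step) initial condition on a periodic grid.
u = np.where(np.arange(200) < 100, 1.0, 0.0)
c = 0.5
tv_history = [total_variation(u)]
for _ in range(100):
    u = upwind_step(u, c)
    tv_history.append(total_variation(u))

# TV never increases from one step to the next, as required of a TVD scheme.
print(all(tv_history[k + 1] <= tv_history[k] + 1e-12 for k in range(len(tv_history) - 1)))
```

Running the same experiment with a linear second-order scheme such as Lax-Wendroff would show the total variation growing as oscillations appear around the step, which is the behaviour TVD schemes are designed to rule out.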
https://en.wikipedia.org/wiki/Godunov%27s%20theorem
In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high-resolution schemes for the numerical solution of partial differential equations. The theorem states that: Linear numerical schemes for solving partial differential equations (PDEs), having the property of not generating new extrema (monotone scheme), can be at most first-order accurate. Professor Sergei Godunov originally proved the theorem as a Ph.D. student at Moscow State University. It is his most influential work in the area of applied and numerical mathematics and has had a major impact on science and engineering, particularly in the development of methods used in computational fluid dynamics (CFD) and other computational fields. One of his major contributions was to prove the theorem (Godunov, 1954; Godunov, 1959) that bears his name. The theorem We generally follow Wesseling (2001). Aside Assume a continuum problem described by a PDE is to be computed using a numerical scheme based upon a uniform computational grid and a one-step, constant step-size, M grid point, integration algorithm, either implicit or explicit. Then if x_j = j Δx and t^n = n Δt, such a scheme can be described by Σ_{m=1}^{M} β_m φ_{j+m}^{n+1} = Σ_{m=1}^{M} α_m φ_{j+m}^{n}. (1) In other words, the solution φ_j^{n+1} at time t^{n+1} and location x_j is a linear function of the solution at the previous time step t^n. We assume that (1) determines φ_j^{n+1} uniquely. Now, since the above equation represents a linear relationship between φ^{n+1} and φ^{n} we can perform a linear transformation to obtain the following equivalent form, φ_j^{n+1} = Σ_m γ_m φ_{j+m}^{n}. (2) Theorem 1: Monotonicity preserving The above scheme of equation (2) is monotonicity preserving if and only if γ_m ≥ 0 for all m. (3) Proof - Godunov (1959) Case 1: (sufficient condition) Assume (3) applies and that φ^n is monotonically increasing with j. Then, because φ_j^n ≤ φ_{j+1}^n it therefore follows that φ_j^{n+1} ≤ φ_{j+1}^{n+1} because φ_{j+1}^{n+1} − φ_j^{n+1} = Σ_m γ_m (φ_{j+m+1}^{n} − φ_{j+m}^{n}) ≥ 0. This means that monotonicity is preserved for this case. Case 2: (necessary condition) We prove the ne
https://en.wikipedia.org/wiki/Local%20time%20%28mathematics%29
In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, if the integrand is not sufficiently smooth. It is also studied in statistical mechanics in the context of random fields. Formal definition For a continuous real-valued semimartingale (X_s)_{s≥0}, the local time of X at the point x is the stochastic process (L^x_t)_{t≥0}, which is informally defined by L^x_t = ∫_0^t δ(x − X_s) d[X]_s, where δ is the Dirac delta function and [X] is the quadratic variation. It is a notion invented by Paul Lévy. The basic idea is that L^x_t is an (appropriately rescaled and time-parametrized) measure of how much time X_s has spent at x up to time t. More rigorously, it may be written as the almost sure limit L^x_t = lim_{ε→0} (1/(2ε)) ∫_0^t 1{|X_s − x| < ε} d[X]_s, which may be shown to always exist. Note that in the special case of Brownian motion (or more generally a real-valued diffusion of the form dX = b(t, X) dt + dB where B is a Brownian motion), the term d[X]_s simply reduces to ds, which explains why it is called the local time of X at x. For a discrete state-space process (X_s)_{s≥0}, the local time can be expressed more simply as L^x_t = ∫_0^t 1{X_s = x} ds. Tanaka's formula Tanaka's formula also provides a definition of local time for an arbitrary continuous semimartingale X on the real line: L^x_t = |X_t − x| − |X_0 − x| − ∫_0^t sgn(X_s − x) dX_s. A more general form was proven independently by Meyer and Wang; the formula extends Itô's lemma for twice differentiable functions to a more general class of functions. If F is absolutely continuous with derivative F′ which is of bounded variation, then F(X_t) = F(X_0) + ∫_0^t F′_−(X_s) dX_s + (1/2) ∫ L^x_t dF′(x), where F′_− is the left derivative. If X is a Brownian motion, then for any α < 1/2 the field of local times L = (L^x_t) has a modification which is a.s. Hölder continuous in x with exponent α, uniformly for bounded x and t. In general, L has a modification that is a.s. continuous in t and càdlàg in x. Tanaka's formula provides the explicit Doob–Meyer decomposition for the one-dimensional reflecting Brownia
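To illustrate the almost sure limit characterization given above, the sketch below (a Monte Carlo illustration under simple assumptions, not part of the source) simulates a Brownian path and approximates its local time at 0 up to time t as (1/(2ε)) times the time spent in the band |B_s| < ε, for a small but finite ε.

```python
import numpy as np

def approximate_local_time(level=0.0, t=1.0, eps=0.01, n_steps=1_000_000, seed=0):
    """Monte Carlo approximation of Brownian local time at `level` up to time t,
    using L ~ (1/(2*eps)) * Lebesgue measure of { s <= t : |B_s - level| < eps };
    for Brownian motion d[B]_s = ds, so the occupation time is a plain time measure."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), n_steps)
    path = np.concatenate(([0.0], np.cumsum(increments)))
    occupation_time = dt * np.count_nonzero(np.abs(path - level) < eps)
    return occupation_time / (2.0 * eps)

print(approximate_local_time())  # a finite, path-dependent value; changes with the seed
```

Repeating the experiment with smaller ε (and correspondingly more steps) gives estimates that stabilize, in line with the claim that the limit exists almost surely.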
https://en.wikipedia.org/wiki/Flux%20limiter
Flux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations (PDEs). They are used in high resolution schemes, such as the MUSCL scheme, to avoid the spurious oscillations (wiggles) that would otherwise occur with high order spatial discretization schemes due to shocks, discontinuities or sharp changes in the solution domain. Use of flux limiters, together with an appropriate high resolution scheme, makes the solutions total variation diminishing (TVD). Note that flux limiters are also referred to as slope limiters because they both have the same mathematical form, and both have the effect of limiting the solution gradient near shocks or discontinuities. In general, the term flux limiter is used when the limiter acts on system fluxes, and slope limiter is used when the limiter acts on system states (like pressure, velocity etc.). How they work The main idea behind the construction of flux limiter schemes is to limit the spatial derivatives to realistic values – for scientific and engineering problems this usually means physically realisable and meaningful values. They are used in high resolution schemes for solving problems described by PDEs and only come into operation when sharp wave fronts are present. For smoothly changing waves, the flux limiters do not operate and the spatial derivatives can be represented by higher order approximations without introducing spurious oscillations. Consider the 1D semi-discrete scheme below, du_i/dt + (1/Δx)[F(u_{i+1/2}) − F(u_{i−1/2})] = 0, where F(u_{i+1/2}) and F(u_{i−1/2}) represent edge fluxes for the i-th cell. If these edge fluxes can be represented by low and high resolution schemes, then a flux limiter can switch between these schemes depending upon the gradients close to the particular cell, as follows, F(u_{i+1/2}) = f^{low}_{i+1/2} − φ(r_i) (f^{low}_{i+1/2} − f^{high}_{i+1/2}), where f^{low} is the low resolution flux, f^{high} is the high resolution flux, φ(r) is the flux limiter function, and r represents the ratio of successive gradients on the solu
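A few widely used limiter functions φ(r) of the ratio of successive gradients r can be written as the sketch below; it is an illustrative collection using the standard minmod, van Leer and superbee forms, and the Python function names and the division guard are mine, not from the source.

```python
def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r))."""
    return max(0.0, min(1.0, r))

def van_leer(r):
    """Van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + abs(r)) / (1.0 + abs(r))

def superbee(r):
    """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def ratio_of_gradients(u_prev, u_curr, u_next, eps=1e-12):
    """r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i), guarded against division by zero."""
    return (u_curr - u_prev) / (u_next - u_curr + eps)

# In a smooth, monotone region r is near 1 and the limiters return about 1
# (full high-resolution flux); near an extremum r <= 0 and they return 0
# (fall back to the low-resolution flux).
for r in (-1.0, 0.5, 1.0, 3.0):
    print(r, minmod(r), van_leer(r), superbee(r))
```

All three functions satisfy φ(r) = 0 for r ≤ 0 and φ(1) = 1, the basic requirements for a limited scheme to remain TVD while staying second-order accurate in smooth regions.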
https://en.wikipedia.org/wiki/Orbital%20septum
In anatomy, the orbital septum (palpebral fascia) is a membranous sheet that acts as the anterior (frontal) boundary of the orbit. It extends from the orbital rims to the eyelids. It forms the fibrous portion of the eyelids. Structure In the upper eyelid, the orbital septum blends with the tendon of the levator palpebrae superioris, and in the lower eyelid with the tarsal plate. When the eyes are closed, the whole orbital opening is covered by the septum and tarsi. Medially it is thin, and, becoming separated from the medial palpebral ligament, attaches to the lacrimal bone at its posterior crest. The medial ligament and its much weaker lateral counterpart, attached to the septum and orbit, keep the lids stable as the eye moves. The septum is perforated by the vessels and nerves which pass from the orbital cavity to the face and scalp. Clinical significance Orbital septum supports the orbital contents located posterior to it, especially orbital fat. The septum can be weakened by trauma or due to hereditary diseases. Anatomical structures important in the blepharoplasty operation (operation to strengthen the orbital septum) are located posterior to the orbital septum. The orbital septum is an important structure that separates anterior and posterior extent of the orbit. Orbital septum acts as a physical barrier that prevents the infection of the anterior part of the eye spreading posteriorly. For example, preseptal cellulitis mainly infects the eyelids, anterior to the orbital septum. Meanwhile, orbital cellulitis is located posterior the orbital septum, due to infections spreading from the ethmoidal sinuses. The porous lamina papyracea separating the orbit from the ethmoidal sinus causes infection to spread between the orbit and ethmoidal sinuses. Infection of the ethmoidal sinuses can spread to the brain, causing meningitis and cerebral abscess. Orbital cellulitis can also spread to the anterior orbit, by lifting the loosely attached periosteum, causing sub
https://en.wikipedia.org/wiki/List%20of%20aging%20processes
Accumulation of lipofuscin Aging brain Calorie restriction Cross-link Crosslinking of DNA Degenerative disease DNA damage theory of aging Exposure to ultraviolet light Free-radical damage Glycation Life expectancy Longevity Maximum life span Senescence Stem cell theory of aging See also Index of topics related to life extension Aging processes
https://en.wikipedia.org/wiki/Jacopo%20Berengario%20da%20Carpi
Jacopo Berengario da Carpi (also known as Jacobus Berengarius Carpensis, Jacopo Barigazzi, Giacomo Berengario da Carpi or simply Carpus; c. 1460 – c. 1530) was an Italian physician. His book "Isagoge breves" published in 1522 made him the most important anatomist before Andreas Vesalius. Early years Jacopo Berengario da Carpi was the son of a surgeon. As a youth he assisted his father in surgical work, and his surgical skills became the basis of his later work as a physician. In his late teens, through the association of his family with Lionello Pio, Berengario came under the tutelage of the great humanist printer, Aldo Manuzio who came to Carpi to tutor Alberto III Pio, Prince of Carpi and apparently included Berengario in his instruction. In the 1480s, Berengario attended university in Bologna receiving his degree in medicine in 1489. Fame through mercury cure for syphilis After obtaining his degree, Berengario returned to his father and assisted him with his surgery practice for a short time, but the influx of the "French disease" in 1494 provided Berengario with a chance to advance his career as a physician. Traveling to Rome, he treated several patients who suffered from the ailment. Judging by an admittedly one-sided account, his work in Rome was a mix of financial success and medical failure. As quoted in Lind's introduction to the Isagoge, Benvenuto Cellini provided a scathing account of Berengario's practice of treating syphilis with doses of mercury while charging "hundreds of crowns" paid in advance. Berengario apparently developed enough of a reputation that the Pope invited him into his service, but he turned down the offer and left Rome shortly thereafter. Anatomy in Bologna Shortly after his work in Rome, he was appointed Maestro nello Studio at Bologna, a university whose faculty were only rarely foreign and then only when they were scholars of considerable reputations. Berengario’s reputation and personal connections with powerful patrons we
https://en.wikipedia.org/wiki/High-resolution%20scheme
High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties: Second- or higher-order spatial accuracy is obtained in smooth parts of the solution. Solutions are free from spurious oscillations or wiggles. High accuracy is obtained around shocks and discontinuities. The number of mesh points containing the wave is small compared with a first-order scheme with similar accuracy. General methods are often not adequate for accurate resolution of steep gradient phenomena; they usually introduce non-physical effects such as smearing of the solution or spurious oscillations. Since publication of Godunov's order barrier theorem, which proved that linear methods cannot provide non-oscillatory solutions higher than first order, these difficulties have attracted much attention and a number of techniques have been developed that largely overcome these problems. To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a Total Variation Diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (Monotone Upstream-Centered Schemes for Conservation Laws), a flux/slope limiter method, and the WENO (Weighted Essentially Non-Oscillatory) method. Both methods are usually referred to as high resolution schemes. MUSCL methods are generally second-order accurate in smooth regions (although they can be formulated for higher orders) and provide good resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient. For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities. Most applications tend to use a fifth o
https://en.wikipedia.org/wiki/Stopped-flow
Stopped-flow is an experimental technique for studying chemical reactions with a half time of the order of 1 ms, introduced by Britton Chance and extended by Quentin Gibson (Other techniques, such as the temperature-jump method, are available for much faster processes.) Description of the method Summary Stopped-flow spectrometry allows chemical kinetics of fast reactions (with half times of the order of milliseconds) to be studied in solution. It was first used primarily to study enzyme-catalyzed reactions. Then the stopped-flow rapidly found its place in almost all biochemistry, biophysics, and chemistry laboratories with a need to follow chemical reactions in the millisecond time scale. In its simplest form, a stopped-flow mixes two solutions. Small volumes of solutions are rapidly and continuously driven into a high-efficiency mixer. This mixing process then initiates an extremely fast reaction. The newly mixed solution travels to the observation cell and pushes out the contents of the cell (the solution remaining from the previous experiment or from necessary washing steps). The time required for this solution to pass from the mixing point to the observation point is known as dead time. The minimum injection volume will depend on the volume of the mixing cell. Once enough solution has been injected to completely remove the previous solution, the instrument reaches a stationary state and the flow can be stopped. Depending on the syringe drive technology, the flow stop is achieved by using a stop valve called the hard-stop or by using a stop syringe. The stopped-flow also sends a ‘start signal’ to the detector called the trigger so the reaction can be observed. The timing of the trigger is usually software controlled so the user can trigger at the same time the flow stops or a few milliseconds before the stop to check the stationary state has been reached. So this is a very economical technique. Reactant syringes Two syringes are filled with solutions that
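Since the dead time defined above is simply the transit time from the mixing point to the observation point, a rough estimate under the usual plug-flow assumption is the swept volume divided by the total flow rate; the sketch below uses illustrative numbers of my own, not values from the source.

```python
def dead_time_ms(dead_volume_ul, total_flow_ml_per_s):
    """Dead time [ms] = volume between mixer and observation cell / total flow rate."""
    dead_volume_ml = dead_volume_ul / 1000.0
    return 1000.0 * dead_volume_ml / total_flow_ml_per_s

# Example: 10 microlitres between mixer and cell, two syringes delivering 4 mL/s each.
print(dead_time_ms(dead_volume_ul=10.0, total_flow_ml_per_s=8.0))  # 1.25 ms
```

Keeping the mixer-to-cell volume small and the drive flow fast is what pushes the dead time down toward the millisecond half-times the technique is designed to resolve.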
https://en.wikipedia.org/wiki/Block%20and%20bleed%20manifold
A Block and bleed manifold is a hydraulic manifold that combines one or more block/isolate valves, usually ball valves, and one or more bleed/vent valves, usually ball or needle valves, into one component for interface with other components (pressure measurement transmitters, gauges, switches, etc.) of a hydraulic (fluid) system. The purpose of the block and bleed manifold is to isolate or block the flow of fluid in the system so the fluid from upstream of the manifold does not reach other components of the system that are downstream. Then they bleed off or vent the remaining fluid from the system on the downstream side of the manifold. For example, a block and bleed manifold would be used to stop the flow of fluids to some component, then vent the fluid from that component’s side of the manifold, in order to effect some kind of work (maintenance/repair/replacement) on that component. Types of valves Block and Bleed A block and bleed manifold with one block valve and one bleed valve is also known as an isolation valve or block and bleed valve; a block and bleed manifold with multiple valves is also known as an isolation manifold. This valve is used in combustible gas trains in many industrial applications. Block and bleed needle valves are used in hydraulic and pneumatic systems because the needle valve allows for precise flow regulation when there is low flow in a non-hazardous environment. Double Block and Bleed (DBB Valves) These valves replace existing traditional techniques employed by pipeline engineers to generate a double block and bleed configuration in the pipeline. Two block valves and a bleed valve are as a unit, or manifold, to be installed for positive isolation. Used for critical process service, DBB valves are for high pressure systems or toxic/hazardous fluid processes. Applications that use DBB valves include instrument drain, chemical injection connection, chemical seal isolation, and gauge isolation. DBB valves do the work of three separa
https://en.wikipedia.org/wiki/Gonadal%20dysgenesis
Gonadal dysgenesis is classified as any congenital developmental disorder of the reproductive system in humans. It is atypical development of gonads in an embryo. One type of gonadal dysgenesis is the development of functionless, fibrous tissue, termed streak gonads, instead of reproductive tissue. Streak gonads are a form of aplasia, resulting in hormonal failure that manifests as sexual infantilism and infertility, with no initiation of puberty and secondary sex characteristics. Gonadal development is a process that is primarily controlled genetically by the chromosomal sex (XX or XY), which directs the formation of the gonad (ovary or testicle). Differentiation of the gonads requires a tightly regulated cascade of genetic, molecular and morphogenic events. At the formation of the developed gonad, steroid production influences local and distant receptors for continued morphological and biochemical changes. This results in the phenotype corresponding to the karyotype (46,XX for females and 46,XY for males). Gonadal dysgenesis arises from a difference in signalling in this tightly regulated process during early foetal development. Manifestations of gonadal dysgenesis are dependent on the aetiology and severity of the underlying causes. Causes Pure gonadal dysgenesis 46,XX also known as XX gonadal dysgenesis Pure gonadal dysgenesis 46,XY also known as XY gonadal dysgenesis Mixed gonadal dysgenesis also known as partial gonadal dysgenesis, and 45,X/46,XY mosaicism Turner syndrome also known as 45,X or 45,X0 Endocrine disruptions Pathogenesis 46,XX gonadal dysgenesis 46,XX gonadal dysgenesis is characteristic of female hypogonadism with a karyotype of 46,XX. Streak ovaries are present with non-functional tissues unable to produce the required sex steroid oestrogen. Low levels of oestrogen affect the HPG axis with no feedback to the anterior pituitary to inhibit the secretion of FSH and LH. FSH and LH are secreted at elevated levels. Increased levels of
https://en.wikipedia.org/wiki/Utah%20Education%20Network
The Utah Education Network (UEN) is a broadband and digital broadcast network serving public education, higher education, applied technology campuses, libraries, and public charter schools throughout the state of Utah. The Network facilitates interactive video conferencing, provides instructional support services, and operates a public television station (KUEN) on behalf of the Utah State Board of Regents. UEN services benefit more than 60,000 faculty and staff, and more than 780,000 students from pre-schoolers in Head Start programs through grandparents in graduate school. UEN headquarters are in Salt Lake City at the Eccles Broadcast Center on the University of Utah campus. History The Utah State Legislature formally established UEN in 1989, but the statewide collaboration of public education and higher education started more than two decades earlier when KUED-Channel 7 signed on the air in 1958. The station built translator towers to beam its signal to remote communities, and eventually the station placed analog microwave equipment on some of those towers enabling two-way teleconferencing for education and government. The system was initially named SETOC (State Educational Technical Operations Center), renamed as EDNET and is now UEN IVC (Interactive Video Conferencing). In December 1986, KULC-Channel 9 (now UEN-TV) started broadcasting as Utah’s Learning Channel, and in 1994 UEN started UtahLINK, the Internet component of the Network. All of those services now operate as the Utah Education Network. UEN-TV and sister station KUED were broadcasting digitally by June 2012, but experiments in statewide digital broadcasting were underway as early as 2004. Services Three infrastructure services are integral to UEN’s mission of networking for education. Students, parents, educators, and local communities all benefit from these services. They include: Networking Services, to extend and maintain UEN’s broadband and digital TV networks, including the UEN Wide Area Net
https://en.wikipedia.org/wiki/Dried%20and%20salted%20cod
Dried and salted cod, sometimes referred to as salt cod or saltfish or salt dolly, is cod which has been preserved by drying after salting. Cod which has been dried without the addition of salt is stockfish. Salt cod was long a major export of the North Atlantic region, and has become an ingredient of many cuisines around the Atlantic and in the Mediterranean. Dried and salted cod has been produced for over 500 years in Newfoundland, Iceland, and the Faroe Islands, and most particularly in Norway where it is called klippfisk, literally "cliff-fish". Traditionally it was dried outdoors by the wind and sun, often on cliffs and other bare rock-faces. Today klippfisk is usually dried indoors with the aid of electric heaters. History Salt cod formed a vital item of international commerce between the New World and the Old, and formed one leg of the so-called triangular trade. Thus, it spread around the Atlantic and became a traditional ingredient not only in Northern European cuisine, but also in Mediterranean, West African, Caribbean, and Brazilian cuisines. The drying of food is the world's oldest known preservation method, and dried fish has a storage life of several years. Traditionally, salt cod was dried only by the wind and the sun, hanging on wooden scaffolding or lying on clean cliffs or rocks near the seaside. Drying preserves many nutrients, and the process of salting and drying codfish is said to make it tastier. Salting became economically feasible during the 17th century, when cheap salt from Southern Europe became available to the maritime nations of Northern Europe. The method was cheap, and the work could be done by the fisherman or his family. The resulting product was easily transported to market, and salt cod became a staple item in the diet of the populations of Catholic countries on 'meatless' Fridays and during Lent. The British Newfoundland Colony lacked the cold dry weather necessary to make stockfish and the plentiful salt required to mak
https://en.wikipedia.org/wiki/Instrument%20control
Instrument control consists of connecting a desktop instrument to a computer and taking measurements. History In the late 1960s the first bus used for communication was developed by Hewlett-Packard and was called HP-IB (Hewlett-Packard Interface Bus). Since HP-IB was originally designed to only work with HP instruments, the need arose for a standard, high-speed interface for communication between instruments and controllers from a variety of vendors. This need was addressed in 1975 when the Institute of Electrical and Electronics Engineers (IEEE) published ANSI/IEEE Standard 488-1975, IEEE Standard Digital Interface for Programmable Instrumentation, which contained the electrical, mechanical, and functional specifications of an interfacing system. The standard was updated in 1987 and again in 1992. This bus is known by three different names, General Purpose Interface Bus (GPIB), Hewlett-Packard Interface Bus (HP-IB), and IEEE-488 Bus, and is used worldwide. Today, there are several other buses in addition to the GPIB that can be used for instrument control. These include: Ethernet, USB, Serial, PCI, and PXI. Software In addition to the hardware bus to control an instrument, software for the PC is also needed. Virtual Instrument Software Architecture, or VISA, was developed by the VME eXtensions for Instrumentation (VXI) plug and play Systems Alliance as a specification for I/O software. VISA was a step toward industry-wide software compatibility. The VISA specification defines a software standard for VXI, and for GPIB, serial, Ethernet and other interfaces. More than 35 of the largest instrumentation companies in the industry endorse VISA as the standard. The alliance created distinct frameworks by grouping the most popular operating systems, application development environments, and programming languages and defined in-depth specifications to guarantee interoperability of components within each framework. Instruments can be programmed by sending and receiving text
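As a hedged example of the text-command programming model described above, the sketch below uses the widely used third-party PyVISA binding to a VISA library; neither PyVISA nor the GPIB address shown is specified by the source, and the address is made up for illustration.

```python
# Requires a VISA implementation on the machine plus the PyVISA package (pip install pyvisa).
import pyvisa

rm = pyvisa.ResourceManager()
print(rm.list_resources())                    # discover GPIB/USB/Ethernet/serial instruments

inst = rm.open_resource("GPIB0::22::INSTR")   # hypothetical GPIB address
print(inst.query("*IDN?"))                    # IEEE-488.2 standard identification query
inst.close()
```

The same query/write pattern works over Ethernet, USB or serial resources, which is the interoperability the VISA specification was created to provide.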
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%206
SMAD family member 6, also known as SMAD6, is a protein that in humans is encoded by the SMAD6 gene. SMAD6 is a protein that, as its name describes, is a homolog of the Drosophila gene "mothers against decapentaplegic". It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members SMAD6 is involved in cell signalling. It acts as a regulator of TGFβ family (such as bone morphogenetic proteins) activity by competing with SMAD4 and preventing the transcription of SMAD4's gene products. There are two known isoforms of this protein. Nomenclature The SMAD proteins are homologs of both the drosophila protein, mothers against decapentaplegic (MAD) and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added as a humorous take-off on organizations opposing various issues e.g., Mothers Against Drunk Driving, or MADD; and based on a tradition of such unusual naming within the gene research community. Disease associations Heterozygous, damaging mutations in SMAD6 are the most frequent genetic cause of non-syndromic craniosynostosis identified to date. Interactions Mothers against decapentaplegic homolog 6 has been shown to interact with: HOXC8, MAP3K7, Mothers against decapentaplegic homolog 7, PIAS4, and STRAP.
https://en.wikipedia.org/wiki/Caruru%20%28food%29
Caruru () is a Brazilian food made from okra, onion, shrimp, palm oil and toasted nuts (peanuts and/or cashews). It is a typical condiment in the northeastern state of Bahia, where it is commonly eaten with acarajé, an Afro-Brazilian street food made from mashed black-eyed peas formed into a ball and then deep-fried in palm oil. Etymology "Caruru" comes from the African term kalalu. Another possibility is that it is a Tupi noun, caá-laughu, the eating herb, as defined by Câmara Cascudo. It is a curious case of similar words, which generates some confusion between the plant and the dish. Origin Guilherme Piso, who lived in Pernambuco (1638-1644), reports a caruru made with a medicinal and food herb (and not with okra). In his account in Historia Naturalis Brasiliae, the physician of Count Maurício de Nassau reports that "this bredo (caruru) is eaten as a vegetable and cooked instead of spinach ...". Another account, in 1820, in the Amazon, by Von Martius, mentions the "caruru-açu" during a meal with the natives near the Madeira River, when he experienced "a delicacy of chestnuts pounded with an herb similar to spinach ...". In other words, this description is of a caruru made with the caruru plant, originally from the Americas (and not from Africa, which is the case of okra). For the dish, the Nagô cuisine of Dahomey, from Yoruba Nigeria, would have been combined with indigenous cooking from Bahia. During his visit to Africa, at the end of the 18th century, Father Vicente Ferreira Pires called the meal in Dahomey "chicken caruru", revealing the use of oil from the oil palm, a palm of African origin. Originally, the Brazilian caruru was a stew of herbs that served to accompany another dish (meat or fish). The current version of the caruru, however, is more African than indigenous, being made with okra, chili, dried shrimp and palm oil. See also Callaloo Vatapá List of Brazilian dishes