https://en.wikipedia.org/wiki/Incremental%20backup
An incremental backup is one in which successive copies of the data contain only the portion that has changed since the preceding backup copy was made. When a full recovery is needed, the restoration process requires the last full backup plus all the incremental backups up to the point of restoration. Incremental backups are often desirable as they reduce storage space usage and are quicker to perform than differential backups. Variants Incremental The most basic form of incremental backup consists of identifying, recording and thus preserving only those files that have changed since the last backup. Since the changes are typically small, incremental backups are much smaller and quicker than full backups. For instance, following a full backup on Friday, a Monday backup will contain only those files that changed since Friday. A Tuesday backup contains only those files that changed since Monday, and so on. A full restoration of data will naturally be slower, since all increments must be restored. Should any one of the copies created fail, including the first (full) one, restoration will be incomplete. A Unix example would be: rsync -e ssh -va --link-dest=$dst/hourly.1 $remoteserver:$remotepath $dst/hourly.0 The use of rsync's --link-dest option is what makes this command an example of an incremental backup. Multilevel incremental A more sophisticated incremental backup scheme involves multiple numbered backup levels. A full backup is level 0. A level n backup will back up everything that has changed since the most recent level n-1 backup. Suppose for instance that a level 0 backup was taken on a Sunday. A level 1 backup taken on Monday would include only changes made since Sunday. A level 2 backup taken on Tuesday would include only changes made since Monday. A level 3 backup taken on Wednesday would include only changes made since Tuesday. If a level 2 backup was taken on Thursday, it would include all changes made since Monday because Monday was the most recent level n-1 backup.
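A minimal sketch of the selection rule behind the multilevel scheme just described; this is illustrative only, not taken from any real backup tool, and the helper names (files_changed_since, select_for_level) are invented:

import os

def files_changed_since(root, cutoff):
    # Yield paths under root whose modification time is later than
    # the Unix timestamp cutoff.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                yield path

def select_for_level(root, level, history):
    # history: list of (level, timestamp) pairs for past backups.
    # Per the scheme above, a level-n backup copies everything changed
    # since the most recent backup taken at a lower level; a level-0
    # (full) backup copies everything (cutoff 0).
    cutoff = max((t for lvl, t in history if lvl < level), default=0)
    return list(files_changed_since(root, cutoff))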
https://en.wikipedia.org/wiki/Anthroposystem
The term anthroposystem is used to describe the anthropological analogue to the ecosystem. In other words, the anthroposystem model serves to compare the flow of materials through human systems to those in naturally occurring systems. As defined by Santos, an anthroposystem is "the orderly combination or arrangement of physical and biological environments for the purpose of maintaining human civilization...built by man to sustain his kind." The anthroposystem is intimately linked to economic and ecological systems as well. Description Both the anthroposystem and ecosystem can be divided into three groups: producers, consumers, and recyclers. In the ecosystem, the producers or autotrophs consist of plants and some bacteria capable of producing their own food via photosynthesis or chemical synthesis, the consumers consist of animals that obtain energy from grazing and/or by feeding on other animals and the recyclers consist of decomposers such as fungi and bacteria. In the anthroposystem, the producers consist of the energy production through fossil fuels, manufacturing with non-fuel minerals and growing food; the consumers consist of humans and domestic animals and the recyclers consist of the decomposing or recycling activities (i.e. waste water treatment, metal and solid waste recycling). The ecosystem is sustainable whereas the anthroposystem is not. The ecosystem is a closed loop in which nearly everything is recycled whereas the anthroposystem is an open loop where very little is recycled. In contrast to the ecosystem, the anthroposystem's producers and consumers are significantly more spatially displaced than those in the ecosystem and thus, more energy is required to transfer matter to a producer or recycler. Currently, a large majority of this energy comes from non-renewable fossil fuels. Additionally, recycling is a naturally occurring component of the ecosystem, and is responsible for much of the resources used by the system. Under the anthroposyste
https://en.wikipedia.org/wiki/GoShogun
is a super robot anime series created by Takeshi Shudo. It was produced and aired in 1981 in Japan, with a movie special released in 1982 and a film sequel, GoShogun: The Time Étranger or Time Stranger, in 1985. Its title has been variously translated into English as "Demon God of the War-Torn Land GoShogun", "Warring Demon God GoShogun", and "Civil War Devil-God GoShogun", but in the US and parts of Europe it is primarily known as Macron 1, the title of its North American adaptation. The GoShogun series and its film sequel, The Time Étranger, were both written by Takeshi Shudo and directed by Kunihiko Yuyama. The series is noted for its witty dialogue and lighthearted parody of its own genre conventions. The Time Étranger shifts away from the original genre, leaving the robot aside entirely to focus on the strong and complex heroine. It has been praised for its serious tone, psychological intensity, and handling of mature themes. Original story The story is set in the early 21st century, in which a covert evil organization, Dokuga, led by lord NeoNeros, holds near total political, economic, and military control of the world. Dokuga agents try to forcibly recruit a brilliant physicist, Professor Sanada, who sets off a suicide bomb rather than let Dokuga acquire his secret research. His son Kenta becomes Dokuga's next target, but is saved by his father's colleague and taken on board a teleporting fortress, Good Thunder. Teleportation is enabled by a mysterious form of energy, called Beamler, which was discovered by Sanada. The same energy also powers a giant battle robot, GoShogun, which is operated by three pilots. The crew of Good Thunder travels the world, repeatedly fighting off NeoNeros's forces with GoShogun and often hampering Dokuga's influence on the local level, whether by destroying their bases and businesses, assisting popular rebellions, or by averting environmental disasters. On at least one occasion, GoShogun pilots must team up with Dokuga's three c
https://en.wikipedia.org/wiki/Genome%20browser
The completion of the human genome sequencing in the early 2000s was a turning point in genomics research. Scientists have since conducted extensive research into the activities of genes and the genome as a whole. The human genome contains around 3 billion nucleotide base pairs, and the huge quantity of data created necessitates the development of accessible tools to explore and interpret this information in order to investigate the genetic basis of disease, evolution, and biological processes. The field of genomics has continued to grow, with new sequencing technologies and computational tools making it easier to study the genome. The genome browser is an important tool for studying the genome. In bioinformatics, a genome browser is a graphical interface for displaying information from a biological database for genomic data. It is a software tool that displays genetic data in graphical form. Genome browsers enable users to visualize and browse entire genomes with annotated data, including gene prediction, gene structure, protein, expression, regulation, variation, and comparative analysis. Annotated data usually come from multiple diverse sources. Genome browsers differ from ordinary biological databases in that they display data in a graphical format, with genome coordinates on one axis and annotations or space-filling graphics showing analyses of the genes, such as the frequency of the genes and their expression profiles. The software allows users to navigate the genome, view numerous features, and analyze and investigate the relationships between various genomic elements. History The first genome browser, known as the Ensembl Genome Browser, was developed as part of the Human Genome Project by a group of researchers from the European Bioinformatics Institute (EBI). It was created with the aim of providing a complete resource for the human genome sequence, with a focus on gene annotation. It is a user-friendly interface for exploring the human genome and other organisms' genomes. S
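As a concrete illustration of the coordinate-anchored annotations a genome browser renders, here is a small sketch; the record layout loosely follows the common BED convention (chromosome, start, end, name), which is an assumption here since the text above names no particular format:

from typing import NamedTuple

class Feature(NamedTuple):
    chrom: str   # chromosome name, e.g. "chr7"
    start: int   # 0-based start coordinate
    end: int     # end coordinate (exclusive)
    name: str    # annotation label, e.g. a gene symbol

def features_in_view(features, chrom, view_start, view_end):
    # Return the annotations overlapping the genome window being browsed.
    return [f for f in features
            if f.chrom == chrom and f.start < view_end and f.end > view_start]

tracks = [Feature("chr7", 50, 150, "geneA"), Feature("chr7", 300, 400, "geneB")]
print(features_in_view(tracks, "chr7", 100, 250))  # only geneA overlaps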
https://en.wikipedia.org/wiki/Kohn%E2%80%93Sham%20equations
The Kohn–Sham equations are a set of mathematical equations used in quantum mechanics to simplify the complex problem of understanding how electrons behave in atoms and molecules. They introduce fictitious non-interacting electrons and use them to find the most stable arrangement of electrons, which helps scientists understand and predict the properties of matter at the atomic and molecular scale. Description In physics and quantum chemistry, specifically density functional theory, the Kohn–Sham equation is the non-interacting Schrödinger equation (more clearly, Schrödinger-like equation) of a fictitious system (the "Kohn–Sham system") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles. In the Kohn–Sham theory the introduction of the noninteracting kinetic energy functional Ts into the energy expression leads, upon functional differentiation, to a collection of one-particle equations whose solutions are the Kohn–Sham orbitals. The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as vs(r) or veff(r), called the Kohn–Sham potential. If the particles in the Kohn–Sham system are non-interacting fermions (non-fermion density functional theory has been researched), the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest-energy solutions to $\left(-\tfrac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r})\right)\varphi_i(\mathbf{r}) = \varepsilon_i\,\varphi_i(\mathbf{r})$. This eigenvalue equation is the typical representation of the Kohn–Sham equations. Here $\varepsilon_i$ is the orbital energy of the corresponding Kohn–Sham orbital $\varphi_i$, and the density for an N-particle system is $\rho(\mathbf{r}) = \sum_{i=1}^{N} |\varphi_i(\mathbf{r})|^2$. History The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham, who introduced the concept at the University of California, San Diego, in 1965. Kohn received a Nobel Prize in Chemistry in 1998 for the Kohn–Sham equations and other work related to density functional theory (DFT). Kohn–Sham
https://en.wikipedia.org/wiki/De%20Furtivis%20Literarum%20Notis
De Furtivis Literarum Notis (On the Secret Symbols of Letters) is a 1563 book on cryptography written by Giambattista della Porta. The book includes three sets of cipher discs for coding and decoding messages and a substitution cipher improving on the work of Al-Qalqashandi.
https://en.wikipedia.org/wiki/Ethernet%20Automatic%20Protection%20Switching
Ethernet Automatic Protection Switching (EAPS) is used to create a fault-tolerant topology by configuring a primary and secondary path for each VLAN. It was invented by Extreme Networks and submitted to the IETF as RFC 3619. The idea is to provide highly available Ethernet switched rings (commonly used in Metro Ethernet) to replace legacy TDM-based transport protection fiber rings. Other implementations include Ethernet Protection Switching Ring (EPSR) by Allied Telesis, which enhanced EAPS to provide fully protected transport of IP Triple Play services (voice, video and internet traffic) for xDSL/FTTx deployments. EAPS/EPSR is the most widely deployed Ethernet protection switching solution, with major multi-vendor interoperability support. EAPS/EPSR forms the basis of the ITU-T G.8032 Ethernet ring protection recommendation. Operation A ring is formed by configuring a Domain. Each domain has a single "master node" and many "transit nodes". Each node will have a primary port and a secondary port, both known to be able to send control traffic to the master node. Under normal operation, the secondary port on the master is blocked for all protected VLANs. When there is a link-down situation, the devices that detect the failure send a control message to the master, and the master will then unblock the secondary port and instruct the transits to flush their forwarding databases. The next packets sent by the network can then be flooded and learned out of the (now enabled) secondary port without any network disruption. Fail-over times are demonstrably in the region of 50 ms. The same switch can belong to multiple domains and thus multiple rings. However, these act as independent entities and can be controlled individually. EAPS v2 EAPSv2 is configured and enabled to avoid the potential for super-loops in environments where multiple EAPS domains share a common link. EAPSv2 works using the concept of a controller and partner mechanism. Shared port status is verified using
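A toy model of the master-node behaviour described above (a sketch only: class and method names are invented, and real EAPS implementations involve hello timers and control VLANs not modelled here):

class TransitNode:
    def flush_fdb(self):
        # Drop learned MAC addresses so traffic is re-learned on the new path.
        print("forwarding database flushed")

class EapsMaster:
    def __init__(self):
        self.secondary_blocked = True   # normal operation: loop prevented

    def on_link_down(self, transits):
        # A transit detected a failure and signalled the master:
        # unblock the secondary port and have transits flush their tables.
        self.secondary_blocked = False
        for node in transits:
            node.flush_fdb()

ring = [TransitNode(), TransitNode()]
master = EapsMaster()
master.on_link_down(ring)   # traffic now flows via the secondary port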
https://en.wikipedia.org/wiki/Ecospirituality
Ecospirituality connects the science of ecology with spirituality. It brings together religion and environmental activism. Ecospirituality has been defined as "a manifestation of the spiritual connection between human beings and the environment." The new millennium and the modern ecological crisis have created a need for environmentally based religion and spirituality. Ecospirituality is understood by some practitioners and scholars as one result of people wanting to free themselves from a consumeristic and materialistic society. Ecospirituality has been critiqued for being an umbrella term for concepts such as deep ecology, ecofeminism, and nature religion. Proponents may come from a range of faiths including Islam, Jainism, Christianity (Catholicism, Evangelicalism and Orthodox Christianity), Judaism, Hinduism, Buddhism and Indigenous traditions. Although many of their practices and beliefs may differ, a central claim is that there is "a spiritual dimension to our present ecological crisis." According to the environmentalist Sister Virginia Jones, "Eco-spirituality is about helping people experience 'the holy' in the natural world and to recognize their relationship as human beings to all creation." Ecospirituality has been influenced by the ideas of deep ecology, which is characterized by "recognition of the inherent value of all living beings and the use of this view in shaping environmental policies". The related field of ecopsychology concerns the connections between the science of ecology and the study of psychology. 'Earth-based' spirituality is another term related to ecospirituality; it is associated with pagan religious traditions and the work of the prominent ecofeminist Starhawk. Ecospirituality refers to the intertwining of intuition and bodily awareness pertaining to a relational view between human beings and the planet. Origins Ecospirituality finds its history in the relationship between spirituality and the environment. Some scholars say it "flows fr
https://en.wikipedia.org/wiki/Photochemical%20logic%20gate
A photochemical logic gate is based on photochemical intersystem crossing and molecular electronic transitions between photochemically active molecules, which can be combined to produce logic gates. The OR gate electron–photon transfer chain [Energy-level diagram: ground states A, B, C and excited states A*, B*, C*, where A* denotes the excited state of molecule A.] The OR gate is based on the activation of molecule A, which then passes an electron/photon to molecule C's excited-state orbitals (C*). The electron from molecule A undergoes intersystem crossing to C* via the excited-state orbitals of B, and is eventually utilised as a signal in the C* emission. The OR gate uses two inputs of light (photons) to two separate electron transfer chains, both of which are capable of transferring to C* and thus producing the output of an OR gate. Therefore, if either electron transfer chain is activated, molecule C's excitation produces a valid output emission. [Diagram: two input chains, A → B and D → E, both converging on C, whose emission is the output.] The AND gate [Energy-level diagram: ground states A, B, C with excited states A*, B*, C* and a second excited state C** of molecule C.] Excitation A→A* by a photon, whereby the promoted electron is passed down to the C* molecular orbital. A second photon applied to the system causes the excitation of the electron in the C* molecular orbital to the C** molecular orbital, analogous to pump-probe spectroscopy. [Energy-level diagram: C → C* → C**, illustrating the principle of pump-probe spectroscopy: the excitation of an already excited state.] The AND gate is produced by the necessity of both the A→A* and the C*→C** excitations occurring at the same time; both photon inputs are simultaneously required. To prevent erroneous emissions of light from a single input to the AND gate, it would be necessary to have an electron transfer series with the ability to accept any electrons (energy) from the C* energy level. The electron transfer series would ter
https://en.wikipedia.org/wiki/Period-doubling%20bifurcation
In dynamical systems theory, a period-doubling bifurcation occurs when a slight change in a system's parameters causes a new periodic trajectory to emerge from an existing periodic trajectory, the new one having double the period of the original. With the doubled period, it takes twice as long (or, in a discrete dynamical system, twice as many iterations) for the numerical values visited by the system to repeat themselves. A period-halving bifurcation occurs when a system switches to a new behavior with half the period of the original system. A period-doubling cascade is an infinite sequence of period-doubling bifurcations. Such cascades are a common route by which dynamical systems develop chaos. In hydrodynamics, they are one of the possible routes to turbulence. Examples Logistic map The logistic map is $x_{n+1} = r x_n (1 - x_n)$, where $x_n$ is a function of the (discrete) time $n = 0, 1, 2, \ldots$. The parameter $r$ is assumed to lie in the interval $[0, 4]$, in which case $x_n$ is bounded on $[0, 1]$. For $r$ between 1 and 3, $x_n$ converges to the stable fixed point $x^{*} = 1 - 1/r$. Then, for $r$ between 3 and 3.44949, $x_n$ converges to a permanent oscillation between two values that depend on $r$. As $r$ grows larger, oscillations between 4 values, then 8, 16, 32, etc. appear. These period doublings culminate at $r \approx 3.56995$, beyond which more complex regimes appear. As $r$ increases, there are some intervals where most starting values will converge to one or a small number of stable oscillations, such as near $r = 3.83$. In an interval where the period is $k$ for some positive integer $k$, not all the points actually have period $k$. These are single points, rather than intervals. These points are said to be in unstable orbits, since nearby points do not approach the same orbit as them. Quadratic map The real version of the complex quadratic map is related to the real slice of the Mandelbrot set. Kuramoto–Sivashinsky equation The Kuramoto–Sivashinsky equation is an example of a spatiotemporally continuous dynamical system that exhibits period doubling. It is one of the most well-studied nonlin
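A short script (illustrative, using the map defined above) that iterates the logistic map past its transient and prints the set of values the orbit settles onto, showing the fixed point give way to a 2-cycle and then a 4-cycle:

def logistic_attractor(r, x0=0.2, transient=1000, sample=8):
    # Iterate x_{n+1} = r*x_n*(1-x_n), discard the transient,
    # then collect the values the orbit keeps visiting.
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.add(round(x, 4))
    return sorted(orbit)

for r in (2.8, 3.2, 3.5):
    print(r, logistic_attractor(r))
# 2.8 -> one value (the fixed point 1 - 1/r = 0.6429)
# 3.2 -> two values (period 2); 3.5 -> four values (period 4)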
https://en.wikipedia.org/wiki/Zenith%20Z-89
The Z-89 is a personal computer introduced in 1979 by Heathkit, but produced primarily by Zenith Data Systems (ZDS) in the early 1980s. It combined an updated version of the Heathkit H8 microcomputer and H19 terminal in a new case that also provided room for a built-in floppy disk on the right side of the display. Based on the Zilog Z80 microprocessor, it is capable of running CP/M as well as Heathkit's own HDOS. Description The Zenith Z-89 is based on the Zilog Z80 microprocessor running at 2.048 MHz, and supports the HDOS and CP/M operating systems. The US$2295 Z-89 is integrated in a terminal-like enclosure with a non-detachable keyboard, 12-inch monochrome CRT with an 80×25 character screen, 48 KB RAM, and a 5.25" floppy disk drive. The keyboard is of high build quality and has an unusual number of special-purpose keys, among them three marked with white, red, and blue squares. There are five function keys and a numeric keypad. The video display has reverse video, and character graphics are available. The computer has two small card cages inside the cabinet on either side of the CRT, each of which accepts up to three proprietary circuit cards. Upgrade cards available for this included disk controller cards (see below), a 16 KB RAM card that upgrades the standard 48 KB RAM to 64 KB, a RAM memory card accessible as a ramdrive using a special driver (above the Z80's 64 KB memory limit), and a multi-serial card providing extra RS-232 ports. The 2 MHz Z80 could be upgraded to 4 MHz. In 1979, prior to Zenith's purchase of Heath Company, Heathkit designed and marketed this computer in kit form as the Heath H89, assembled as the WH89, and without the floppy but with a cassette interface card as the H88. (Prior to the Zenith purchase, the Heathkit model numbers did not include the dash.) Heath/Zenith also made a serial terminal, the H19/Z-19, based on the same enclosure (with a blank cover over the diskette drive cut-out) and terminal controller. The company of
https://en.wikipedia.org/wiki/Electron-withdrawing%20group
An electron-withdrawing group (EWG) is a group or an atom that attracts electron density towards itself and away from other adjacent atoms. An electron-withdrawing substituent has the following chemical implications: With regard to electron transfer, electron-withdrawing groups enhance the oxidizing tendency of the appended species. Tetracyanoethylene is an oxidant because the alkene is appended to four cyano substituents, which are electron-withdrawing. With regard to acid-base reactions, acids bearing electron-withdrawing groups have high acid dissociation constants. For EWGs attached to benzoic acids, this effect is described by the Hammett equation, which allows EWGs to be discussed quantitatively. With regard to nucleophilic substitution reactions, substrates bearing electron-withdrawing groups are susceptible to attack even by weak nucleophiles. For example, compared to chlorobenzene, chlorodinitrobenzene is susceptible to reactions that displace chloride. Electron-withdrawing substituents enhance Lewis acidity. Relative to methyl, fluorine is a strong EWG; it follows that boron trifluoride is a stronger Lewis acid than trimethylborane. Electron-withdrawing substituents often lower Lewis basicity. See also Electron-donating group
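For reference, the Hammett equation mentioned above has the standard form (stated here for convenience, not quoted from this article):

\log_{10} \frac{K}{K_0} = \rho \sigma

where K is the acid dissociation constant of the substituted benzoic acid, K0 that of unsubstituted benzoic acid, σ the substituent constant (positive for electron-withdrawing groups), and ρ the reaction constant, defined as 1 for the ionization of benzoic acids in water. A positive σ thus quantifies how strongly an EWG increases acidity.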
https://en.wikipedia.org/wiki/Neuromodulation
Neuromodulation is the physiological process by which a given neuron uses one or more chemicals to regulate diverse populations of neurons. Neuromodulators typically bind to metabotropic, G-protein coupled receptors (GPCRs) to initiate a second messenger signaling cascade that induces a broad, long-lasting signal. This modulation can last for hundreds of milliseconds to several minutes. Some of the effects of neuromodulators include: altering intrinsic firing activity, increasing or decreasing voltage-dependent currents, altering synaptic efficacy, increasing bursting activity and reconfiguring synaptic connectivity. Major neuromodulators in the central nervous system include: dopamine, serotonin, acetylcholine, histamine, norepinephrine, nitric oxide, and several neuropeptides. Cannabinoids can also be powerful CNS neuromodulators. Neuromodulators can be packaged into vesicles and released by neurons, or secreted as hormones and delivered through the circulatory system. A neuromodulator can be conceptualized as a neurotransmitter that is not reabsorbed by the pre-synaptic neuron or broken down into a metabolite. Some neuromodulators end up spending a significant amount of time in the cerebrospinal fluid (CSF), influencing (or "modulating") the activity of several other neurons in the brain. Neuromodulatory systems The major neurotransmitter systems are the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system. Drugs targeting the neurotransmitters of such systems affect the whole system, which explains the mode of action of many drugs. Most other neurotransmitters, on the other hand, e.g. glutamate, GABA and glycine, are used very generally throughout the central nervous system. Noradrenaline system The noradrenaline system consists of around 15,000 neurons, primarily in the locus coeruleus. This is diminutive compared to the more than 100 billion neurons in the brain. As with dopaminergic neurons in the
https://en.wikipedia.org/wiki/Elgato
Elgato is a brand of consumer technology products. The brand's products were designed and manufactured by Elgato Systems, founded in 2010 by Markus Fest and headquartered in Munich, Germany, until 2018, when the brand was sold to Corsair. History The brand Elgato was formerly a brand of Elgato Systems. The Elgato brand was used to refer to the company's gaming and Thunderbolt devices and was commonly called Elgato Gaming. On June 28, 2018, Corsair acquired the Elgato brand from Elgato Systems, while Elgato Systems kept its smart home division and renamed the company to Eve Systems. Products Thunderbolt dock Elgato introduced a Thunderbolt docking station in June 2014. A computer is plugged into the dock using a Thunderbolt port in order to gain access to the dock's three USB ports, audio jacks, HDMI and Ethernet. It is typically used to plug a MacBook into an office setting (printer, monitor, keyboard) or to provide additional ports not available on the MacBook Air. A review in The Register said it was compact and useful, but that Windows users should consider a USB 3.0 dock. The Register and CNET disagreed on whether it was competitively priced. Reviews in TechRadar and Macworld gave it 4 out of 5 stars. Thunderbolt SSD Elgato introduced two external solid-state drives in September 2012 called Thunderbolt Drive. Benchmark tests by Macworld and Tom's Hardware said that the drive was slower than other products they tested, despite being connected through a faster Thunderbolt port rather than FireWire. The following year, in 2013, Elgato replaced them with similar drives identified as "Thunderbolt Drive +", which added USB 3.0 support and were claimed to be faster than the previous iteration. A CNET review of a Thunderbolt Drive+ drive gave it a 4.5 out of 5 star rating. It said the drive was "blazing fast" and "the most portable drive to date" but was also expensive. An article in The Register explained that the original drives introduced in 2012 didn't perform well
https://en.wikipedia.org/wiki/Solvated%20electron
A solvated electron is a free electron in (solvated in) a solution, and is the smallest possible anion. Solvated electrons occur widely. Often, discussions of solvated electrons focus on their solutions in ammonia, which are stable for days, but solvated electrons also occur in water and other solvents; in fact, they occur in any solvent that mediates outer-sphere electron transfer. The solvated electron is responsible for a great deal of radiation chemistry. Ammonia solutions Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu, and Yb (also Mg using an electrolytic process), giving characteristic blue solutions. For alkali metals in liquid ammonia, the solution is blue when dilute and copper-colored when more concentrated (> 3 molar). These solutions conduct electricity. The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. The diffusivity of the solvated electron in liquid ammonia can be determined using potential-step chronoamperometry. Solvated electrons in ammonia are the anions of salts called electrides. Na + 6 NH3 → [Na(NH3)6]+ + e− The reaction is reversible: evaporation of the ammonia solution produces a film of metallic sodium. Case study: Li in NH3 A lithium–ammonia solution at −60 °C is saturated at about 15 mol% metal (MPM). When the concentration is increased in this range, electrical conductivity increases from 10⁻² to 10⁴ Ω⁻¹·cm⁻¹ (larger than that of liquid mercury). At around 8 MPM, a "transition to the metallic state" (TMS) takes place (also called a "metal-to-nonmetal transition" (MNMT)). At 4 MPM a liquid–liquid phase separation takes place: the less dense gold-colored phase becomes immiscible with a denser blue phase. Above 8 MPM the solution is bronze/gold-colored. In the same concentration range the overall density decreases by 30%. Other solvents Alkali metals also dissolve in some small primary amines, such as methylamine and ethylami
https://en.wikipedia.org/wiki/Geometric%20topology%20%28object%29
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume. Use Convergence in this topology is a crucial ingredient of hyperbolic Dehn surgery, a fundamental tool in the theory of hyperbolic 3-manifolds. Definition The following is a definition due to Troels Jørgensen: A sequence $\{M_i\}$ in H converges to M in H if there are a sequence of positive real numbers $\epsilon_i$ converging to 0, and a sequence of $(1+\epsilon_i)$-bi-Lipschitz diffeomorphisms $f_i$ where the domains and ranges of the maps are the $\epsilon_i$-thick parts of either the $M_i$'s or M. Alternate definition There is an alternate definition due to Mikhail Gromov. Gromov's topology utilizes the Gromov–Hausdorff metric and is defined on pointed hyperbolic 3-manifolds. One essentially considers better and better bi-Lipschitz homeomorphisms on larger and larger balls. This results in the same notion of convergence as above, as the thick part is always connected; thus, a large ball will eventually encompass all of the thick part. On framed manifolds As a further refinement, Gromov's metric can also be defined on framed hyperbolic 3-manifolds. This gives nothing new, but this space can be explicitly identified with torsion-free Kleinian groups with the Chabauty topology. See also Algebraic topology (object)
https://en.wikipedia.org/wiki/Soap%20dispenser
A soap dispenser is a device that, when manipulated or triggered appropriately, dispenses soap (usually in small, single-use quantities). It can be manually operated using a handle or can be automatic. Soap dispensers are often found in public toilets. Design Manual soap dispensers The design of a manual soap dispenser is generally determined by whether the soap comes in liquid, powder, or foam form. Liquid soap When soap is dispensed in liquid form, it is generally in a squeeze bottle or pump. The most popular soap dispensers of this type are plastic pump bottles, many of which are disposable. William Quick patented liquid soap on August 22, 1865. Minnetonka Corporation introduced the first modern liquid soap in 1980, and bought up the entire supply of the plastic pumps used in their dispensers to delay competition entering the market. Parts of the liquid soap dispenser
Actuator – the top of the pump, which is pressed down to dispense the liquid
Closure – the cap that is fastened to the bottle's neck; it has a smooth or ribbed surface
Outer gasket – made of plastic or rubber, it fits inside the closure and prevents leakage
Housing – the main pump body that keeps the other components in the right place and sends liquid from the dip tube to the actuator
Dip tube – the visible tube that carries liquid from the bottom of the bottle up to the housing
Interior components – a variety of components, including a spring, ball, piston, or stem, that help move the liquid to the actuator
How does the liquid soap dispenser work? The handwash bottle acts much like an air-suction device that draws liquid upwards to the user's hands against the force of gravity. When the user presses down on the actuator, the piston compresses the spring, and upward air pressure pulls the ball upward, along with the liquid product, into the dip tube and then up to the housing. When the user releases the
https://en.wikipedia.org/wiki/Password%20strength
Password strength is a measure of the effectiveness of a password against guessing or brute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability. Using strong passwords lowers the overall risk of a security breach, but strong passwords do not replace the need for other effective security controls. The effectiveness of a password of a given strength is strongly determined by the design and implementation of the authentication factors (knowledge, ownership, inherence). The first factor is the main focus of this article. The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system must store information about the user's passwords in some form and if that information is stolen, say by breaching system security, the user's passwords can be at risk. In 2019, the United Kingdom's NCSC analyzed public databases of breached accounts to see which words, phrases, and strings people used. The most popular password on the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password", and 1111111. Password creation Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against a brute-force attack can be calculated with precision, determining the strength of human-generated passwords is difficult. Typically, humans are asked to choose a
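Since the strength of a randomly chosen password can be calculated with precision, as noted above, here is the standard entropy calculation as a quick sketch (the function name is invented): a password of L symbols drawn uniformly and independently from an alphabet of N symbols has H = L * log2(N) bits of entropy.

import math

def random_password_entropy_bits(length, alphabet_size):
    # Entropy in bits of a password whose symbols are chosen uniformly
    # and independently from an alphabet of the given size.
    return length * math.log2(alphabet_size)

print(random_password_entropy_bits(8, 26))  # 8 lowercase letters: ~37.6 bits
print(random_password_entropy_bits(8, 95))  # 8 printable ASCII chars: ~52.6 bits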
https://en.wikipedia.org/wiki/Sparassis
Sparassis (also known as cauliflower mushroom) is a genus of parasitic and saprobic mushrooms characterised by its unique shape and appearance and found around the globe. Its appearance can be described as similar to a sea sponge, a brain, or a head of cauliflower, hence its popular name. It is increasingly cultivated and sold in Korea, Japan, the United States and Australia. The generic name comes from the Greek sparassein, meaning to tear. Species The following species are recognised in the genus Sparassis:
Sparassis americana R.H. Petersen
Sparassis brevipes Krombh.
Sparassis crispa (Wulfen) Fr.
Sparassis cystidiosa Desjardin & Zheng Wang
Sparassis foliacea St.-Amans
Sparassis herbstii Peck
Sparassis kazachstanica Shvartsman
Sparassis laminosa Fries
Sparassis latifolia Y.C. Dai & Zheng Wang
Sparassis miniensis Blanco-Dios & Z. Wang
Sparassis minoensis Blanco-Dios & Z. Wang
Sparassis nemecii Pilát & Veselý
Sparassis radicata Weir
Sparassis simplex D.A. Reid
Sparassis spathulata (Schwein.) Fr.
Sparassis subalpina Q. Zhao, Zhu L. Yang & Y.C. Dai
Sparassis tremelloides Berkeley
The best-known and most widely collected species are S. crispa (found in Europe and eastern North America) and S. radicata (found in western North America). These species have a very similar appearance, and some authorities treat them as conspecific. Their colour ranges from light brown-yellow to yellow-grey or a creamy-white cauliflower colour. They are normally 10 to 25 cm tall but can grow to be quite large, with reported cases of fruiting bodies more than 50 cm tall and 14 kg in weight. Their unique look and size means they are unlikely to be mistaken for any poisonous or inedible mushrooms. They grow as parasites or saprobes on the roots or bases of various species of hardwoods, especially oak, and conifers, and hence are most commonly found growing close to fir, pine, oak or spruce trees. Edibility Sparassis crispa can be very tasty but should be thoroughly cleaned before use. The
https://en.wikipedia.org/wiki/Sparassis%20crispa
Sparassis crispa is a species of fungus in the family Sparassidaceae. It is sometimes called cauliflower fungus. Description S. crispa grows in an entangled globe that is up to in diameter. The lobes, which carry the spore-bearing surface, are flat and wavy, resembling lasagna noodles, coloured white to creamy yellow. When young they are tough and rubbery but later they become soft (they are monomitic). The odour is pleasant and the taste of the flesh mild. The spore print is cream, the smooth oval spores measuring about 5–7 µm by 3.5–5 µm. The flesh contains clamp connections. Ecology, distribution and related species This species is a brown rot fungus, found growing at the base of conifer trunks, often pines, but also spruce, cedar, larch and others. It is fairly common in Great Britain and temperate Europe (but not in the boreal zone). In Europe there is also a less well-known species of the same genus, Sparassis brevipes, which can be distinguished by its less crinkled, zoned folds and lack of clamp connections. Culinary use It is considered a good edible fungus when young and fresh, though it is difficult to clean (a toothbrush and running water are recommended for that process). One French cookbook, which gives four recipes for this species, says that grubs and pine needles can get caught up in holes in the jumbled mass of flesh. The Sparassis should be blanched in boiling water for 2–3 minutes before being added to the rest of the dish. It should be cooked slowly. See also Sparassis spathulata
https://en.wikipedia.org/wiki/Multiple%20single-level
Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without needing special changes to the OS or applications, but at the cost of needing extra hardware. The drive to develop MLS operating systems was severely hampered by the dramatic fall in data processing costs in the early 1990s. Before the advent of desktop computing, users with classified processing requirements had to either spend a lot of money for a dedicated computer or use one that hosted an MLS operating system. Throughout the 1990s, however, many offices in the defense and intelligence communities took advantage of falling computing costs to deploy desktop systems classified to operate only at the highest classification level used in their organization. These desktop computers operated in system high mode and were connected with LANs that carried traffic at the same level as the computers. MSL implementations such as these neatly avoided the complexities of MLS but traded off technical simplicity for inefficient use of space. Because most users in classified environments also needed unclassified systems, users often had at least two computers and sometimes more (one for unclassified processing and one for each classification level processed). In addition, each computer was connected to its own LAN at the appropriate classification level, meaning that multiple dedicated cabling plants were incorporated (at considerable cost in terms of both installation and maintenance). Limits of MSL versus MLS The obvious shortcoming of MSL (as compared to MLS) is that it does not support immixture of various classification levels in any manner. For example, the notion of concatenating a SECRET data stream (taken from a SECRET file) with a TOP SECRET data stream (read from a TOP SECRET file) and directing the resultant TOP SECRET data stream into
https://en.wikipedia.org/wiki/NetTop
NetTop is an NSA project to run Multiple Single-Level systems with a Security-Enhanced Linux host running VMware with Windows as a guest operating system. External links NSA web page on NetTop VMware PR page on NetTop HP NetTop web page TCS Trusted Workstation based on NetTop
https://en.wikipedia.org/wiki/SBML
The Systems Biology Markup Language (SBML) is a representation format, based on XML, for communicating and storing computational models of biological processes. It is a free and open standard with widespread software support and a community of users and developers. SBML can represent many different classes of biological phenomena, including metabolic networks, cell signaling pathways, regulatory networks, infectious diseases, and many others. It has been proposed as a standard for representing computational models in systems biology today. History From late 1999 through early 2000, with funding from the Japan Science and Technology Corporation (JST), Hiroaki Kitano and John C. Doyle assembled a small team of researchers to work on developing better software infrastructure for computational modeling in systems biology. Hamid Bolouri was the leader of the development team, which consisted of Andrew Finney, Herbert Sauro, and Michael Hucka. Bolouri identified the need for a framework to enable interoperability and sharing between the different simulation software systems for biology in existence during the late 1990s, and he organized an informal workshop in December 1999 at the California Institute of Technology to discuss the matter. In attendance at that workshop were the groups responsible for the development of DBSolve, E-Cell, Gepasi, Jarnac, StochSim, and The Virtual Cell. Separately, earlier in 1999, some members of these groups also had discussed the creation of a portable file format for metabolic network models in the BioThermoKinetics (BTK) group. The same groups who attended the first Caltech workshop met again on April 28–29, 2000, at the first of a newly created meeting series called the Workshop on Software Platforms for Systems Biology. It became clear during the second workshop that a common model representation format was needed to enable the exchange of models between software tools as part of any functioning interoperability framework, and th
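To give a feel for the XML representation described above, a small sketch that emits a skeletal SBML-like document; the namespace URI and element names follow the SBML Level 3 Version 1 core schema as best recalled here (treat them as assumptions and check the specification at sbml.org), and a real model would carry many more required attributes:

import xml.etree.ElementTree as ET

NS = "http://www.sbml.org/sbml/level3/version1/core"  # assumed L3V1 namespace

sbml = ET.Element("sbml", xmlns=NS, level="3", version="1")
model = ET.SubElement(sbml, "model", id="toy_pathway")
species_list = ET.SubElement(model, "listOfSpecies")
for sid in ("glucose", "g6p"):
    ET.SubElement(species_list, "species", id=sid, compartment="cell")

print(ET.tostring(sbml, encoding="unicode"))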
https://en.wikipedia.org/wiki/Vine%20pull%20schemes
Vine pull schemes are programs whereby grape growers receive a financial incentive to pull up their grape vines, a process known as arrachage in French. A large program of this kind was initiated by the European Union (EU) in 1988 to reduce the wine lake glut from overproduction and declining demand. In the first five years of the program, growers, mainly in southern France and southern Italy, were paid to destroy 320,000 hectares of vineyard. This was equivalent to the entire vineyard area of the world's fourth-largest grower of grapes, the United States. The EU has recently resumed a vine pull scheme, and Plan Bordeaux proposes additional vine pulls to increase prices for generic Bordeaux wine. It is believed that this has contributed to the "global wine shortage". Research by the American financial services firm Morgan Stanley says demand for wine "exceeded supply by 300m cases in 2012".
https://en.wikipedia.org/wiki/Chondrogenesis
Chondrogenesis is the process by which cartilage is developed. Cartilage in fetal development In embryogenesis, the skeletal system is derived from the mesoderm germ layer. Chondrification (also known as chondrogenesis) is the process by which cartilage is formed from condensed mesenchyme tissue, which differentiates into chondrocytes and begins secreting the molecules that form the extracellular matrix. Early in fetal development, the greater part of the skeleton is cartilaginous. This temporary cartilage is gradually replaced by bone (endochondral ossification), a process that ends at puberty. In contrast, the cartilage in the joints remains unossified during the whole of life and is, therefore, permanent. Mineralization Adult hyaline articular cartilage is progressively mineralized at the junction between cartilage and bone. It is then termed articular calcified cartilage. A mineralization front advances through the base of the hyaline articular cartilage at a rate dependent on cartilage load and shear stress. Intermittent variations in the rate of advance and mineral deposition density of the mineralizing front lead to multiple "tidemarks" in the articular calcified cartilage. Adult articular calcified cartilage is penetrated by vascular buds, and new bone is produced in the vascular space in a process similar to endochondral ossification at the physis. A cement line demarcates articular calcified cartilage from subchondral bone. Repair Once damaged, cartilage has limited repair capabilities. Because chondrocytes are bound in lacunae, they cannot migrate to damaged areas. Also, because hyaline cartilage does not have a blood supply, the deposition of new matrix is slow. Damaged hyaline cartilage is usually replaced by fibrocartilage scar tissue. Over recent years, surgeons and scientists have developed a series of cartilage repair procedures that help to postpone the need for joint replacement. In a 1994 trial, Swedish doctors repaired damaged knee
https://en.wikipedia.org/wiki/Radius%20of%20curvature%20%28optics%29
Radius of curvature (ROC) has specific meaning and sign convention in optical design. A spherical lens or mirror surface has a center of curvature located either along or decentered from the system local optical axis. The vertex of the lens surface is located on the local optical axis. The distance from the vertex to the center of curvature is the radius of curvature of the surface. The sign convention for the optical radius of curvature is as follows: If the vertex lies to the left of the center of curvature, the radius of curvature is positive. If the vertex lies to the right of the center of curvature, the radius of curvature is negative. Thus when viewing a biconvex lens from the side, the left surface radius of curvature is positive, and the right radius of curvature is negative. Note however that in areas of optics other than design, other sign conventions are sometimes used. In particular, many undergraduate physics textbooks use the Gaussian sign convention in which convex surfaces of lenses are always positive. Care should be taken when using formulas taken from different sources. Aspheric surfaces Optical surfaces with non-spherical profiles, such as the surfaces of aspheric lenses, also have a radius of curvature. These surfaces are typically designed such that their profile is described by the equation $z(r) = \frac{r^2}{R\left(1 + \sqrt{1 - (1+K)\,r^2/R^2}\right)} + \alpha_1 r^2 + \alpha_2 r^4 + \cdots$ where the optic axis is presumed to lie in the z direction, and $z(r)$ is the sag: the z-component of the displacement of the surface from the vertex, at distance $r$ from the axis. If $\alpha_1$ and $\alpha_2$ are zero, then $R$ is the radius of curvature and $K$ is the conic constant, as measured at the vertex (where $r = 0$). The coefficients $\alpha_i$ describe the deviation of the surface from the axially symmetric quadric surface specified by $R$ and $K$. See also Radius of curvature (applications) Radius Base curve radius Cardinal point (optics) Vergence (optics)
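A direct transcription of the sag equation above into code, as an illustration (variable names mirror the symbols in the equation); the sphere check at the end confirms that with K = 0 and no polynomial terms the formula reduces to the sag of a sphere of radius R:

import math

def sag(r, R, K, alphas=()):
    # Conic base term of z(r), plus the even polynomial corrections
    # alpha_1*r^2 + alpha_2*r^4 + ...
    z = r**2 / (R * (1 + math.sqrt(1 - (1 + K) * r**2 / R**2)))
    z += sum(a * r**(2 * (i + 1)) for i, a in enumerate(alphas))
    return z

print(sag(1.0, 10.0, 0.0))             # ~0.050126
print(10.0 - math.sqrt(10.0**2 - 1))   # sphere sag, same value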
https://en.wikipedia.org/wiki/Double%20heterostructure
A double heterostructure, sometimes called a double heterojunction, is formed when two semiconductor materials are grown into a "sandwich". One material (such as AlGaAs) is used for the outer layers (or cladding), and another of smaller band gap (such as GaAs) is used for the inner layer. In this example, there are two AlGaAs-GaAs junctions (or boundaries), one at each side of the inner layer. There must be two boundaries for the device to be a double heterostructure. If there were only one side of cladding material, the device would be a simple, or single, heterostructure. The double heterostructure is a very useful structure in optoelectronic devices and has interesting electronic properties. If one of the cladding layers is p-doped, the other cladding layer n-doped and the smaller energy gap semiconductor material undoped, a p-i-n structure is formed. When a current is applied to the ends of the p-i-n structure, electrons and holes are injected into the heterostructure. The smaller energy gap material forms energy discontinuities at the boundaries, confining the electrons and holes to the smaller energy gap semiconductor. The electrons and holes recombine in the intrinsic semiconductor, emitting photons. If the width of the intrinsic region is reduced to the order of the de Broglie wavelength, the energies in the intrinsic region are no longer continuous but discrete. (Strictly, the energy levels of the bulk material are not continuous either, but they are so close together that we think of them as continuous.) In this situation the double heterostructure becomes a quantum well.
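To make the last point concrete with a standard textbook result (used here only as an illustration, not taken from the text above): for an idealized infinitely deep well of width L, a carrier of effective mass m* has the discrete energies

E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{*} L^2}, \qquad n = 1, 2, 3, \ldots

so the level spacing grows as L shrinks, which is why the discreteness only becomes significant when L approaches the de Broglie wavelength.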
https://en.wikipedia.org/wiki/Jorge%20Allende
Jorge Eduardo Allende Rivera (born 11 November 1934) is a Chilean biochemist and biophysicist known for his contributions to the understanding of protein biosynthesis and how transfer RNA is generated, and to the regulation of maturation of amphibian eggs. He has been a foreign associate of the United States National Academy of Sciences since 2001, and was awarded the Chilean National Prize for Natural Sciences in 1992. Early life Jorge Allende was born in Cartago, Costa Rica, son of Octavio Allende Echeverría, Chilean Consul in the city of Puntarenas, and Amparo Rivera Ortiz, a Costa Rican artist. Because of his father's job as a diplomat, he spent his childhood years between Costa Rica, Chile and the United States. He finished high school at a Jesuit school in New Orleans, Louisiana, where his father was appointed as the Chilean Consul. Subsequently, he studied at Louisiana State University in Baton Rouge, Louisiana. He obtained the Bachelor of Science degree in chemistry in 1957. Career He carried out his doctoral studies at Yale University in New Haven, Connecticut, United States, obtaining his Ph.D. in 1961 under the tutorship of Prof. F.M. Richards. He did postdoctoral work with Prof. Fritz Lipmann at Rockefeller University and with Marshall Warren Nirenberg at NIH. During the 1960s, his research was focused on protein synthesis, a field in which he made crucial contributions. In the 1970s he was a pioneer in studying the mechanism of hormonal induction of oocyte maturation. His later research focused on two ubiquitous protein kinases, CK1 and CK2, involved in the phosphorylation of key cellular proteins. He devoted much of his life to organizing activities for scientific integration in Latin America, especially through series of training courses in molecular biology techniques, and through the creation of the Latin American Network of Biological Sciences. In recent years, Doctor Allende has been a promoter of science educat
https://en.wikipedia.org/wiki/GNU%20Binutils
The GNU Binary Utilities, or binutils, are a set of programming tools for creating and managing binary programs, object files, libraries, profile data, and assembly source code. Tools They were originally written by programmers at Cygnus Solutions. The GNU Binutils are typically used in conjunction with compilers such as the GNU Compiler Collection (gcc), build tools like make, and the GNU Debugger (gdb). Through the use of the Binary File Descriptor library (libbfd), most tools support the various object file formats supported by libbfd. Commands The GNU Binutils include commands such as as (the assembler), ld (the linker), ar, nm, objcopy, objdump, ranlib, readelf, size, strings, strip, addr2line, c++filt, and gprof. elfutils Ulrich Drepper wrote elfutils to partially replace GNU Binutils, purely for Linux and with support only for ELF and DWARF. It is distributed with three libraries for programmatic access. See also GNU Core Utilities GNU Debugger ldd (Unix), lists the shared libraries used by an executable List of Unix commands LLVM provides a similar set of tools strace, a tool for system call debugging (enabled by kernel functionality) available on many distributions
https://en.wikipedia.org/wiki/Neurodegenerative%20disease
A neurodegenerative disease is caused by the progressive loss of structure or function of neurons, in the process known as neurodegeneration. Such neuronal damage may ultimately involve cell death. Neurodegenerative diseases include amyotrophic lateral sclerosis, multiple sclerosis, Parkinson's disease, Alzheimer's disease, Huntington's disease, multiple system atrophy, tauopathies, and prion diseases. Neurodegeneration can be found in the brain at many different levels of neuronal circuitry, ranging from molecular to systemic. Because there is no known way to reverse the progressive degeneration of neurons, these diseases are considered to be incurable; however, research has shown that the two major contributing factors to neurodegeneration are oxidative stress and inflammation. Biomedical research has revealed many similarities between these diseases at the subcellular level, including atypical protein assemblies (as in proteinopathies) and induced cell death. These similarities suggest that therapeutic advances against one neurodegenerative disease might ameliorate other diseases as well. Within neurodegenerative diseases, it is estimated that 55 million people worldwide had dementia in 2019, and that by 2050 this figure will increase to 139 million people. Specific disorders Alzheimer's disease Alzheimer's disease (AD) is a chronic neurodegenerative disease that results in the loss of neurons and synapses in the cerebral cortex and certain subcortical structures, resulting in gross atrophy of the temporal lobe, parietal lobe, and parts of the frontal cortex and cingulate gyrus. It is the most common neurodegenerative disease. Even with billions of dollars being spent to find a treatment for Alzheimer's disease, no effective treatments have been found. However, clinical trials have developed certain compounds that could potentially change the future of Alzheimer's disease treatments. Within clinical trials, stable and effective AD therapeutic strategies have a 99.5
https://en.wikipedia.org/wiki/Intego
Intego is a Mac and Windows security software company founded in 1997 by Jean-Paul Florencio and Laurent Marteau. The company creates Internet security software for macOS and Windows, including: antivirus, firewall, anti-spam, backup software and data protection software. Intego currently has offices in the U.S. in Seattle, Washington, and Austin, Texas, and international offices in Paris, France, and Nagano, Japan. All of Intego's products are universal binaries, and are supported in several languages: English, French, German, Japanese, and Spanish, and previously Italian. History Co-founded by former CEO Laurent Marteau and Jean-Paul Florencio and based in Paris, France, Intego released its first antivirus product in 1997: Rival, an antivirus for Mac OS 8. Two years later in July 1999, Intego released NetBarrier, the first personal security software suite for Mac OS 8. Then in October 2000, Intego released its legacy antivirus software, VirusBarrier 1.0, for Mac OS 8 and Mac OS 9. Intego launched The Mac Security Blog, a blog that covers Mac security news, Apple security updates, Mac malware alerts, as well as news and opinion pieces related to Apple products, in mid-2007. The company launched a podcast in October 2017, called the Intego Mac Podcast. Intego released its current X9 version of antivirus and security software in June 2016, which has since had several under-the-hood updates, including compatibility with new macOS releases and Apple silicon processors. Kape Technologies announced in July 2018 that it was acquiring Intego to "enhance [Kape's] arsenal of products in cyber protection." Intego released a Windows version of its antivirus software in July 2020. Intego's newest product is Intego Privacy Protection, a VPN solution for macOS and Windows that launched circa June 2021. Products VirusBarrier NetBarrier ContentBarrier Personal Backup Mac Washing Machine Intego Antivirus for Windows Intego Privacy Protection Remote Management Console Fil
https://en.wikipedia.org/wiki/Allan%20Gibbard
Allan Fletcher Gibbard (born 1942) is the Richard B. Brandt Distinguished University Professor of Philosophy Emeritus at the University of Michigan, Ann Arbor. Gibbard has made major contributions to contemporary ethical theory, in particular metaethics, where he has developed a contemporary version of non-cognitivism. He has also published articles in the philosophy of language, metaphysics, and social choice theory. Life and career Allan Fletcher Gibbard was born on April 7, 1942, in Providence, Rhode Island. He received his BA in mathematics from Swarthmore College in 1963 with minors in physics and philosophy. After teaching mathematics and physics in Ghana with the Peace Corps (1963–1965), Gibbard studied philosophy at Harvard University, participating in the seminar on social and political philosophy with John Rawls, Kenneth J. Arrow, Amartya K. Sen, and Robert Nozick. In 1971 Gibbard earned his PhD, writing a dissertation under the direction of John Rawls. He served as professor of philosophy at the University of Chicago (1969–1974) and the University of Pittsburgh (1974–1977), before joining the University of Michigan, where he spent the remainder of his career until his retirement in 2016. Gibbard chaired the University of Michigan's philosophy department (1987–1988) and has held the title of Richard B. Brandt Distinguished University Professor of Philosophy since 1994. Gibbard was elected a Fellow of the American Academy of Arts and Sciences in 1990 and was elected a Fellow of the National Academy of Sciences in 2009, one of only two living philosophers to be so honored (the other being Brian Skyrms). He is also a Fellow of the Econometric Society, and has received Fellowships from the National Endowment for the Humanities. He served as President of the Central Division of the American Philosophical Association from 2001 to 2002. He gave the Tanner Lectures at the University of California, Berkeley, in 2006. Philosophical work Social choice theory
https://en.wikipedia.org/wiki/Cytochalasin%20E
Cytochalasin E, a member of the cytochalasin group, is an inhibitor of actin polymerization in blood platelets. It inhibits angiogenesis and tumor growth. Unlike cytochalasin A and cytochalasin B, it does not inhibit glucose transport. Cytochalasin E was, however, noted to decrease glucose absorption in mouse intestinal tissue by increasing the Km needed for glucose to reach Vmax, meaning that a higher concentration of glucose was required in its presence to attain Vmax. Since Vmax remained the same according to another study, it is evident that CE is indeed a competitive inhibitor at the intestinal receptor sites for glucose. Because of its antiangiogenic effect, cytochalasin E is a potential drug for age-related macular degeneration, a kind of blindness caused by an abnormal proliferation of blood vessels in the eye. Cytochalasin E was found to be a potent and selective inhibitor of bovine capillary endothelial (BCE) cell proliferation. Cytochalasin E differs from other cytochalasin molecules by having an epoxide, which is required for its specificity and potency. Cytochalasin E is a potent antiangiogenic agent that may be useful for treatments of cancer and other pathologic angiogenesis. Cytochalasin E was also found to inhibit autophagy, a process vital in recycling dysfunctional cells and cellular components. Cancer cells thus favor autophagy due to its role in countering the metabolic stresses induced by anti-cancer drugs such as bortezomib, allowing them to regenerate healthier cancer cells for continued proliferation and growth. In one study, it was confirmed that when CE was used, fusion of autophagosomes with lysosomes was inhibited, so cell death due to bortezomib, a proteasome inhibitor, was amplified: unnecessary proteins continued to build up inside cancer cells, unable to be recycled through autophagy, leading to apoptosis.
https://en.wikipedia.org/wiki/Karak%20%28mascot%29
Karak was the mascot for the 2006 Commonwealth Games. He was modelled on a red-tailed black cockatoo, a threatened species within the host country, Australia. His biography, according to Commonwealth Games organisers: Comes from a long line of squawkers. His Mum nested at an early age and foraged for the family. His Gran was famous in the area for her seed cakes. He has two brothers who were well-known badminton shuttlecocks, and a sister who passed her school exams with flying colours! Four years at Treetops College studying Australian Endangered Species. Ran the Uni Sports Society. Apparently egged the principal's car during Orientation Week but nothing's ever been proven. Despite his initial acceptance by Australians, particularly children, and despite appearing on much of the foreign-made merchandise, Karak was noticeably absent from the Games, particularly the Opening and Closing Ceremonies, where he was inexplicably replaced by a white duck. See also List of Australian sporting mascots List of Commonwealth Games mascots Borobi (mascot) Clyde (mascot) Matilda (mascot) Shera (mascot)
https://en.wikipedia.org/wiki/Thyroid%20function%20tests
Thyroid function tests (TFTs) is a collective term for blood tests used to check the function of the thyroid. TFTs may be requested if a patient is thought to suffer from hyperthyroidism (overactive thyroid) or hypothyroidism (underactive thyroid), or to monitor the effectiveness of either thyroid-suppression or hormone replacement therapy. It is also requested routinely in conditions linked to thyroid disease, such as atrial fibrillation and anxiety disorder. A TFT panel typically includes thyroid hormones such as thyroid-stimulating hormone (TSH, thyrotropin) and thyroxine (T4), and triiodothyronine (T3) depending on local laboratory policy. Thyroid-stimulating hormone Thyroid-stimulating hormone (TSH, thyrotropin) is generally increased in hypothyroidism and decreased in hyperthyroidism, making it the most important test for early detection of both of these conditions. The result of this assay is suggestive of the presence and cause of thyroid disease, since a measurement of elevated TSH generally indicates hypothyroidism, while a measurement of low TSH generally indicates hyperthyroidism. However, when TSH is measured by itself, it can yield misleading results, so additional thyroid function tests must be compared with the result of this test for accurate diagnosis. TSH is produced in the pituitary gland. The production of TSH is controlled by thyrotropin-releasing hormone (TRH), which is produced in the hypothalamus. TSH levels may be suppressed by excess free T3 (fT3) or free T4 (fT4) in the blood. History First-generation TSH assays were done by radioimmunoassay and were introduced in 1965. There were variations and improvements upon TSH radioimmunoassay, but their use declined as a new immunometric assay technique became available in the middle of the 1980s. The new techniques were more accurate, leading to the second, third, and even fourth generations of TSH assay, with each generation possessing ten times greater functional sensitivity than the last.
https://en.wikipedia.org/wiki/Q%20%28emulator%29
Q is free emulator software that runs on Mac OS X, including OS X on PowerPC. Q is Mike Kronenberg's port of the open-source generic processor emulator QEMU. Q uses Cocoa and other Apple technologies, such as Core Image and Core Audio, to achieve its emulation. Q can be used to run Windows, or any other operating system based on the x86 architecture, on the Macintosh. Q is available as a Universal Binary and, as such, can run on Intel or PowerPC based Macintosh systems. However, some target guest architectures, such as SPARC, MIPS, ARM and x86_64, are unsupported on Lion (due to the removal of Rosetta), since their softmmu binaries are PowerPC-only. Unlike QEMU, which is a command-line application, Q has a native graphical interface for managing and configuring virtual machines. As of June 2022, the project was "on hold". See also qcow Comparison of platform virtualization software SPIM Emulator QEMU
https://en.wikipedia.org/wiki/Multiplicity%20of%20infection
In microbiology, the multiplicity of infection or MOI is the ratio of agents (e.g. phage or more generally virus, bacteria) to infection targets (e.g. cell). For example, when referring to a group of cells inoculated with virus particles, the MOI is the ratio of the number of virus particles to the number of target cells present in a defined space. Interpretation The actual number of viruses or bacteria that will enter any given cell is a stochastic process: some cells may absorb more than one infectious agent, while others may not absorb any. Before determining the multiplicity of infection, it's absolutely necessary to have a well-isolated agent, as crude agents may not produce reliable and reproducible results. The probability that a cell will absorb n virus particles or bacteria when inoculated with an MOI of m can be calculated for a given population using a Poisson distribution. This application of the Poisson distribution was applied and described by Ellis and Delbrück. P(n) = m^n · e^(−m) / n! where m is the multiplicity of infection or MOI, n is the number of infectious agents that enter the infection target, and P(n) is the probability that an infection target (a cell) will get infected by n infectious agents. In fact, the infectivity of the virus or bacteria in question will alter this relationship. One way around this is to use a functional definition of infectious particles rather than a strict count, such as a plaque forming unit for viruses. For example, when an MOI of 1 (1 infectious viral particle per cell) is used to infect a population of cells, the probability that a cell will not get infected is P(0) = e^(−1) ≈ 36.79%, and the probability that it be infected by a single particle is P(1) ≈ 36.79%, by two particles is P(2) ≈ 18.39%, by three particles is P(3) ≈ 6.13%, and so on. The average percentage of cells that will become infected as a result of inoculation with a given MOI can be obtained by realizing that it is simply P(n > 0) = 1 − P(0). Hence, the average fraction of cells that will become infected following an inoculation with an MOI of m is given by 1 − P(0) = 1 − e^(−m).
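A minimal Python sketch of the Poisson bookkeeping above (the function names are illustrative; only the standard math module is used):

from math import exp, factorial

def poisson_pmf(n: int, moi: float) -> float:
    # Probability that a cell absorbs exactly n infectious agents at the given MOI.
    return moi**n * exp(-moi) / factorial(n)

def infected_fraction(moi: float) -> float:
    # Average fraction of cells infected by at least one agent: 1 - P(0).
    return 1.0 - poisson_pmf(0, moi)

for n in range(4):
    print(f"P({n}) at MOI 1: {poisson_pmf(n, 1.0):.4f}")
print(f"Infected fraction at MOI 1: {infected_fraction(1.0):.4f}")  # ~0.6321
print(f"Infected fraction at MOI 3: {infected_fraction(3.0):.4f}")  # ~0.9502

At MOI 3, for instance, about 95% of cells are expected to take up at least one particle, which is why MOIs of several particles per cell are often chosen when near-complete infection is wanted.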
https://en.wikipedia.org/wiki/Meat%20raffle
A meat raffle is a tradition of raffling off meat, often in pubs and bars in Australia, in some areas of Britain and the US, and in Western Canada. A meat raffle is also sometimes called a meat draw. In some cases the raffle is operated by a designated charity, though in Britain most of the proceeds are spent on prizes and the raffle is run as a social occasion and a method of enticing customers into a local pub. The meat ranges in animal and cut and often comes from local butchers. In the UK, a typical meat raffle would have approximately 25–30 tickets sold at £1 each, though there is considerable variation and some raffles are much larger. Depending on the specific raffle, when a winning number is called the winner can either pick their cut of meat or opt for a gift certificate. Also simply known as a meat tray, the tradition is well known in Australian and New Zealand pubs. The trays of meat raffled vary in content: a barbecue-style mix of steaks, lamb chops, sausages etc. is the most common; however, "breakfast trays" (bacon, eggs, sausages) and "seafood trays" (prawns, oysters, mussels) are also common. Meat trays are usually raffled to raise money for local sporting teams, often those associated with the particular pub the raffle occurs in. The proceeds often help fund the team's end-of-season trip. Care must be taken with seafood trays given the propensity for the contents to spoil in the heat as the lucky winner continues drinking; often a friendly publican will store the tray in the fridge until the winner is sufficiently refreshed and ready to head home. An Australian variant of the meat raffle is the chook raffle, where a chicken is raffled off. A Canadian variant popular primarily in the Sudbury area is "Porketta Bingo", in which a traditional Italian porchetta is given as a prize in a card game as a fundraiser for local minor hockey leagues. Part of the interval at the BDO World Darts Championship included a meat raffle to raise funds for youth team
https://en.wikipedia.org/wiki/Concurrent%20constraint%20logic%20programming
Concurrent constraint logic programming is a version of constraint logic programming aimed primarily at programming concurrent processes rather than (or in addition to) solving constraint satisfaction problems. Goals in constraint logic programming are evaluated concurrently; a concurrent process is therefore programmed as the evaluation of a goal by the interpreter. Syntactically, concurrent constraint logic programs are similar to non-concurrent programs, the only exception being that clauses include guards, which are constraints that may block the applicability of the clause under some conditions. Semantically, concurrent constraint logic programming differs from its non-concurrent versions because a goal evaluation is intended to realize a concurrent process rather than finding a solution to a problem. Most notably, this difference affects how the interpreter behaves when more than one clause is applicable: non-concurrent constraint logic programming recursively tries all clauses; concurrent constraint logic programming chooses only one. This is the most evident effect of an intended directionality of the interpreter, which never revises a choice it has previously taken. Other effects of this are the semantic possibility of having a goal that cannot be proved while the whole evaluation does not fail, and a particular way of equating a goal and a clause head. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are used for programming a constraint simplifier or solver rather than concurrent processes. Description In constraint logic programming, the goals in the current goal are evaluated sequentially, usually proceeding in a LIFO order in which newer goals are evaluated first. The concurrent version of logic programming allows for evaluating goals in parallel: every goal is evaluated by a process, and processes run concurrently. These processes interact via the constraint store: a process can add a constraint to
https://en.wikipedia.org/wiki/Vertebral%20vein
The vertebral vein is formed in the suboccipital triangle, from numerous small tributaries which spring from the internal vertebral venous plexuses and issue from the vertebral canal above the posterior arch of the atlas. They unite with small veins from the deep muscles at the upper part of the back of the neck, and form a vessel which enters the foramen in the transverse process of the atlas, and descends, forming a dense plexus around the vertebral artery, in the canal formed by the transverse foramina of the upper six cervical vertebrae. This plexus ends in a single trunk, which emerges from the transverse foramina of the sixth cervical vertebra, and opens at the root of the neck into the back part of the innominate vein near its origin, its mouth being guarded by a pair of valves. On the right side, it crosses the first part of the subclavian artery. Additional images
https://en.wikipedia.org/wiki/Anterior%20jugular%20vein
The anterior jugular vein is a vein in the neck. Structure The anterior jugular vein lies lateral to the cricothyroid ligament. It begins near the hyoid bone by the confluence of several superficial veins from the submandibular region. Its tributaries are some laryngeal veins, and occasionally a small thyroid vein. It descends between the median line and the anterior border of the sternocleidomastoid muscle, and, at the lower part of the neck, passes beneath that muscle to open into the termination of the external jugular vein, or, in some instances, into the subclavian vein. Just above the sternum the two anterior jugular veins communicate by a transverse trunk, the venous jugular arch, which receives tributaries from the inferior thyroid veins; each also communicates with the internal jugular. There are no valves in this vein. The pretracheal lymph nodes follow the anterior jugular vein on each side of the midline. Variation The anterior jugular vein varies considerably in size, bearing usually an inverse proportion to the external jugular. Most frequently, there are two anterior jugulars, a right and left. However, there is sometimes only one. A duplicate anterior jugular vein may be present on one side, which may cross over the midline. Clinical significance Ultrasound The anterior jugular vein, if present, is easily identified using ultrasound of the neck. Tracheotomy The anterior jugular vein may be damaged during tracheotomy, causing significant bleeding. The significant variation in vein course, such as duplicate veins, creates this risk. Performing a midline incision helps to avoid the anterior jugular vein. Additional images
https://en.wikipedia.org/wiki/Posterior%20external%20jugular%20vein
The posterior external jugular vein begins in the occipital region and returns the blood from the skin and superficial muscles in the upper and back part of the neck, lying between the Splenius and Trapezius. It runs down the back part of the neck, and opens into the external jugular vein just below the middle of its course. See also jugular vein
https://en.wikipedia.org/wiki/Cre%20recombinase
Cre recombinase is a tyrosine recombinase enzyme derived from the P1 bacteriophage. The enzyme uses a topoisomerase I-like mechanism to carry out site-specific recombination events. The enzyme (38 kDa) is a member of the integrase family of site-specific recombinases and is known to catalyse the site-specific recombination event between two DNA recognition sites (loxP sites). This 34 base pair (bp) loxP recognition site consists of two 13 bp palindromic sequences which flank an 8 bp spacer region. The products of Cre-mediated recombination at loxP sites depend upon the location and relative orientation of the loxP sites. Two separate DNA species both containing loxP sites can undergo fusion as the result of Cre-mediated recombination. DNA sequences found between two loxP sites are said to be "floxed". In this case the products of Cre-mediated recombination depend upon the orientation of the loxP sites. DNA found between two loxP sites oriented in the same direction will be excised as a circular loop of DNA, whilst intervening DNA between two loxP sites that are opposingly orientated will be inverted. The enzyme requires no additional cofactors (such as ATP) or accessory proteins for its function. The enzyme plays important roles in the life cycle of the P1 bacteriophage, such as cyclization of the linear genome and resolution of dimeric chromosomes that form after DNA replication. Cre recombinase is a widely used tool in the field of molecular biology. The enzyme's unique and specific recombination system is exploited to manipulate genes and chromosomes in a huge range of research, such as gene knock-out or knock-in studies. The enzyme's ability to operate efficiently in a wide range of cellular environments (including mammals, plants, bacteria, and yeast) enables the Cre-Lox recombination system to be used in a vast number of organisms, making it a particularly useful tool in scientific research. Discovery Studies carried out in 1981 by Sternberg and Ham
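As an illustration of the orientation rule described above, the canonical 34 bp loxP sequence can be scanned for in a DNA string with a few lines of Python (a toy sketch: the sequence constant is the canonical loxP site, everything else is hypothetical):

LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # 13 bp repeat + 8 bp spacer + 13 bp repeat

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_loxp(dna):
    # Return (position, strand) pairs; the asymmetric spacer gives each site a direction.
    hits = []
    for strand, probe in (("+", LOXP), ("-", revcomp(LOXP))):
        start = dna.find(probe)
        while start != -1:
            hits.append((start, strand))
            start = dna.find(probe, start + 1)
    return sorted(hits)

dna = "GGGG" + LOXP + "AAAATTTTCCCC" + LOXP + "GGGG"
sites = find_loxp(dna)
same_direction = len({strand for _, strand in sites}) == 1
print(sites, "excision" if same_direction else "inversion")

Two sites found on the same strand predict excision of the intervening DNA as a circle; sites on opposite strands predict inversion, mirroring the behaviour described above.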
https://en.wikipedia.org/wiki/Pseudodiploid
Pseudodiploid or pseudoploid refers to one of the essential components in viral reproduction. It means having two RNA genomes per virion but giving rise to only one DNA copy in infected cells. The term is also used to refer to cells that are diploid, but have chromosomal translocations. Overview Retrovirions, for example, are considered pseudoploid – they have two genomes within each capsid, but in general only one provirus is seen after infection with a single virion. Retrovirus particles contain two copies of the RNA genome held together by multiple regions of base pairing (with the strongest pairing at the 5' ends), also called the 70S complex (a dimer of 35S genomes). This gives viruses evolutionary advantages, such as the ability to survive extensive damage to their genomes, as at least parts of both genomes are used during the reverse transcription process. It also explains the high rates of genetic recombination in retroviruses. The retroviral genome is coated by a viral nucleocapsid protein that may function like a single-strand binding protein and therefore enhances processivity and facilitates template exchanges. The nucleocapsid first organizes RNA genomes within the virion and then facilitates reverse transcription within the infected cell.
https://en.wikipedia.org/wiki/Geisenheim%20Grape%20Breeding%20Institute
The Geisenheim Grape Breeding Institute was founded in 1872 and is located in the town of Geisenheim, in Germany's Rheingau region. In 1876 Swiss-born professor Hermann Müller joined the institute, where he developed his namesake grape variety Müller-Thurgau, which became Germany's most-planted grape variety in the 1970s. Professor Helmut Becker worked at the institute from 1964 until his death in 1989. Academic Grade Geisenheim is the only German institution to award higher academic degrees in winemaking. Formally, the undergraduate programme in viticulture and enology, ending with a bachelor's degree in engineering, is awarded by the University of Applied Sciences in Wiesbaden, and the newly introduced master's degree is awarded by the Giessen University. Breeds White: Müller-Thurgau, Arnsburger, Ehrenfelser, Saphira, Reichensteiner, Ehrenbreitsteiner, Prinzipal, Osteiner, Witberger, Schönburger, Primera, Rabaner, Hibernal Red: Rotberger, Dakapo Improvements: Rondo, Orléans, Dunkelfelder See also Wine German wine Geilweilerhof Institute for Grape Breeding
https://en.wikipedia.org/wiki/Transistor%20count
The transistor count is the number of transistors in an electronic device (typically on a single substrate or "chip"). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years. However, being directly proportional to the area of a chip, transistor count does not represent how advanced the corresponding manufacturing technology is: a better indication of this is the transistor density (the ratio of a chip's transistor count to its area). As of 2023, the highest transistor count in flash memory is Micron's 2-terabyte (3D-stacked) 16-die, 232-layer V-NAND flash memory chip, with 5.3 trillion floating-gate MOSFETs (3 bits per transistor). The highest transistor count in a single-chip processor is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6 trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process. As of 2023, the GPU with the highest transistor count is AMD's MI300X, built on TSMC's N5 process and totalling 153 billion MOSFETs. The highest transistor count in a consumer microprocessor is 134 billion transistors, in Apple's ARM-based dual-die M2 Ultra system on a chip, which is fabricated using TSMC's 5 nm semiconductor manufacturing process. In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined "about 400 trillion transistors in the processing part of the hardware" and "the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors." To compare, the smallest comp
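Moore's law as stated above is a simple exponential, which a toy Python calculation makes concrete (the starting count is taken from the M2 Ultra figure above; the projection is purely illustrative, not a prediction):

def projected_transistors(count0, years, doubling_period=2.0):
    # Count doubles every `doubling_period` years.
    return count0 * 2 ** (years / doubling_period)

print(f"{projected_transistors(134e9, 4):.3e}")  # four years of doubling: ~5.4e11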
https://en.wikipedia.org/wiki/Wilson%20current%20mirror
A Wilson current mirror is a three-terminal circuit (Fig. 1) that accepts an input current at the input terminal and provides a "mirrored" current source or sink output at the output terminal. The mirrored current is a precise copy of the input current. It may be used as a Wilson current source by applying a constant bias current to the input branch as in Fig. 2. The circuit is named after George R. Wilson, an integrated circuit design engineer who worked for Tektronix. Wilson devised this configuration in 1967 when he and Barrie Gilbert challenged each other to find an improved current mirror overnight that would use only three transistors. Wilson won the challenge. Circuit operation There are three principal metrics of how well a current mirror will perform as part of a larger circuit. The first measure is the static error, the difference between the input and output currents expressed as a fraction of the input current. Minimizing this difference is critical in such applications of a current mirror as the differential to single-ended output signal conversion in a differential amplifier stage because this difference controls the common mode and power supply rejection ratios. The second measure is the output impedance of the current source or equivalently its inverse, the output conductance. This impedance affects stage gain when a current source is used as an active load and affects common mode gain when the source provides the tail current of a differential pair. The last metric is the pair of minimum voltages from the common terminal, usually a power rail connection, to the input and output terminals that are required for proper operation of the circuit. These voltages affect the headroom to the power supply rails that is available for the circuitry in which the current mirror is embedded. An approximate analysis due to Gilbert shows how the Wilson current mirror works and why its static error should be very low. Transistors Q1 and Q2 in Fig. 1 are a mat
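The static error of a simple two-transistor mirror is approximately 2/β, because two base currents are drawn from the input branch; the commonly quoted textbook approximation for the Wilson configuration is approximately 2/β². A quick numeric comparison in Python, assuming these standard approximations (not taken from the article above):

def simple_mirror_error(beta):
    # Simple two-transistor mirror: error ~ 2 / (beta + 2)
    return 2.0 / (beta + 2.0)

def wilson_mirror_error(beta):
    # Wilson mirror, textbook approximation: error ~ 2 / (beta^2 + 2*beta + 2)
    return 2.0 / (beta**2 + 2.0 * beta + 2.0)

for beta in (50, 100, 200):
    print(beta, f"{simple_mirror_error(beta):.3%}", f"{wilson_mirror_error(beta):.5%}")
# At beta = 100: about 2% static error for the simple mirror vs about 0.02% for the Wilson.

On these numbers the Wilson mirror's systematic error is usually negligible next to device mismatch, which is consistent with the "very low static error" claim above.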
https://en.wikipedia.org/wiki/Density%20gradient
Density gradient is a spatial variation in density over an area. The term is used in the natural sciences to describe varying density of matter, but can apply to any quantity whose density can be measured. Aerodynamics In the study of supersonic flight, Schlieren photography observes the density gradient of air as it interacts with aircraft. In the field of computational fluid dynamics, the density gradient is likewise used to observe acoustic waves, shock waves or expansion waves in the flow field. Water A steep density gradient in a body of water can have the effect of trapping energy and preventing convection; such a gradient is employed in solar ponds. In the case of salt water, sharp gradients can lead to stratification of different concentrations of salinity. This is called a halocline. Biology In the life sciences, a special technique called density gradient separation is used for isolating and purifying cells, viruses and subcellular particles. Variations of this include isopycnic centrifugation, differential centrifugation, and sucrose gradient centrifugation. A blood donation technique called apheresis involves density gradient separation. Geophysics Understanding what lies at the centre of the Earth, the Earth's core, requires the framework of density gradients in which elements and compounds then interact. The existence of a fast breeder nuclear reactor system at the core of the Earth is one theory motivated by density-gradient reasoning. A popular model for the density gradient of Earth is one given by the Preliminary Reference Earth Model (PREM) that is based on the observed oscillations of the planet and the travel times of thousands of seismic waves. Several models for density gradient have been built on the basis of PREM but they acknowledge that the "distribution of density as a function of position within the Earth is much less well constrained than the seismic velocities. The primary information comes from the mass and moment of inertia of the Earth
https://en.wikipedia.org/wiki/Intercavernous%20sinuses
The intercavernous sinuses are two in number, an anterior and a posterior, and connect the two cavernous sinuses across the middle line. The anterior passes in front of the hypophysis cerebri (pituitary gland), the posterior behind it, and they form with the cavernous sinuses a venous circle (circular sinus) around the hypophysis. The anterior one is usually the larger of the two, and one or other is occasionally absent.
https://en.wikipedia.org/wiki/DenyHosts
DenyHosts is a log-based intrusion-prevention security tool for SSH servers written in Python. It is intended to prevent brute-force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses. DenyHosts is developed by Phil Schwartz, who is also the developer of the Kodos Python Regular Expression Debugger. Operation DenyHosts checks the end of the authentication log for recent failed login attempts. It records information about their originating IP addresses and compares the number of invalid attempts to a user-specified threshold. If there have been too many invalid attempts, it assumes a dictionary attack is occurring and prevents the IP address from making any further attempts by adding it to /etc/hosts.deny on the server. DenyHosts 2.0 and above support centralized synchronization, so that repeat offenders are blocked from many computers. The site denyhosts.net gathers statistics from computers running the software. DenyHosts is restricted to connections using IPv4. It does not work with IPv6. DenyHosts may be run manually, as a daemon, or as a cron job. Discoveries In July 2007, The Register reported that from May until July that year, "compromised computers" at Oracle UK were listed among the ten worst offenders for launching brute-force SSH attacks on the Internet, according to public DenyHosts listings. After an investigation, Oracle denied suggestions that any of its computers had been compromised. Vulnerabilities Daniel B. Cid wrote a paper showing that DenyHosts, as well as the similar programs Fail2ban and BlockHosts, were vulnerable to remote log injection, an attack technique similar to SQL injection, in which a specially crafted user name is used to trigger a block against a site chosen by the attacker. This was fixed in version 2.6. Forks and descendants Since there had been no further development by the original author Phil Schwartz after the release of version 2.6 (December 2006) a
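The operation described above can be sketched in a few lines of Python; this is an illustrative reimplementation, not DenyHosts' actual code (the regex matches the standard OpenSSH "Failed password" log line; the threshold value is an arbitrary example):

import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)")

def offending_ips(log_lines, threshold=5):
    # Count failed SSH logins per IPv4 address; return those at or over the threshold.
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]

def block(ips, hosts_deny="/etc/hosts.deny"):
    # Append TCP-wrapper deny rules of the form "sshd: a.b.c.d", as DenyHosts does.
    with open(hosts_deny, "a") as f:
        for ip in ips:
            f.write(f"sshd: {ip}\n")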
https://en.wikipedia.org/wiki/Transverse%20sinuses
The transverse sinuses (left and right lateral sinuses), within the human head, are two areas beneath the brain which allow blood to drain from the back of the head. They run laterally in a groove along the interior surface of the occipital bone. They drain from the confluence of sinuses (by the internal occipital protuberance) to the sigmoid sinuses, which ultimately connect to the internal jugular vein. See diagram (at right): labeled under the brain as "" (for Latin: sinus transversus). Structure The transverse sinuses are of large size and begin at the internal occipital protuberance; one, generally the right, being the direct continuation of the superior sagittal sinus, the other of the straight sinus. Each transverse sinus passes lateral and forward, describing a slight curve with its convexity upward, to the base of the petrous portion of the temporal bone, and lies, in this part of its course, in the attached margin of the tentorium cerebelli; it then leaves the tentorium and curves downward and medialward (an area sometimes referred to as the sigmoid sinus) to reach the jugular foramen, where it ends in the internal jugular vein. In its course it rests upon the squama of the occipital, the mastoid angle of the parietal, the mastoid part of the temporal, and, just before its termination, the jugular process of the occipital; the portion which occupies the groove on the mastoid part of the temporal is sometimes termed the sigmoid sinus. The transverse sinuses are frequently of unequal size, with the one formed by the superior sagittal sinus being the larger; they increase in size as they proceed, from back to center. On transverse section, the horizontal portion exhibits a prismatic form, the curved portion has a semicylindrical form. They receive the blood from the superior petrosal sinuses at the base of the petrous portion of the temporal bone; they communicate with the veins of the pericranium by means of the mastoid and condyloid emissary veins;
https://en.wikipedia.org/wiki/Hajdu%E2%80%93Cheney%20syndrome
Hajdu–Cheney syndrome, also called acroosteolysis with osteoporosis and changes in skull and mandible, arthrodentoosteodysplasia and Cheney syndrome, is an extremely rare autosomal dominant congenital disorder of the connective tissue characterized by severe and excessive bone resorption leading to osteoporosis and a wide range of other possible symptoms. Mutations in the NOTCH2 gene, identified in 2011, cause HCS. HCS is so rare that only about 50 cases have been reported worldwide since the discovery of the syndrome in 1948. Signs and symptoms Hajdu–Cheney syndrome causes many issues with an individual's connective tissues. Some general characteristics of an individual with Hajdu–Cheney syndrome include bone flexibility and deformities, short stature, delayed acquisition of speech and motor skills, dolichocephalic skull, Wormian bones, small maxilla, hypoplastic frontal sinuses, basilar impression, joint laxity, bulbous fingertips and severe osteoporosis. Wormian bones occur when extra bones appear between cranial sutures. Fetuses with Hajdu–Cheney syndrome often will not be seen to unclench their hands on obstetrical ultrasound. They may also have low-set ears and their eyes may be farther apart than on a usual child, called hypertelorism. Children's heads can have some deformities in their shape and size (plagiocephaly). Early tooth loss and bone deformities, such as serpentine tibiae and fibulae, are also common in those affected. Genetics Hajdu–Cheney syndrome is a monogenic disorder: the disorder is inherited and controlled by a single pair of genes. A single copy of the mutant gene on an autosome causes HCS. Since HCS is an autosomal dominant disorder, only one parent with the defective gene is needed to pass the disorder to the offspring. Mutations within the last coding exon of NOTCH2 that remove the PEST domain and escape nonsense-mediated mRNA decay have been shown to be the main cause of Hajdu–Cheney syndrome. The NOTCH2 gene plays a very important rol
https://en.wikipedia.org/wiki/Gresham%20Professor%20of%20Geometry
The Professor of Geometry at Gresham College, London, gives free educational lectures to the general public. The college was founded for this purpose in 1597, when it appointed seven professors; this has since increased to ten and in addition the college now has visiting professors. The Professor of Geometry is always appointed by the City of London Corporation. List of Gresham Professors of Geometry Note, years given as, say, 1596/7 refer to Old Style and New Style dates.
https://en.wikipedia.org/wiki/Inscribed%20figure
In geometry, an inscribed planar shape or solid is one that is enclosed by and "fits snugly" inside another geometric shape or solid. To say that "figure F is inscribed in figure G" means precisely the same thing as "figure G is circumscribed about figure F". A circle or ellipse inscribed in a convex polygon (or a sphere or ellipsoid inscribed in a convex polyhedron) is tangent to every side or face of the outer figure (but see Inscribed sphere for semantic variants). A polygon inscribed in a circle, ellipse, or polygon (or a polyhedron inscribed in a sphere, ellipsoid, or polyhedron) has each vertex on the outer figure; if the outer figure is a polygon or polyhedron, there must be a vertex of the inscribed polygon or polyhedron on each side of the outer figure. An inscribed figure is not necessarily unique in orientation; this can easily be seen, for example, when the given outer figure is a circle, in which case a rotation of an inscribed figure gives another inscribed figure that is congruent to the original one. Familiar examples of inscribed figures include circles inscribed in triangles or regular polygons, and triangles or regular polygons inscribed in circles. A circle inscribed in any polygon is called its incircle, in which case the polygon is said to be a tangential polygon. A polygon inscribed in a circle is said to be a cyclic polygon, and the circle is said to be its circumscribed circle or circumcircle. The inradius or filling radius of a given outer figure is the radius of the inscribed circle or sphere, if it exists. The definition given above assumes that the objects concerned are embedded in two- or three-dimensional Euclidean space, but can easily be generalized to higher dimensions and other metric spaces. For an alternative usage of the term "inscribed", see the inscribed square problem, in which a square is considered to be inscribed in another figure (even a non-convex one) if all four of its vertices are on that figure. Properties Ever
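For a triangle, the inradius mentioned above follows directly from area and semiperimeter, r = A/s; a small Python sketch using Heron's formula:

from math import sqrt

def inradius(a, b, c):
    # r = area / semiperimeter, with the area from Heron's formula.
    s = (a + b + c) / 2.0
    area = sqrt(s * (s - a) * (s - b) * (s - c))
    return area / s

print(inradius(3, 4, 5))  # 1.0 - the 3-4-5 right triangle has a unit incircle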
https://en.wikipedia.org/wiki/Exchange%20interaction
In chemistry and physics, the exchange interaction or exchange splitting (with an exchange energy and exchange term) is a quantum mechanical effect that only occurs between identical particles. Despite sometimes being called an exchange force in an analogy to classical force, it is not a true force as it lacks a force carrier. The effect is due to the wave function of indistinguishable particles being subject to exchange symmetry, that is, either remaining unchanged (symmetric) or changing sign (antisymmetric) when two particles are exchanged. Both bosons and fermions can experience the exchange interaction. For fermions, this interaction is sometimes called Pauli repulsion and is related to the Pauli exclusion principle. For bosons, the exchange interaction takes the form of an effective attraction that causes identical particles to be found closer together, as in Bose–Einstein condensation. The exchange interaction alters the expectation value of the distance when the wave functions of two or more indistinguishable particles overlap. This interaction increases (for fermions) or decreases (for bosons) the expectation value of the distance between identical particles (compared to distinguishable particles). Among other consequences, the exchange interaction is responsible for ferromagnetism and the volume of matter. It has no classical analogue. Exchange interaction effects were discovered independently by physicists Werner Heisenberg and Paul Dirac in 1926. "Force" description The exchange interaction is sometimes called the exchange force. However, it is not a true force and should not be confused with the exchange forces produced by the exchange of force carriers, such as the electromagnetic force produced between two electrons by the exchange of a photon, or the strong force between two quarks produced by the exchange of a gluon. Although sometimes erroneously described as a force, the exchange interaction is a purely quantum mechanical effect unlike other
https://en.wikipedia.org/wiki/Nominal%20type%20system
In computer science, a type system is nominal (also called nominative or name-based) if compatibility and equivalence of data types is determined by explicit declarations and/or the name of the types. Nominal systems are used to determine if types are equivalent, as well as if a type is a subtype of another. Nominal type systems contrast with structural systems, where comparisons are based on the structure of the types in question and do not require explicit declarations. Nominal typing Nominal typing means that two variables are type-compatible if and only if their declarations name the same type. For example, in C, two struct types with different names in the same translation unit are never considered compatible, even if they have identical field declarations. However, C also allows a typedef declaration, which introduces an alias for an existing type. These are merely syntactical and do not differentiate the type from its alias for the purpose of type checking. This feature, present in many languages, can result in a loss of type safety when (for example) the same primitive integer type is used in two semantically distinct ways. Haskell provides the C-style syntactic alias in the form of the type declaration, as well as the newtype declaration that does introduce a new, distinct type, isomorphic to an existing type. Nominal subtyping In a similar fashion, nominal subtyping means that one type is a subtype of another if and only if it is explicitly declared to be so in its definition. Nominally-typed languages typically enforce the requirement that declared subtypes be structurally compatible (though Eiffel allows non-compatible subtypes to be declared). However, subtypes which are structurally compatible "by accident", but not declared as subtypes, are not considered to be subtypes. C++, C#, Java, Objective-C, Delphi, Swift, Julia and Rust all primarily use both nominal typing and nominal subtyping. Some nominally-subtyped languages, such as Java and C#,
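Python's class system is nominal in exactly this sense: two structurally identical classes with different names are different types. A small sketch (the class names are invented for illustration):

class Celsius(float):
    # A distinct nominal type, even though its structure is just a float.
    pass

class Fahrenheit(float):
    # Structurally identical to Celsius, but a different name, hence a different type.
    pass

def set_thermostat(temp: Celsius) -> None:
    print(f"set to {float(temp)} C")

c, f = Celsius(21.0), Fahrenheit(70.0)
print(isinstance(c, Celsius))  # True
print(isinstance(f, Celsius))  # False: same structure, different declared type
# A nominal static checker such as mypy rejects set_thermostat(f) on the same grounds.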
https://en.wikipedia.org/wiki/Structural%20type%20system
A structural type system (or property-based type system) is a major class of type systems in which type compatibility and equivalence are determined by the type's actual structure or definition and not by other characteristics such as its name or place of declaration. Structural systems are used to determine if types are equivalent and whether a type is a subtype of another. It contrasts with nominative systems, where comparisons are based on the names of the types or explicit declarations, and duck typing, in which only the part of the structure accessed at runtime is checked for compatibility. Description In structural typing, an element is considered to be compatible with another if, for each feature within the second element's type, a corresponding and identical feature exists in the first element's type. Some languages may differ on the details, such as whether the features must match in name. This definition is not symmetric, and includes subtype compatibility. Two types are considered to be identical if each is compatible with the other. For example, OCaml uses structural typing on methods for compatibility of object types. Go uses structural typing on methods to determine compatibility of a type with an interface. C++ template functions exhibit structural typing on type arguments. Haxe uses structural typing, but classes are not structurally subtyped. In languages which support subtype polymorphism, a similar dichotomy can be formed based on how the subtype relationship is defined. One type is a subtype of another if and only if it contains all the features of the base type, or subtypes thereof. The subtype may contain added features, such as members not present in the base type, or stronger invariants. A distinction exists between structural substitution for inferred and non-inferred polymorphism. Some languages, such as Haskell, do not substitute structurally in the case where an expected type is declared (i.e., not inferred), e.g., only substitute
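Python's typing.Protocol retrofits structural compatibility onto an otherwise nominal language, which makes the contrast easy to see in one file (the names are invented for illustration):

from typing import Protocol

class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:  # declares no relationship whatsoever to Quacker
    def quack(self) -> str:
        return "quack"

def make_noise(q: Quacker) -> str:
    return q.quack()

# Duck satisfies Quacker structurally: it has a matching quack() method,
# even though it never names Quacker. A structural checker accepts this call.
print(make_noise(Duck()))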
https://en.wikipedia.org/wiki/International%20Conference%20on%20Communications
The International Conference on Communications (ICC) is an annual international academic conference organised by the Institute of Electrical and Electronics Engineers' Communications Society. The conference grew out of the Global Communications Conference (GLOBECOM) when, in 1965, the seventh GLOBECOM was sponsored by the Communications Society's predecessor as the "IEEE Communications Convention". The following year it adopted its current name and GLOBECOM was disbanded (it has since been revived). The conference was held in the United States until 1984, when it was held in Amsterdam; it has since been held in several other countries. Some major telecommunications discoveries have been announced at ICC, such as the invention of turbo codes. In fact, this groundbreaking paper had been submitted to ICC the previous year, but was rejected by the referees, who thought the results too good to be true. Recent ICCs have been attended by 2500–3000 people. Conferences
https://en.wikipedia.org/wiki/Atomic%20formula
In mathematical logic, an atomic formula (also known as an atom or a prime formula) is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives. The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, a propositional variable is often more briefly referred to as an "atomic formula", but, more precisely, a propositional variable is not an atomic formula but a formal expression that denotes an atomic formula. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formulas are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model. Atomic formula in first-order logic The well-formed terms and propositions of ordinary first-order logic have the following syntax: Terms: t ::= c | x | f(t1, …, tn), that is, a term is recursively defined to be a constant c (a named object from the domain of discourse), or a variable x (ranging over the objects in the domain of discourse), or an n-ary function f whose arguments are terms tk. Functions map tuples of objects to objects. Propositions: A, B, … ::= P(t1, …, tn) | A ∧ B | A ∨ B | ∀x. A | ∃x. A, that is, a proposition is recursively defined to be an n-ary predicate P whose arguments are terms tk, or an expression composed of logical connectives (and, or) and quantifiers (for-all, there-exists) used with other propositions. An atomic formula or atom is simply a predicate applied to a tuple of terms; that is, an atomic formula is a formula of the form P(t1, …, tn) for P a predicate and the tk terms. All other well-formed formulae are obtained by composing atoms with logical connectives and quantifiers. For example, the formula ∀x. P(x) ∧ ∃y.
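The term/atom distinction above can be made concrete as a small abstract syntax tree; a hypothetical Python sketch, not any standard library:

from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Const:
    name: str

@dataclass
class Func:  # an n-ary function symbol applied to terms
    name: str
    args: tuple

Term = Union[Var, Const, Func]

@dataclass
class Atom:  # a predicate applied to terms: an atomic formula
    predicate: str
    args: tuple

# P(f(x), c) is an atomic formula; its arguments are terms, never formulas.
print(Atom("P", (Func("f", (Var("x"),)), Const("c"))))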
https://en.wikipedia.org/wiki/Chamfered%20dodecahedron
In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron. It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-12 truncated rhombic triacontahedron because only the order-12 vertices are truncated. Structure These 12 order-5 vertices can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons. The hexagon faces can be equilateral but not regular, with D symmetry. The angles at the two vertices with vertex configuration 5.6.6 differ from those at the remaining four vertices with 6.6.6. It is the Goldberg polyhedron GP(2,0), containing pentagonal and hexagonal faces. It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes. Chemistry This is the shape of the fullerene C80; sometimes this shape is denoted C80(Ih) to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found to have a skeleton that can be isometrically embedded into an L1 space. Related polyhedra This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons. The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2). Chamfered truncated icosahedron In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces, 110 hexagon
https://en.wikipedia.org/wiki/Vitelline%20duct
In the human embryo, the vitelline duct, also known as the vitellointestinal duct, the yolk stalk, the omphaloenteric duct, or the omphalomesenteric duct, is a long narrow tube that joins the yolk sac to the midgut lumen of the developing fetus. It appears at the end of the fourth week, when the yolk sac (also known as the umbilical vesicle) presents the appearance of a small pear-shaped vesicle. Function Obliteration Generally, the duct fully obliterates (narrows and disappears) during the 5–6th week of fertilization age (9th week of gestational age), but a failure of the duct to close is termed a vitelline fistula. This results in discharge of meconium from the navel (umbilicus). About two percent of fetuses exhibit a type of vitelline fistula characterized by persistence of the proximal part of the vitelline duct as a diverticulum protruding from the small intestine, Meckel's diverticulum, which is typically situated within two feet of the ileocecal junction and may be attached by a fibrous cord to the abdominal wall at the umbilicus. Persistence The yolk sac can be seen in the afterbirth as a small, somewhat oval-shaped body, the diameter of which varies from 1 mm to 5 mm. It is situated between the amnion and the chorion and may lie on or at a varying distance from the placenta. Clinical significance Meckel's diverticulum Sometimes a narrowing of the lumen of the ileum is seen opposite the site of attachment of the duct. On this site of attachment, sometimes a pathological Meckel's diverticulum may be present. A mnemonic used to recall details of a Meckel's diverticulum is as follows: "2 inches long, within 2 feet of ileocecal valve, 2 times as common in males than females, 2% of population, 2% symptomatic, 2 types of ectopic tissue: gastric and pancreatic". In the decades since the mnemonic was developed, further epidemiology has found the incidence of symptomatic diverticulae to be 4%, not 2%, and the incidence to be 2–5x greater in males than females,
https://en.wikipedia.org/wiki/ATP-binding%20motif
An ATP-binding motif is a 250-residue sequence within an ATP-binding protein’s primary structure. The binding motif is associated with a protein’s structure and/or function. ATP is an energy-carrying molecule and can act as a coenzyme, involved in a number of biological reactions. ATP is proficient at interacting with other molecules through a binding site. The ATP binding site is the environment in which ATP catalytically activates the enzyme and, as a result, is hydrolyzed to ADP. The binding of ATP causes a conformational change to the enzyme it is interacting with. The genetic and functional similarity of such a motif demonstrates micro-evolution: proteins have co-opted the same binding sequence from other enzymes rather than developing them independently. ATP binding sites, which may be representative of an ATP binding motif, are present in many proteins which require an input of energy (from ATP), such sites as active membrane transporters, microtubule subunits, flagellum proteins, and various hydrolytic and proteolytic enzymes. Primary sequence The short motifs involving ATP-binding are the Walker motifs, Walker A, also known as the P-loop, and Walker B, as well as the C motif and switch motif. Walker A motif The Walker site A has a primary amino acid sequence of GXXXXGKT or GXXXXGKS. The letter X can represent any amino acid. Walker B motif The primary amino acid sequence of the Walker B site is hhhhDE, in which h represents any hydrophobic amino acid. C motif The C motif, also known as the signature motif, LSGGQ motif, or the linker peptide, has a primary amino acid sequence of LSGGQ. Due to the variety of different amino acids that can be used in the primary sequence of both the Walker site A and B, the non-variant amino acids within the sequence are highly conserved. A mutation of any of these amino acids will affect the binding of ATP or interfere with the catalytic activity of the enzyme. The primary amino acid sequence determines the three dimensional structure of each motif. Struct
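Candidate Walker A (P-loop) motifs can be flagged in a protein sequence with a regular expression built from the consensus above; a toy Python sketch (a serious motif search would use a profile or HMM, and the example sequence is invented):

import re

# Walker A consensus: G, four arbitrary residues, G, K, then T or S.
WALKER_A = re.compile(r"G.{4}GK[TS]")

def find_walker_a(protein):
    # Return (start, match) pairs, 0-indexed.
    return [(m.start(), m.group()) for m in WALKER_A.finditer(protein)]

seq = "MARLLKVAGPSGSGKSTLL"  # hypothetical ATPase-like fragment
print(find_walker_a(seq))  # [(8, 'GPSGSGKS')]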
https://en.wikipedia.org/wiki/Traverse%20%28surveying%29
Traverse is a method in the field of surveying to establish control networks. It is also used in geodesy. Traverse networks involve placing survey stations along a line or path of travel, and then using the previously surveyed points as a base for observing the next point. Traverse networks have many advantages, including: Less reconnaissance and organization are needed; while other systems may require the survey to be performed along a rigid polygon shape, the traverse can change to any shape and thus can accommodate a great deal of different terrains; Only a few observations need to be taken at each station, whereas in other survey networks a great deal of angular and linear observations need to be made and considered; Traverse networks are free of the strength-of-figure considerations that arise in triangular systems; Scale error does not add up as the traverse is performed, and azimuth swing errors can also be reduced by increasing the distance between stations. The traverse is more accurate than triangulateration (a combined function of the triangulation and trilateration practice). Types Frequently in surveying engineering and geodetic science, control points (CP) are established by observing distance and direction (bearings, angles, azimuths, and elevation). The CP throughout the control network may consist of monuments, benchmarks, vertical control, etc. There are mainly two types of traverse: Closed traverse: either originates from a station and returns to the same station, completing a circuit, or runs between two known stations; Open traverse: neither returns to its starting station, nor closes on any other known station. A compound traverse is an open traverse linked at its ends to an existing traverse to form a closed traverse. The closing line may be defined by coordinates at the end points wh
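The arithmetic of a closed traverse is easy to sketch: accumulate easting/northing changes leg by leg, and whatever residual remains at the end is the linear misclosure. An illustrative Python fragment (real reductions then distribute this error, e.g. by Bowditch adjustment):

from math import sin, cos, radians, hypot

def misclosure(legs):
    # legs: (bearing_degrees, distance) pairs measured around a closed loop.
    dE = dN = 0.0
    for bearing, dist in legs:
        b = radians(bearing)
        dE += dist * sin(b)  # easting change
        dN += dist * cos(b)  # northing change
    return hypot(dE, dN)   # how far the computed end point misses the start

legs = [(0, 100), (90, 100), (180, 100), (270, 100)]  # a perfect 100 m square
print(f"{misclosure(legs):.6f} m")  # 0.0 for error-free observations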
https://en.wikipedia.org/wiki/Hyperelastic%20material
A hyperelastic or Green elastic material is a type of constitutive model for ideally elastic material for which the stress–strain relationship derives from a strain energy density function. The hyperelastic material is a special case of a Cauchy elastic material. For many materials, linear elastic models do not accurately describe the observed material behaviour. The most common example of this kind of material is rubber, whose stress-strain relationship can be defined as non-linearly elastic, isotropic and incompressible. Hyperelasticity provides a means of modeling the stress–strain behavior of such materials. The behavior of unfilled, vulcanized elastomers often conforms closely to the hyperelastic ideal. Filled elastomers and biological tissues are also often modeled via the hyperelastic idealization. Ronald Rivlin and Melvin Mooney developed the first hyperelastic models, the Neo-Hookean and Mooney–Rivlin solids. Many other hyperelastic models have since been developed. Other widely used hyperelastic material models include the Ogden model and the Arruda–Boyce model. Hyperelastic material models Saint Venant–Kirchhoff model The simplest hyperelastic material model is the Saint Venant–Kirchhoff model which is just an extension of the geometrically linear elastic material model to the geometrically nonlinear regime. This model has the general form S = C : E and the isotropic form S = λ tr(E) I + 2μ E, respectively, where : is tensor contraction, S is the second Piola–Kirchhoff stress, C is a fourth order stiffness tensor and E is the Lagrangian Green strain given by E = ½(FᵀF − I), with F the deformation gradient. λ and μ are the Lamé constants, and I is the second order unit tensor. The strain-energy density function for the Saint Venant–Kirchhoff model is W(E) = (λ/2)[tr(E)]² + μ tr(E²), and the second Piola–Kirchhoff stress can be derived from the relation S = ∂W/∂E. Classification of hyperelastic material models Hyperelastic material models can be classified as: phenomenological descriptions of observed behavior Fung Mooney–Rivlin Ogden Polynomial Saint Venant–Kirchhoff
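A minimal numerical sketch of the Saint Venant–Kirchhoff relations above, assuming the standard definitions (F is the deformation gradient; the Lamé constants are arbitrary example values):

import numpy as np

def svk_stress(F, lam, mu):
    # E = (F^T F - I)/2, S = lam*tr(E)*I + 2*mu*E, W = lam/2*tr(E)^2 + mu*tr(E^2)
    I = np.eye(3)
    E = 0.5 * (F.T @ F - I)
    S = lam * np.trace(E) * I + 2.0 * mu * E
    W = 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)
    return S, W

F = np.diag([1.01, 1.0, 1.0])  # 1% uniaxial stretch along x
S, W = svk_stress(F, lam=1.0e9, mu=0.8e9)
print(np.round(S / 1e6, 3))  # second Piola-Kirchhoff stress components in MPa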
https://en.wikipedia.org/wiki/ECC%20memory
Error correction code memory (ECC memory) is a type of computer data storage that uses an error correction code (ECC) to detect and correct n-bit data corruption which occurs in memory. ECC memory is used in most computers where data corruption cannot be tolerated, like industrial control applications, critical databases, and infrastructural memory caches. Typically, ECC memory maintains a memory system immune to single-bit errors: the data that is read from each word is always the same as the data that had been written to it, even if one of the bits actually stored has been flipped to the wrong state. Most non-ECC memory cannot detect errors, although some non-ECC memory with parity support allows detection but not correction. Description Error correction codes protect against undetected data corruption and are used in computers where such corruption is unacceptable, examples being scientific and financial computing applications, or in database and file servers. ECC can also reduce the number of crashes in multi-user server applications and maximum-availability systems. Electrical or magnetic interference inside a computer system can cause a single bit of dynamic random-access memory (DRAM) to spontaneously flip to the opposite state. It was initially thought that this was mainly due to alpha particles emitted by contaminants in chip packaging material, but research has shown that the majority of one-off soft errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read or write to them. Hence, the error rates increase rapidly with rising altitude; for example, compared to sea level, the rate of neutron flux is 3.5 times higher at 1.5 km and 300 times higher at 10-12 km (the cruising altitude of commercial airplanes). As a result, systems operating at high altitudes require special provisions for reliability. A
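The correction mechanism can be illustrated with the classic Hamming(7,4) code; real ECC DIMMs use wider codes (commonly 72 bits storing 64) implemented in hardware, so this Python toy is only a sketch of the principle:

def encode(d):  # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def correct(c):  # c: 7-bit codeword with at most one flipped bit
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos  # XOR of the 1-based positions holding a 1
    if syndrome:             # a nonzero syndrome points at the flipped bit
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recover the data bits

word = [1, 0, 1, 1]
codeword = encode(word)
codeword[4] ^= 1  # simulate a single-bit soft error, e.g. from a cosmic-ray neutron
assert correct(codeword) == word
print("corrected:", word)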
https://en.wikipedia.org/wiki/Calogero%20conjecture
The Calogero conjecture is a minority interpretation of quantum mechanics. It is a quantization explanation involving quantum mechanics, originally stipulated in 1997 and further republished in 2004 by Francesco Calogero that suggests the classical stochastic background field to which Edward Nelson attributes quantum mechanical behavior in his theory of stochastic quantization is a fluctuating space-time, and that there are further mathematical relations between the involved quantities. The hypothesis itself suggests that if the angular momentum associated with a stochastic tremor with spatial coherence provides an action purported by that motion within the order of magnitude of Planck's constant then the order of magnitude of the associated angular momentum has the same value. Calogero himself suggests that these findings, originally based on the simplified model of the universe "are affected (and essentially, unaffected) by the possible presence in the mass of the Universe of a large component made up of particles much lighter than nucleons". Essentially, the relation explained by Calogero can be expressed with the formula: ħ ≈ α G^(1/2) m^(3/2) [R(t)]^(1/2) where: G represents the gravitational constant, m represents the mass of a hydrogen atom, R(t) represents the radius of the universe accessible by gravitational interactions in time, t, and α is a dimensionless proportionality constant. Despite its common description, it has been noted that the conjecture is not entirely defined within the realms of Nelson's stochastic mechanics, but can also be thought of as a means of inquiring into the statistical effects of interaction with distant masses in the universe and was expected by Calogero himself to be within the same order of magnitude as quantum mechanical effects. Analysis Compatibility with fundamental constants After the publication of Calogero's original paper, "[The] [c]osmic origin of quantization" a response was published by Giuseppe Gaeta of the University of Rome in which he discus
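A rough numerical check of the reconstructed relation, with round-number inputs (the radius value is an assumption, of order the Hubble radius), shows the order-of-magnitude character of the claim:

G = 6.674e-11     # m^3 kg^-1 s^-2, gravitational constant
m = 1.67e-27      # kg, mass of a hydrogen atom
R = 1.0e26        # m, assumed gravitationally accessible radius of the universe
hbar = 1.055e-34  # J s, for comparison

estimate = G**0.5 * m**1.5 * R**0.5  # carries units of action, J s
print(f"estimate = {estimate:.2e} J s, hbar = {hbar:.2e} J s")
# For these inputs the estimate lands within roughly two orders of magnitude of
# hbar; the dimensionless prefactor absorbs the remaining discrepancy.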
https://en.wikipedia.org/wiki/Extensor%20indicis%20muscle
In human anatomy, the extensor indicis [proprius] is a narrow, elongated skeletal muscle in the deep layer of the dorsal forearm, placed medial to, and parallel with, the extensor pollicis longus. Its tendon goes to the index finger, which it extends. Structure It arises from the distal third of the dorsal part of the body of ulna and from the interosseous membrane. It runs through the fourth tendon compartment together with the extensor digitorum, from where it projects into the dorsal aponeurosis of the index finger. Opposite the head of the second metacarpal bone, it joins the ulnar side of the tendon of the extensor digitorum which belongs to the index finger. Like the extensor digiti minimi (i.e. the extensor of the little finger), the tendon of the extensor indicis runs and inserts on the ulnar side of the tendon of the common extensor digitorum. The extensor indicis lacks the juncturae tendinum interlinking the tendons of the extensor digitorum on the dorsal side of the hand. Variation The extensor indicis proprius does not show much variation. It exists as a single tendon most of the time, although double tendons have also been reported. The muscle normally inserts into the index finger on the ulnar side of the extensor digitorum; an insertion on the radial side of the common extensor digitorum, termed the extensor indicis radialis, is infrequently seen. Split tendons of the muscle inserting on both the ulnar and radial sides of the common extensor digitorum have also been reported. Anomalous hand extensors including the extensor medii proprius and the extensor indicis et medii communis are often seen as variations of the extensor indicis due to the shared characteristics and embryonic origin. Function The extensor indicis extends the index finger, and by its continued action assists in extending (dorsiflexion) the wrist and the midcarpal joints. Because the index finger and little finger have separate ex
https://en.wikipedia.org/wiki/Trusted%20Network%20Connect
Trusted Network Connect (TNC) is an open architecture for Network Access Control, promulgated by the Trusted Network Connect Work Group (TNC-WG) of the Trusted Computing Group (TCG). History The TNC architecture was first introduced at the RSA Conference in 2005. TNC was originally a network access control standard with a goal of multi-vendor endpoint policy enforcement. In 2009 TCG announced expanded specifications which extended the specifications to systems outside of the enterprise network. Additional uses for TNC which have been reported include Industrial Control System (ICS) security, SCADA security, and physical security. Specifications Specifications introduced by the TNC Work Group: TNC Architecture for Interoperability IF-IMC - Integrity Measurement Collector Interface IF-IMV - Integrity Measurement Verifier Interface IF-TNCCS - Trusted Network Connect Client-Server Interface IF-M - Vendor-Specific IMC/IMV Messages Interface IF-T - Network Authorization Transport Interface IF-PEP - Policy Enforcement Point Interface IF-MAP - Metadata Access Point Interface CESP - Clientless Endpoint Support Profile Federated TNC TNC Vendor Adoption A partial list of vendors who have adopted TNC Standards: ArcSight Aruba Networks Avenda Systems Enterasys Extreme Networks Fujitsu IBM Pulse Secure Juniper Networks Lumeta McAfee Microsoft Nortel ProCurve strongSwan Wave Systems Also, networking products from Cisco, HP, Symantec, Trapeze Networks and Tofino. TNC Customer Adoption The U.S. Army has planned to use this technology to enhance the security of its computer networks. The South Carolina Department of Probation, Parole, and Pardon Services has tested a TNC-SCAP integration combination in a pilot program. See also IF-MAP Trusted Computing Trusted Computing Group Trusted Internet Connection
https://en.wikipedia.org/wiki/Actuarial%20reserves
In insurance, an actuarial reserve is a reserve set aside for future insurance liabilities. It is generally equal to the actuarial present value of the future cash flows of a contingent event. In the insurance context an actuarial reserve is the present value of the future cash flows of an insurance policy, and the total liability of the insurer is the sum of the actuarial reserves for every individual policy. Regulated insurers are required to keep offsetting assets to pay off this future liability. The loss random variable The loss random variable is the starting point in the determination of any type of actuarial reserve calculation. Define K(x) to be the curtate future lifetime random variable of a person aged x. Then, for a death benefit of one dollar and premium P, the loss random variable, L, can be written in actuarial notation as a function of K(x):

L = v^{K(x)+1} - P \ddot{a}_{\overline{K(x)+1}|}

From this we can see that the present value of the loss to the insurance company now, if the person dies in t years, is equal to the present value of the death benefit minus the present value of the premiums. The loss random variable described above only defines the loss at issue. For K(x) > t, the loss random variable at time t can be defined as:

{}_{t}L = v^{K(x)+1-t} - P \ddot{a}_{\overline{K(x)+1-t}|}

Net level premium reserves Net level premium reserves, also called benefit reserves, only involve two cash flows and are used for some US GAAP reporting purposes. The valuation premium in an NLP reserve is a premium such that the value of the reserve at time zero is equal to zero. The net level premium reserve is found by taking the expected value of the loss random variable defined above. They can be formulated prospectively or retrospectively. The amount of prospective reserves at a point in time is derived by subtracting the actuarial present value of future valuation premiums from the actuarial present value of the future insurance benefits. Retrospective reserving subtracts accumulated value of benefits from accumulated value of valuation premiums as of a point in time. T
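To make the prospective calculation concrete, here is a minimal sketch in Python; the interest rate and mortality rates are hypothetical, chosen only for illustration. It computes the net level premium so that the reserve at time zero is zero, then evaluates the reserve at a later duration as EPV(future benefits) minus EPV(future premiums):

# Prospective net level premium reserve for a discrete whole life insurance
# of 1 on (x). The q_x values below are hypothetical, for illustration only.
v = 1 / 1.05                           # discount factor at 5% interest

# One-year death probabilities from the valuation age onward; the final
# 1.00 terminates the table (death is certain in the last year).
q = [0.30, 0.40, 0.55, 0.75, 1.00]

def insurance_and_annuity(start):
    """EPV of 1 paid at the end of the year of death (A) and of a life
    annuity-due of 1 per year (a), from table index `start`."""
    A, a, survival = 0.0, 0.0, 1.0
    for k, qk in enumerate(q[start:]):
        a += survival * v ** k             # annuity-due payment at time k
        A += survival * qk * v ** (k + 1)  # death benefit paid at time k+1
        survival *= 1 - qk
    return A, a

A0, a0 = insurance_and_annuity(0)
P = A0 / a0                            # valuation premium: reserve at time 0 is zero

def reserve(t):
    """EPV of future benefits minus EPV of future valuation premiums."""
    At, at = insurance_and_annuity(t)
    return At - P * at

print(round(reserve(0), 6))            # ~0 by construction
print(round(reserve(2), 4))            # reserve after two years in force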
https://en.wikipedia.org/wiki/Intraspecific%20antagonism
Intraspecific antagonism means a disharmonious or antagonistic interaction between two individuals of the same species. As such, it could be a sociological term, but was actually coined by Alan Rayner and Norman Todd working at Exeter University in the late 1970s, to characterise a particular kind of zone line formed between wood-rotting fungal mycelia. Intraspecific antagonism is one of the expressions of a phenomenon known as vegetative or somatic incompatibility. Fungal individualism Zone lines form in wood for many reasons, including host reactions against parasitic encroachment, and inter-specific interactions, but the lines observed by Rayner and Todd when transversely-cut sections of brown-rotted birch tree trunk or branch were incubated in plastic bags appeared to be due to a reaction between different individuals of the same species of fungus. This was a startling inference at a time when the prevailing orthodoxy within the mycological community was that of the "unit mycelium". This was the theory that when two different individuals of the same species of basidiomycete wood rotting fungi grew and met within the substratum, they fused, cooperated, and shared nuclei freely. Rayner and Todd's insight was that basidiomycete fungi individuals do, in most "adult" or dikaryotic cases anyway, retain their individuality. A small stable of postgraduate and postdoctoral students helped elucidate the mechanisms underlying these intermycelial interactions, at Exeter University (Todd) and the University of Bath (Rayner), over the next few years. Applications of intraspecific antagonism Although the attribution of individual status to the mycelia confined by intraspecific zone lines is a comparatively new idea, zone lines themselves have been known since time immemorial. The term spalting is applied by woodworkers to wood showing strongly-figured zone lines, particularly those cases where the area of "no-man's land" between two antagonistic conspecific mycelia is c
https://en.wikipedia.org/wiki/Duhamel%27s%20principle
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel, who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation, where one treats the nonlinearity as an inhomogeneity. The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in \mathbb{R}^n. Indicating by u_t the time derivative of u, the initial value problem is

u_t - \Delta u = 0, \qquad u(x, 0) = g(x),

where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,

u_t - \Delta u = f(x, t), \qquad u(x, 0) = 0,

corresponds to adding an external heat energy f(x,t)\,dt at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice. By linearity, one can add up (integrate) the resulting solutions through time and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle. General considerations Formally, consider a linear inhomogeneous evolution equation for a function u(x, t) with spatial domain \Omega in \mathbb{R}^n, of the form

u_t(x, t) = L u(x, t) + f(x, t),

where L is a linear differential operator that involves no time derivatives. Duhamel's principle is, formally, that the solution of this inhomogeneous problem can be written as an integral of solutions of the corresponding homogeneous problem, started afresh at each time s between 0 and t.
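In one standard formulation (the notation here is chosen for concreteness): let S(t) denote the solution operator of the homogeneous equation, so that S(t)g solves u_t = Lu with initial data g. Then the solution of the inhomogeneous problem with zero initial data is

u(\cdot, t) = \int_0^t S(t - s)\, f(\cdot, s)\, ds.

For the heat equation, S(t) is convolution with the heat kernel; differentiating under the integral sign recovers u_t = Lu + f, the boundary term at s = t contributing the inhomogeneity f(\cdot, t).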
https://en.wikipedia.org/wiki/Mail-sink
Smtp-sink is a utility program in the Postfix Mail software package that implements a "black hole" function. It listens on the named host (or address) and port. It accepts Simple Mail Transfer Protocol (SMTP) messages from the network and discards them. The purpose is to support measurement of client performance. It is not SMTP protocol compliant. Connections can be accepted on IPv4 or IPv6 endpoints, or on UNIX-domain sockets. IPv4 and IPv6 are the default. This program is the complement of the smtp-source(1) program. See also Tarpit (networking) SMTP
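For example (an illustrative invocation, with the port and backlog values chosen arbitrarily and flags as documented in the Postfix manual page), the following listens on port 2525 of the loopback interface, displays running counters, and discards every message it receives: smtp-sink -c 127.0.0.1:2525 1024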
https://en.wikipedia.org/wiki/H.%20C.%20%C3%98rsted%20Medal
The H. C. Ørsted Medal is a medal for scientific achievement awarded by the Danish Selskabet for naturlærens udbredelse (The Society for the Dissemination of Natural Science). Named for the society's founder Hans Christian Ørsted, it is awarded chiefly to Danes. The medal is awarded in three versions: gold is for outstanding original scientific work in physics or chemistry published during the previous year; silver is for outstanding writing of science for a popular audience over several years; and bronze is for outstanding popularization of science through non-written methods, such as in museums or through business. Recipients Gold Source: naturlæren.dk 2019 Karl Anker Jørgensen 1989 Thor A. Bak 1977 Kai Arne Jensen 1974 Jens Lindhard 1970 Christian Møller 1970 Aage Bohr 1965 Bengt Strömgren 1959 Jens Anton Christiansen 1959 Paul Bergsøe 1952 Alex Langseth 1941 Kaj Linderstrøm-Lang 1928 Peder Oluf Pedersen 1928 Niels Bjerrum 1928 Johannes Nicolaus Brønsted 1924 Niels Bohr 1916 Martin Knudsen 1912 Christian Christiansen 1909 S.P.L. Sørensen Silver Source: naturlæren.dk 2020 Jens Ramskov 2019 Thomas Bolander 2016 Anja Cetti Andersen 2000 Jens Martin Knudsen 1999 Ove Nathan 1991 Niels Ove Lassen 1990 Jens J. Kjærgaard 1988 Niels Blædel 1980 K.G. Hansen Bronze Source: naturlæren.dk 2020 Stefan Emil Lemser Eychenne 2020 Claus Rintza 2020 Lasse Seidelin Bendtsen 2019 Michael Lentfer Jensen 2019 Jeannette Overgaard Tejlmann Madsen 2018 Ole Bakander 2017 Bjarning Grøn 2016 Martin Frøhling Jensen 2015 Henrik Parbo 2014 Pia Halkjær Gommesen 2013 Niels Christian Hartling 2013 Peter Arnborg Videsen 2012 Jannik Johansen (scientist) 2006 Finn Berg Rasmussen 2004 Erik Schou Jensen 2003 Ryan Holm 2001 Asger Høeg See also List of chemistry awards List of physics awards External links Selskabet for naturlærens udbredelse Chemistry awards Danish science and technology awards Physics awards Science writing awards Science communication awards
https://en.wikipedia.org/wiki/MikroMikko
MikroMikko was a Finnish line of microcomputers released by Nokia Corporation's computer division Nokia Data from 1981 through 1987. MikroMikko was Nokia Data's attempt to enter the business computer market. The machines were designed with particular attention to ergonomics. History The first model in the line, MikroMikko 1, was released on 29 September 1981, 48 days after IBM introduced its Personal Computer. The launch date of MikroMikko 1 is the name day of Mikko in the Finnish almanac. The MikroMikko line was manufactured in a factory in the Kilo district of Espoo, Finland, where computers had been produced since the 1960s. Nokia later bought the computer division of the Swedish telecommunications company Ericsson. During Finland's economic depression in the early 1990s, Nokia streamlined many of its operations and sold many of its less profitable divisions to concentrate on its key competence of telecommunications. Nokia's personal computer division was sold to the British computer company ICL (International Computers Limited) in 1991, which later became part of Fujitsu. However, ICL and later Fujitsu retained the MikroMikko trademark in Finland. Internationally the MikroMikko line was marketed by Fujitsu under the trademark ErgoPro. Fujitsu later transferred its personal computer operations to Fujitsu Siemens Computers, which shut down its only factory in Espoo at the end of March 2000, thus ending large-scale PC manufacturing in the country. Models MikroMikko 1 M6 Processor: Intel 8085, 2 MHz 64 KB RAM, 4 KB ROM Display: 80×24 character text mode, the 25th row was used as a status row. Graphics resolutions 160×75 and 800×375 pixels, refresh rate 50 Hz Two 640 KB 5.25" floppy drives (other models might only have one drive) Optional 5 MB hard disk (stock in model M7) Connectors: two RS-232s, display, printer, keyboard Software: Nokia CP/M 2.2 operating system, Microsoft BASIC, editor, assembler and debugger Cost: 30,000 mk in 1984 MikroMikko 2 Released in 1983 Pro
https://en.wikipedia.org/wiki/Microcom%20Networking%20Protocol
The Microcom Networking Protocols, almost always shortened to MNP, is a family of error-correcting protocols commonly used on early high-speed (2400 bit/s and higher) modems. Originally developed for use on Microcom's own family of modems, the protocol was later openly licensed and used by most of the modem industry, notably the "big three", Telebit, USRobotics and Hayes. MNP was later supplanted by V.42bis, which was used almost universally starting with the first V.32bis modems in the early 1990s. Overview Although XMODEM was introduced in 1977, as late as 1985 The New York Times described XMODEM first, then discussed MNP as a leading contender, and noted that 9600 baud modems "are beginning to make their appearance." By 1988, the Times was writing about 9600 and 19.2K modems, noting that "At least 100 other brands of modems follow" MNP (compared to Hayes' use of LAP-B). Error correction basics Modems are, by their nature, error-prone devices. Noise on the telephone line, a common occurrence, can easily mimic the sounds used by the modems to transmit data, thereby introducing errors that are difficult to notice. For some tasks, like reading or writing simple text, a small number of errors can be accepted without causing too many problems. For other tasks, like transferring computer programs in machine format, even one error can render the received data useless. As modems increase in speed by using more of the available bandwidth, the chance that random noise will introduce errors also increases; above 2400 bit/s these errors are quite common. To deal with this problem, a number of file transfer protocols were introduced and implemented in various programs. In general, these protocols break down a file into a series of frames or packets containing a number of bytes from the original file. Some sort of additional data, normally a checksum or CRC, is added to each packet to indicate whether the packet encountered an error while being received. The packet is then sent to the re
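To illustrate the general scheme, here is a toy packet framer in Python; the frame layout is invented for illustration and is not MNP's actual wire format:

# Toy packet framing with a CRC, showing the general idea behind such
# protocols: split the data, append a check value, verify on receipt.
import struct, zlib

PAYLOAD = 64                                   # bytes of file data per packet

def make_packets(data: bytes):
    packets = []
    for seq, i in enumerate(range(0, len(data), PAYLOAD)):
        chunk = data[i:i + PAYLOAD]
        header = struct.pack(">HB", seq, len(chunk))   # sequence number, length
        crc = zlib.crc32(header + chunk)               # CRC-32 over header + data
        packets.append(header + chunk + struct.pack(">I", crc))
    return packets

def check_packet(pkt: bytes) -> bool:
    body, crc = pkt[:-4], struct.unpack(">I", pkt[-4:])[0]
    return zlib.crc32(body) == crc             # False -> ask sender to retransmit

pkts = make_packets(b"some file contents" * 20)
assert all(check_packet(p) for p in pkts)
corrupted = bytearray(pkts[0]); corrupted[5] ^= 0x01   # simulate line noise
assert not check_packet(bytes(corrupted))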
https://en.wikipedia.org/wiki/Energy%20forestry
Energy forestry is a form of forestry in which a fast-growing species of tree or woody shrub is grown specifically to provide biomass or biofuel for heating or power generation. The two forms of energy forestry are short rotation coppice and short rotation forestry: Short rotation coppice may include tree crops of poplar, willow or eucalyptus, grown for two to five years before harvest. Short rotation forestry comprises crops of alder, ash, birch, eucalyptus, poplar, and sycamore, grown for eight to twenty years before harvest. Benefits The main advantage of using "grown fuels", as opposed to fossil fuels such as coal, natural gas and oil, is that while they are growing they absorb the near-equivalent in carbon dioxide (an important greenhouse gas) to that which is later released in their burning. In comparison, burning fossil fuels increases atmospheric carbon unsustainably, by using carbon that was added to the Earth's carbon sink millions of years ago. This is a prime contributor to climate change. According to the FAO, compared to other energy crops, wood is among the most efficient sources of bioenergy in terms of quantity of energy released by unit of carbon emitted. Other advantages of generating energy from trees, as opposed to agricultural crops, are that trees do not have to be harvested each year, the harvest can be delayed when market prices are down, and the products can fulfil a variety of end-uses. Yields of some varieties can be as high as 11 oven dry tonnes per hectare every year. However, commercial experience on plantations in Scandinavia has shown lower yield rates. These crops can also be used in bank stabilisation and phytoremediation. In fact, experiments with willow plantations in Sweden have demonstrated many beneficial effects on soil and water quality when compared to conventional agricultural crops (such as cereals). These beneficial effects have been the basis for the design of multifunctional production systems to meet emerging b
https://en.wikipedia.org/wiki/Ordo%20naturalis
In botany, the phrase ordo naturalis, 'natural order', was once used for what today is a family. Its origins lie with Carl Linnaeus, who used the phrase when he referred to natural groups of plants in his lesser-known work, particularly Philosophia Botanica. In his more famous works, the Systema Naturae and the Species Plantarum, plants were arranged according to his artificial "Sexual system", and Linnaeus used the word ordo for an artificial unit. In those works, only genera and species (sometimes varieties) were "real" taxa. In nineteenth-century works such as the Prodromus of de Candolle and the Genera Plantarum of Bentham & Hooker, the word ordo did indicate taxa that are now given the rank of family. Contemporary French works used the word famille for these same taxa. In the first international Rules of botanical nomenclature of 1906 the word family (familia) was assigned to this rank, while the term order (ordo) was reserved for a higher rank, for what in the nineteenth century had often been named a cohors (plural cohortes). The International Code of Nomenclature for algae, fungi, and plants provides for names published in the rank of ordo in Art. 18.2: normally, these are to be accepted as family names. Some plant families retain the name they were given by pre-Linnaean authors, recognised by Linnaeus as "natural orders" (e.g. Palmae or Labiatae). Such names are known as descriptive family names.
https://en.wikipedia.org/wiki/Smith%20conjecture
In mathematics, the Smith conjecture states that if f is a diffeomorphism of the 3-sphere of finite order, then the fixed point set of f cannot be a nontrivial knot. Paul A. Smith showed that a non-trivial orientation-preserving diffeomorphism of finite order with fixed points must have a fixed point set equal to a circle, and asked whether the fixed point set could be knotted. Friedhelm Waldhausen proved the Smith conjecture for the special case of diffeomorphisms of order 2 (and hence any even order). The proof of the general case, described in the volume edited by John Morgan and Hyman Bass (1984), depended on several major advances in 3-manifold theory, in particular the work of William Thurston on hyperbolic structures on 3-manifolds, and results by William Meeks and Shing-Tung Yau on minimal surfaces in 3-manifolds, with some additional help from Bass, Cameron Gordon, Peter Shalen, and Rick Litherland. An example has been given of a continuous involution of the 3-sphere whose fixed point set is a wildly embedded circle, so the Smith conjecture is false in the topological (rather than the smooth or PL) category. Charles Giffen showed that the analogue of the Smith conjecture in higher dimensions is false: the fixed point set of a periodic diffeomorphism of a sphere of dimension at least 4 can be a knotted sphere of codimension 2. See also Hilbert–Smith conjecture
https://en.wikipedia.org/wiki/Glycol%20nucleic%20acid
Glycol nucleic acid (GNA), sometimes also referred to as glycerol nucleic acid, is a nucleic acid similar to DNA or RNA but differing in the composition of its sugar-phosphodiester backbone, using propylene glycol in place of ribose or deoxyribose. GNA is chemically stable but not known to occur naturally. However, due to its simplicity, it might have played a role in the evolution of life. The 2,3-dihydroxypropyl nucleoside analogues were first prepared by Ueda et al. (1971). Soon thereafter it was shown that phosphate-linked oligomers of the analogues do in fact exhibit hypochromicity in the presence of RNA and DNA in solution (Seita et al. 1972). The preparation of the polymers was later described by Cook et al. (1995, 1999) and Acevedo and Andrews (1996). However, GNA–GNA self-pairing was first reported by Zhang and Meggers in 2005. Crystal structures of GNA duplexes were subsequently reported by Essen and Meggers. DNA and RNA have a deoxyribose and ribose sugar backbone, respectively, whereas GNA's backbone is composed of repeating glycol units linked by phosphodiester bonds. The glycol unit has just three carbon atoms and still shows Watson–Crick base pairing. Watson–Crick base pairing is much more stable in GNA than in its natural counterparts DNA and RNA, as a higher temperature is required to melt a GNA duplex. It is possibly the simplest of the nucleic acids, making it a hypothetical precursor to RNA. See also Abiogenesis Locked nucleic acid Oligonucleotide synthesis Peptide nucleic acid Threose nucleic acid
https://en.wikipedia.org/wiki/Propositional%20directed%20acyclic%20graph
A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form: Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable. Non-leaves are ∧ (logical and), ∨ (logical or) and ¬ (logical not). ∧- and ∨-nodes have at least one child. ¬-nodes have exactly one child. Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a ∧-node is the one that evaluates to 1, if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ∨-node represents the Boolean function that evaluates to 1, if and only if the Boolean function of at least one child evaluates to 1. Finally, a ¬-node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1, if and only if the Boolean function of its child evaluates to 0. PDAG, BDD, and NNF Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some particular properties. See also Data structure Boolean satisfiability problem Proposition
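A minimal sketch of such a structure in Python (the representation and names are illustrative, not drawn from the source); because children are shared references, a subgraph may have several parents, which is what distinguishes a DAG from a tree:

# Minimal PDAG sketch: nodes are AND / OR / NOT over shared subgraphs,
# leaves are True, False, or variable names.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Node:
    op: str                      # "and", "or", or "not"
    children: tuple              # "not" has exactly one child

PDAG = Union[bool, Var, Node]

def evaluate(node: PDAG, assignment: dict) -> bool:
    if isinstance(node, bool):
        return node              # constant leaf: true or false
    if isinstance(node, Var):
        return assignment[node.name]
    vals = (evaluate(c, assignment) for c in node.children)
    if node.op == "and":
        return all(vals)         # 1 iff all children evaluate to 1
    if node.op == "or":
        return any(vals)         # 1 iff at least one child evaluates to 1
    return not next(vals)        # "not": complement of its single child

# (x and not y) or false
x, y = Var("x"), Var("y")
g = Node("or", (Node("and", (x, Node("not", (y,)))), False))
assert evaluate(g, {"x": True, "y": False}) is True
assert evaluate(g, {"x": True, "y": True}) is False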
https://en.wikipedia.org/wiki/Beers%20criteria
The Beers Criteria for Potentially Inappropriate Medication Use in Older Adults, commonly called the Beers List, are guidelines published by the American Geriatrics Society (AGS) for healthcare professionals to help improve the safety of prescribing medications for adults 65 years and older in all except palliative settings. They emphasize deprescribing medications that are unnecessary, which helps to reduce the problems of polypharmacy, drug interactions, and adverse drug reactions, thereby improving the risk–benefit ratio of medication regimens in at-risk people. The criteria are used in geriatrics clinical care to monitor and improve the quality of care. They are also used in training, research, and healthcare policy to assist in developing performance measures and document outcomes. These criteria include lists of medications in which the potential risks may be greater than the potential benefits for people 65 and older. By considering this information, practitioners may be able to reduce harmful side effects caused by such medications. The Beers Criteria are intended to serve as a guide for clinicians and not as a substitute for professional judgment in prescribing decisions. The criteria may be used in conjunction with other information to guide clinicians about safe prescribing in older adults. The criteria are frequently used internationally, although they were intended only for use in the United States. Some countries have adapted the criteria to their own context, while others have observed that the listed medications may not be applicable in their country. History Geriatrician Mark H. Beers formulated the Beers Criteria through a consensus panel of experts using the Delphi method. The criteria were originally published in the Archives of Internal Medicine in 1991 and updated in 1997, 2003, 2012, 2015, 2019, and 2023. The AGS has registered a trademark for the term "AGS Beers Criteria" and in 2018, it formed a commercial partnership to authorize
https://en.wikipedia.org/wiki/Intracule
An intracule is a quantum mechanical function describing the two-electron density, which depends not upon the absolute values of position or momentum but rather upon their relative values. Its use is leading to new methods in physics and computational chemistry for investigating the electronic structure of molecules and solids. These methods are a development of density functional theory (DFT), but with the two-electron density replacing the one-electron density.
https://en.wikipedia.org/wiki/Articularis%20cubiti%20muscle
The Articularis cubiti muscle is a muscle of the elbow. It is considered by some sources to be a part of the triceps brachii muscle. It is also known as the "subanconeus muscle", for its relationship to the anconeus muscle. It is classified as a muscle of the posterior brachium.
https://en.wikipedia.org/wiki/Flexor%20digiti%20minimi%20brevis%20muscle%20of%20hand
The flexor digiti minimi brevis is a hypothenar muscle in the hand that flexes the little finger (digit V) at the metacarpophalangeal joint. It lies lateral to the abductor digiti minimi when the hand is in anatomical position. Structure The flexor digiti minimi brevis arises from the hamulus of the hamate bone and the palmar surface of the flexor retinaculum of the hand. It is inserted into the medial side of the base of the proximal phalanx of digit V. It is separated from the abductor digiti minimi, at its origin, by the deep branches of the ulnar artery and the ulnar nerve. The flexor digiti minimi brevis is sometimes not present; in these cases, the abductor digiti minimi is usually larger than normal. The flexor digiti minimi brevis is one of three muscles in the hypothenar muscle group. These three muscles form the fleshy mass at the base of the little finger, and are solely concerned with the movement of digit V. The other two muscles that make up the hypothenar muscle group are the abductor digiti minimi and the opponens digiti minimi. In the anatomical position, from medial to lateral, these are the abductor digiti minimi, the flexor digiti minimi brevis, and the opponens digiti minimi. Innervation The flexor digiti minimi brevis, like the other hypothenar muscles, is innervated by the deep branch of the ulnar nerve. The ulnar nerve arises from spinal nerve levels C8–T1: the spinal roots of C8 and T1 merge to form the lower trunk, then the anterior division and medial cord, from which the ulnar nerve arises. The ulnar nerve has a superficial and a deep branch, but it is the deep branch that innervates the flexor digiti minimi brevis. Actions The flexor digiti minimi brevis flexes the little finger at the metacarpophalangeal joint. Etymology The name of this muscle is Latin for the 'short flexor of the little finger'. Note that brevis is usually included to differentiate it from a longus muscle of the same name. The flexor digiti minimi longus, however, is not found in the
https://en.wikipedia.org/wiki/Bush%20Brothers%20and%20Company
Bush Brothers and Company is a family-owned corporation best known for its Bush's Best brand canned baked beans. The company produces approximately 80 percent of the canned baked beans consumed in the United States, representing estimated annual sales in excess of $400 million and the processing of more than 55 million pounds of beans per year. In addition, the company also offers other canned beans (black, garbanzo, pinto, and refried), as well as peas, hominy, and cut green beans. Based in Knoxville, Tennessee, Bush Brothers operates plants in Augusta, Wisconsin and Chestnut Hill, Tennessee. Its canned goods are sold through retail food outlets and food service operators throughout the United States and Canada. History In 1904, A. J. (Andrew Jackson) Bush entered a partnership with the Stokely family to open a tomato cannery in Chestnut Hill, Tennessee. His cannery proved so profitable that, by 1908, he was able to buy out the Stokelys' interest and establish his own independent business. He entered into partnership with his two oldest sons, Fred and Claude, and established Bush Brothers & Company. Incorporation and expansion (1922–1935) In 1922, A.J. Bush took out a $945 loan on his life insurance policy and used his bank line of credit to incorporate his business. Bush Brothers & Company was incorporated in June 1922, and Fred Bush, A.J.'s oldest son, was named president. (Fred was also a director of the National Canners Association, the primary industry trade group for canning companies.) The company's canning operations originally began with laborious, manual processes, then expanded with steam-powered engines, and eventually evolved into electric-powered factory machinery. During the 1920s, Bush Brothers & Company began expanding its operations to include products other than tomatoes. The company constructed a small canning factory in Clinton, Tennessee in 1923, and began canning both tomatoes and peaches in 1924. In 1928 the company expanded into corn a
https://en.wikipedia.org/wiki/PWE3
In 2001, the IETF set up the Pseudowire Emulation Edge to Edge working group, and this group was given the initialism PWE3 (the 3 standing for the third power of E, i.e. EEE). The working group was chartered to develop an architecture for service provider edge-to-edge pseudowires and service-specific documents detailing the encapsulation techniques. In computer networking and telecommunications, a pseudowire (PW) is an emulation of a native service over a packet-switched network (PSN). The native service may be ATM, Frame Relay, Ethernet, low-rate TDM, or SONET/SDH, while the PSN may be MPLS, IP (either IPv4 or IPv6), or L2TPv3. The working group chairs were originally Danny McPherson and Luca Martini, but following Martini's resignation Stewart Bryant became co-chair. External links Charter for PWE3 Working Group: http://datatracker.ietf.org/wg/pwe3/charter/ Computer network organizations
https://en.wikipedia.org/wiki/Temporoparietalis%20muscle
The temporoparietalis muscle is a distinct muscle of the head. It lies above the auricularis superior muscle. It lies just inferior to the epicranial aponeurosis of the occipitofrontalis muscle. The temporoparietalis muscle may be used in reconstructive ear surgery.
https://en.wikipedia.org/wiki/Accelerated%20Math
Accelerated Math is a daily, progress-monitoring software tool that monitors and manages mathematics skills practice, from preschool math through calculus. It is primarily used by primary and secondary schools, and it is published by Renaissance Learning, Inc. Currently, there are two versions: a desktop version and a web-based version in Renaissance Place, the company's web software for Accelerated Math and a number of other software products (e.g. Accelerated Reader). In Australia and the United Kingdom, the software is referred to as "Accelerated Maths". Research Below is a sample of some of the most current research on Accelerated Math. Sadusky and Brem (2002) studied the impact of first-year implementation of Accelerated Math in a K-6 urban elementary school during the 2001–2002 school year. The researchers found that teachers were able to immediately use data to make decisions about instruction in the classroom. The students in classrooms using Accelerated Math had twice the percentile gains when tested as compared to the control classrooms that did not use Accelerated Math. Ysseldyke and Tardrew (2003) studied 2,202 students in 125 classrooms encompassing 24 states. The results showed that when students using Accelerated Math were compared to a control group, those students using the software made significant gains on the STAR Math test. Students in grades 3 through 10 that were using Accelerated Math had more than twice the percentile gains on these tests than students in the control group. Ysseldyke, Betts, Thill, and Hannigan (2004) conducted a quasi-experimental study with third- through sixth-grade Title I students. They found that Title I students who used Accelerated Math outperformed students who did not. Springer, Pugalee, and Algozzine (2005) discovered a similar pattern when they studied students who had failed the AIMS test, which they needed to pass in order to graduate. Over half of the students passed the test after taking a course in which Accelerated Math
https://en.wikipedia.org/wiki/Common%20interosseous%20artery
The common interosseous artery, about 1 cm in length, arises immediately below the tuberosity of the radius from the ulnar artery. Passing backward to the upper border of the interosseous membrane, it divides into two branches, the anterior interosseous and posterior interosseous arteries. Additional images
https://en.wikipedia.org/wiki/HashClash
HashClash was a volunteer computing project running on the Berkeley Open Infrastructure for Network Computing (BOINC) software platform to find collisions in the MD5 hash algorithm. It was based at the Department of Mathematics and Computer Science of the Eindhoven University of Technology, and Marc Stevens initiated the project as part of his master's degree thesis. The project ended after Stevens defended his M.Sc. thesis in June 2007. However, support for SHA-1 was added later, and the code repository was ported to Git in 2017. The project was used to create a rogue certificate authority certificate in 2009. See also Berkeley Open Infrastructure for Network Computing (BOINC) List of volunteer computing projects
https://en.wikipedia.org/wiki/Mobile%20PCI%20Express%20Module
A Mobile PCI Express Module (MXM) is an interconnect standard for GPUs (MXM Graphics Modules) in laptops using PCI Express, created by the MXM-SIG. The goal was to create a non-proprietary, industry standard socket, so one could easily upgrade the graphics processor in a laptop without having to buy a whole new system or rely on proprietary vendor upgrades. Generations Smaller graphics modules can be inserted into larger slots, but type I and II heatsinks will not fit type III and above, or vice versa. The Alienware m5700 platform uses a heatsink that will fit Type I, II, & III cards without modification. MXM 3.1 was released in March 2012 and added PCIe 3.0 support. First generation modules are not compatible with second generation (MXM 3) modules and vice versa. Within the first generation, types I to IV are fully backwards compatible. Specification The MXM specification is no longer freely supplied; it is controlled by the MXM-SIG, which is itself controlled by Nvidia, and only corporate clients are granted access to the standard. The MXM 2.1 specification is widely available. List of MXM cards MXM 3.x cards Other uses The Qseven computer-on-module form factor uses an MXM-II connector, while the SMARC computer-on-module form factor uses an MXM 3 connector. Neither implementation is in any way compatible with the MXM standard. Notes
https://en.wikipedia.org/wiki/Facial%20vein
The facial vein (or anterior facial vein) is a relatively large vein in the human face. It commences at the side of the root of the nose and is a direct continuation of the angular vein, where it also receives a small nasal branch. It lies behind the facial artery and follows a less tortuous course. It receives blood from the external palatine vein before it either joins the anterior branch of the retromandibular vein to form the common facial vein, or drains directly into the internal jugular vein. It is commonly stated that the facial vein has no valves, but this has been contradicted. Its walls are not as flaccid as those of most superficial veins. Path From its origin it runs obliquely downward and backward, beneath the zygomaticus major muscle and zygomatic head of the quadratus labii superioris, descends along the anterior border and then on the superficial surface of the masseter, crosses over the body of the mandible, and passes obliquely backward, beneath the platysma and cervical fascia, superficial to the submandibular gland, the digastricus and stylohyoideus muscles. Clinical significance Thrombophlebitis of the facial vein (inflammation of the facial vein with secondary clot formation) can result in pieces of an infected clot extending into the cavernous sinus, forming thrombophlebitis of the cavernous sinus. Infections may spread from the facial veins into the dural venous sinuses. Infections may also be introduced by facial lacerations and by bursting pimples in the areas drained by the facial vein. Additional images
https://en.wikipedia.org/wiki/Common%20facial%20vein
The facial vein usually unites with the anterior branch of the retromandibular vein to form the common facial vein, which crosses the external carotid artery and enters the internal jugular vein at a variable point below the hyoid bone. From near its termination a communicating branch often runs down the anterior border of the sternocleidomastoideus to join the lower part of the anterior jugular vein. The common facial vein is not present in all individuals.
https://en.wikipedia.org/wiki/Great%20cardiac%20vein
The great cardiac vein (left coronary vein) is a vein of the heart. It begins at the apex of the heart and ascends along the anterior interventricular sulcus before joining the oblique vein of the left atrium to form the coronary sinus upon the posterior surface of the heart. Anatomy Course The great cardiac vein ascends along the anterior interventricular sulcus to the base of the ventricles. It then curves around the left margin of the heart to reach the posterior surface. Fate Upon reaching the posterior surface of the heart, the great cardiac vein merges with the oblique vein of the left atrium to form the coronary sinus. At the junction of the great cardiac vein and the coronary sinus, there is typically a valve present. This is the Vieussens valve of the coronary sinus. Tributaries The great cardiac vein receives tributaries from the left atrium and from both ventricles: one, the left marginal vein, is of considerable size, and ascends along the left margin of the heart.
https://en.wikipedia.org/wiki/Small%20cardiac%20vein
The small cardiac vein, also known as the right coronary vein, is a coronary vein that drains parts of the right atrium and right ventricle of the heart. Despite its size, it is one of the major drainage vessels for the heart. Anatomy Course The small cardiac vein runs in the coronary sulcus between the right atrium and right ventricle, and opens into the right extremity of the coronary sinus. Territory The small cardiac vein receives blood from the posterior portion of the right atrium and ventricle. Variation The small cardiac vein may empty into the coronary sinus, right atrium, or middle cardiac vein. It may be absent.
https://en.wikipedia.org/wiki/Short%20gastric%20veins
The short gastric veins, four or five in number, drain the fundus and left part of the greater curvature of the stomach, and pass between the two layers of the gastrolienal ligament to end in the splenic vein or in one of its large tributaries.
https://en.wikipedia.org/wiki/FreeSurfer
FreeSurfer is a brain imaging software package originally developed by Bruce Fischl, Anders Dale, Martin Sereno, and Doug Greve. Development and maintenance of FreeSurfer is now the primary responsibility of the Laboratory for Computational Neuroimaging at the Athinoula A. Martinos Center for Biomedical Imaging. FreeSurfer contains a set of programs with a common focus of analyzing magnetic resonance imaging (MRI) scans of brain tissue. It is an important tool in functional brain mapping and contains tools to conduct both volume based and surface based analysis. FreeSurfer includes tools for the reconstruction of topologically correct and geometrically accurate models of both the gray/white and pial surfaces, for measuring cortical thickness, surface area and folding, and for computing inter-subject registration based on the pattern of cortical folds. As of April 2022, 57,541 copies of the FreeSurfer software package had been registered for use, and it is a core tool in the processing pipelines of the Human Connectome Project, the UK Biobank, the Adolescent Brain Cognitive Development Study, and the Alzheimer's Disease Neuroimaging Initiative. Usage The FreeSurfer processing stream is controlled by a shell script called recon-all. The script calls component programs that organize raw MRI images into formats easily usable for morphometric and statistical analysis. FreeSurfer automatically segments the volume and parcellates the surface into standardized regions of interest (ROIs). FreeSurfer uses a morphed spherical method to average across subjects for statistical (general linear model) analysis with the mri_glmfit tool. FreeSurfer contains a range of packages allowing a broad spectrum of uses, including: FreeView, a tool to visualize FreeSurfer output, which can also display common MRI image formats TRACULA, a tool to construct white matter tract data from diffusion images FSFAST, a tool for analysis of functional MRI data XHemi, for Interhemispheric reg
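For example (an illustrative invocation, with the subject name and input volume as placeholders), a fully automated cortical reconstruction for a single subject can be launched as: recon-all -s subject01 -i T1.nii.gz -all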
https://en.wikipedia.org/wiki/Suprarenal%20veins
The suprarenal veins are two in number: the right ends in the inferior vena cava. the left ends in the left renal or left inferior phrenic vein. They receive blood from the adrenal glands and will sometimes form anastomoses with the inferior phrenic veins. Additional images
https://en.wikipedia.org/wiki/Lake%20ecosystem
A lake ecosystem or lacustrine ecosystem includes biotic (living) plants, animals and micro-organisms, as well as abiotic (non-living) physical and chemical interactions. Lake ecosystems are a prime example of lentic ecosystems (lentic refers to stationary or relatively still freshwater, from the Latin lentus, which means "sluggish"), which include ponds, lakes and wetlands, and much of this article applies to lentic ecosystems in general. Lentic ecosystems can be compared with lotic ecosystems, which involve flowing terrestrial waters such as rivers and streams. Together, these two ecosystems are examples of freshwater ecosystems. Lentic systems are diverse, ranging from a small, temporary rainwater pool a few inches deep to Lake Baikal, which has a maximum depth of 1642 m. The general distinction between pools/ponds and lakes is vague, but Brown states that ponds and pools have their entire bottom surfaces exposed to light, while lakes do not. In addition, some lakes become seasonally stratified. Ponds and pools have two regions: the pelagic open water zone, and the benthic zone, which comprises the bottom and shore regions. Since lakes have deep bottom regions not exposed to light, these systems have an additional zone, the profundal. These three areas can have very different abiotic conditions and, hence, host species that are specifically adapted to live there. Two important subclasses of lakes are ponds, which typically are small lakes that intergrade with wetlands, and water reservoirs. Over long periods of time, lakes, or bays within them, may gradually become enriched by nutrients and slowly fill in with organic sediments, a process called succession. When humans use the watershed, the volumes of sediment entering the lake can accelerate this process. The addition of sediments and nutrients to a lake is known as eutrophication. Zones Lake ecosystems can be divided into zones. One common system divides lakes into three zones. The first, the littoral zo
https://en.wikipedia.org/wiki/Inferior%20phrenic%20vein
The inferior phrenic veins drain the diaphragm and follow the course of the inferior phrenic arteries; the right ends in the inferior vena cava; the left is often represented by two branches, one of which ends in the left renal or suprarenal vein, while the other passes in front of the esophageal hiatus in the diaphragm and opens into the inferior vena cava.