https://en.wikipedia.org/wiki/Cambium%20Networks
Cambium Networks is a wireless infrastructure provider that offers fixed wireless and Wi-Fi equipment that broadband service providers and enterprises use to provide Internet access. An American telecommunications infrastructure company, it provides wireless technology, including enterprise Wi-Fi, switching solutions, Internet of Things, and fixed wireless broadband and Wi-Fi for enterprises. Publicly traded on the NASDAQ stock exchange, it spun out of Motorola in October 2011. Products Cambium Networks manufactures point-to-point backhaul, point-to-multipoint communication wide area network (WAN), Wi-Fi indoor and outdoor access, and cloud-based network management systems. In 2020, the company collaborated with Facebook to add mesh networking technology Terragraph that allows high-speed internet connections where laying fiber optic cable is not viable. As of 2021 the company has shipped 10 million radios. Products are available in point-to-point and point-to-multipoint configurations. Its cnWave fixed wireless solution provides multi-gigabit throughputs. It includes both the original Motorola-designed products using the Canopy protocol and the PtP backhauls that were rebranded from Orthogon Systems, which Motorola acquired in 2006. Cambium Networks’ solutions are used by broadband service providers and managed service providers to connect business and residential locations in dense urban, suburban, rural and remote locations, including education and healthcare. Enterprise Wi-Fi and Switching Cambium Networks also manufactures Wireless LAN (WLAN) Wi-Fi access points, including Wi-Fi 6E, and intelligent switches, along with cloud management systems. In 2022, Spectralink added interoperability between Cambium Networks access points and its Wi-Fi phones and handsets as part of its enterprise wireless certification program. History Cambium Networks was created when Motorola Solutions sold the Canopy and Orthogon businesses in 2011. Cambium evolved the platform and expanded it to thr
https://en.wikipedia.org/wiki/Perineal%20raphe
The perineal raphe is a visible line or ridge of tissue on the body that extends from the anus through the perineum to the scrotum (male) or labia majora (female). It is found in both males and females, arises from the fusion of the urogenital folds, and is visible running in the midline anteroposteriorly to the anus, where it resolves in a small knot of skin of varying size. In males, this structure continues through the midline of the scrotum (scrotal raphe) and upwards through the posterior midline aspect of the penis (penile raphe). It also exists deeper within the scrotum, where it is called the scrotal septum. It is the result of a fetal developmental phenomenon whereby the scrotum and penis close toward the midline and fuse. See also Embryonic and prenatal development of the male reproductive system in humans Frenulum of prepuce of penis Linea nigra Raphe Images
https://en.wikipedia.org/wiki/Appendix%20of%20the%20epididymis
The appendix of the epididymis (or pedunculated hydatid) is a small stalked appendage (sometimes duplicated) on the head of the epididymis. It is usually regarded as a detached efferent duct. This structure is derived from the Wolffian duct (Mesonephric Duct) as opposed to the appendix testis which is derived from the Müllerian duct (Paramesonephric Duct) remnant. See also Appendix testis
https://en.wikipedia.org/wiki/Scala%20%28company%29
Scala is a producer of multimedia software. It was founded in 1987 as a Norwegian company called Digital Visjon. It is headquartered near Philadelphia, Pennsylvania, USA, and has subsidiaries in Europe and Asia. History In 1987 a young Norwegian entrepreneur, Jon Bøhmer founded the company "Digital Visjon" in Brumunddal, Norway to create multimedia software on the Commodore Amiga computer platform. In 1988 they released their first product which was named InfoChannel 0.97L, which had hotels and cable TV companies as their first customers. In 1990, they redesigned the program with a new graphical user interface. They renamed the company and the software "Scala" and released a number of multimedia applications. The company attracted investors, mainly from Norway and incorporated in the US in 1994 and is now based in the United States with their European headquarters located in the Netherlands. The name "Scala" was given by Bøhmer and designer Bjørn Rybakken and represents the scales in colors, tones and the opera in Milan. The name inspired a live actor animation made by Bøhmer and Rybakken using an Amiga, a video camera and a frame-by-frame video digitizer. The animation, named "Lo scalatore" (Italian for 'The Climber'), featured a magic trick of Indian fakirs of a man climbing a ladder and disappearing in the air. This animation was then included into one of the Demo Disks of Scala Multimedia in order to show the capabilities of that presentation software in loading and playing animations whilst also manipulating it with other features of the software. In 1994 Scala released Multimedia MM400 and InfoChannel 500. Due to bankruptcies of Commodore and Escom in 1994 and 1996 respectively, Scala left the Amiga platform and started delivering the same applications under MS-DOS. Scala Multimedia MM100, Scala Multimedia Publisher and Scala InfoChannel 100 were released for the x86 platform. Scala Multimedia MM100 won Byte Magazine's "Best of Comdex" in 1996. Corporat
https://en.wikipedia.org/wiki/Electronic%20filter%20topology
Electronic filter topology defines electronic filter circuits without taking note of the values of the components used but only the manner in which those components are connected. Filter design characterises filter circuits primarily by their transfer function rather than their topology. Transfer functions may be linear or nonlinear. Common types of linear filter transfer function are: high-pass, low-pass, bandpass, band-reject or notch and all-pass. Once the transfer function for a filter is chosen, the particular topology to implement such a prototype filter can be selected so that, for example, one might choose to design a Butterworth filter using the Sallen–Key topology. Filter topologies may be divided into passive and active types. Passive topologies are composed exclusively of passive components: resistors, capacitors, and inductors. Active topologies also include active components (such as transistors, op amps, and other integrated circuits) that require power. Further, topologies may be implemented either in unbalanced form or else in balanced form when employed in balanced circuits. Implementations such as electronic mixers and stereo sound may require arrays of identical circuits. Passive topologies Passive filters have long been in development and use. Most are built from simple two-port networks called "sections". There is no formal definition of a section except that it must have at least one series component and one shunt component. Sections are invariably connected in a "cascade" or "daisy-chain" topology, consisting of additional copies of the same section or of completely different sections. The rules of series and parallel impedance would combine two sections consisting only of series components or shunt components into a single section. Some passive filters, consisting of only one or two filter sections, are given special names including the L-section, T-section and Π-section, which are unbalanced filters, and the C-section, H-section and b
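As an illustration of the split between transfer function and topology described above, a second-order low-pass prototype has the standard form (the symbols $\omega_0$ for corner frequency and $Q$ for quality factor are conventional choices, not taken from the excerpt):

$$H(s) = \frac{\omega_0^2}{s^2 + \dfrac{\omega_0}{Q}\,s + \omega_0^2}$$

A Butterworth response corresponds to $Q = 1/\sqrt{2}$; the same $H(s)$ can then be realized by, for example, a Sallen–Key stage whose component values set $\omega_0$ and $Q$.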
https://en.wikipedia.org/wiki/Interface%20defeat
Interface defeat is when a ceramic armour system, typically on an armoured fighting vehicle, defeats a kinetic energy penetrator at the ceramic's front surface. Above a certain impact velocity, known as the transition velocity, interface defeat can no longer occur and either penetration or perforation of the ceramic occurs.
https://en.wikipedia.org/wiki/Dirichlet%20density
In mathematics, the Dirichlet density (or analytic density) of a set of primes, named after Peter Gustav Lejeune Dirichlet, is a measure of the size of the set that is easier to use than the natural density. Definition If A is a subset of the prime numbers, the Dirichlet density of A is the limit $\lim_{s \to 1^+} \frac{\sum_{p \in A} p^{-s}}{\sum_{p} p^{-s}}$ if it exists. Note that since $\sum_{p} p^{-s} \sim \log\frac{1}{s-1}$ as $s \to 1^+$ (see Prime zeta function), this is also equal to $\lim_{s \to 1^+} \frac{\sum_{p \in A} p^{-s}}{\log\frac{1}{s-1}}$. This expression is usually the order of the "pole" of $\prod_{p \in A} \frac{1}{1 - p^{-s}}$ at s = 1 (though in general it is not really a pole, as it has non-integral order), at least if this function is a holomorphic function times a (real) power of s−1 near s = 1. For example, if A is the set of all primes, the product is the Riemann zeta function, which has a pole of order 1 at s = 1, so the set of all primes has Dirichlet density 1. More generally, one can define the Dirichlet density of a sequence of primes (or prime powers), possibly with repetitions, in the same way. Properties If a subset of primes A has a natural density, given by the limit of (number of elements of A less than N)/(number of primes less than N) then it also has a Dirichlet density, and the two densities are the same. However it is usually easier to show that a set of primes has a Dirichlet density, and this is good enough for many purposes. For example, in proving Dirichlet's theorem on arithmetic progressions, it is easy to show that the set of primes in an arithmetic progression a + nb (for a, b coprime) has Dirichlet density 1/φ(b), which is enough to show that there are an infinite number of such primes, but harder to show that this is the natural density. Roughly speaking, proving that some set of primes has a non-zero Dirichlet density usually involves showing that certain L-functions do not vanish at the point s = 1, while showing that they have a natural density involves showing that the L-functions have no zeros on the line Re(s) = 1. In practice, if some "naturally occurring" set of primes has a Dirichlet density, then it also has a
https://en.wikipedia.org/wiki/Beneficial%20weed
A beneficial weed is an invasive plant that has some companion plant effect, is edible, contributes to soil health, adds ornamental value, or is otherwise beneficial. These plants are normally not domesticated. However, some invasive plants, such as dandelions, are commercially cultivated, in addition to growing in the wild. Beneficial weeds include many wildflowers, as well as other weeds that are commonly removed or poisoned. Certain weeds that have obnoxious and destructive qualities have been shown to fight illness and are thus used in medicine. For example, Parthenium hysterophorus, native to northern Mexico and parts of the US, has been an issue for years due to its toxicity and ability to spread rapidly. In the past few decades, though, research has found that P. hysterophorus was "used in traditional medicine to treat inflammation, pain, fever, and diseases like malaria dysentery." It is also known to create biogas that can be used as a bioremediation agent to break down heavy metals and other pollutants. Soil health Weeds are erroneously considered to compete with neighboring plants for food and moisture. However, some "weeds" provide the soil with nutrients, either directly or indirectly. For example, legumes such as white clover, if they are colonized by certain bacteria (most commonly Rhizobium), add nitrogen to the soil through the process of nitrogen fixation. These bacteria have a symbiotic relationship with the roots of their host, "fixing" atmospheric nitrogen by combining it with oxygen or hydrogen to make the nitrogen available to the plant as NH4 or NO3. Others use deep taproots to bring up nutrients and moisture from beyond the range of normal plants so that the soil improves in quality over generations of that plant's presence. Weeds with strong, widespread roots also introduce organic matter to the earth in the form of those roots, turning hard, dense clay dirt into richer, more fertile soil. Some plants like tomatoes and maize will "
https://en.wikipedia.org/wiki/Crystallographic%20Information%20File
Crystallographic Information File (CIF) is a standard text file format for representing crystallographic information, promulgated by the International Union of Crystallography (IUCr). CIF was developed by the IUCr Working Party on Crystallographic Information in an effort sponsored by the IUCr Commission on Crystallographic Data and the IUCr Commission on Journals. The file format was initially published by Hall, Allen, and Brown and has since been revised, most recently in versions 1.1 and 2.0. Full specifications for the format are available at the IUCr website. Many computer programs for molecular viewing are compatible with this format, including Jmol. mmCIF Closely related is mmCIF, macromolecular CIF, which is intended as a successor to the Protein Data Bank (PDB) format. It is now the default format used by the Protein Data Bank. Also closely related is Crystallographic Information Framework, a broader system of exchange protocols based on data dictionaries and relational rules expressible in different machine-readable manifestations, including, but not restricted to, Crystallographic Information File and XML.
https://en.wikipedia.org/wiki/Meta-scheduling
Meta-scheduling or super scheduling is a computer software technique of optimizing computational workloads by combining an organization's multiple job schedulers into a single aggregated view, allowing batch jobs to be directed to the best location for execution. The meta-scheduling technique is a solution for scheduling a set of dependent or independent jobs under different fault scenarios, which are mapped and modeled in an event tree. It can be used as a dynamic or static scheduling method. Scenario-based meta-scheduling Scenario-based and multi-mode approaches are essential techniques in embedded systems, e.g., design space exploration for MPSoCs and reconfigurable systems. Optimization techniques for the generation of schedule graphs supporting such a scenario-based meta-scheduling (SBMeS) approach have been developed and implemented. Scenario-based meta-scheduling can promise better performance by reducing dynamic scheduling overhead and recovering from faults. Implementations The following is a partial list of noteworthy open source and commercial meta-schedulers currently available. GridWay by the Globus Alliance Community Scheduler Framework by Platform Computing and Jilin University MP Synergy by United Devices Moab Cluster Suite and Maui Cluster scheduler from Adaptive Computing DIOGENES (distributed optimal genetic algorithm for grid applications scheduling, started project) SynfiniWay's meta-scheduler MeS, designed by Dr.-Ing. Babak Sorkhpour and Prof. Dr.-Ing. Roman Obermaisser at the Chair for Embedded Systems at the University of Siegen, generates schedules for anticipated changes of scenarios in energy-efficient, robust and adaptive time-triggered systems (multi-core architectures with networks-on-chip). Accelerator Plus runs jobs by using host jobs in an underlying workload manager. This approach achieves high job throughput by distributing the processing load associated with submitting and managing jobs.
https://en.wikipedia.org/wiki/Acoustic%20levitation
Acoustic levitation is a method for suspending matter in air against gravity using acoustic radiation pressure from high intensity sound waves. It works on the same principles as acoustic tweezers by harnessing acoustic radiation forces. However acoustic tweezers are generally small scale devices which operate in a fluid medium and are less affected by gravity, whereas acoustic levitation is primarily concerned with overcoming gravity. Technically dynamic acoustic levitation is a form of acoustophoresis, though this term is more commonly associated with small scale acoustic tweezers. Typically sound waves at ultrasonic frequencies are used thus creating no sound audible to humans. This is primarily due to the high intensity of sound required to counteract gravity. However, there have been cases of audible frequencies being used. There are various techniques for generating the sound, but the most common is the use of piezoelectric transducers which can efficiently generate high amplitude outputs at the desired frequencies. Levitation is a promising method for containerless processing of microchips and other small, delicate objects in industry. Containerless processing may also be used for applications requiring very-high-purity materials or chemical reactions too rigorous to happen in a container. This method is harder to control than others such as electromagnetic levitation but has the advantage of being able to levitate nonconducting materials. Although originally static, acoustic levitation has progressed from motionless levitation to dynamic control of hovering objects, an ability useful in the pharmaceutical and electronics industries. This dynamic control was first realised with a prototype with a chessboard-like array of square acoustic emitters that move an object from one square to another by slowly lowering the sound intensity emitted from one square while increasing the sound intensity from the other, allowing the object to travel virtually "downhill"
https://en.wikipedia.org/wiki/Nvidia%20System%20Tools
NVIDIA System Tools (previously called nTune) is a discontinued collection of utilities for accessing, monitoring, and adjusting system components, including temperature and voltages with a graphical user interface within Windows, rather than through the BIOS. Additionally, System Tools has a feature that automatically adjusts settings and tests them to find what it believes to be the optimal combination of settings for a particular computer hardware configuration. Everything, including the graphics processing unit (GPU), central processing unit (CPU), Media Communications Processor (MCP), RAM, voltage and fans are adjusted, though not all motherboards support all of these adjustment options. Configurations can also be saved. This allows the end user to toggle between performance gaming profiles, quiet profiles for less demanding work, or some other profile that is usage-specific. NVIDIA System Tools is also a front end for the BIOS. Most settings that can be changed in the BIOS are available in the utilities included. BIOS and driver updates to both nForce and GeForce hardware can also be done through System Tools. It additionally supports hardware which is certified under the Enthusiast System Architecture and connects to the motherboard via USB. Previously supported motherboard chipsets The following chipsets were supported in nTune releases, but are no longer supported by NVIDIA System Tools. nForce 220, nForce 220D, nForce 415 and nForce 420D nForce2 and nForce2 400 nForce2 Ultra and nForce2 Ultra 400 nForce2 400R and nForce2 Ultra 400Gb nForce3 150 and nForce3 PRO 150 nForce3 250, nForce3 250Gb and nForce3 PRO 250 Currently supported motherboard chipsets nForce4 Pro 2200, nForce4 Ultra, nForce4 SLI, and nForce4 SLI x16 nForce 590 SLI, nForce 570 SLI, nForce 570 LT SLI, nForce 570 Ultra, nForce 560, nForce 550, nForce 520, and nForce 520 LE nForce 680a SLI, nForce 680i SLI, nForce 680i LT SLI, nForce 650i SLI, nForce 650i Ultra, nForce 630a, nFo
https://en.wikipedia.org/wiki/Quantities%20of%20information
The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the bit, or more correctly the shannon, based on the binary logarithm. Although "bit" is more frequently used in place of "shannon", its name is not distinguished from the bit as used in data-processing to refer to a binary value or stream regardless of its entropy (information content). Other units include the nat, based on the natural logarithm, and the hartley, based on the base 10 or common logarithm. In what follows, an expression of the form $p \log p$ is considered by convention to be equal to zero whenever $p$ is zero. This is justified because $\lim_{p \to 0^+} p \log p = 0$ for any logarithmic base. Self-information Shannon derived a measure of information content called the self-information or "surprisal" of a message $m$: $I(m) = \log\left(\tfrac{1}{p(m)}\right) = -\log(p(m))$, where $p(m)$ is the probability that message $m$ is chosen from all possible choices in the message space $M$. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of shannons or more often simply "bits" (a bit in other contexts is rather defined as a "binary digit", whose average information content is at most 1 shannon). Information from a source is gained by a recipient only if the recipient did not already have that information to begin with. Messages that convey information about a certain (P = 1) event (or one which is known with certainty, for instance, through a back-channel) provide no information, as the above equation indicates. Infrequently occurring messages contain more information than more frequently occurring messages. It can also be shown that a compound message of two (or more) unrelated messages would
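As a worked example of the self-information formula (base-2 logarithm; the probabilities are chosen purely for illustration): a message with probability $p(m) = \tfrac{1}{2}$ carries $I(m) = \log_2 2 = 1$ shannon, while a message with probability $p(m) = \tfrac{1}{8}$ carries $I(m) = \log_2 8 = 3$ shannons, consistent with the statement that rarer messages carry more information.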
https://en.wikipedia.org/wiki/Clamper%20%28electronics%29
A clamper (or clamping circuit or clamp) is an electronic circuit that fixes either the positive or the negative peak excursions of a signal to a defined voltage by adding a variable positive or negative DC voltage to it. The clamper does not restrict the peak-to-peak excursion of the signal (clipping); it moves the whole signal up or down so as to place its peaks at the reference level. A diode clamp (a simple, common type) consists of a diode, which conducts electric current in only one direction and prevents the signal exceeding the reference value; and a capacitor, which provides a DC offset from the stored charge. The capacitor forms a time constant with a resistor load, which determines the range of frequencies over which the clamper will be effective. General function A clamper will bind the upper or lower extreme of a waveform to a fixed DC voltage level. These circuits are also known as DC voltage restorers. Clampers can be constructed in both positive and negative polarities. When unbiased, clamping circuits will fix the voltage lower limit (or upper limit, in the case of negative clampers) to 0 volts. These circuits clamp a peak of a waveform to a specific DC level compared with a capacitively coupled signal, which swings about its average DC level. The clamping network is one that will "clamp" a signal to a different DC level. The network must have a capacitor, a diode, and optionally a resistive element and/or load, but it can also employ an independent DC supply to introduce an additional shift. The magnitude of R and C must be chosen such that the time constant RC is large enough to ensure that the voltage across the capacitor does not discharge significantly during the interval the diode is nonconducting. Types Clamp circuits are categorised by their operation: negative or positive, and biased or unbiased. A positive clamp circuit (negative peak clamper) outputs a purely positive waveform from an input signal; it offsets the input signal so that
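A compact way to express the component-selection rule in the last sentence (the factor of ten is an illustrative rule of thumb, not a figure from the excerpt) is to choose

$$RC \gg T = \frac{1}{f}, \qquad \text{e.g. } RC \ge 10\,T,$$

where $f$ is the lowest-frequency component of the input signal, so that the capacitor loses only a small fraction of its charge while the diode is not conducting.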
https://en.wikipedia.org/wiki/Gothenburg%20International%20Bioscience%20Business%20School
Gothenburg International Bioscience Business School (GIBBS) is an educational platform with a focus on business creation within the bio- and life sciences in Gothenburg, Sweden. Students at the school study intellectual property, management, economics and business development. The skills and tools to drive innovation and growth are taught through an increasingly acknowledged pedagogy of ‘experiential knowledge’ and ‘team based learning’, allowing the students to learn by doing. The programme is an international, action-based, and multi-disciplinary education whose platform calls for special competencies and resources. Consequently, students engage in “real-life” commercialization projects supported by instructors who apply their entrepreneurial experience and networks to the pedagogical experience offered to a select group of students. The education is a collaboration between Sahlgrenska Academy at Göteborg University and Chalmers University of Technology and is a part of the Center for Intellectual Property Studies, CIP. Most of the collaboration takes place during the first year, with a shared curriculum studied alongside peers from various backgrounds such as law, engineering, life sciences, and management. In the second year, students work in groups on an innovation project with the aim of commercializing an innovation. The University of Gothenburg offers an education geared towards life science students who become entrepreneurial project leaders ready to deal with the uncertainties and distinct dynamics of the life science industry and start-ups. The graduates from the programme are not only well versed in their respective life science fields; uniquely, they are also entrepreneurial project leaders, an essential mix that prepares them for the distinctive life science market dynamics in their chosen career, equipped to recognize possibilities and create growth. Chalmers University of Technology offers an education geared towards engineers in technology based v
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20shells
This is a list of official state shells for those states of the United States that have chosen to select one as part of their state insignia. In 1965, North Carolina was the first state to designate an official state shell, the Scotch bonnet. Since then, 14 other states have designated an official state shell. These are seashells, the shells of various marine mollusks including both gastropods and bivalves. Each one was chosen to represent a maritime state, based on the fact that the species occurs in that state and was considered suitable to represent it, either because of the species' commercial importance as a local seafood item, or because of its beauty, rarity, exceptional size, or other features. Table See also List of U.S. state, district, and territorial insignia
https://en.wikipedia.org/wiki/Off-site%20data%20protection
In computing, off-site data protection, or vaulting, is the strategy of sending critical data out of the main location (off the main site) as part of a disaster recovery plan. Data is usually transported off-site using removable storage media such as magnetic tape or optical storage. Data can also be sent electronically via a remote backup service, which is known as electronic vaulting or e-vaulting. Sending backups off-site ensures systems and servers can be reloaded with the latest data in the event of a disaster, accidental error, or system crash. Sending backups off-site also ensures that there is a copy of pertinent data that is not stored on-site. Although some organizations manage and store their own off-site backups, many choose to have their backups managed and stored by third parties who specialize in the commercial protection of off-site data. Data vaults The storage of off-site data is also known as vaulting, as backups are stored in purpose-built vaults. There are no generally recognized standards for the type of structure which constitutes a vault. That said, commercial vaults typically fit into three categories: Underground vaults – often converted defunct Cold War military or communications facilities, or even disused mines. Free-standing dedicated vaults Insulated chambers sharing facilities – often implemented within existing record center buildings. Hybrid on-site and off-site vaulting Hybrid on-site and off-site data vaulting, sometimes known as Hybrid Online Backup, involves a combination of local backup for fast backup and restore, along with off-site backup for protection against local disasters. This ensures that the most recent data is available locally in the event of need for recovery, while archived data that is needed much less often is stored in the cloud. Hybrid Online Backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to C
https://en.wikipedia.org/wiki/Tarski%E2%80%93Grothendieck%20set%20theory
Tarski–Grothendieck set theory (TG, named after mathematicians Alfred Tarski and Alexander Grothendieck) is an axiomatic set theory. It is a non-conservative extension of Zermelo–Fraenkel set theory (ZFC) and is distinguished from other axiomatic set theories by the inclusion of Tarski's axiom, which states that for each set there is a Grothendieck universe it belongs to (see below). Tarski's axiom implies the existence of inaccessible cardinals, providing a richer ontology than ZFC. For example, adding this axiom supports category theory. The Mizar system and Metamath use Tarski–Grothendieck set theory for formal verification of proofs. Axioms Tarski–Grothendieck set theory starts with conventional Zermelo–Fraenkel set theory and then adds “Tarski's axiom”. We will use the axioms, definitions, and notation of Mizar to describe it. Mizar's basic objects and processes are fully formal; they are described informally below. First, let us assume that: Given any set $x$, the singleton $\{x\}$ exists. Given any two sets, their unordered and ordered pairs exist. Given any set of sets, its union exists. TG includes the following axioms, which are conventional because they are also part of ZFC: Set axiom: Quantified variables range over sets alone; everything is a set (the same ontology as ZFC). Axiom of extensionality: Two sets are identical if they have the same members. Axiom of regularity: No set is a member of itself, and circular chains of membership are impossible. Axiom schema of replacement: Let the domain of the class function $F$ be the set $A$. Then the range of $F$ (the values of $F(x)$ for all members $x$ of $A$) is also a set. It is Tarski's axiom that distinguishes TG from other axiomatic set theories. Tarski's axiom also implies the axioms of infinity, choice, and power set. It also implies the existence of inaccessible cardinals, thanks to which the ontology of TG is much richer than that of conventional set theories such as ZFC. Tarski's axiom (adapted from Tarski
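Tarski's axiom is commonly stated along the following lines (a paraphrase for orientation; the exact formulation in Tarski 1939 and in Mizar differs in detail): for every set $x$ there exists a set $y$ such that $x \in y$; every subset of every member of $y$ is a member of $y$; the power set of every member of $y$ is a member of $y$; and every subset of $y$ of cardinality less than that of $y$ is a member of $y$.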
https://en.wikipedia.org/wiki/Chimeraplasty
Chimeraplasty is a non-viral method of gene therapy. Chimeraplasty changes DNA sequences using a synthetic strand of RNA and DNA. This strand of RNA and DNA is known as a chimeraplast. The chimeraplast enters a cell and attaches itself to the target gene. The DNA of the chimeraplast and the cell complement each other except in the middle of the strand, where the chimeraplast's sequence is different from that of the cell. The DNA repair enzymes then replace the cell's DNA with that of the chimeraplast. This leaves the chimeraplast's new sequence in the cell's DNA and the replaced DNA sequence then decays. This technique was first developed and named by Eric Kmiec at Thomas Jefferson University. Since its discovery there has been debate over chimeraplasty's effectiveness. In a 6 September 1996 article in Science, Kmiec claimed that chimeraplasty was 50% effective in human cells. This figure was later disputed by a number of universities; chimeraplasty is now considered 0.4–2.4% effective at transforming fibroblasts, and 0.0002% effective in transforming yeast cells.
https://en.wikipedia.org/wiki/Charge%20carrier%20density
Charge carrier density, also known as carrier concentration, denotes the number of charge carriers per volume. In SI units, it is measured in m−3. As with any density, in principle it can depend on position. However, usually carrier concentration is given as a single number, and represents the average carrier density over the whole material. Charge carrier densities appear in equations concerning the electrical conductivity, related phenomena like the thermal conductivity, and chemical bonds like the covalent bond. Calculation The carrier density is usually obtained theoretically by integrating the density of states over the energy range of charge carriers in the material (e.g. integrating over the conduction band for electrons, integrating over the valence band for holes). If the total number of charge carriers is known, the carrier density can be found by simply dividing by the volume. To show this mathematically, charge carrier density is a particle density, so integrating it over a volume $V$ gives the number of charge carriers $N$ in that volume, $N = \int_V n(\mathbf{r}) \, d^3\mathbf{r}$, where $n(\mathbf{r})$ is the position-dependent charge carrier density. If the density does not depend on position and is instead equal to a constant $n_0$, this equation simplifies to $N = n_0 V$. Semiconductors The carrier density is important for semiconductors, where it is an important quantity for the process of chemical doping. Using band theory, the electron density, $n$, is the number of electrons per unit volume in the conduction band. For holes, $p$ is the number of holes per unit volume in the valence band. To calculate this number for electrons, we start with the idea that the total density of conduction-band electrons, $n$, is just adding up the conduction electron density across the different energies in the band, from the bottom of the band $E_c$ to the top of the band $E_{\text{top}}$. Because electrons are fermions, the density of conduction electrons at any particular energy is the product of the density of states, or how many conducting states are possible, with the F
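In band theory this sum over energies is conventionally written as the integral (standard symbols, assumed here rather than reproduced from the excerpt)

$$n = \int_{E_c}^{E_{\mathrm{top}}} g(E)\, f(E)\, dE, \qquad f(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1},$$

where $g(E)$ is the density of states and $f(E)$ is the Fermi–Dirac distribution giving the probability that a state at energy $E$ is occupied.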
https://en.wikipedia.org/wiki/Pulse-coupled%20networks
Pulse-coupled networks or pulse-coupled neural networks (PCNNs) are neural models proposed by modeling a cat's visual cortex, and developed for high-performance biomimetic image processing. In 1989, Eckhorn introduced a neural model to emulate the mechanism of a cat's visual cortex. The Eckhorn model provided a simple and effective tool for studying the visual cortex of small mammals, and was soon recognized as having significant application potential in image processing. In 1994, Johnson adapted the Eckhorn model to an image processing algorithm, calling this algorithm a pulse-coupled neural network. Over the past decade, PCNNs have been used in a variety of image processing applications, including: image segmentation, feature generation, face extraction, motion detection, region growing, and noise reduction. The basic property of Eckhorn's linking-field model (LFM) is the coupling term. LFM is a modulation of the primary input by a biased offset factor driven by the linking input. These drive a threshold variable that decays from an initial high value. When the threshold drops below zero it is reset to a high value and the process starts over. This is different from the standard integrate-and-fire neural model, which accumulates the input until it passes an upper limit and effectively "shorts out" to cause the pulse. LFM uses this difference to sustain pulse bursts, something the standard model does not do on a single neuron level. It is valuable to understand, however, that a detailed analysis of the standard model must include a shunting term, due to the floating voltage levels in the dendritic compartment(s), and in turn this causes an elegant multiple modulation effect that enables a true higher-order network (HON). Multidimensional pulse image processing of chemical structure data using PCNN has been discussed by Kinser et al. A PCNN is a two-dimensional neural network. Each neuron in the network corresponds to one pixel in an input image, receiving its
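A minimal sketch of one commonly used discrete PCNN formulation is given below; the kernel weights, decay constants and gain parameters are illustrative choices rather than values from the excerpt, and the numpy/scipy names are merely convenient tools for the demonstration.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn(S, n_iter=10, beta=0.2, alpha_F=0.1, alpha_L=1.0, alpha_T=0.3,
         V_F=0.5, V_L=0.2, V_T=20.0):
    """Iterate a basic PCNN over a normalized 2-D image S; returns the binary pulse maps."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # local coupling kernel (illustrative weights)
    F = np.zeros_like(S)                          # feeding compartment
    L = np.zeros_like(S)                          # linking compartment
    Y = np.zeros_like(S)                          # binary pulse output
    T = np.ones_like(S)                           # dynamic threshold
    pulses = []
    for _ in range(n_iter):
        W = convolve2d(Y, K, mode="same")         # pulses received from neighbouring neurons
        F = np.exp(-alpha_F) * F + V_F * W + S    # feeding input includes the pixel value (primary input)
        L = np.exp(-alpha_L) * L + V_L * W        # linking input
        U = F * (1.0 + beta * L)                  # modulatory coupling term of the linking-field model
        Y = (U > T).astype(float)                 # neuron pulses where internal activity exceeds threshold
        T = np.exp(-alpha_T) * T + V_T * Y        # threshold decays, then jumps after each pulse
        pulses.append(Y)
    return pulses

# Example: a toy 8x8 image with a bright square on a dark background;
# the bright region pulses together by the second iteration.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
maps = pcnn(img)
print(maps[1].astype(int))
```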
https://en.wikipedia.org/wiki/Betti%27s%20theorem
Betti's theorem, also known as the Maxwell–Betti reciprocal work theorem, discovered by Enrico Betti in 1872, states that for a linear elastic structure subject to two sets of forces $\{P_i\},\ i=1,\ldots,n$, and $\{Q_j\},\ j=1,\ldots,n$, the work done by the set P through the displacements produced by the set Q is equal to the work done by the set Q through the displacements produced by the set P. This theorem has applications in structural engineering where it is used to define influence lines and derive the boundary element method. Betti's theorem is used in the design of compliant mechanisms by the topology optimization approach. Proof Consider a solid body subjected to a pair of external force systems, referred to as $P$ and $Q$. Consider that each force system causes a displacement field, with the displacements measured at the external forces' points of application referred to as $\delta^P$ and $\delta^Q$. When the $P$ force system is applied to the structure, the balance between the work performed by the external force system and the strain energy is: $\tfrac{1}{2}\sum_{i=1}^n P_i\,\delta^P_i = U_P$. The work-energy balance associated with the $Q$ force system is as follows: $\tfrac{1}{2}\sum_{j=1}^n Q_j\,\delta^Q_j = U_Q$. Now, consider that with the $P$ force system applied, the $Q$ force system is applied subsequently. As $P$ is already applied and therefore won't cause any extra displacement, the work-energy balance assumes the following expression: $U_{P,Q} = \tfrac{1}{2}\sum_{i=1}^n P_i\,\delta^P_i + \tfrac{1}{2}\sum_{j=1}^n Q_j\,\delta^Q_j + \sum_{i=1}^n P_i\,\delta^Q_i$. Conversely, if we consider the $Q$ force system already applied and the $P$ external force system applied subsequently, the work-energy balance will assume the following expression: $U_{Q,P} = \tfrac{1}{2}\sum_{j=1}^n Q_j\,\delta^Q_j + \tfrac{1}{2}\sum_{i=1}^n P_i\,\delta^P_i + \sum_{j=1}^n Q_j\,\delta^P_j$. If the work-energy balance for the cases where the external force systems are applied in isolation are respectively subtracted from the cases where the force systems are applied simultaneously, we arrive at the following equations: $U_{P,Q} - U_P - U_Q = \sum_{i=1}^n P_i\,\delta^Q_i$ and $U_{Q,P} - U_P - U_Q = \sum_{j=1}^n Q_j\,\delta^P_j$. If the solid body where the force systems are applied is formed by a linear elastic material and if the force systems are such that only infinitesimal strains are observed in the body, then the body's constitutive equation, which may follow Hooke's law, can be
https://en.wikipedia.org/wiki/Ordinal%20notation
In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties: the subset of natural numbers is a recursive set the induced well-ordering on the subset of natural numbers is a recursive relation There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here. Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contra
https://en.wikipedia.org/wiki/Kokko%20and%20Rector%20Model
The Kokko and Rector model is a theory explaining how a gradient is generated in the inner medulla of the kidney. Unlike earlier theories, which explained the mechanism by a countercurrent mechanism (as is the case in the outer medulla), the model states that the driving force for salt reabsorption is urea accumulation. It has been shown that a countercurrent mechanism cannot operate in the inner medulla, since there are no salt pumps and the cell membrane is too permeable to salt. History The model was proposed by Juha Kokko and Floyd Rector Jr. in 1972.
https://en.wikipedia.org/wiki/Microlocal%20analysis
In mathematical analysis, microlocal analysis comprises techniques developed from the 1950s onwards based on Fourier transforms related to the study of variable-coefficient linear and nonlinear partial differential equations. This includes generalized functions, pseudo-differential operators, wave front sets, Fourier integral operators, oscillatory integral operators, and paradifferential operators. The term microlocal implies localisation not only with respect to location in space, but also with respect to cotangent space directions at a given point. This gains in importance on manifolds of dimension greater than one. See also Algebraic analysis Microfunction External links lecture notes by Richard Melrose newer lecture notes by Richard Melrose
https://en.wikipedia.org/wiki/Flora%20of%20Australia%20%28series%29
Flora of Australia is a 59-volume series describing the vascular plants, bryophytes and lichens present in Australia and its external territories. The series is published by the Australian Biological Resources Study, which estimates that the series, when complete, will describe over 20 000 plant species. It was orchestrated by Alison McCusker. Series Volume 1 of the series was published in 1981; a second, extended edition was released in 1999. The series uses the Cronquist system of taxonomy. The ABRS also published the Fungi of Australia, the Algae of Australia and the Flora of Australia Supplementary Series. A new online Flora of Australia was launched by ABRS in 2017, and no more printed volumes will be published. Volumes published 1. Introduction (1st edition) 1981 1. Introduction (2nd edition) 1999 Other Australian floras A few censuses of the Australian flora have been carried out; they include 1793-95 - J. E. Smith - A Specimen of the Botany of New Holland 1804-05 - J. E. Smith - Exotic Botany 1804-07 - J. J. H. de Labillardière - Novae Hollandiae Plant. Spec 1810 - R. Brown - Prodromus Florae Novae Hollandiae et Insulae Van Diemen 1814 - R. Brown - Botanical Appendix to Flinders' Voyage 1849 - R. Brown - Botanical Appendix to C. Sturt, Narrative of an Expedition into Central Australia 1856 - J. D. Hooker - Introductory Essay, Flora Tasmaniae 1863-78 - G. Bentham - Flora Australiensis 1882 - F. Mueller - Systematic Census of Australian Plants 1889 - F. Mueller - Second Systematic Census 1990 - R. J. Hnatiuk - Census of Australian Vascular Plants See also Flora of Australia Systematic Census of Australian Plants
https://en.wikipedia.org/wiki/Differential%20adhesion%20hypothesis
Differential adhesion hypothesis (DAH) is a hypothesis that explains cellular movement during morphogenesis with thermodynamic principles. In DAH tissues are treated as liquids consisting of mobile cells whose varying degrees of surface adhesion cause them to reorganize spontaneously to minimize their interfacial free energy. Put another way, according to DAH, cells move to be near other cells of similar adhesive strength in order to maximize the bonding strength between cells and produce a more thermodynamically stable structure. In this way the movement of cells during tissue formation, according to DAH, parallels the behavior of a mixture of liquids. Although originally motivated by the problem of understanding cell sorting behavior in vertebrate embryos, DAH has subsequently been applied to explain several other morphogenic phenomena. Background The origins of DAH can be traced back to a 1955 study by Philip L. Townes and Johannes Holtfreter. In this study Townes and Holtfreter placed the three germ layers of an amphibian into an alkaline solution, allowing them to dissociate into individual cells, and mixed these different types of cells together. Cells of different species were used to be able to visually observe and follow their movements. Cells of similar types migrated to their correct location and reaggregated to form germ layers in their developmentally correct positions. This experiment demonstrated that tissue organization can occur independently of the path taken, implying that it is mediated by forces that are persistently present and does not arise solely from the chronological sequence of developmental events preceding it. From these results Holtfreter developed his concept of selective affinity, and hypothesized that well-timed changes to selective affinity of cells to one another throughout development guided morphogenesis. Several hypotheses were introduced to explain these results including the "timing hypothesis" and the "differential surface con
https://en.wikipedia.org/wiki/CryptGenRandom
CryptGenRandom is a deprecated cryptographically secure pseudorandom number generator function that is included in Microsoft CryptoAPI. In Win32 programs, Microsoft formerly recommended its use anywhere random number generation is needed. A 2007 paper from Hebrew University suggested security problems in the Windows 2000 implementation of CryptGenRandom (assuming the attacker has control of the machine). Microsoft later acknowledged that the same problems exist in Windows XP, but not in Vista. Microsoft released a fix for the bug with Windows XP Service Pack 3 in mid-2008. Background The Win32 API includes comprehensive support for cryptographic security, including native TLS support (via the SCHANNEL API) and code signing. These capabilities are built on native Windows libraries for cryptographic operations, such as RSA and AES key generation. These libraries in turn rely on a cryptographically secure pseudorandom number generator (CSPRNG). CryptGenRandom is the standard CSPRNG for the Win32 programming environment. Method of operation Microsoft-provided cryptography providers share the same implementation of CryptGenRandom, currently based on an internal function called RtlGenRandom. Only a general outline of the algorithm had been published: [RtlGenRandom] generates as specified in FIPS 186-2 appendix 3.1 with SHA-1 as the G function. And with entropy from: The current process ID (GetCurrentProcessID). The current thread ID (GetCurrentThreadID). The tick count since boot time (GetTickCount). The current time (GetLocalTime). Various high-precision performance counters (QueryPerformanceCounter). An MD4 hash of the user's environment block, which includes username, computer name, and search path. [...] High-precision internal CPU counters, such as RDTSC, RDMSR, RDPMC [omitted: long lists of low-level system information fields and performance counters] Security The security of a cryptosystem's CSPRNG is significant because it is the origin for dynamic key material. Keys
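For illustration only: the underlying RtlGenRandom routine is exported from advapi32.dll under the name SystemFunction036, and on Windows it can be reached from Python via ctypes. This is a hedged sketch, not a recommendation (new code should normally prefer os.urandom or the CNG-based APIs such as BCryptGenRandom).

```python
import ctypes

def rtl_gen_random(n_bytes: int) -> bytes:
    """Fill a buffer using RtlGenRandom (exported as SystemFunction036). Windows only."""
    buf = ctypes.create_string_buffer(n_bytes)
    # BOOLEAN SystemFunction036(PVOID RandomBuffer, ULONG RandomBufferLength)
    ok = ctypes.windll.advapi32.SystemFunction036(buf, n_bytes)
    if not ok:
        raise OSError("SystemFunction036 (RtlGenRandom) failed")
    return buf.raw

# Example: 16 random bytes, printed as hex (illustrative use only).
print(rtl_gen_random(16).hex())
```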
https://en.wikipedia.org/wiki/Prefix%20sum
In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers $x_0, x_1, x_2, \ldots$ is a second sequence of numbers $y_0, y_1, y_2, \ldots$, the sums of prefixes (running totals) of the input sequence: $y_0 = x_0$, $y_1 = x_0 + x_1$, $y_2 = x_0 + x_1 + x_2$, and so on. For instance, the prefix sums of the natural numbers are the triangular numbers: input numbers 1, 2, 3, 4, 5, 6, ...; prefix sums 1, 3, 6, 10, 15, 21, .... Prefix sums are trivial to compute in sequential models of computation, by using the formula $y_i = y_{i-1} + x_i$ to compute each output value in sequence order. However, despite their ease of computation, prefix sums are a useful primitive in certain algorithms such as counting sort, and they form the basis of the scan higher-order function in functional programming languages. Prefix sums have also been much studied in parallel algorithms, both as a test problem to be solved and as a useful primitive to be used as a subroutine in other parallel algorithms. Abstractly, a prefix sum requires only a binary associative operator ⊕, making it useful for many applications from calculating well-separated pair decompositions of points to string processing. Mathematically, the operation of taking prefix sums can be generalized from finite to infinite sequences; in that context, a prefix sum is known as a partial sum of a series. Prefix summation or partial summation form linear operators on the vector spaces of finite or infinite sequences; their inverses are finite difference operators. Scan higher order function In functional programming terms, the prefix sum may be generalized to any binary operation (not just the addition operation); the higher order function resulting from this generalization is called a scan, and it is closely related to the fold operation. Both the scan and the fold operations apply the given binary operation to the same sequence of values, but differ in that the scan returns the whole sequence of results from the binary operation, whereas
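A short demonstration of the inclusive scan and its generalization to other associative operators, using Python's itertools.accumulate (the sample values are arbitrary):

```python
from itertools import accumulate
import operator

xs = [1, 2, 3, 4, 5, 6]

# Inclusive scan (prefix sums): each output is the running total of the input.
print(list(accumulate(xs)))                # [1, 3, 6, 10, 15, 21]

# The same idea works for any associative binary operator, which is what the
# generalized "scan" computes: running products and running maxima, for example.
print(list(accumulate(xs, operator.mul)))  # [1, 2, 6, 24, 120, 720]
print(list(accumulate(xs, max)))           # [1, 2, 3, 4, 5, 6]
```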
https://en.wikipedia.org/wiki/Andreev%20reflection
Andreev reflection (AR), named after the Russian physicist Alexander F. Andreev, is a type of particle scattering which occurs at interfaces between a superconductor (S) and a normal state material (N). It is a charge-transfer process by which normal current in N is converted to supercurrent in S. Each Andreev reflection transfers a charge 2e across the interface, avoiding the forbidden single-particle transmission within the superconducting energy gap. Overview The process involves an electron (hole) incident on the interface from the normal state material at energies less than the superconducting energy gap. The incident electron (hole) forms a Cooper pair in the superconductor with the retroreflection of a hole (electron) of opposite spin and velocity but equal momentum to the incident electron (hole), as seen in the figure. The barrier transparency is assumed to be high, with no oxide or tunnel layer which reduces instances of normal electron-electron or hole-hole scattering at the interface. Since the pair consists of an up and down spin electron, a second electron (hole) of opposite spin to the incident electron (hole) from the normal state forms the pair in the superconductor, and hence the retroreflected hole (electron). Through time-reversal symmetry, the process with an incident electron will also work with an incident hole (and retroreflected electron). The process is highly spin-dependent – if only one spin band is occupied by the conduction electrons in the normal-state material (i.e. it is fully spin-polarized), Andreev reflection will be inhibited due to inability to form a pair in the superconductor and impossibility of single-particle transmission. In a ferromagnet or material where spin-polarization exists or may be induced by a magnetic field, the strength of the Andreev reflection (and hence conductance of the junction) is a function of the spin-polarization in the normal state. The spin-dependence of AR gives rise to the Point Contact Andre
https://en.wikipedia.org/wiki/Dynamic%20testing
Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response from the system to variables that are not constant and change with time. In dynamic testing the software must actually be compiled and run. It involves working with the software, giving input values and checking whether the output is as expected, by executing specific test cases either manually or with an automated process. This is in contrast to static testing. Unit tests, integration tests, system tests and acceptance tests utilize dynamic testing. Usability tests involving a mock version made of paper or cardboard can be classified as static tests, considering that no program is executed, or as dynamic ones, considering that the interaction between users and such a mock version is effectively the most basic form of a prototype. Main procedure In the software development process, dynamic testing can be divided into unit testing, integration testing, system testing, acceptance testing and finally regression testing. Unit testing is a test that focuses on the correctness of the basic components of the software. Unit testing falls into the category of white-box testing. In the entire quality inspection system, unit testing needs to be completed by the product group, and then the software is handed over to the testing department. Integration testing is used to detect if the interfaces between the various units are properly connected during the integration process of the entire software. Testing a software system that has completed integration is called a system test, and the purpose of the test is to verify that the correctness and performance of the software system meet the requirements specified in its specifications. Testers should follow the established test plan. When testing the robustn
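A minimal illustration of dynamic testing at the unit level: the function under test (a made-up add helper, not taken from the text) is actually executed with chosen inputs, and its observed outputs are compared against expected values.

```python
import unittest

def add(a, b):
    """Function under test (illustrative example)."""
    return a + b

class AddTests(unittest.TestCase):
    # Dynamic testing: the code is run and its output is checked against expectations.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```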
https://en.wikipedia.org/wiki/RM-ODP
Reference Model of Open Distributed Processing (RM-ODP) is a reference model in computer science, which provides a co-ordinating framework for the standardization of open distributed processing (ODP). It supports distribution, interworking, platform and technology independence, and portability, together with an enterprise architecture framework for the specification of ODP systems. RM-ODP, also named ITU-T Rec. X.901-X.904 and ISO/IEC 10746, is a joint effort by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Telecommunication Standardization Sector (ITU-T). Overview The RM-ODP is a reference model based on precise concepts derived from current distributed processing developments and, as far as possible, on the use of formal description techniques for specification of the architecture. Many RM-ODP concepts, possibly under different names, have been around for a long time and have been rigorously described and explained in exact philosophy (for example, in the works of Mario Bunge) and in systems thinking (for example, in the works of Friedrich Hayek). Some of these concepts—such as abstraction, composition, and emergence—have recently been provided with a solid mathematical foundation in category theory. RM-ODP has four fundamental elements: an object modelling approach to system specification; the specification of a system in terms of separate but interrelated viewpoint specifications; the definition of a system infrastructure providing distribution transparencies for system applications; and a framework for assessing system conformance. The RM-ODP family of recommendations and international standards defines a system of interrelated essential concepts necessary to specify open distributed processing systems and provides a well-developed enterprise architecture framework for structuring the specifications for any large-scale systems including software systems. History Much of the pre
https://en.wikipedia.org/wiki/Passive%20cooling
Passive cooling is a building design approach that focuses on heat gain control and heat dissipation in a building in order to improve the indoor thermal comfort with low or no energy consumption. This approach works either by preventing heat from entering the interior (heat gain prevention) or by removing heat from the building (natural cooling). Natural cooling utilizes on-site energy, available from the natural environment, combined with the architectural design of building components (e.g. building envelope), rather than mechanical systems to dissipate heat. Therefore, natural cooling depends not only on the architectural design of the building but also on how the site's natural resources are used as heat sinks (i.e. everything that absorbs or dissipates heat). Examples of on-site heat sinks are the upper atmosphere (night sky), the outdoor air (wind), and the earth/soil. Passive cooling is an important tool for the design of buildings for climate change adaptation, reducing dependency on energy-intensive air conditioning in warming environments. Overview Passive cooling covers all natural processes and techniques of heat dissipation and modulation without the use of energy. Some authors consider that minor and simple mechanical systems (e.g. pumps and economizers) can be integrated in passive cooling techniques, as long as they are used to enhance the effectiveness of the natural cooling process. Such applications are also called 'hybrid cooling systems'. The techniques for passive cooling can be grouped into two main categories: Preventive techniques that aim to provide protection and/or prevention of external and internal heat gains. Modulation and heat dissipation techniques that allow the building to store and dissipate heat gain through the transfer of heat from heat sinks to the climate. This technique can be the result of thermal mass or natural cooling. Preventive techniques Protection from or prevention of heat gains encompasses all the design techniques that m
https://en.wikipedia.org/wiki/Object-oriented%20design
Object-oriented design (OOD) is the process of planning a system of interacting objects for the purpose of solving a software problem. It is one approach to software design. Overview An object contains encapsulated data and procedures grouped together to represent an entity. The 'object interface' defines how the object can be interacted with. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches where objects are not typically obtained by instantiating classes but by cloning other (prototype) objects. Object-oriented design is a method of design encompassing the process of object-oriented decomposition and a notation for depicting both logical and physical as well as state and dynamic models of the system under design. Object-oriented design topics Input (sources) for object-oriented design The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input of object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Some typical input artifacts for object-oriented design are: Conceptual model: The result of object-oriented analysis, it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage. Use case: A description of sequences of events
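As a concrete illustration of the ideas above, the following Python sketch (the class and method names are invented for this example and not drawn from any particular methodology) shows encapsulated data, an explicit object interface, and a program expressed as the interaction of objects:

```python
from dataclasses import dataclass
from typing import Protocol


class PaymentMethod(Protocol):
    """The object interface: how collaborators may interact with a payment object."""
    def charge(self, amount: float) -> bool: ...


@dataclass
class CreditCard:
    """Encapsulates data (card number, limit, amount spent) with the procedures that act on it."""
    number: str
    limit: float
    spent: float = 0.0

    def charge(self, amount: float) -> bool:
        if self.spent + amount > self.limit:
            return False
        self.spent += amount
        return True


@dataclass
class Order:
    """An object identified during analysis; the running program is the interaction of such objects."""
    total: float

    def pay_with(self, method: PaymentMethod) -> bool:
        # Order depends only on the interface, not on any concrete payment class.
        return method.charge(self.total)


# Interaction of objects: an Order collaborates with any object satisfying the interface.
order = Order(total=42.0)
print(order.pay_with(CreditCard(number="4111-XXXX", limit=100.0)))  # True
```

Designing against the interface rather than a concrete class is what lets the logical model stay independent of implementation details such as which payment objects exist at run time.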
https://en.wikipedia.org/wiki/Ivan%20Privalov
Ivan Ivanovich Privalov (11 February 1891 – 13 July 1941) was a Russian mathematician best known for his work on analytic functions. Biography Privalov graduated from Moscow State University (MSU) in 1913, studying under Dmitri Egorov and Nikolai Luzin. He obtained his master's degree from MSU in 1916 and became professor at Imperial Saratov University (1917–1922). In 1922 he was appointed as Professor at MSU and worked there for the rest of his life. He was a corresponding member of the USSR Academy of Sciences (from 1939) and a member of the French Mathematical Society (Société Mathématique de France) and the Mathematical Circle of Palermo (Circolo Matematico di Palermo). Research work Privalov wrote Cauchy Integral (1918), which built on work by Fatou. He also worked on many problems jointly with Luzin. In 1934 he studied subharmonic functions, building on the work of Riesz. PhD students Samarii Aleksandrovich Galpern. Publications Books I. I. Privalov, Subharmonic Functions, GITTL, Moscow, 1937. I. I. Privalov, Introduction to the Theory of Functions of a Complex Variable, GITTL, Moscow-Leningrad, 1948 (14th ed.: 1999). I. I. Privalov, Boundary Properties of Analytic Functions, 2nd ed., GITTL, Moscow-Leningrad, 1950. See also Luzin–Privalov theorems External links P. I. Kuznetsov and E. D. Solomentsev (1982). "Ivan Ivanovich Privalov (ninety years after his birth)". Russ. Math. Surv. 37: 152–174.
https://en.wikipedia.org/wiki/Photoelastic%20modulator
A photoelastic modulator (PEM) is an optical device used to modulate the polarization of a light source. The photoelastic effect is used to change the birefringence of the optical element in the photoelastic modulator. The PEM was first invented by J. Badoz in the 1960s and was originally called a "birefringence modulator." It was initially developed for physical measurements including optical rotatory dispersion and Faraday rotation, polarimetry of astronomical objects, strain-induced birefringence, and ellipsometry. Later developers of the photoelastic modulator include J. C. Kemp, S. N. Jasperson and S. E. Schnatterly. Description The basic design of a photoelastic modulator consists of a piezoelectric transducer and a half-wave resonant bar; the bar being a transparent material (now most commonly fused silica). The transducer is tuned to the natural frequency of the bar. This resonance modulation results in highly sensitive polarization measurements. The fundamental vibration of the optic is along its longest dimension. Basic principles The principle of operation of photoelastic modulators is based on the photoelastic effect, in which a mechanically stressed sample exhibits birefringence proportional to the resulting strain. Photoelastic modulators are resonant devices where the precise oscillation frequency is determined by the properties of the optical element/transducer assembly. The transducer is tuned to the resonance frequency of the optical element along its long dimension, determined by its length and the speed of sound in the material. A current is then sent through the transducer to vibrate the optical element through stretching and compressing, which changes the birefringence of the transparent material. Because of this resonant character, the birefringence of the optical element can be modulated to large amplitudes, but, for the same reason, the operation of a PEM is limited to a single frequency, and most commercial devices manufactured today opera
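As a rough illustration of the resonance behaviour described above (an idealisation, not a manufacturer's specification), the fundamental longitudinal resonance of the half-wave bar and the resulting time-varying retardation can be written as

```latex
f_r \approx \frac{v}{2L}, \qquad \delta(t) = \delta_0 \sin(2\pi f_r t)
```

where v is the speed of sound in the optical element, L its length along the vibration axis, δ0 the peak retardation set by the drive amplitude, and δ(t) the instantaneous retardation experienced by light passing through the element.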
https://en.wikipedia.org/wiki/FELICS
FELICS, which stands for Fast Efficient & Lossless Image Compression System, is a lossless image compression algorithm that performs five times faster than the original lossless JPEG codec and achieves a similar compression ratio. History It was invented by Paul G. Howard and Jeffrey S. Vitter of the Department of Computer Science at Brown University in Providence, Rhode Island, USA, and was first presented at the 1993 IEEE Data Compression Conference in Snowbird, Utah. It was successfully implemented in hardware and deployed as part of HiRISE on the Mars Reconnaissance Orbiter. Principle Like other lossless codecs for continuous-tone images, FELICS operates by decorrelating the image and encoding it with an entropy coder. The decorrelation uses the context Δ = H − L, where L and H are, respectively, the smaller and the larger of the pixel's two nearest neighbors (causal, already coded and known at the decoder), which provide the context for coding the present pixel P. Except at the top and left edges, these are the pixel above and the pixel to the left. For example, the neighbors of pixel X in the diagram are A and B, but if X were at the left side, its neighbors would be B and D. P lies within the closed interval [L, H] roughly half the time. Otherwise, it is above H or below L. These three cases can be encoded as 1, 01, and 00 respectively (p. 4). The following figure shows the (idealized) histogram of the pixels and their intensity values along the x-axis, and frequency of occurrence along the y-axis. The distribution of P within the range [L, H] is nearly uniform with a minor peak near the center of this range. When P falls in the range [L, H], P − L is encoded using an adjusted binary code such that values in the center of the range use floor(log2(Δ + 1)) bits and values at the ends use ceil(log2(Δ + 1)) bits (p. 2). For example, when Δ = 11, the codes for P − L in 0 to 11 may be 0000, 0001, 0010, 0011, 010, 011, 100, 101, 1100, 1101, 1110, 1111. Outside the range, P tends to follow a geometric distribution on each s
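The per-pixel decision described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the reference FELICS coder: the indicator bits follow the 1/01/00 scheme quoted above, the adjusted binary code reproduces the code lengths (shorter codes for central values) but not necessarily the exact codeword assignment of the paper, and the out-of-range residuals use a Golomb-Rice code with a fixed parameter k, whereas FELICS selects that parameter adaptively per context.

```python
def encode_pixel(P, A, B, emit, k=2):
    """Per-pixel coding decision in the FELICS style (illustrative, not the reference coder).

    A, B: the two causal neighbour intensities; P: the current pixel value.
    emit: callable receiving strings of '0'/'1' bits.
    k: Rice parameter for out-of-range residuals (FELICS chooses this adaptively per context).
    """
    L, H = min(A, B), max(A, B)
    if L <= P <= H:
        emit('1')                                  # P lies in [L, H]
        adjusted_binary(P - L, H - L + 1, emit)
    elif P > H:
        emit('01')                                 # P above the range
        rice(P - H - 1, k, emit)
    else:
        emit('00')                                 # P below the range
        rice(L - P - 1, k, emit)


def adjusted_binary(x, n, emit):
    """Code x in [0, n): central values get floor(log2 n) bits, extremes get ceil(log2 n) bits.
    Code lengths match the scheme described above; the exact codeword assignment may differ."""
    if n == 1:
        return                                     # L == H, nothing to transmit
    bits = n.bit_length() - 1                      # floor(log2 n)
    short = 2 ** (bits + 1) - n                    # how many values get the shorter code
    order = sorted(range(n), key=lambda v: abs(v - (n - 1) / 2))   # centre of the range first
    y = order.index(x)
    if y < short:
        emit(format(y, f'0{bits}b'))
    else:
        emit(format(y + short, f'0{bits + 1}b'))


def rice(x, k, emit):
    """Golomb-Rice code: unary quotient then k remainder bits (suits the geometric tail)."""
    emit('1' * (x >> k) + '0')
    if k:
        emit(format(x & ((1 << k) - 1), f'0{k}b'))
```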
https://en.wikipedia.org/wiki/Punctuated%20gradualism
Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution. Description While the traditional model of paleontology, the phylogenetic model, posits that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes don't happen over a gradual period but in localized, rare, rapid events of branching speciation. Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states. History In 1983, Malmgren and colleagues published a paper called "Evidence for punctuated gradualism in the late Neogene Globorotalia tumida lineage of planktonic foraminifera." This paper studied the lineage of planktonic foraminifera, specifically the evolutionary transition from G. plesiotumida to G. tumida across the Miocene/Pliocene boundary. The study found that the G. tumida lineage, while remaining in relative stasis over a considerable part of its total duration underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching. Based on these findings, Malmgren and colleagues introduced a new mode of evolution and proposed to call it "punctuated gradualism." There is strong evidence supporting both gradual evolution of a species over time and rapid events of species evolution separated by periods of little evolutionary change. Organisms have a great propensity to adapt and evolve depending on the circumstances. Studies Studies
https://en.wikipedia.org/wiki/Single%20vegetative%20obstruction%20model
The ITU Single Vegetative Obstruction Model is a radio propagation model that quantitatively approximates the attenuation due to vegetation in the middle of a telecommunication link. Applicable to/under conditions The model is applicable to scenarios where neither end of the link is completely inside foliage, but a single plant or tree stands in the middle of the link. Coverage Frequency = Below 3 GHz and Over 5 GHz Depth = Not specified Mathematical formulations The single vegetative obstruction model is formally expressed as A = dγ for frequencies at or below 3 GHz, and A = dRf + k(1 − e^(−(Ri − Rf)d/k)) for frequencies over 5 GHz, where, A = The attenuation due to vegetation. Unit: decibel (dB). d = Depth of foliage. Unit: meter (m). γ = Specific attenuation for short vegetative paths. Unit: decibel per meter (dB/m). Ri = The initial slope of the attenuation curve. Rf = The final slope of the attenuation curve. f = The frequency of operation. Unit: gigahertz (GHz). k = Empirical constant. Calculation of slopes Initial slope is calculated as Ri = af, and the final slope as Rf = bf^c, where a, b and c are empirical constants (given in the table below). Calculation of k k is computed as k = k0 − 10 log10[A0(1 − e^(−Ai/A0))(1 − e^(−Rf f))], where, k0 = Empirical constant (given in the table below). Rf = Empirical constant for frequency-dependent attenuation. A0 = Empirical attenuation constant (given in the table below). Ai = Illuminated area. Calculation of Ai Ai is calculated using either of the equations below. A point to note is that the terms h, hT, hR, w, wT and wR are defined perpendicular to the (assumed horizontal) line joining the transmitter and receiver. The first three terms are measured vertically and the other three are measured horizontally. Equation 1: Equation 2: where, wT = Width of illuminated area as seen from the transmitter. Unit: meter (m) wR = Width of illuminated area as seen from the receiver. Unit: meter (m) w = Width of the vegetation. Unit: meter (m) hT = Height of illuminated area as seen from the transmitter. Unit: meter (m) hR = Height of illuminated area as seen from the
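A small calculator written directly from the dual-slope form above might look like the following Python sketch. It is an illustration of the formula, not a validated implementation of the ITU-R recommendation: the empirical constants a, b, c and k are taken as inputs because the tables that supply them are not reproduced here.

```python
import math


def single_vegetation_attenuation(d_m, f_ghz, a, b, c, k):
    """Attenuation A (dB) of a single vegetative obstruction for frequencies over 5 GHz,
    using the dual-slope form A = d*Rf + k*(1 - exp(-(Ri - Rf)*d/k))."""
    r_i = a * f_ghz              # initial slope of the attenuation curve (dB/m)
    r_f = b * f_ghz ** c         # final slope of the attenuation curve (dB/m)
    return d_m * r_f + k * (1.0 - math.exp(-(r_i - r_f) * d_m / k))


def short_path_attenuation(d_m, gamma_db_per_m):
    """At or below 3 GHz the model reduces to A = d * gamma (specific attenuation)."""
    return d_m * gamma_db_per_m
```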
https://en.wikipedia.org/wiki/Okumura%20model
The Okumura model is a radio propagation model that was built using the data collected in the city of Tokyo, Japan. The model is ideal for use in cities with many urban structures but not many tall blocking structures. The model served as a base for the Hata model. The Okumura model was built for three modes: urban, suburban and open areas. The model for urban areas was built first and used as the base for the others. Coverage Frequency: 150–1920 MHz Mobile station antenna height: between 1 m and 3 m Base station antenna height: between 30 m and 100 m Link distance: between 1 km and 100 km Mathematical formulation The Okumura model is formally expressed as: L = LFSL + AMU − HMG − HBG − Kcorrection, where, L = The median path loss. Unit: decibel (dB) LFSL = The free space loss. Unit: decibel (dB) AMU = Median attenuation. Unit: decibel (dB) HMG = Mobile station antenna height gain factor. HBG = Base station antenna height gain factor. Kcorrection = Correction factor gain (such as type of environment, water surfaces, isolated obstacle etc.) Points to note Okumura's model is one of the most widely used models for signal prediction in urban areas. This model is applicable for frequencies in the range 150–1920 MHz (although it is typically extrapolated up to 3000 MHz) and distances of 1–100 km. It can be used for base-station antenna heights ranging from 30–1000 m. Okumura developed a set of curves giving the median attenuation relative to free space (Amu), in an urban area over a quasi-smooth terrain with a base station effective antenna height (hte) of 200 m and a mobile antenna height (hre) of 3 m. These curves were developed from extensive measurements using vertical omni-directional antennas at both the base and mobile, and are plotted as a function of frequency in the range 100–1920 MHz and as a function of distance from the base station in the range 1–100 km. To determine path loss using Okumura's model, the free space path loss between the points of interest is first determined, an
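The terms above combine additively around a free-space loss term. The Python sketch below is illustrative only: the median attenuation, height-gain factors and correction factors must still be read from Okumura's published curves and are passed in as plain inputs.

```python
import math


def okumura_median_path_loss(d_km, f_mhz, amu_db, h_bg_db, h_mg_db, k_corr_db=0.0):
    """Median path loss L (dB) = LFSL + AMU - HMG - HBG - Kcorrection."""
    # Free-space loss for distance in kilometres and frequency in megahertz
    l_fsl = 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)
    return l_fsl + amu_db - h_mg_db - h_bg_db - k_corr_db


# Example: a 10 km link at 900 MHz with a 20 dB median attenuation read from the curves
print(okumura_median_path_loss(10, 900, amu_db=20, h_bg_db=6, h_mg_db=0))
```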
https://en.wikipedia.org/wiki/Loc.%20cit.
Loc. cit. (Latin, short for loco citato, meaning "in the place cited") is a footnote or endnote term used to repeat the title and page number for a given work (and author). Loc. cit. is used in place of ibid. when the reference is not only to the work immediately preceding, but also refers to the same page. Therefore, loc. cit. is never followed by volume or page numbers. Loc. cit. may be contrasted with op. cit. (opere citato, "in the work cited"), in which reference is made to a work previously cited, but to a different page within that work. Sample usage Example 1: 9. R. Millan, "Art of Latin grammar" (Academic, New York, 1997), p. 23. 10. Loc. cit. In the above example, the loc. cit. in reference 10 refers to reference 9 in its entirety, including page number. Note that loc. cit. is capitalized in this instance. Example 2: 9. R. Millan, "Art of Latin grammar" (Academic, New York, 1997), p. 23 10. G. Wiki, "Blah and its uses" (Blah Ltd., Old York, 2000), p. 12. 11. Millan, loc. cit. In the second example, the loc. cit. in reference 11 refers to reference 9, including page number. See also Bibliography Ibid. Op. cit. MLA style
https://en.wikipedia.org/wiki/Heliox%20%28cryogenic%20equipment%29
Heliox is a cryogenically cooled system produced by Oxford Instruments. It is presently available in two varieties, the VL and the TL, vertically loaded and top-loaded respectively. They are both pumped 3He cryostats, the TL capable of magnetic fields of up to 14 T, and the VL capable of achieving magnetic fields of up to 2 T. The base temperature for both systems is ~250 mK. Whilst the basis of operation of the system is the pumping of liquid helium-3 below 2.2 K, this low temperature is achieved by first cooling the system to 2.2 K by pumping of helium-4. A constant supply of liquid 4He is necessary, constituting a typical overhead of ~£1 / liter, whilst 3He is efficiently conserved as it is valued at ~£300 / liter.
https://en.wikipedia.org/wiki/Hearing%20loss%20with%20craniofacial%20syndromes
Hearing loss with craniofacial syndromes is a common occurrence. Many of these multianomaly disorders involve structural malformations of the outer or middle ear, making a significant hearing loss highly likely. Treacher Collins syndrome Individuals with Treacher Collins syndrome often have both cleft palate and hearing loss, in addition to other disabilities. Hearing loss is often secondary to absent, small or unusually formed ears (microtia) and commonly results from malformations of the middle ear. Researchers have found that most patients with Treacher Collins syndrome have symmetric external ear canal abnormalities and symmetrically dysmorphic or absent ossicles in the middle ear space. Inner ear structure is largely normal. Most patients show a moderate hearing impairment or greater, and the type of loss is generally a conductive hearing loss. Patients with Treacher Collins syndrome exhibit hearing losses similar to those of patients with malformed or missing ossicles (Pron et al., 1993). Pierre Robin sequence Persons with Pierre Robin sequence (PRS) are at greater risk for hearing impairment than persons with cleft lip and/or palate without PRS. One study showed an average of 83% hearing loss in PRS, compared to 60% in cleft individuals without PRS (Handzic et al., 1995). Similarly, PRS individuals typically exhibit conductive, bilateral hearing losses that are greater in degree than in cleft individuals without PRS. Middle ear effusion is generally apparent, with no middle ear or inner ear malformations. Accordingly, management by ear tubes (myringotomy tubes) is often effective and may restore normal levels of hearing (Handzic et al., 1995). Stickler syndrome The hearing loss most typical in patients with Stickler syndrome is a sensorineural hearing loss, indicating that the source of the deficit lies in the inner ear, the vestibulocochlear nerve or the processing centers of the brain. Szymko-Bennett et al. (2001) found that the overall hearing los
https://en.wikipedia.org/wiki/HashKeeper
HashKeeper is a database application of value primarily to those conducting forensic examinations of computers on a somewhat regular basis. Overview HashKeeper uses the MD5 file signature algorithm to establish unique numeric identifiers (hash values) for files "known to be good" and "known to be bad." The HashKeeper application was developed to reduce the amount of time required to examine files on digital media. Once an examiner defines a file as known to be good, the examiner need not repeat that analysis. HashKeeper compares hash values of known to be good files against the hash values of files on a computer system. Where those values match "known to be good" files, the examiner can say, with substantial certainty, that the corresponding files on the computer system have been previously identified as known to be good and therefore do not need to be examined. Where those values match known to be bad files, the examiner can say with substantial certainty that the corresponding files on the system being examined are bad and therefore require further scrutiny. A hash match on known to be bad files does not relieve the examiner of the responsibility of verifying that the file or files are, in fact, of a criminal nature. History Created by the National Drug Intelligence Center (NDIC)—a component of the United States Department of Justice—in 1996, it was the first large scale source for hash values of "known to be good" and "known to be bad" files. HashKeeper was, and still is, the only community effort based upon the belief that members of state, national, and international law enforcement agencies can be trusted to submit properly categorized hash values. One of the first community sources of "known to be good" hash values was the United States Internal Revenue Service. The first source of "known to be bad" hash values was the Luxembourg Police who contributed hash values of recognized child pornography. Availability HashKeeper is available, f
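The comparison workflow described above is straightforward to reproduce with standard tooling. The Python sketch below is illustrative of the idea only (it is not HashKeeper itself): files whose MD5 values match the known-good set are set aside, matches against the known-bad set are flagged for manual verification, and everything else is left for normal examination.

```python
import hashlib
from pathlib import Path


def md5_of(path, chunk_size=1 << 20):
    """MD5 digest of a file, read in chunks so large evidence files need not fit in memory."""
    digest = hashlib.md5()
    with Path(path).open('rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()


def triage(paths, known_good, known_bad):
    """Partition files the way a HashKeeper-style comparison would."""
    eliminate, flag, review = [], [], []
    for p in paths:
        h = md5_of(p)
        if h in known_good:
            eliminate.append(p)   # previously identified as benign; no further analysis needed
        elif h in known_bad:
            flag.append(p)        # matches a known-bad value; still requires manual verification
        else:
            review.append(p)      # unknown; examine normally
    return eliminate, flag, review
```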
https://en.wikipedia.org/wiki/Velopharyngeal%20inadequacy
Velopharyngeal inadequacy is a malfunction of the velopharyngeal mechanism, which is responsible for directing the transmission of sound energy and air pressure in both the oral cavity and the nasal cavity. When this mechanism is impaired in some way, the valve does not fully close, and a condition known as 'velopharyngeal inadequacy' can develop. VPI can either be congenital or acquired later in life. Presentation Relationship to cleft palate A cleft palate is one of the most common causes of VPI. Cleft palate is an anatomical abnormality that occurs in utero and is present at birth. This malformation can affect the lip and palate, or the palate only. A cleft palate can affect the mobility of the velopharyngeal valve, thereby resulting in VPI. Causes While cleft is the most common cause of VPI, other significant etiologies exist. Diagnosis Classification The most frequent types of cleft palates are overt, submucous, and occult submucous. Treatment A common method to treat velopharyngeal insufficiency is pharyngeal flap surgery, where tissue from the back of the mouth is used to close part of the gap. Other ways of treating velopharyngeal insufficiency include placing a posterior nasopharyngeal wall implant (commonly cartilage or collagen) or a type of soft palate lengthening procedure (i.e. VY palatoplasty). Inadequacy, insufficiency and incompetency Velopharyngeal insufficiency and incompetency are related labels for this phenomenon, in addition to the most common generic term, velopharyngeal inadequacy. Velopharyngeal insufficiency is the inability of the velopharyngeal sphincter to sufficiently separate the nasal cavity from the oral cavity during speech. Velopharyngeal incompetency occurs when the soft palate and the lateral/posterior pharyngeal walls fail to separate the oral cavity from the nasal cavity during speech. Although the definitions are similar, the etiologies correlated with each term differ slightly. Howev
https://en.wikipedia.org/wiki/Extramarital%20sex
Extramarital sex occurs when a married person engages in sexual activity with someone other than their spouse. The term may be applied to the situation of a single person having sex with a married person. Where extramarital sexual relations do not breach a sexual norm, it may be referred to as consensual non-monogamy (see also polyamory). Where extramarital sexual relations do breach a sexual norm, it may be referred to as adultery or non-monogamy (sexual acts between a married person and a person other than the spouse), fornication (sexual acts between unmarried people), philandery, or infidelity. These terms imply moral or religious consequences, whether in civil law or religious law. Prevalence American researcher Alfred Kinsey found in his 1950-era studies that 50% of American males and 26% of females had extramarital sex. Depending on studies, it was estimated that 26–50% of men and 21–38% of women, or 22.7% of men and 11.6% of women had extramarital sex. Other authors say that between 20% and 25% of Americans had sex with someone other than their spouse. Durex's Global Sex Survey (2005) found that 44% of adults worldwide reported having had one-night extramarital sex and 22% had an affair. According to a 2004 United States survey, 16% of married partners have had extramarital sex, nearly twice as many men as women, while an additional 30% have fantasized about extramarital sex. According to a 2015 study by Durex and Match.com, Thailand and Denmark were the most adulterous countries based on the percentage of adults who admitted having an affair. A 2016 study by the Institute for Family Studies in the US found that black Protestants had a higher rate of extramarital sex than Catholics. A 2018 US study found that 53.5% of Americans who admitted having extramarital sex did so with someone they knew well, such as a close friend. About 29.4% were with someone who was somewhat well-known, such as a neighbor, co-worker or long-term acquaintance, and the rest w
https://en.wikipedia.org/wiki/Sex%E2%80%93gender%20distinction
Though the terms sex and gender have been used interchangeably since at least the fourteenth century, in contemporary academic literature they usually have distinct meanings. Sex generally refers to an organism's biological sex, while gender usually refers to either social roles typically associated with the sex of a person (gender role) or personal identification of one's own gender based on an internal awareness (gender identity). While in ordinary speech, the terms sex and gender are often used interchangeably, most contemporary social scientists, behavioral scientists and biologists, many legal systems and government bodies, and intergovernmental agencies such as the WHO make a distinction between gender and sex. In most individuals, the various biological determinants of sex are congruent, and consistent with the individual's gender identity, but in some circumstances, an individual's assigned sex and gender do not align, and the person may be transgender. Also in some cases, an individual may have sex characteristics that complicate sex assignment, and the person may be intersex. Sexologist John Money pioneered the concept of a distinction between biological sex and gender identity in 1955. Madison Bentley had already defined gender as the "socialized obverse of sex" a decade earlier, in 1945. As originally conceived by Money, gender and sex are analysed together as a single category including both biological and social elements, but later work by Robert Stoller separated the two, designating sex and gender as biological and cultural categories, respectively. Before the work of Bentley, Money and Stoller, the word gender was only regularly used to refer to grammatical categories. Sex Biologists Anisogamy, or the size differences of gametes (sex cells), is the defining feature of the two sexes. According to biologist Michael Majerus there is no other universal difference between males and females. By definition, males are organisms that produce small, mob
https://en.wikipedia.org/wiki/Space%20food
Space food is a type of food product created and processed for consumption by astronauts during missions to outer space. The food has specific requirements to provide a balanced diet and adequate nutrition for individuals working in space while being easy and safe to store, prepare and consume in the machinery-filled weightless environments of crewed spacecraft. Most space food is freeze-dried to ensure long shelf life. In recent years, space food has been used by various nations engaging in space programs as a way to share and show off their cultural identity and facilitate intercultural communication. Although astronauts consume a wide variety of foods and beverages in space, the initial idea from The Man in Space Committee of the Space Science Board in 1963 was to supply astronauts with a formula diet that would provide all the needed vitamins and nutrients. Types There are several classifications of space food, as follows: Beverages (B) - Freeze dried drink mixes (coffee or tea) or flavored drinks (lemonade or orange drink) are provided in vacuum sealed beverage pouches. Coffee and tea may have powdered cream and/or sugar added depending on personal taste preferences. Empty beverage pouches are provided for drinking water. Fresh Foods (FF) - Fresh fruits, vegetables, and tortillas delivered by resupply missions. These foods spoil quickly and need to be eaten within the first two days of the package's arrival to the ISS to prevent spoilage. These foods are provided as psychological support for astronauts who may not return home for extended periods of time. Irradiated (I) Meat - Beef steak that is sterilized with ionizing radiation to keep the food from spoiling. NASA has dispensation from the U.S. Food and Drug Administration (FDA) to use this type of food sterilization. Intermediate Moisture (IM) - Foods that have some moisture but not enough to cause immediate spoilage. Examples include sausage and beef jerky. Natural Form (NF) - Commercially available
https://en.wikipedia.org/wiki/Protection%20Forest%20Adjacent%20to%20the%20Nuevo%20Imperial%20Canal%20Intake
The Protection Forest Adjacent to the Nuevo Imperial Canal Intake is an ecological project. The protected forest is adjacent to the New Imperial Canal Intake and is located approximately 150 km south of the city of Lima, Peru near the town of Lunahuaná in the Cañete Province. It is situated in an arid desert region which is characteristic of the central coast of Peru. It protects the Nuevo Imperial Canal Intake against the ravages of the Cañete River. It also preserves the bordering soils and the infrastructure that guarantees the water supply for agricultural use in the valley.
https://en.wikipedia.org/wiki/Presentity
The term presentity is a combination of two words: "presence" and "entity". It refers to an entity that has presence information associated with it, such as status, reachability, and willingness to communicate. Usage The term presentity is often used to refer to users who post and update their presence information through presence applications on their devices. In this case presence information describes the availability and willingness of the user to communicate via a set of communication services. For example, users of an instant messaging service (such as ICQ or MSN Messenger) are presentities and their presence information is their user status (online, offline, away, etc.). Presentity can also refer to a resource or role such as a conference room or help desk. A presentity can also refer to a group of users, for example a collection of customer service agents in a call center. This presentity may be considered available if there is at least one agent ready to accept a call.
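A minimal data-structure sketch of the idea (the field names here are invented for illustration and do not come from any standardised presence document format) might look like this, including the group case from the call-centre example:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Presentity:
    """A single presentity: an entity with presence information attached (illustrative fields)."""
    identifier: str                                           # a user, a conference room, a help desk, ...
    status: str = "offline"                                   # e.g. online / offline / away
    reachable_via: List[str] = field(default_factory=list)    # e.g. ["im", "voice"]
    willing_to_communicate: bool = False


@dataclass
class GroupPresentity:
    """A presentity backed by several members, e.g. the agents of a call centre."""
    members: List[Presentity]

    @property
    def available(self) -> bool:
        # The group counts as available if at least one member is ready to accept a call.
        return any(m.status == "online" and m.willing_to_communicate for m in self.members)
```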
https://en.wikipedia.org/wiki/American%20Machinists%27%20Handbook
American Machinists' Handbook was a McGraw-Hill reference book similar to Industrial Press's Machinery's Handbook. (The latter title, still in print and regularly revised, is the one that machinists today are usually referring to when they speak imprecisely of "the machinist's handbook" or "the machinists' handbook".) The somewhat generic sound of the title American Machinists' Handbook no doubt contributed to the confounding of the two books' titles and identities. It capitalized on readers' familiarity with American Machinist, McGraw-Hill's popular trade journal. But the usage could have benefited from some branding discipline, because there was some confusion over whether the title was properly "American Machinist's Handbook" or "American Machinists' Handbook". ("American Machinist's Handbook" would be parallel to the construction of the title "Machinery's Handbook".) McGraw-Hill's American Machinists' Handbook appeared first (1908). It is doubtful that Industrial Press's Machinery's Handbook (1914) was a mere me-too conceived afterwards in response. The eager market for such reference works had probably been obvious for at least a decade before either work was compiled; perhaps the appearance of the McGraw-Hill title merely prodded Industrial Press to finally get moving on a handbook of its own. American Machinists' Handbook, co-edited by Fred H. Colvin and Frank A. Stanley, went through eight editions between 1908 and 1945. In 1955, McGraw-Hill published The New American Machinist's Handbook, based upon earlier editions of American Machinists' Handbook, but perhaps the book did not compete well enough with Machinery's Handbook; no subsequent editions were produced. List of the editions of American Machinists' Handbook Renewal data from Rutgers. All works after 1923 with renewed copyright are presumably still protected. 1908 non-fiction books Mechanical engineering Metallurgical industry of the United States Handbooks and manuals McGraw-Hill books
https://en.wikipedia.org/wiki/CHAPS%20detergent
CHAPS is a zwitterionic surfactant used in the laboratory to solubilize biological macromolecules such as proteins. It may be synthesized from cholic acid and is zwitterionic due to its quaternary ammonium and sulfonate groups; it is structurally similar to certain bile acids, such as taurodeoxycholic acid and taurochenodeoxycholic acid. It is used as a non-denaturing detergent in the process of protein purification and is especially useful in purifying membrane proteins, which are often sparingly soluble or insoluble in aqueous solution due to their native hydrophobicity. CHAPS is an abbreviation for 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate; its IUPAC name is 3-{dimethyl[3-(3α,7α,12α-trihydroxy-5β-cholan-24-amido)propyl]azaniumyl}propane-1-sulfonate. A related detergent, called CHAPSO, has the same basic chemical structure with an additional hydroxyl functional group; its full chemical name is 3-[(3-cholamidopropyl)dimethylammonio]-2-hydroxy-1-propanesulfonate. Both detergents have low light absorbance in the ultraviolet region of the electromagnetic spectrum, which is useful for monitoring ongoing chemical reactions or protein-protein binding with UV/Vis spectroscopy. See also Taurodeoxycholic acid Taurochenodeoxycholic acid
https://en.wikipedia.org/wiki/ADAM%20%28protein%29
ADAMs (short for a disintegrin and metalloproteinase) are a family of single-pass transmembrane and secreted metalloendopeptidases. All ADAMs are characterized by a particular domain organization featuring a pro-domain, a metalloprotease, a disintegrin, a cysteine-rich, an epidermal-growth factor like and a transmembrane domain, as well as a C-terminal cytoplasmic tail. Nonetheless, not all human ADAMs have a functional protease domain, which indicates that their biological function mainly depends on protein–protein interactions. Those ADAMs which are active proteases are classified as sheddases because they cut off or shed extracellular portions of transmembrane proteins. For example, ADAM10 can cut off part of the HER2 receptor, thereby activating it. ADAM genes are found in animals, choanoflagellates, fungi and some groups of green algae. Most green algae and all land plants likely lost ADAM proteins. ADAMs are categorized under the enzyme group, and in the MEROPS peptidase family M12B. The terms adamalysin and MDC family (metalloproteinase-like, disintegrin-like, cysteine rich) have been used to refer to this family historically. ADAM family members Medicine Therapeutic ADAM inhibitors might potentiate anti-cancer therapy. See also ADAMTS (A disintegrin and metalloproteinase with thrombospondin motifs) family Ectodomain shedding
https://en.wikipedia.org/wiki/3M%20computer
3M was a goal first proposed in the early 1980s by Raj Reddy and his colleagues at Carnegie Mellon University (CMU) as a minimum specification for academic/technical workstations: at least a megabyte of memory, a megapixel display and a million instructions per second (MIPS) processing power. It was also often said that it should cost no more than a "megapenny". At that time a typical desktop computer such as an early IBM Personal Computer might have 1/8 of a megabyte of memory (128K), 1/4 of a million pixels (640×400 monochrome display), and run at 1/3 million instructions per second (8088). The concept was inspired by the Xerox Alto which had been designed in the 1970s at the Xerox Palo Alto Research Center. Several Altos were donated to CMU, Stanford, and MIT in 1979. An early 3M computer was the PERQ Workstation made by Three Rivers Computer Corporation. The PERQ had a 1 million P-codes (Pascal instructions) per second processor, 256 KB of RAM (upgradeable to 1 MB), and a 768×1024 pixel display. While not quite a true 3M machine, it was used as the initial 3M machine for the CMU Scientific Personal Integrated Computing Environment (SPICE) workstation project. The Stanford University Network SUN workstation, designed by Andy Bechtolsheim in 1980, is another example. It was then commercialized by Sun Microsystems in 1982. Apollo Computer (in the Route 128 region) announced the Apollo/Domain computer in 1981. By 1986, CMU stated that it expected at least two companies to introduce 3M computers by the end of the year, with academic pricing of and retail pricing of , and Stanford University planned to deploy them in computer labs. The first "megapenny" 3M workstation was the Sun-2/50 diskless desktop workstation with a list price of in 1986. The original NeXT Computer was introduced in 1988 as a 3M machine by Steve Jobs, who first heard this term at Brown University. Its so-called "MegaPixel" display had just over (with 2 bits per pixel). How
https://en.wikipedia.org/wiki/Vyatta
Vyatta is a software-based virtual router, virtual firewall and VPN product for Internet Protocol networks (IPv4 and IPv6). A free download of Vyatta has been available since March 2006. The system is a specialized Debian-based Linux distribution with networking applications such as Quagga, OpenVPN, and many others. A standardized management console, similar to Juniper JUNOS or Cisco IOS, in addition to a web-based GUI and traditional Linux system commands, provides configuration of the system and applications. In recent versions of Vyatta, the web-based management interface is supplied only in the subscription edition. However, all functionality is available through KVM, serial console or SSH/telnet protocols. The software runs on standard x86-64 servers. Vyatta is also delivered as a virtual machine file and can provide (router, firewall, VPN) functionality for Xen, VMware, KVM, Rackspace, SoftLayer, and Amazon EC2 virtual and cloud computing environments. As of October 2012, Vyatta has also been available through Amazon Marketplace and can be purchased as a service to provide VPN, cloud bridging and other network functions to users of Amazon's AWS services. Vyatta sells a subscription edition that includes all the functionality of the open source version as well as a graphical user interface, access to Vyatta's RESTful APIs, Serial Support, TACACS+, Config Sync, System Image Cloning, software updates, 24x7 phone and email technical support, and training. Certification as a Vyatta Professional is now available. Vyatta also offers professional services and consulting engagements. The Vyatta system is intended as a replacement for Cisco IOS 1800 through ASR 1000 series Integrated Services Routers (ISR) and ASA 5500 security appliances, with a strong emphasis on the cost and flexibility inherent in an open source, Linux-based system running on commodity x86 hardware or in VMware ESXi, Microsoft Hyper-V, Citrix XenServer, Open Source Xen and KVM virtual environments. In 2012, Bro
https://en.wikipedia.org/wiki/UNESCO/Institut%20Pasteur%20Medal
The UNESCO/Institut Pasteur Medal is a biennial international science prize created jointly by UNESCO and the Pasteur Institute in 1995 "to be awarded in recognition of outstanding research contributing to a beneficial impact on human health and to the advancement of scientific knowledge in related fields such as medicine, fermentations, agriculture and food." Its creation marked the centenary of the death of Louis Pasteur. The future of the prize is under review. Laureates See also List of biomedical science awards
https://en.wikipedia.org/wiki/Eternal%20youth
Eternal youth is the concept of human physical immortality free of ageing. The youth referred to is usually meant to be in contrast to the depredations of aging, rather than a specific age of the human lifespan. Eternal youth is common in mythology, and is a popular theme in fiction. Religion and mythology Eternal youth is a characteristic of the inhabitants of Paradise in Abrahamic religions. The Hindus believe that the Vedic and the post-Vedic rishis have attained immortality, which implies the ability to change one's body's age or even shape at will. These are some of the siddhas in Yoga. Markandeya is said to always stay at the age of 16. The difference between eternal life and the more specific eternal youth is a recurrent theme in Greek and Roman mythology. The mytheme of requesting the boon of immortality from a god, but forgetting to ask for eternal youth appears in the story of Tithonus. A similar theme is found in Ovid regarding the Cumaean Sibyl. In Norse mythology, Iðunn is described as providing the gods apples that grant them eternal youthfulness in the 13th-century Prose Edda. Telomeres An individual's DNA plays a role in the aging process. Aging begins even before birth, as soon as cells start to die and need to be replaced. On the ends of each chromosome are repetitive sequences of DNA, telomeres, that protect the chromosome from joining with other chromosomes, and have several key roles. One of these roles is to regulate cell division by allowing each cell division to remove a small amount of genetic code. The amount removed varies by the cell type being replicated. The gradual degradation of the telomeres restricts cell division to 40-60 times, also known as the Hayflick limit. Once this limit has been reached, more cells die than can be replaced in the same time span. Thus, soon after this limit is reached the organism dies. The importance of telomeres is now clearly evident: lengthen the telomeres, lengthen the life. However, a study of th
https://en.wikipedia.org/wiki/Functional%20symptom
A functional symptom is a medical symptom with no known physical cause. In other words, there is no structural or pathologically defined disease to explain the symptom. The use of the term 'functional symptom' does not assume psychogenesis, only that the body is not functioning as expected. Functional symptoms are increasingly viewed within a framework in which 'biological, psychological, interpersonal and healthcare factors' should all be considered to be relevant for determining the aetiology and treatment plans. Historically, there has often been fierce debate about whether certain problems are predominantly related to an abnormality of structure (disease) or are psychosomatic in nature, and what are at one stage posited to be functional symptoms are sometimes later reclassified as organic, as investigative techniques improve. It is well established that psychosomatic symptoms are a real phenomenon, so this potential explanation is often plausible, however the commonality of a range of psychological symptoms and functional weakness does not imply that one causes the other. For example, symptoms associated with migraine, epilepsy, schizophrenia, multiple sclerosis, stomach ulcers, chronic fatigue syndrome, Lyme disease and many other conditions have all tended historically at first to be explained largely as physical manifestations of the patient's psychological state of mind; until such time as new physiological knowledge is eventually gained. Another specific example is functional constipation, which may have psychological or psychiatric causes. However, one type of apparently functional constipation, anismus, may have a neurological (physical) basis. Whilst misdiagnosis of functional symptoms does occur, in neurology, for example, this appears to occur no more frequently than of other neurological or psychiatric syndromes. However, in order to be quantified, misdiagnosis has to be recognized as such, which can be problematic in such a challenging field as me
https://en.wikipedia.org/wiki/Matt%20Lebofsky
Matt Lebofsky is an Oakland, California-based multi-instrumentalist and composer. Growing up in New York he studied piano/composition with Arthur Cunningham from 1978-1988. As a performer/composer he is currently active in several bands such as miRthkon, MoeTar, Secret Chiefs 3, Bodies Floating Ashore, The Fuxedos, Three Piece Combo, Research & Development, Midline Errors, Fuzzy Cousins and JOB. He is also a long-time prolific member of the Immersion Composition Society Origin Lodge. He toured nationally in 2006 as a member of Faun Fables, and throughout 2000-2001 as a member of Species Being, and released three albums and toured internationally with Mumble & Peg from 1995-2002. Matt is also a computer programmer, webmaster, and database/systems administrator at the Berkeley SETI Research Center, working with Breakthrough Listen since 2015, and as a core member of the small staff developing/maintaining the world's largest volunteer computing project SETI@home (since its inception at the University of California at Berkeley's Space Sciences Laboratory in 1997). He also works on the open-source general distributed computing engine BOINC, and designed levels for the iPhone video game Tap Tap Revenge.
https://en.wikipedia.org/wiki/Xak%3A%20The%20Art%20of%20Visual%20Stage
is the first game in the fantasy role-playing video game series Xak developed and published by Micro Cabin. It was originally released for the NEC PC-8801 computer system, with subsequent versions being developed for the NEC PC-9801, Sharp X68000, MSX2, PC-Engine, Super Famicom, and mobile phones. The first four versions were re-released for Windows on online store Project EGG. An English translation of Xak: The Art of Visual Stage was also released in 2007 on the now-defunct retro gaming service WOOMB.net, and is now to become available on Project EGG. Plot Setting and story Xak: The Art of Visual Stage features a typical high fantasy setting. According to the game world's legends, a great war was fought between the benevolent but weakening ancient gods and a demon race, which led to the collapse and eventual mortality of the gods. After this 'War of Sealing', the gods divided the world into three parts: Xak, the world of humans, Oceanity, the world of faeries, and Xexis, the world of demons. The demon world of Xexis was tightly sealed from the other two worlds as to prevent reentry of the warmongering demon race. Some demons were left behind in Xak, however, and others managed to discover a separate means to enter Xak from Xexis anyway. This ancient history is displayed in the introduction of Xak II. One of them, Badu, was a very powerful demon, able to use coercive magic to make humans do his bidding. Duel, the god of war, managed to defeat Badu and seal him away in a mountain of ice for 250 years. The god later settled in a village known as Fearless to live out the rest of his mortal life. At the beginning of the game, Badu's prison is broken. Demons overrun parts of Xak once again. In order to stop the ravaging of his lands, the King of Wavis sends a messenger faerie to Dork Kart, a famous warrior living in the village of Fearless. Dork, however, has gone missing. The player takes on the role of Latok Kart, Dork's 16-year-old son, as he meets the messenger
https://en.wikipedia.org/wiki/Indiana%20Jones%20and%20the%20Last%20Crusade%3A%20The%20Action%20Game
Indiana Jones and the Last Crusade: The Action Game was published in 1989 by Lucasfilm Games, based on the film of the same name. The game was released for the ZX Spectrum, Amstrad CPC, Commodore 64, Atari ST, Amiga, IBM PC, MSX, Master System, NES, Game Boy, Sega Genesis and Game Gear. It is a different game from Indiana Jones and the Last Crusade: The Graphic Adventure, also released in 1989. There is also a different game for the Nintendo Entertainment System titled Indiana Jones and the Last Crusade, released by Taito in 1991. Gameplay As in the film, the player's quest is to find the Holy Grail. En route, the player must find the Cross of Coronado, the Knight of the First Crusade's Shield and Henry Jones, Sr.'s Grail Diary. Reception The game grossed or in worldwide sales across all platforms by 1994. Computer Gaming World gave the game a negative review and said it was "just another search and recover game" with little to do with Indiana Jones. The review praised the graphics and sound, but found the fight sequences both too easy and too short, since all enemies could be defeated in one hit and turned their backs shortly after attacking the player. Compute! liked the Commodore 64 version, approving of the graphics and describing gameplay as "quite addicting", but criticizing lack of savegame and replay value. It reached number one in the UK charts, replacing RoboCop which had held the top spot for a record 36 weeks. Nintendo Power, reviewing the NES version, praised the action gameplay and noted that the music and levels helped recreate the feel of the movie. Nintendo Power was not impressed with the character graphics but stated that the animation "is quite good" for the NES. Nintendo Power praised the Game Boy version for its graphics, password system, and challenging gameplay, but criticized the poor "hit detection" and the time limits on each level, both of which made the game more difficult. The action game features six levels and a password fea
https://en.wikipedia.org/wiki/Frederick%20Mosteller
Charles Frederick Mosteller (December 24, 1916 – July 23, 2006) was an American mathematician, considered one of the most eminent statisticians of the 20th century. He was the founding chairman of Harvard's statistics department from 1957 to 1971, and served as the president of several professional bodies including the Psychometric Society, the American Statistical Association, the Institute of Mathematical Statistics, the American Association for the Advancement of Science, and the International Statistical Institute. Biographical details Frederick Mosteller was born in Clarksburg, West Virginia, on December 24, 1916, to Helen Kelley Mosteller and William Roy Mosteller. His father was a highway builder. He was raised near Pittsburgh, Pennsylvania, and attended Carnegie Institute of Technology (now Carnegie Mellon University). He completed his ScM degree at Carnegie Tech in 1939, and enrolled at Princeton University in 1939 to work on a PhD with statistician Samuel S. Wilks. In 1941 he married Virginia Gilroy, whom he met during college. They had two children: Bill (born 1947) and Gale (born 1953). They lived in Belmont, Massachusetts, and spent summers in West Falmouth, Massachusetts on Cape Cod. Mosteller worked in Samuel Wilks's Statistical Research Group in New York City during World War II on statistical questions about airborne bombing. He received his PhD in mathematics from Princeton University in 1946. He was hired by Harvard University's Department of Social Relations in 1946, where he received tenure in 1951 and served as acting chair from 1953 to 1954. He founded the Department of Statistics and served as its first chairman from 1957 to 1969, in 1973, and from 1975 to 1977. He chaired the Department of Biostatistics at the Harvard School of Public Health from 1977 to 1981 and later the Department of Health Policy and Management in the 1980s. His four chairmanships have not been matched. He also taught courses at Harvard Law School and Harvard's Kennedy School o
https://en.wikipedia.org/wiki/Rambo%20III%20%28video%20game%29
Rambo III is a series of video games based on the film Rambo III (1988). Like in the film, their main plots center on former Vietnam-era Green Beret John Rambo being recalled up to duty one last time to rescue his former commander, Colonel Sam Trautman, who was captured during a covert operation mission in Soviet-controlled Afghanistan. Taito released an arcade video game based on the film. The console versions were developed and published by Sega, the IBM PC compatible version was developed by Ocean and published by Taito, and Ocean developed and published the other home computer versions: Atari ST, Amiga, Spectrum, C64, Amstrad CPC. Ports The Master System version, released in 1988, is a light gun shooter along the lines of Operation Wolf. The Light Phaser is supported. The Mega Drive version, released in 1989, follows Rambo in six missions, in each one with various objectives. Besides finding the exit of the level, in some missions, prisoners must be freed or enemy ammunition supplies destroyed. Rambo is controlled from an overhead perspective and has several weapons at his disposal. Besides a machine gun that never runs out of ammo, he can use a knife for close range kills, set off timed bombs and use his famous longbow with explosive arrows. Ammunition for the bow and the bombs is limited and can be collected from dead enemies. Rambo himself, on the other hand, is vulnerable and can be killed after one hit. After some of the missions, the perspective switches to a view behind Rambo and additional boss fights take place. Soviet tanks or helicopters must be destroyed using the bow. While aiming the bow, Rambo cannot move, but otherwise he can hide behind rocks or other obstacles from enemy fire. This is reminiscent of the Taito arcade game of the same name, which also had the player firing into the screen at helicopters and jeeps, but instead of just a single segment after each stage, the whole game is played out in this perspective. The ZX Spectrum, Atari S
https://en.wikipedia.org/wiki/Screen%20goo
Screen Goo is an acrylic paint designed by Goo Systems as a projection screen coating for the video projection industry. The intention of the product is to replace fixed or adjustable projection screens in a front projection environment. The product has been formulated with a specific consistency and visual performance characteristics so that it can be painted onto a wall (or other suitable surface), and will reflect the light from a projector in a manner similar to a projection screen. The product was first discussed by the company founder on the AVSForum before being released to the public in 2003. External links Goo Systems Official page. Display technology
https://en.wikipedia.org/wiki/Room%20641A
Room 641A is a telecommunication interception facility operated by AT&T for the U.S. National Security Agency, as part of its warrantless surveillance program as authorized by the Patriot Act. The facility commenced operations in 2003 and its purpose was publicly revealed in 2006. Description Room 641A is located in the SBC Communications building at 611 Folsom Street, San Francisco, three floors of which were occupied by AT&T before SBC purchased AT&T. The room was referred to in internal AT&T documents as the SG3 [Study Group 3] Secure Room. The room measures about and contains several racks of equipment, including a Narus STA 6400, a device designed to intercept and analyze Internet communications at very high speeds. It is fed by fiber optic lines from beam splitters installed in fiber optic trunks carrying Internet backbone traffic. In the analysis of J. Scott Marcus, a former CTO for GTE and a former adviser to the Federal Communications Commission, it has access to all Internet traffic that passes through the building, and therefore "the capability to enable surveillance and analysis of internet content on a massive scale, including both overseas and purely domestic traffic." The existence of the room was revealed by former AT&T technician Mark Klein and was the subject of a 2006 class action lawsuit by the Electronic Frontier Foundation against AT&T. Klein claims he was told that similar black rooms are operated at other facilities around the country. Room 641A and the controversies surrounding it were subjects of an episode of Frontline, the current affairs documentary program on PBS. It was originally broadcast on May 15, 2007. It was also featured on PBS's NOW on March 14, 2008. The room was also covered in the PBS Nova episode "The Spy Factory". Lawsuits The Electronic Frontier Foundation (EFF) filed a class-action lawsuit against AT&T on January 31, 2006, accusing the telecommunication company of violating the law and the privacy of its customers
https://en.wikipedia.org/wiki/Natural%20design
The Natural Design Perspective is an approach to psychology and biology that (among other things) holds that concepts such as "motivation", "emotion", "development", and "adaptation" refer to objectively observable patterns, rather than hidden causes. It was developed by Nicholas S. Thompson (Professor Emeritus of Ethology and Psychology, Clark University), and has its roots in philosophical behaviorism and the new realism. Natural Design may also refer to a holistic approach to Design called for by Prof David W. Orr (Professor of Environmental Studies and Politics, Oberlin College USA) and developed for research practice by Prof Seaton Baxter (Emeritus Professor for the Study of Natural Design, Duncan of Jordanstone College of Art and Design, University of Dundee). History Darwin intended natural selection to explain the presence of design in nature. However, the term "design" has been out of favor since William Paley's watchmaker analogy came under attack. Thompson believes that is a mistake, because without the concept of design, it is easy for evolutionary theory to become a tautology. Natural design is design-without-a-designer, in the same sense that natural selection is selection-without-a-selector. Design is a term we use to refer to a matching of form and function, and we can recognize the presence of design independently of the cause of that design: A kitchen can become well designed for efficient food preparation due to the actions of a home designer. The hand of a blacksmith can become well designed for blacksmithing due to processes of muscle growth and callousing. The beaks of finches on the Galapagos became well designed to access different types of food through the process of natural selection. In all those cases, one can identify the matching of form to function, and then look for the processes that explain the presence of that matching. The field of ethology demonstrated, through decades of experimentation, the same principles that ap
https://en.wikipedia.org/wiki/Ivan%20Tyrrell
Ivan Tyrrell (; born 18 October 1943) is a British educator, writer, and artist. He lives with his wife Véronique in the Cotswolds, England. Artist Tyrrell left Wallington County Grammar School to study art as an apprentice at F.G. Marshal in 1959. In 1962 he began a fine arts course at Croydon Art College and was taught painting by Bridget Riley, Barry Fantoni and John Hoyland among others. He left college disillusioned with the art world and worked in London advertising studios before setting up a graphic design company in 1971 on the South Coast in Sussex. Two silk-screen posters produced with fellow artist Frederick Carver featured in Les Sixties, a Paris exhibition of psychedelic art that then transferred to the Brighton Festival and... “the spectral, hallucinatory scenarios of J. G. Ballard, especially in his novel The Crystal World – bodied forth in Tyrrell’s apocalyptic poster design." In 1965 Tyrrell, whilst still a student, had met the writer, Idries Shah, who had begun introducing timeless ideas from the Sufi tradition into the Western world. In 1969 he was invited to attend regular gatherings of writers, poets, actors, businessmen, diplomats, academics, craftsmen and others at Shah's home in Kent. He joined The Institute for Cultural Research in 1970. In 1977 Tyrrell art directed thirty-six illustrators for the first edition of World Tales by Idries Shah and contributed some illustrations himself. Psychology In 1987 he closed his graphic design service due to the recession and began learning about psychotherapy. From what he learned, Tyrrell came to believe most psychotherapists were poorly trained and had little basic knowledge of psychology. Human Givens Journal In 1993, encouraged by the psychiatrist and writer Robin Skynner, author Doris Lessing, psychologist Joe Griffin and Idries Shah, he launched a journal, The Therapist, in an attempt to increase the scientific rigour of the field. Medical journalist Denise Winn was appointed Editor i
https://en.wikipedia.org/wiki/Oskar%20Pfister%20Award
The Oskar Pfister Award was established by the American Psychiatric Association (APA), with the Association of Mental Health Clergy (now the Association of Professional Chaplains), in 1983 to honor those who have made significant contributions to the field of religion and psychiatry. The recipient delivers a lecture at an APA conference during the year of the award, although the 2002 lecture was delivered by Susan Larson on behalf of her late husband. The award is named in honor of Oskar Pfister, a chaplain who discussed the religious aspects of psychology with Sigmund Freud. Award winners Source: Association of Professional Chaplains 1983 – Jerome D. Frank 1984 – Wayne Oates 1985 – Viktor Frankl 1986 – Hans Küng 1987 – Robert Jay Lifton 1988 – Oliver Sacks 1989 – William W. Meissner 1990 – Peter Gay 1991 – Robert Coles 1992 – Paulos Mar Gregorios 1993 – Paul R. Fleischman 1994 – James W. Fowler III 1995 – Prakash Desai 1996 – Ann Belford Ulanov 1997 – Ana-Maria Rizzuto 1998 – Allen Bergin 1999 – Don S. Browning 2000 – Paul Ricoeur 2001 – Irvin D. Yalom 2002 – David Larson 2003 – Abraham Twerski 2004 – Elizabeth Bowman 2005 – Armand Nicholi 2006 – Ned H. Cassem 2007 – William R. Miller 2008 – Dan G. Blazer 2009 – Kenneth I. Pargament 2010 – George E. Vaillant 2011 – Clark S. Aist 2012 – Harold G. Koenig 2013 – Marc Galanter 2014 – C. Robert Cloninger 2015 – Allan Josephson 2016 – James W. Lomax 2017 – James Griffith 2018 – John Swinton See also List of psychology awards List of awards named after people
https://en.wikipedia.org/wiki/Supraspinous%20fascia
The supraspinous fascia completes the osseofibrous case in which the supraspinatus muscle is contained; it affords attachment, by its deep surface, to some of the fibers of the muscle. It is thick medially, but thinner laterally under the coracoacromial ligament.
https://en.wikipedia.org/wiki/Infraspinous%20fascia
The infraspinous fascia is a dense fibrous membrane, covering the infraspinatus muscle and fixed to the circumference of the infraspinatus fossa; it affords attachment, by its deep surface, to some fibers of that muscle. It is intimately attached to the deltoid fascia along the overlapping border of the deltoideus.
https://en.wikipedia.org/wiki/Brachial%20fascia
The brachial fascia (deep fascia of the arm) is continuous with that covering the deltoideus and the pectoralis major muscle, by means of which it is attached, above, to the clavicle, acromion, and spine of the scapula; it forms a thin, loose, membranous sheath for the muscles of the arm, and sends septa between them; it is composed of fibers disposed in a circular or spiral direction, and connected together by vertical and oblique fibers. It differs in thickness at different parts, being thin over the biceps brachii, but thicker where it covers the triceps brachii, and over the epicondyles of the humerus: it is strengthened by fibrous aponeuroses, derived from the pectoralis major and latissimus dorsi medially, and from the deltoideus laterally. On either side it gives off a strong intermuscular septum, which is attached to the corresponding supracondylar ridge and epicondyle of the humerus.
https://en.wikipedia.org/wiki/Oak%20Investment%20Partners
Oak Investment Partners is a private equity firm focusing on venture capital investments in companies developing communications systems, information technology, new Internet media, healthcare services and retail. History The firm, founded in 1978, is based in Greenwich, Connecticut, with offices in Norwalk, Connecticut, Minneapolis and Palo Alto, California. Since inception, Oak had invested in more than 480 companies and had raised more than $8.4 billion in investor commitments across 12 private equity funds. Ann Lamont is a founder and managing partner. In May 2006, Oak raised its 12th fund, at $2.56 billion, reportedly the largest venture capital fund ever raised. In 2015, Indian-born employee Iftikar Ahmed was sued by the U.S. Securities and Exchange Commission on suspicion of stealing US$65 million from the firm. Ahmed was believed to have fled to India. In August 2015, Fortune reported that Mr. Ahmed had been detained in an Indian prison from May 22 until July 23 and that his passport had been confiscated.
https://en.wikipedia.org/wiki/Backup%20battery
A backup battery provides power to a system when the primary source of power is unavailable. Backup batteries range from small single cells to retain clock time and date in computers, up to large battery room facilities that power uninterruptible power supply systems for large data centers. Small backup batteries may be primary cells; rechargeable backup batteries are kept charged by the prime power supply. Examples Aircraft emergency batteries Backup batteries in aircraft keep essential instruments and devices running in the event of an engine power failure. Each aircraft has enough power in the backup batteries to facilitate a safe landing. The batteries keep navigation, ELUs (emergency lighting units), emergency pressure or oxygen systems at altitude, and radio equipment operational. Larger aircraft have control surfaces that run on these backups as well. Aircraft batteries are either nickel-cadmium or valve-regulated lead acid type. The battery keeps all necessary items running for between 30 minutes and 3 hours. Large aircraft may have a ram air turbine to provide additional power during engine failures. Burglar alarms Backup batteries are almost always used in burglar alarms. The backup battery prevents the burglar from disabling the alarm by turning off power to the building. Additionally, these batteries power the remote cellular phone systems that thwart phone line snipping. The backup battery usually has a lifespan of 3–10 years, depending on the make and model; if the battery runs flat, the only remaining source of power to the whole system is the mains supply. Should this fail as well (for example, a power cut), it usually triggers a third backup battery located in the bellbox on the outside of the building, which simply triggers the bell or siren. This, however, means that the alarm cannot be stopped in any way apart from physically going outside to the bellbox and disabling the siren. It is also why if there is a p
https://en.wikipedia.org/wiki/Transport%20of%20structure
In mathematics, particularly in universal algebra and category theory, transport of structure refers to the process whereby a mathematical object acquires a new structure and its canonical definitions, as a result of being isomorphic to (or otherwise identified with) another object with a pre-existing structure. Definitions by transport of structure are regarded as canonical. Since mathematical structures are often defined in reference to an underlying space, many examples of transport of structure involve spaces and mappings between them. For example, if V and W are vector spaces with ⟨·,·⟩ being an inner product on V, such that there is an isomorphism φ from W to V, then one can define an inner product ⟨·,·⟩_W on W by the following rule: ⟨w1, w2⟩_W := ⟨φ(w1), φ(w2)⟩. Although the equation makes sense even when φ is not an isomorphism, it only defines an inner product on W when φ is, since otherwise it will cause ⟨·,·⟩_W to be degenerate. The idea is that φ allows one to consider V and W as "the same" vector space, and by following this analogy, one can transport an inner product from one space to the other. A more elaborated example comes from differential topology, in which the notion of smooth manifold is involved: if M is such a manifold, and if X is any topological space which is homeomorphic to M, then one can consider X as a smooth manifold as well. That is, given a homeomorphism φ : X → M, one can define coordinate charts on X by "pulling back" coordinate charts on M through φ. Recall that a coordinate chart on M is an open set U together with an injective map c : U → R^n for some natural number n; to get such a chart on X, one uses the following rules: U′ = φ⁻¹(U) and c′ = c ∘ φ. Furthermore, it is required that the charts cover M (the fact that the transported charts cover X follows immediately from the fact that φ is a bijection). Since M is a smooth manifold, if U and V, with their maps c : U → R^n and d : V → R^n, are two charts on M, then the composition, the "transition map" d ∘ c⁻¹ (a self-map of R^n) is smooth. To verify this for the transported charts on X, notice that (d ∘ φ) ∘ (c ∘ φ)⁻¹ = d ∘ c⁻¹, and there
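A minimal numerical sketch of the vector-space example above, assuming nothing beyond NumPy: the inner product on V is represented by a symmetric positive-definite matrix, the isomorphism by an invertible matrix, and the transported inner product on W is obtained by mapping vectors into V before evaluating. The matrices and dimensions here are illustrative choices, not from the source.

```python
import numpy as np

# Inner product on V, represented by a symmetric positive-definite matrix A:
# <v1, v2>_V = v1^T A v2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# An isomorphism phi: W -> V, represented by an invertible 2x2 matrix.
phi = np.array([[1.0, 2.0],
                [0.0, 3.0]])

def inner_V(v1, v2):
    """Pre-existing inner product on V."""
    return v1 @ A @ v2

def inner_W(w1, w2):
    """Transported inner product on W: <w1, w2>_W := <phi(w1), phi(w2)>_V."""
    return inner_V(phi @ w1, phi @ w2)

# Sanity checks: the transported form is symmetric, and positive on a nonzero vector
# because phi is injective.
w1, w2 = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(inner_W(w1, w2), inner_W(w2, w1))   # equal, by symmetry of A
print(inner_W(w1, w1) > 0)                # True
```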
https://en.wikipedia.org/wiki/Point-to-point%20Lee%20model
The Lee model for point-to-point mode is a radio propagation model that operates around 900 MHz. Built as two different modes, this model includes an adjustment factor that can be tuned to make the model more flexible for different regions of propagation. Applicable to/under conditions This model is suitable for use with data collected in a specific area for point-to-point links. Coverage Frequency: 900 MHz band Mathematical formulation The model The Lee model for point-to-point mode is formally expressed as: where, L = The median path loss. Unit: decibel (dB) L0 = The reference path loss along 1 km. Unit: decibel (dB) γ = The slope of the path loss curve. Unit: decibels per decade d = The distance on which the path loss is to be calculated. Unit: kilometer (km) FA = Adjustment factor. HET = Effective height of terrain. Unit: meter (m) Calculation of reference path loss The reference path loss is usually computed along a 1 km or 1 mi link. Any other suitable length of path can be chosen based on the application. where, GB = Base station antenna gain. Unit: decibel with respect to isotropic antenna (dBi) λ = Wavelength. Unit: meter (m). GM = Mobile station antenna gain. Unit: decibel with respect to isotropic antenna (dBi). Calculation of adjustment factors The adjustment factor is calculated as: where, FBH = Base station antenna height correction factor. FBG = Base station antenna gain correction factor. FMH = Mobile station antenna height correction factor. FMG = Mobile station antenna gain correction factor. FF = Frequency correction factor The base station antenna height correction factor where, hB = Base station antenna height. Unit: meter. The base station antenna gain correction factor where, GB = Base station antenna gain. Unit: decibel with respect to half-wave dipole (dBd) The mobile station antenna height correction factor where, hM = Mobile station antenna height. Unit: meter. The mobile antenna gain correction factor
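The explicit equations above did not survive extraction, but the definitions (a reference loss L0 at 1 km, a slope γ in dB per decade of distance, and an adjustment factor FA) imply the usual log-distance structure. The sketch below implements only that generic skeleton with illustrative parameter values; it is not the exact Lee point-to-point equation, and the effective-terrain-height correction involving HET is deliberately omitted because its form is not given here.

```python
import math

def lee_median_path_loss(d_km, L0_dB=110.0, gamma_dB_per_decade=38.4, FA_dB=0.0):
    """Generic log-distance path-loss skeleton suggested by the definitions above:
    reference loss at 1 km, plus slope per decade of distance, plus an adjustment
    term. All default values are purely illustrative assumptions."""
    if d_km <= 0:
        raise ValueError("distance must be positive")
    return L0_dB + gamma_dB_per_decade * math.log10(d_km) + FA_dB

# Example: median loss at 5 km with the illustrative defaults.
print(round(lee_median_path_loss(5.0), 1), "dB")
```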
https://en.wikipedia.org/wiki/UC%20Davis%20Department%20of%20Viticulture%20and%20Enology
The Department of Viticulture and Enology at the University of California, Davis, located in Davis, California, offers undergraduate and graduate degrees in the areas of grape growing and wine making. Located just 45 minutes from Napa Wine Country, the department has strong connections with wine producers in California and elsewhere. The department has produced many of the notable winemakers of the California wine industry. History The Department of Viticulture and Enology at UC Davis celebrated its 125th anniversary in 2005. Established in 1880 by mandate of the California Legislature, the purpose of the department was to establish a center of research to help the developing California wine industry. Originally located on the UC Berkeley campus, the department was closed in 1919 with the passage of prohibition into law. The department was re-established in 1935 on the Davis campus following the repeal of prohibition. Today the department includes a pilot winery and two research vineyards (one located on the main campus, and one located in the Napa Valley). In 2001 Robert Mondavi donated $25 million to the College of Agricultural and Environmental Sciences for the establishment of the Robert Mondavi Institute for Wine and Food Science, which opened in October 2008. See also Winkler vine
https://en.wikipedia.org/wiki/Womersley%20number
The Womersley number (α or Wo) is a dimensionless number in biofluid mechanics and biofluid dynamics. It is a dimensionless expression of the pulsatile flow frequency in relation to viscous effects. It is named after John R. Womersley (1907–1958) for his work with blood flow in arteries. The Womersley number is important in keeping dynamic similarity when scaling an experiment. An example of this is scaling up the vascular system for experimental study. The Womersley number is also important in determining the thickness of the boundary layer to see if entrance effects can be ignored. The square root of this number is also referred to as the Stokes number, due to the pioneering work done by Sir George Stokes on Stokes' second problem. Derivation The Womersley number, usually denoted α, is defined by the relation α = L (ωρ/μ)^(1/2) = L (ω/ν)^(1/2), where L is an appropriate length scale (for example the radius of a pipe), ω is the angular frequency of the oscillations, and ν, ρ, μ are the kinematic viscosity, density, and dynamic viscosity of the fluid, respectively. The Womersley number is normally written in the powerless form α² = ωL²/ν. In the cardiovascular system, the pulsation frequency, density, and dynamic viscosity are constant, however the characteristic length, which in the case of blood flow is the vessel diameter, changes by three orders of magnitude (OoM) between the aorta and fine capillaries. The Womersley number thus changes due to the variations in vessel size across the vasculature system. The Womersley number of human blood flow can be estimated as follows: Below is a list of estimated Womersley numbers in different human blood vessels: It can also be written in terms of the dimensionless Reynolds number (Re) and Strouhal number (St): α = (2π Re St)^(1/2). The Womersley number arises in the solution of the linearized Navier–Stokes equations for oscillatory flow (presumed to be laminar and incompressible) in a tube. It expresses the ratio of the transient or oscillatory inertia force to the shear force. When α is s
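A small numerical sketch of the definition above, using rough, illustrative values for the human aorta (radius about 12.5 mm, heart rate about 1 Hz, blood kinematic viscosity about 3.3e-6 m²/s); these numbers are ballpark assumptions for demonstration, not values taken from the source.

```python
import math

def womersley(radius_m, frequency_hz, kinematic_viscosity_m2_s):
    """Womersley number alpha = L * sqrt(omega / nu), using the vessel radius
    as the length scale L and omega = 2*pi*f as the angular frequency."""
    omega = 2.0 * math.pi * frequency_hz
    return radius_m * math.sqrt(omega / kinematic_viscosity_m2_s)

# Rough, illustrative aorta-scale values (assumptions, not source data):
alpha_aorta = womersley(radius_m=0.0125, frequency_hz=1.0,
                        kinematic_viscosity_m2_s=3.3e-6)
print(f"alpha ~ {alpha_aorta:.1f}")   # on the order of 20, so inertia dominates
```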
https://en.wikipedia.org/wiki/Ataxin%201
Ataxin-1 is a DNA-binding protein which in humans is encoded by the ATXN1 gene. Mutations in ataxin-1 cause spinocerebellar ataxia type 1, an inherited neurodegenerative disease characterized by a progressive loss of cerebellar neurons, particularly Purkinje neurons. Genetics ATXN1 is conserved across multiple species, including humans, mice, and Drosophila. In humans, ATXN1 is located on the short arm of chromosome 6. The gene contains 9 exons, two of which are protein-coding. There is a CAG repeat in the coding sequence which is longer in humans than other species (6–38 uninterrupted CAG repeats in healthy humans versus 2 in the mouse gene). This repeat is prone to errors in DNA replication and can vary widely in length between individuals. Structure Notable features of the Ataxin-1 protein structure include: A polyglutamine tract of variable length, encoded by the CAG repeat in ATXN1. A region which mediates protein-protein interactions, known as the AXH domain A nuclear localization sequence A phosphorylation site which regulates the protein's stability and interactions with its binding partners Function The function of Ataxin-1 is not completely understood. It appears to be involved in regulating gene expression based on its location in the nucleus of the cell, its association with promoter regions of several genes, and its interactions with transcriptional regulators and parts of the RNA splicing machinery. Interactions Ataxin 1 has been shown to interact with: C2orf27, Coilin, Glyceraldehyde 3-phosphate dehydrogenase, CIC, UBE2E1, and USP7. Role in disease ATXN1 is the gene mutated in spinocerebellar ataxia type 1 (SCA1), a dominantly inherited, fatal genetic disease in which neurons in the cerebellum and brain stem degenerate over the course of years or decades. SCA1 is a trinucleotide repeat disorder caused by expansion of the CAG repeat in ATXN1; this leads to an expanded polyglutamine tract in the protein. This elongation is variab
https://en.wikipedia.org/wiki/Schouten%20tensor
In Riemannian geometry the Schouten tensor is a second-order tensor introduced by Jan Arnoldus Schouten, defined for n ≥ 3 by P = (1/(n − 2)) (Ric − (R/(2(n − 1))) g), where Ric is the Ricci tensor (defined by contracting the first and third indices of the Riemann tensor), R is the scalar curvature, g is the Riemannian metric, tr P denotes the trace of P and n is the dimension of the manifold. The Weyl tensor equals the Riemann curvature tensor minus the Kulkarni–Nomizu product of the Schouten tensor with the metric. In an index notation P_ij = (1/(n − 2)) (R_ij − (R/(2(n − 1))) g_ij). The Schouten tensor often appears in conformal geometry because of its relatively simple conformal transformation law: under the conformal change of metric ĝ = e^(2φ) g it transforms as P̂_ij = P_ij − ∇_i∇_j φ + ∇_i φ ∇_j φ − (1/2) |∇φ|² g_ij, where ∇ is the Levi-Civita connection of g and φ is a smooth function. Further reading Arthur L. Besse, Einstein Manifolds. Springer-Verlag, 2007. See Ch.1 §J "Conformal Changes of Riemannian Metrics." Spyros Alexakis, The Decomposition of Global Conformal Invariants. Princeton University Press, 2012. Ch.2, noting in a footnote that the Schouten tensor is a "trace-adjusted Ricci tensor" and may be considered as "essentially the Ricci tensor." Wolfgang Kühnel and Hans-Bert Rademacher, "Conformal diffeomorphisms preserving the Ricci tensor", Proc. Amer. Math. Soc. 123 (1995), no. 9, 2841–2848. Online eprint (pdf). T. Bailey, M.G. Eastwood and A.R. Gover, "Thomas's Structure Bundle for Conformal, Projective and Related Structures", Rocky Mountain Journal of Mathematics, vol. 24, Number 4, 1191–1217. See also Weyl–Schouten theorem Cotton tensor Curvature tensors Riemannian geometry Tensors in general relativity
https://en.wikipedia.org/wiki/Ataxin%207
Ataxin 7 (ATXN7) is a protein of the SCA7 gene, which contains 892 amino acids with an expandable poly(Q) region close to the N-terminus. The expandable poly(Q) motif region in the protein contributes crucially to spinocerebellar ataxia (SCA) pathogenesis by the induction of intranuclear inclusion bodies. ATXN7 is associated with both olivopontocerebellar atrophy type 3 (OPCA3) and spinocerebellar ataxia type 7 (SCA7). Expansion of the CAG repeat in the ataxin-7 gene leads to pathological protein misfolding and has been shown to cause cerebellar and brainstem degeneration as well as retinal cone-rod dystrophy. Polyglutamine (polyQ) expansion at the N-terminus of ataxin-7 causes protein aggregation, leading to the symptoms of ataxia with visual loss. Research suggests that silencing of ataxin-7 in the retina by RNAi can be a possible therapeutic strategy for patients with SCA7 retinal degeneration.
https://en.wikipedia.org/wiki/Ataxin
Ataxin is a type of nuclear protein. The class is called ataxin because mutated forms of these proteins and their corresponding genes were found to cause progressive ataxia. Some examples, their coding genes and associated diseases include: Ataxin 1, coded by ATXN1. Mutants of ataxin 1 with a polyglutamine expansion cause SCA1. Ataxin 2, coded by ATXN2. It is known to cause SCA2 with polyglutamine expansion. Ataxin 3, coded by ATXN3. Machado-Joseph disease is caused by polyglutamine expansions in ataxin 3. Ataxin 7, coded by ATXN7. Polyglutamine expansions in Ataxin 7 cause SCA7. Ataxin 8, coded by ATXN8. Ataxin 8 does not cause an ataxic disorder, but a gene on the opposite strand, ATXN8OS, causes Spinocerebellar ataxia type 8 with CTG expansion. Ataxin 10, coded by ATXN10. It is associated with the pentanucleotide disorder, SCA10. Frataxin follows a similar naming convention and is coded by the FXN gene. GAA repeat expansions in a non-coding region of FXN cause Friedreich's ataxia when both copies of the gene are affected.
https://en.wikipedia.org/wiki/Post-mortem%20photography
Post-mortem photography is the practice of photographing the recently deceased. Various cultures use and have used this practice, though the best-studied area of post-mortem photography is that of Europe and America. There can be considerable dispute as to whether individual early photographs actually show a dead person or not, often sharpened by commercial considerations. The form continued the tradition of earlier painted mourning portraits. Today post-mortem photography is most common in the contexts of police and pathology work. History and popularity The invention of the daguerreotype in 1839 made portraiture commonplace, as many of those who were unable to afford the commission of a painted portrait could afford to sit for a photography session. This also provided the middle class with a way to remember dead loved ones. Before this, post-mortem portraiture was restricted to the upper class, who continued to commemorate the deceased with this new method. Post-mortem photography was common in the nineteenth century. As photography was a new medium, many daguerreotype post-mortem portraits, especially those of infants and young children, were probably the only photographs ever made of the sitters. The long exposure time made deceased subjects easy to photograph. The problem of long exposure times also led to the phenomenon of hidden mother photography, where the mother was hidden in-frame to calm a young child and keep them still. Post-mortem photography flourished in photography's early decades, among those who preferred to capture an image of the deceased. This helped many photographic businesses in the nineteenth century. The later invention of the carte de visite, which allowed multiple prints to be made from a single negative, meant that copies of the image could be mailed to relatives. Approaching the 20th century, cameras became more accessible and more people began to be able to take photographs for themselves. Post-mortem photo
https://en.wikipedia.org/wiki/Self-diagnosis
Self-diagnosis is the process of diagnosing, or identifying, medical conditions in oneself. It may be assisted by medical dictionaries, books, resources on the Internet, past personal experiences, or recognizing symptoms or medical signs of a condition that a family member previously had. Depending on the nature of an individual's condition and the accuracy of the information they access, self-diagnoses can vary greatly in their safety. Due to self-diagnoses' varied accuracy, public attitudes toward self-diagnosis range from denial of its legitimacy to praise of its ability to promote healthcare access and allow individuals to find solidarity and support. Furthermore, external influences such as marketing, social media trends, societal stigma around disease, and the demographic population to which one belongs greatly affect the use of self-diagnosis. Appropriate use Self-diagnosis is prone to error and may be potentially dangerous if inappropriate decisions are made, which can stem from broad or inaccurately applied symptoms as well as confirmation bias. Because of the risks, self-diagnosis is officially discouraged by physicians and patient care organizations. Physicians are also discouraged from engaging in self-diagnosis due to potential lack of objectivity. An inaccurate self-diagnosis—a misdiagnosis—can result in improper health care, including using the wrong treatment or not seeking care for a serious condition that was under-diagnosed. Further concerns include undermining physician authority, lacking an unbiased view of oneself, overestimating one's symptoms, or adopting a state of denial about these symptoms. However, self-diagnosis may be appropriate under certain circumstances. The use of over-the-counter (non-prescription) medications is often involved in self-diagnosis for conditions that are unlikely to be serious and have a low risk of harm by incorrect medication. Some conditions are more likely to be self-diagnosed, especially simple conditions
https://en.wikipedia.org/wiki/Time-delay%20combination%20locks
A time-delay combination lock is most commonly a digital, electronic combination lock equipped with a delay timer that delays the unlocking of the lock by a user-definable delay period, usually less than one hour. Unlike the time lock, which unlocks at a preset time (as in the case of a bank vault), time-delay locks operate each time the safe is unlocked, but the operator must wait for the set delay period to elapse before the lock can be opened. Time delay safes are most commonly used in businesses with high cash transactions. They are used in some banks including Nationwide, HSBC, Barclays, and Halifax. Use Time-delay combination locks are frequently incorporated into money safes as an armed robbery deterrent. In many instances, time-delay combination locks are also equipped with a duress code which may be entered to activate the time delay whilst sending a silent alarm to a monitoring centre. Modern time delay combination locks can have many functions such as multiple different codes, pre-set time lock settings (open and close times), pre-set vacation times (e.g. Christmas Day), dual code facility, and a full audit trail providing a detailed record of the lock history showing who opened the lock, when and how long it was open. They also use a non-volatile memory so that no information is lost if the batteries are depleted. This will allow the safe to be opened when the batteries are changed after the pre-set time if the correct code is entered. Some electronic combination locks with a time-delay feature require the code to be entered twice: once to start the timer, and a second time, entered after the delay period has expired, to unlock and open the safe.
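A minimal sketch of the dual-entry behaviour just described (code entered once to start the delay timer, then again inside an opening window once the delay has elapsed). The state handling, delay value and window length are illustrative assumptions, not taken from any particular lock.

```python
import time

class TimeDelayLock:
    """Toy model of a time-delay combination lock: the correct code must be
    entered once to start the delay timer, and again after the delay has
    elapsed (within an opening window) to actually release the bolt."""

    def __init__(self, code, delay_s=600, window_s=120):
        self.code = code
        self.delay_s = delay_s      # delay before the lock may be opened
        self.window_s = window_s    # how long the opening window stays active
        self.armed_at = None        # time of the first valid code entry

    def enter_code(self, code, now=None):
        now = time.time() if now is None else now
        if code != self.code:
            return "wrong code"
        if self.armed_at is None:
            self.armed_at = now
            return "delay timer started"
        elapsed = now - self.armed_at
        if elapsed < self.delay_s:
            return f"wait {self.delay_s - elapsed:.0f} more seconds"
        if elapsed > self.delay_s + self.window_s:
            self.armed_at = now      # window missed; restart the delay
            return "window expired, delay timer restarted"
        self.armed_at = None
        return "open"

# Example run with a 5-second delay for demonstration.
lock = TimeDelayLock(code="1234", delay_s=5, window_s=60)
print(lock.enter_code("1234"))   # delay timer started
print(lock.enter_code("1234"))   # wait ... more seconds
time.sleep(5)
print(lock.enter_code("1234"))   # open
```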
https://en.wikipedia.org/wiki/Water%20slide%20decal
Water slide decals (or water transfer decals) are decals which rely on dextrose residue from the decal paper to bond the decal to a surface. A water-based adhesive layer can be added to the decal to create a stronger bond or may be placed between layers of lacquer to create a durable decal transfer. The paper also has a layer of glucose film added prior to the dextrose layer which gives it adhesive properties; the dextrose layer gives the decal the ability to slide off the paper and onto the substrate (lubricity). Water slide decals are thinner than many other decorative techniques (such as vinyl stickers) and as they are printed, they can be produced to a very high level of detail. As such, they are popular in craft areas such as scale modeling, as well as for labeling DIY electronics devices, such as guitar pedals. Previously, water slide decals were professionally printed and only available in supplied designs, but with the advent of printable decal paper for colour inkjet and laser printers, custom decals can now be produced by the hobbyist or small business.
https://en.wikipedia.org/wiki/Coffee%20cupping
Coffee cupping, or coffee tasting, is the practice of observing the tastes and aromas of brewed coffee. It is a professional practice but can be done informally by anyone or by professionals known as "Q Graders". A standard coffee cupping procedure involves deeply sniffing the coffee, then slurping the coffee from a spoon so it is aerated and spread across the tongue. The coffee taster attempts to measure aspects of the coffee's taste, specifically the body (the texture or mouthfeel, such as oiliness), sweetness, acidity (a sharp and tangy feeling, like when biting into an orange), flavour (the characters in the cup), and aftertaste. Since coffee beans embody telltale flavours from the region where they were grown, cuppers may attempt to identify the coffee's origin. Aromas Various descriptions are used to note coffee aroma. Animal-like – This odour descriptor is somewhat reminiscent of the smell of animals. It is not a fragrant aroma like musk but has the characteristic odour of wet fur, sweat, leather, hides or urine. It is not necessarily considered as a negative attribute but is generally used to describe strong notes. These flavors can be present in poor-quality dry process coffees. Ashy – This odour descriptor is similar to that of an ashtray, the odour of smokers' fingers or the smell one gets when cleaning out a fireplace. It is not used as a negative attribute. Generally speaking this descriptor is used by the tasters to indicate the degree of roast. Burnt/Smoky – This odour and flavour descriptor is similar to that found in burnt food. The odour is associated with smoke produced when burning wood. This descriptor is frequently used to indicate the degree of roast commonly found by tasters in dark-roasted or oven-roasted coffees. Chemical/Medicinal – This odour descriptor is reminiscent of chemicals, medicines and the smell of hospitals. This term is used to describe coffees having aromas such as rio flavour, chemical residues or highly aromatic coff
https://en.wikipedia.org/wiki/Generator%20matrix
In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix. Terminology If G is a k × n matrix, it generates the codewords of a linear code C by w = sG, where w is a codeword of the linear code C, and s is any input vector. Both w and s are assumed to be row vectors. A generator matrix for a linear [n, k, d]_q-code has format k × n, where n is the length of a codeword, k is the number of information bits (the dimension of C as a vector subspace), d is the minimum distance of the code, and q is the size of the finite field, that is, the number of symbols in the alphabet (thus, q = 2 indicates a binary code, etc.). The number of redundant bits is denoted by r = n − k. The standard form for a generator matrix is G = [I_k | P], where I_k is the k × k identity matrix and P is a k × (n − k) matrix. When the generator matrix is in standard form, the code C is systematic in its first k coordinate positions. A generator matrix can be used to construct the parity check matrix for a code (and vice versa). If the generator matrix G is in standard form, G = [I_k | P], then the parity check matrix for C is H = [−P^T | I_(n−k)], where P^T is the transpose of the matrix P. This is a consequence of the fact that a parity check matrix of C is a generator matrix of the dual code C⊥. G is a k × n matrix, while H is an (n − k) × n matrix. Equivalent codes Codes C1 and C2 are equivalent (denoted C1 ~ C2) if one code can be obtained from the other via the following two transformations: arbitrarily permute the components, and independently scale any component by a non-zero element. Equivalent codes have the same minimum distance. The generator matrices of equivalent codes can be obtained from one another via the following elementary operations: permute rows, scale rows by a nonzero scalar, add rows to other rows, permute columns, and scale columns by a nonzero scalar. Thus, we can perform Gaussian elimination on G. Indee
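A short sketch of the standard-form relations above over GF(2), assuming only NumPy: a generator matrix G = [I_k | P] in standard form, the codewords as all linear combinations of its rows, and the parity check matrix built as [P^T | I] (the minus sign disappears in characteristic 2). The particular matrix P, which gives the usual [7, 4] Hamming code, is an illustrative choice.

```python
import numpy as np
from itertools import product

# Standard-form generator matrix G = [I_k | P] over GF(2); this P gives the
# usual [7,4] Hamming code, chosen here purely as an illustration.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
k, r = P.shape
G = np.hstack([np.eye(k, dtype=int), P]) % 2          # k x n generator, n = k + r
H = np.hstack([P.T, np.eye(r, dtype=int)]) % 2        # (n - k) x n parity checks

# Codewords are all linear combinations (over GF(2)) of the rows of G.
codewords = [(np.array(s) @ G) % 2 for s in product([0, 1], repeat=k)]
print(len(codewords))                                     # 2**k = 16 codewords

# Every codeword satisfies the parity checks: H w^T = 0 (mod 2).
print(all(((H @ w) % 2 == 0).all() for w in codewords))   # True
```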
https://en.wikipedia.org/wiki/Phi-hiding%20assumption
The phi-hiding assumption or Φ-hiding assumption is an assumption about the difficulty of finding small factors of φ(m) where m is a number whose factorization is unknown, and φ is Euler's totient function. The security of many modern cryptosystems comes from the perceived difficulty of certain problems. Since the P versus NP problem is still unresolved, cryptographers cannot be sure computationally intractable problems exist. Cryptographers thus make assumptions as to which problems are hard. It is commonly believed that if m is the product of two large primes, then calculating φ(m) is currently computationally infeasible; this assumption is required for the security of the RSA cryptosystem. The Φ-hiding assumption is a stronger assumption, namely that if p1 and p2 are small primes exactly one of which divides φ(m), there is no polynomial-time algorithm which can distinguish which of the primes p1 and p2 divides φ(m) with probability significantly greater than one-half. This assumption was first stated in the 1999 paper Computationally Private Information Retrieval with Polylogarithmic Communication, where it was used in a Private Information Retrieval scheme. Applications The Phi-hiding assumption has found applications in the construction of a few cryptographic primitives. Some of the constructions include: Computationally Private Information Retrieval with Polylogarithmic Communication (1999) Efficient Private Bidding and Auctions with an Oblivious Third Party (1999) Single-Database Private Information Retrieval with Constant Communication Rate (2005) Password authenticated key exchange using hidden smooth subgroups (2005)
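A toy illustration of the objects involved, using tiny numbers chosen only for readability (real instances use primes hundreds of digits long, where factoring m and hence computing φ(m) is infeasible): m is a semiprime, φ(m) = (p − 1)(q − 1), and the assumption concerns telling which of two small candidate primes divides φ(m) without knowing the factorization. The specific primes below are arbitrary examples.

```python
from sympy import totient, isprime

# Toy semiprime; with primes this small phi(m) is trivial to compute, which is
# exactly what is NOT possible at cryptographic sizes.
p, q = 1009, 1019
assert isprime(p) and isprime(q)
m = p * q
phi_m = (p - 1) * (q - 1)
assert phi_m == totient(m)           # Euler's totient of a semiprime

# Two small candidate primes; the phi-hiding assumption says that, without the
# factorization of m, deciding which of them divides phi(m) is hard.
p1, p2 = 7, 11
print(p1, "divides phi(m):", phi_m % p1 == 0)   # True  (7 divides 1008)
print(p2, "divides phi(m):", phi_m % p2 == 0)   # False
```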
https://en.wikipedia.org/wiki/Application-level%20gateway
An application-level gateway (ALG, also known as application layer gateway, application gateway, application proxy, or application-level proxy) is a security component that augments a firewall or NAT employed in a computer network. It allows customized NAT traversal filters to be plugged into the gateway to support address and port translation for certain application layer "control/data" protocols such as FTP, BitTorrent, SIP, RTSP, and file transfer in IM applications. In order for these protocols to work through NAT or a firewall, either the application has to know about an address/port number combination that allows incoming packets, or the NAT has to monitor the control traffic and open up port mappings (firewall pinholes) dynamically as required. Legitimate application data can thus be passed through the security checks of the firewall or NAT that would have otherwise restricted the traffic for not meeting its limited filter criteria. Functions An ALG may offer the following functions: allowing client applications to use dynamic ephemeral TCP/UDP ports to communicate with the known ports used by the server applications, even though a firewall configuration may allow only a limited number of known ports. In the absence of an ALG, either the ports would get blocked or the network administrator would need to explicitly open up a large number of ports in the firewall, rendering the network vulnerable to attacks on those ports. converting the network layer address information found inside an application payload between the addresses acceptable by the hosts on either side of the firewall/NAT. This aspect introduces the term 'gateway' for an ALG. recognizing application-specific commands and offering granular security controls over them synchronizing multiple streams/sessions of data between two hosts exchanging data. For example, an FTP application may use separate connections for passing control commands and for exchanging data between the client and a r
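As an illustration of the payload inspection an ALG performs, the sketch below parses an FTP PORT command (whose argument encodes an IPv4 address and a 16-bit port as six decimal byte values) to work out which address/port pinhole would need to be opened. It is a simplified, standalone example, not code from any particular gateway, and the sample address is arbitrary.

```python
import re

def parse_ftp_port_command(line):
    """Extract the (ip, port) a client advertises in an FTP PORT command,
    e.g. 'PORT 192,168,0,10,197,143' -> ('192.168.0.10', 50575).
    An ALG inspecting control traffic would use this to open the matching
    firewall pinhole or rewrite the address for NAT."""
    match = re.fullmatch(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\s*", line)
    if not match:
        raise ValueError("not a PORT command")
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = parse_ftp_port_command("PORT 192,168,0,10,197,143\r\n")
print(ip, port)   # 192.168.0.10 50575
```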
https://en.wikipedia.org/wiki/History%20of%20mathematical%20notation
The history of mathematical notation includes the commencement, progress, and cultural diffusion of mathematical symbols and the conflict of the methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and symbolic operators. The history includes Hindu–Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries. The development of mathematical notation can be divided into stages: The "rhetorical" stage is where calculations are performed by words and no symbols are used. The "syncopated" stage is where frequently used operations and quantities are represented by symbolic syntactical abbreviations. From ancient times through the post-classical age, bursts of mathematical creativity were often followed by centuries of stagnation. As the early modern age opened and the worldwide spread of knowledge began, written examples of mathematical developments came to light. The "symbolic" stage is where comprehensive systems of notation supersede rhetoric. Beginning in Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This symbolic system was in use by medieval Indian mathematicians and in Europe since the middle of the 17th century, and has continued to develop in the contemporary era. The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and, the focus here, the investigation into the mathematical methods and notation of the past. Rhetorical stage Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to i
https://en.wikipedia.org/wiki/Merle%20Randall
Merle Randall (January 29, 1888 – March 17, 1950) was an American physical chemist famous for his work with Gilbert N. Lewis, over a period of 25 years, in measuring reaction heat of chemical compounds and determining their corresponding free energy. Together, their 1923 textbook "Thermodynamics and the Free Energy of Chemical Substances" became a classic work in the field of chemical thermodynamics. In 1932, Merle Randall authored two scientific papers with Mikkel Frandsen: "The Standard Electrode Potential of Iron and the Activity Coefficient of Ferrous Chloride," and "Determination of the Free Energy of Ferrous Hydroxide from Measurements of Electromotive Force." Education Randall completed his Ph.D. at the Massachusetts Institute of Technology in 1912 with a dissertation on "Studies in Free Energy". Related Based on work by J. Willard Gibbs, it was known that chemical reactions proceeded to an equilibrium determined by the free energy of the substances taking part. Using this theory, Gilbert Lewis spent 25 years determining free energies of various substances. In 1923, he and Randall published the results of this study, formalizing chemical thermodynamics. According to the Belgian thermodynamicist Ilya Prigogine, their influential 1923 textbook led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. See also Ionic strength
https://en.wikipedia.org/wiki/Polyploid%20complex
A polyploid complex, also called a diploid-polyploid complex, is a group of interrelated and interbreeding species that also have differing levels of ploidy that can allow interbreeding. A polyploid complex was described by E. B. Babcock and G. Ledyard Stebbins in their 1938 monograph The American Species of Crepis: their interrelationships and distribution as affected by polyploidy and apomixis. In Crepis and some other perennial plant species, a polyploid complex may arise where there are at least two genetically isolated diploid populations, in addition to auto- and allopolyploid derivatives that coexist and interbreed. Thus a complex network of interrelated forms may exist where the polyploid forms allow for intermediate forms between the diploid species that are otherwise unable to interbreed. This complex situation does not fit well within the biological species concept of Ernst Mayr which defines a species as "groups of actually or potentially interbreeding natural populations which are reproductively isolated from other such groups". In many diploid-polyploid complexes the polyploid hybrid members reproduce asexually while diploids reproduce sexually. Thus polyploidy is related to the phenomenon called "geographic parthenogenesis" by zoologist Albert Vandel, that asexual organisms often have greater geographic ranges than their sexual relatives. It is not known which of the associated factors (hybridization, polyploidy, or asexual reproduction) is the major determiner of geographic parthenogenesis. See also Species complex
https://en.wikipedia.org/wiki/Grimm%27s%20hydride%20displacement%20law
Grimm's Hydride Displacement Law is an early hypothesis, formulated in 1925, to describe bioisosterism, the ability of certain chemical groups to function as or mimic other chemical groups. “Atoms anywhere up to four places in the periodic system before an inert gas change their properties by uniting with one to four hydrogen atoms, in such a manner that the resulting combinations behave like pseudoatoms, which are similar to elements in the groups one to four places respectively, to their right.” According to Grimm, each vertical column of the table below would represent a group of isosteres; for example, the rule implies that CH3, NH2, OH and F behave as isosteres, as do CH4, NH3, OH2, FH and Ne.
https://en.wikipedia.org/wiki/Lennart%20Johnsson
Lennart Johnsson (born 1944) is a Swedish computer scientist and engineer. Johnsson started his career at ABB in Sweden and moved on to UCLA, Caltech, Yale University, Harvard University, the Royal Institute of Technology (KTH in Sweden), and Thinking Machines Corporation. He is currently based at the University of Houston, where he holds the Hugh Roy and Lillie Cranz Cullen Distinguished Chair of Computer Science, Mathematics, and Electrical and Computer Engineering. He has also served as a lecturer at a summer school at the KTH PDC Center for High Performance Computing.
https://en.wikipedia.org/wiki/Mellified%20man
A mellified man, also known as a human mummy confection, was a legendary medicinal substance created by steeping a human cadaver in honey. The concoction is detailed in Chinese medical sources, including the Bencao Gangmu of the 16th century. Relying on a second-hand account, the text reports a story that some elderly men in Arabia, nearing the end of their lives, would submit themselves to a process of mummification in honey to create a healing confection. This process differed from a simple body donation because of the aspect of self-sacrifice; the mellification process would ideally start before death. The donor would stop eating any food other than honey, going as far as to bathe in the substance. Soon, the donor's feces and even sweat would consist of honey. When this diet finally proved fatal, the donor's body would be placed in a stone coffin filled with honey. After a century or so, the contents would have turned into a sort of confection reputedly capable of healing broken limbs and other ailments. This confection would then be sold in street markets as a hard-to-find item with a hefty price. Origins Some of the earliest known records of mellified corpses come from the Greek historian Herodotus (5th century BCE), who recorded that the Assyrians used to embalm their dead with honey. A century later, Alexander the Great's body was reportedly preserved in a honey-filled sarcophagus, and there are also indications that this practice was known to the Egyptians. Another record of mellification is found in the Bencao Gangmu (section 52, "Man as medicine") under the entry for munaiyi (木乃伊 "mummy"). It quotes the Chuogeng lu (輟耕錄 "Talks while the Plough is Resting", c. 1366) by the Yuan dynasty scholar Tao Zongyi (陶宗儀) and Tao Jiucheng (陶九成). According to [Tao Jiucheng] in his [Chuogenglu], in the lands of the Arabs there are men 70 or 80 years old who are willing to give their bodies to save others. Such a one takes no more food or drink, only bathing and eatin
https://en.wikipedia.org/wiki/Calabi-Yau%20%28play%29
Calabi-Yau is a 2001 play written by playwright Susanna Speier with songs and music by Stefan Weisman, based on physicist Brian Greene's national bestseller The Elegant Universe. The musical play is a multimedia sub-subatomic adventure story about a documentarian lost in an inner loop of an abandoned track of the New York Subway system. He encounters MTA workers who are attempting to prove string theory by building a particle accelerator in abandoned subway tunnels beneath downtown New York City. The MTA track workers lead the documentarian to a gatekeeper named Lucy and her grandfather, who is engineering the particle accelerator. A string explains string theory as a Calabi-Yau tells the story of Alexander the Great cutting the Gordian knot. It premiered as a workshop production at the American Living Room Festival, sponsored by Lincoln Center and HERE Arts Center, in 2001. Calabi-Yau was produced and performed at HERE in 2002. Eugenio Calabi and Shing-Tung Yau, for whom Calabi-Yau manifolds are named, attempted to attend the play but were not let in since no one believed they were who they said they were.
https://en.wikipedia.org/wiki/Anticipatory%20scheduling
Anticipatory scheduling is an algorithm for scheduling hard disk input/output (I/O scheduling). It seeks to increase the efficiency of disk utilization by "anticipating" future synchronous read operations. I/O scheduling "Deceptive idleness" is a situation where a process appears to be finished reading from the disk when it is actually processing data in preparation of the next read operation. This will cause a normal work-conserving I/O scheduler to switch to servicing I/O from an unrelated process. This situation is detrimental to the throughput of synchronous reads, as it degenerates into a seeking workload. Anticipatory scheduling overcomes deceptive idleness by pausing for a short time (a few milliseconds) after a read operation in anticipation of another close-by read request. Anticipatory scheduling yields significant improvements in disk utilization for some workloads. In some situations the Apache web server may achieve up to 71% more throughput from using anticipatory scheduling. The Linux anticipatory scheduler may reduce performance on disks using Tagged Command Queuing (TCQ), high-performance disks, and hardware RAID arrays. The anticipatory scheduler (AS) was the default Linux kernel I/O scheduler between 2.6.0 and 2.6.18, after which it was replaced by the CFQ scheduler. As of kernel version 2.6.33, the anticipatory scheduler has been removed from the Linux kernel. The reason was that, while useful, the scheduler's effects could be achieved through tuned use of other schedulers (mostly CFQ, which can also be configured to idle with the slice_idle tunable). Since the anticipatory scheduler added maintenance overhead while not improving the workload coverage of the Linux kernel, it was deemed redundant. See also Deadline scheduler Noop scheduler CFQ scheduler Native Command Queuing (NCQ) Scheduling (computing)
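For context, on kernels of that era the active I/O scheduler and CFQ's idling knob were exposed through sysfs; the short sketch below only reads them. The device name is an illustrative assumption, the paths are guarded in case they do not exist on the running system, and writing new values would require root privileges.

```python
from pathlib import Path

DEVICE = "sda"   # illustrative block device name; adjust for the local system
queue = Path("/sys/block") / DEVICE / "queue"

# On CFQ-era kernels the available and active I/O schedulers are listed here,
# with the active one in square brackets, e.g. "noop deadline [cfq]".
scheduler = queue / "scheduler"
if scheduler.exists():
    print("schedulers:", scheduler.read_text().strip())

# CFQ's idling behaviour (the slice_idle tunable mentioned above) is exposed
# under queue/iosched/ when CFQ is the active scheduler; the value is in ms.
slice_idle = queue / "iosched" / "slice_idle"
if slice_idle.exists():
    print("slice_idle =", slice_idle.read_text().strip(), "ms")
```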
https://en.wikipedia.org/wiki/Blast%20wave
In fluid dynamics, a blast wave is the increased pressure and flow resulting from the deposition of a large amount of energy in a small, very localised volume. The flow field can be approximated as a lead shock wave, followed by a self-similar subsonic flow field. In simpler terms, a blast wave is an area of pressure expanding supersonically outward from an explosive core. It has a leading shock front of compressed gases. The blast wave is followed by a blast wind of negative gauge pressure, which sucks items back in towards the center. The blast wave is harmful especially when one is very close to the center or at a location of constructive interference. High explosives that detonate generate blast waves. Sources High-order explosives (HE) are more powerful than low-order explosives (LE). HE detonate to produce a defining supersonic over-pressurization shock wave. Several sources of HE include trinitrotoluene, C-4, Semtex, nitroglycerin, and ammonium nitrate fuel oil (ANFO). LE deflagrate to create a subsonic explosion and lack HE's over-pressurization wave. Sources of LE include pipe bombs, gunpowder, and most pure petroleum-based incendiary bombs such as Molotov cocktails or aircraft improvised as guided missiles. HE and LE induce different injury patterns. Only HE produce true blast waves. History The classic flow solution—the so-called Taylor–von Neumann–Sedov blast wave solution—was independently devised by John von Neumann and British mathematician Geoffrey Ingram Taylor during World War II. After the war, the similarity solution was published by three other authors—L. I. Sedov, R. Latter, and J. Lockwood-Taylor—who had discovered it independently. Since the early theoretical work, both theoretical and experimental studies of blast waves have been ongoing. Characteristics and properties The simplest form of a blast wave has been described and termed the Friedlander waveform. It occurs when a high explosive detonates in a free field, that is, with no surf
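The Friedlander waveform mentioned above is commonly written as an overpressure that jumps to a peak value at shock arrival and then decays exponentially, crossing into the negative (suction) phase after the positive-phase duration. The sketch below evaluates that commonly cited form, p(t) = p_peak (1 − t/t_d) exp(−t/t_d); the formula and the numerical values are assumptions for illustration, since the source text does not give the equation.

```python
import math

def friedlander_overpressure(t, p_peak, t_d):
    """Commonly cited Friedlander form for blast overpressure at time t after
    shock arrival: a jump to p_peak, exponential decay, and a negative phase
    once t exceeds the positive-phase duration t_d."""
    if t < 0:
        return 0.0                     # shock has not arrived yet
    return p_peak * (1.0 - t / t_d) * math.exp(-t / t_d)

# Illustrative values: 100 kPa peak overpressure, 10 ms positive phase.
for t_ms in (0, 2, 5, 10, 15, 20):
    p = friedlander_overpressure(t_ms / 1000.0, p_peak=100e3, t_d=0.010)
    print(f"t = {t_ms:2d} ms  overpressure = {p/1000.0:7.2f} kPa")
```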
https://en.wikipedia.org/wiki/Bryant%27s%20triangle
A surface marking of clinical importance is Bryant's triangle (or iliofemoral triangle), which is mapped out thus: the hypotenuse of the right-angled triangle is a line from the anterior superior iliac spine to the top of the greater trochanter. Its sides are formed respectively by: a vertical line from the anterior superior iliac spine, and a perpendicular line from the top of the greater trochanter.
https://en.wikipedia.org/wiki/Triangle%20of%20auscultation
The triangle of auscultation is a relative thinning of the musculature of the back, situated along the medial border of the scapula, which allows for improved listening to the lungs. Boundaries It has the following boundaries: medially, by the inferior portion of the trapezius; inferiorly, by the latissimus dorsi; laterally, by the medial border of the scapula. The superficial floor of the triangle is formed by the lateral portion of the erector spinae muscles. Deep to these muscles are the osseous portions of the 6th and 7th ribs and the internal and external intercostal muscles. Clinical significance The triangle of auscultation is useful for assessment using pulmonary auscultation and for thoracic procedures. Due to the relative thinning of the musculature of the back in the triangle, the posterior thoracic wall is closer to the skin surface, making respiratory sounds audible more clearly with a stethoscope. On the left side, the cardiac orifice of the stomach lies deep to the triangle. In the days before X-rays were discovered, the sounds of swallowed liquids were auscultated over this triangle to confirm an oesophageal tumour. To better expose the floor of the triangle, made up of the posterior thoracic wall in the 6th and 7th intercostal space, a patient is asked to fold their arms across their chest, laterally rotating the scapulae, while bending forward at the trunk, somewhat resembling the fetal position. The triangle of auscultation can be used as a surgical approach path. It can also be used for applying a nerve block known as the rhomboid intercostal block, which can be used to relieve pain after rib fractures and after a thoracotomy. This nerve block is usually achieved by injection of the local anesthetic agent into the fascial plane between the rhomboid muscles and the intercostal muscles.