https://en.wikipedia.org/wiki/NAPQI
NAPQI, also known as NAPBQI or N-acetyl-p-benzoquinone imine, is a toxic byproduct produced during the xenobiotic metabolism of the analgesic paracetamol (acetaminophen). It is normally produced only in small amounts, and then almost immediately detoxified in the liver. However, under some conditions in which NAPQI is not effectively detoxified (usually in the case of paracetamol overdose), it causes severe damage to the liver. This becomes apparent 3–4 days after ingestion and may result in death from fulminant liver failure several days after the overdose. Metabolism In adults, the primary metabolic pathway for paracetamol is glucuronidation. This yields a relatively non-toxic metabolite, which is excreted into bile and passed out of the body. A small amount of the drug is metabolized via the cytochrome P-450 pathway (to be specific, CYP3A4 and CYP2E1) into NAPQI, which is extremely toxic to liver tissue, as well as being a strong biochemical oxidizer. In an average adult, only a small amount (approximately 10% of a therapeutic paracetamol dose) of NAPQI is produced, which is inactivated by conjugation with glutathione (GSH). The amount of NAPQI produced differs in certain populations. The minimum dosage at which paracetamol causes toxicity usually is 7.5 to 10g in the average person. The lethal dose is usually between 10 g and 15 g. Concurrent alcohol intake lowers these thresholds significantly. Chronic alcoholics may be more susceptible to adverse effects due to reduced glutathione levels. Other populations may experience effects at lower or higher dosages depending on differences in P-450 enzyme activity and other factors which affect the amount of NAPQI produced. In general, however, the primary concern is accidental or intentional paracetamol overdose. When a toxic dose of paracetamol is ingested, the normal glucuronide pathway is saturated and large amounts of NAPQI are produced. Liver reserves of glutathione are depleted by conjugation with th
https://en.wikipedia.org/wiki/Dual%20process%20theory
In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education, whereas implicit processes or attitudes usually take a long time to change, through the forming of new habits. Dual process theories can be found in social, personality, cognitive, and clinical psychology. They have also been linked with economics via prospect theory and behavioral economics, and increasingly with sociology through cultural analysis. History The foundations of dual process theory likely come from William James. He believed that there were two different kinds of thinking: associative and true reasoning. James theorized that empirical thought was used for things like art and design work. For James, images and thoughts of past experiences would come to mind, providing ideas of comparison or abstractions. He claimed that associative knowledge came only from past experiences, describing it as "only reproductive". James believed that true reasoning could enable overcoming “unprecedented situations” just as a map could enable navigating past obstacles. There are various dual process theories that were produced after William James's work. Dual process models are very common in the study of social psychological variables, such as attitude change. Examples include Petty and Cacioppo's elaboration likelihood model (explained below) and Chaiken's heuristic systematic model. According to these models, persuasion may occur after either intense scrutiny or extremely superficial thinking. In cognitive psychology, attention and working memory have also been conceptualized as relying on two distinct processes. Whether the focus is on social psychology or cognitive psychology, there are many examples o
https://en.wikipedia.org/wiki/Global%20Address%20List
A Global Address List (GAL) is an electronic shared address book which usually contains all people of a given organization (company, school, etc.). This address book is accessed over the computer network using the LDAP protocol, CardDAV or some other electronic means. The GAL is usually read-only for users. Only administrators can add or update the items. Users can search it, look up other people (employees, students, members, etc.) and obtain information such as their email address, phone number, work position and office location. A common use of a GAL is when a user is writing an email and knows the recipient's name but not their email address. The application, such as an email client (e.g. SOGo, Zimbra or Thunderbird), can look up the email address in the GAL after the user has typed only part of the recipient's name. Certificates and encryption An LDAP directory can also be used to distribute user certificates (X.509, OpenPGP). A user can therefore query the GAL not only for contact information but also for the digital certificates of other users – for example, in order to send them encrypted e-mails.
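As a concrete illustration of the lookup described above, the following Python sketch queries a GAL over LDAP using the ldap3 library. The server address, bind credentials, and search base are hypothetical placeholders and would differ for any real directory.

```python
# Minimal sketch of a GAL lookup over LDAP (Python, ldap3 library).
# The host name, credentials, and base DN below are hypothetical examples.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldap.example.org", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=org",
                  password="secret", auto_bind=True)

# Find everyone whose display name contains the partial string the user typed.
partial_name = "smi"
conn.search(search_base="ou=people,dc=example,dc=org",
            search_filter=f"(cn=*{partial_name}*)",
            search_scope=SUBTREE,
            attributes=["cn", "mail", "telephoneNumber", "title"])

for entry in conn.entries:
    # Each entry exposes the requested attributes, e.g. entry.mail
    print(entry.cn, entry.mail, entry.telephoneNumber, entry.title)

conn.unbind()
```

An email client would typically run such a query as the user types, then offer the returned mail attributes as completion candidates.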
https://en.wikipedia.org/wiki/Floating-gate%20MOSFET
The floating-gate MOSFET (FGMOS), also known as a floating-gate MOS transistor or floating-gate transistor, is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) where the gate is electrically isolated, creating a floating node in direct current, and a number of secondary gates or inputs are deposited above the floating gate (FG) and are electrically isolated from it. These inputs are only capacitively connected to the FG. Since the FG is surrounded by highly resistive material, the charge contained in it remains unchanged for long periods of time, nowadays typically longer than 10 years. Usually Fowler-Nordheim tunneling and hot-carrier injection mechanisms are used to modify the amount of charge stored in the FG. The FGMOS is commonly used as a floating-gate memory cell, the digital storage element in EPROM, EEPROM and flash memory technologies. Other uses of the FGMOS include neuronal computational elements in neural networks, analog storage elements, digital potentiometers and single-transistor DACs. History The first MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, and presented in 1960. The first report of a FGMOS was later made by Dawon Kahng and Simon Min Sze at Bell Labs, and dates from 1967. The earliest practical application of the FGMOS was the floating-gate memory cell, which Kahng and Sze proposed could be used to produce reprogrammable ROM (read-only memory). Initial applications of the FGMOS were in digital semiconductor memory, storing nonvolatile data in EPROM, EEPROM and flash memory. In 1989, Intel employed the FGMOS as an analog nonvolatile memory element in its electrically trainable artificial neural network (ETANN) chip, demonstrating the potential of using FGMOS devices for applications other than digital memory. Three research accomplishments laid the groundwork for much of the current FGMOS circuit development: Thomsen and Brooke's demonstration and use of electron tunneling in a standard CMOS double-pol
https://en.wikipedia.org/wiki/Nanocomputer
Nanocomputer refers to a computer smaller than the microcomputer, which is smaller than the minicomputer. Microelectronic components that are at the core of all modern electronic devices employ semiconductor transistors. The term nanocomputer is increasingly used to refer to general computing devices of size comparable to a credit card. Modern single-board computers such as the Raspberry Pi and Gumstix would fall under this classification. Arguably, smartphones and tablets would also be classified as nanocomputers. Future computers with features smaller than 10 nanometers Die shrink has been more or less continuous since around 1970. A few years later, the 6 μm process allowed the making of desktop computers, known as microcomputers. Over the next 40 years, Moore's Law brought features 1/100th the size, or ten thousand times as many transistors per square millimeter, putting smartphones in every pocket. Eventually computers will be developed with fundamental parts that are no bigger than a few nanometers. Nanocomputers might be built in several ways, using mechanical, electronic, biochemical, or quantum nanotechnology. There used to be a consensus among hardware developers that nanocomputers were unlikely to be made of semiconductor transistors, as these seem to perform significantly less well when shrunk to sizes under 100 nanometers. Nevertheless, developers reduced microprocessor features to 22 nm in April 2012. Moreover, Intel's 5 nanometer technology outlook predicts 5 nm feature size by 2022. The International Technology Roadmap for Semiconductors in the 2010s gave an industrial consensus on feature scaling following Moore's Law. A silicon-silicon bond length is 235.2 pm, which means that a 5 nm-width transistor would be about 21 silicon atoms wide. See also Nanotechnology Quantum computer Starseed launcher – interstellar nanoprobes proposal
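A quick back-of-the-envelope check of that atom-count figure (a sketch; the 235.2 pm bond length is the value quoted above):

```python
# Rough check: how many silicon-silicon bond lengths fit across a 5 nm feature?
bond_length_m = 235.2e-12   # silicon-silicon bond length, 235.2 pm
feature_m = 5e-9            # 5 nm transistor width
print(feature_m / bond_length_m)  # ~21.3, i.e. roughly 21 atoms wide
```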
https://en.wikipedia.org/wiki/AmpliFIND
AmpliFIND is an acoustic fingerprinting service and a software development kit developed by the US company MusicIP. MusicIP first marketed their fingerprinting algorithm and service as MusicDNS. In 2006, MusicIP reported that the MusicDNS database had more than 22 million fingerprints of digital audio recordings. One of their customers was MetaBrainz Foundation, a non-profit company that used MusicDNS in their MusicBrainz and MusicBrainz Picard software products. Even so, MusicIP dissolved in 2008. The company's CEO, Andrew Stess, bought the rights to MusicDNS, renamed the software to AmpliFIND, and started a new company called AmpliFIND Music Services. In 2011, Stess sold AmpliFIND to Sony, who incorporated it into the digital music service offerings of their Gracenote division. Tribune Media subsequently purchased Gracenote, including the MusicDNS software. How MusicDNS identifies a recording To use the MusicDNS service, software developers write a computer program that incorporates an open-source software library called LibOFA. This library implements the Open Fingerprint Architecture, a specification developed during 2000–05 by MusicIP's previous incarnation, Predixis Corporation. Through LibOFA, a program can fingerprint a recording, and submit the fingerprint to MusicDNS via the Internet. MusicDNS attempts to match the submission to fingerprints in its database. If the MusicDNS service finds an approximate match, it returns a code called a PUID (Portable Unique Identifier). This code does not contain any acoustic information; rather, it enables a computer program to retrieve identifying information (such as the song title and recording artist) from the MusicDNS database. The PUID code is a short, alphanumeric string based on the universally unique identifier standard. The source code for LibOFA is distributed under a dual license: the GNU General Public License and the Adaptive Public License. The MusicDNS software that makes the fingerprints is proprie
https://en.wikipedia.org/wiki/Shiva%20hypothesis
The Shiva hypothesis, also known as coherent catastrophism, is the idea that global natural catastrophes on Earth, such as extinction events, happen at regular intervals because of the periodic motion of the Sun in relation to the Milky Way galaxy. Initial proposal in 1979 William Napier and Victor Clube in their 1979 Nature article, ”A Theory of Terrestrial Catastrophism”, proposed the idea that gravitational disturbances caused by the Solar System crossing the plane of the Milky Way galaxy are enough to disturb comets in the Oort cloud surrounding the Solar System. This sends comets in towards the inner Solar System, which raises the chance of an impact. According to the hypothesis, this results in the Earth experiencing large impact events about every 30 million years (such as the Cretaceous–Paleogene extinction event). Later work by Rampino Starting in 1984, Michael R. Rampino published followup research on the hypothesis. Certainly Rampino was aware of Napier and Clube's earlier publication, as Rampino and Stothers' letter to Nature in 1984 references it. In the 1990s, Rampino and Bruce Haggerty renamed Napier and Clube's Theory of Terrestrial Catastrophism after Shiva, the Hindu god of destruction. In 2020, Rampino and colleagues published non-marine evidence corroborating previous marine evidence in support of the Shiva hypothesis. Similar theories The Sun's passage through the higher density spiral arms of the galaxy, rather than its passage through the plane of the galaxy, could hypothetically coincide with mass extinction on Earth. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on CO data has failed to find a correlation. The Shiva Hypothesis may have inspired yet another theory: that a brown dwarf named Nemesis causes extinctions every 26 million years, which varies slightly from 30 million years. Criticism The idea of extinction periodicity has been criticised due to the fact that the hypothesis assum
https://en.wikipedia.org/wiki/Analysis%20of%20rhythmic%20variance
In statistics, analysis of rhythmic variance (ANORVA) is a method for detecting rhythms in biological time series, published by Peter Celec (Biol Res. 2004, 37(4 Suppl A):777–82). It is a procedure for detecting cyclic variations in biological time series and quantification of their probability. ANORVA is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries.
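The premise quoted above lends itself to a simple numerical illustration. The sketch below scans candidate periods and, for each, measures the average variance within groups of samples spaced one candidate period apart; the true period should give the lowest value. This only illustrates the underlying premise and is not Celec's full ANORVA procedure (which also quantifies the probability of the detected rhythm); the synthetic data are made up.

```python
import numpy as np

def mean_within_group_variance(series, period):
    """Average variance of groups whose members lie one candidate period apart."""
    groups = [series[phase::period] for phase in range(period)]
    return float(np.mean([g.var() for g in groups if len(g) > 1]))

# Synthetic daily measurements with a 7-sample rhythm plus noise (illustrative data).
rng = np.random.default_rng(1)
t = np.arange(140)
series = np.sin(2 * np.pi * t / 7) + 0.3 * rng.standard_normal(t.size)

# Candidate periods 2..12 (multiples of the true period would also score low, so they are excluded here).
scores = {p: mean_within_group_variance(series, p) for p in range(2, 13)}
best = min(scores, key=scores.get)
print(best)   # 7 -- groups one true period apart have the lowest variance
```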
https://en.wikipedia.org/wiki/Single-base%20extension
Single-base extension (SBE) is a method for determining the identity of a nucleotide base at a specific position along a nucleic acid. The method is used to identify a single-nucleotide polymorphism (SNP). In the method, an oligonucleotide primer hybridizes to a complementary region along the nucleic acid to form a duplex, with the primer’s terminal 3’-end directly adjacent to the nucleotide base to be identified. Using a DNA polymerase, the oligonucleotide primer is enzymatically extended by a single base in the presence of all four nucleotide terminators; the nucleotide terminator complementary to the base in the template being interrogated is incorporated and identified. The presence of all four terminators suppresses misincorporation of non-complementary nucleotides. Many approaches can be taken for determining the identity of an incorporated terminator, including fluorescence labeling, mass labeling for mass spectrometry, isotope labeling, and tagging the base with a hapten and detecting chromogenically with an anti-hapten antibody-enzyme conjugate (e.g., via an ELISA format). The method was invented by Philip Goelet, Michael Knapp, Richard Douglas and Stephen Anderson while working at the company Molecular Tool. This approach was designed for high-throughput SNP genotyping and was originally called "Genetic Bit Analysis" (GBA). Illumina, Inc. utilizes this method in their Infinium technology (http://www.illumina.com/technology/beadarray-technology/infinium-hd-assay.html) to measure DNA methylation levels in the human genome.
https://en.wikipedia.org/wiki/Personal%20data%20manager
A personal data manager (PDM) is a portable hardware tool enabling secure storage and easy access to user data. It can also be an application located on a portable smart device or PC, enabling novice end-users to directly define, classify, and manipulate a universe of information objects. Usually PDMs include password management software, web-browser favorites and cryptographic software. Advanced PDMs can also store VPN and Terminal Services settings, address books, and other data. A PDM can also store and launch several portable software applications. Examples Companies such as Salmon Technologies and their SalmonPDM application have been innovative in creating personalized directory structures to aid/prompt individuals where to store key typical pieces of information, such as legal documents, education/schooling information, medical information, property/vehicle bills, service contracts, and more. The process of creating directory structures that map to individual/family unit types, such as Child, Adult, Couple, Family with Children/Dependents, is referred to as Personal Directory Modeling. The Databox Project is academia-based research into developing "an open-source personal networked device, augmented by cloud-hosted services, that collates, curates, and mediates access to an individual’s personal data by verified and audited third party applications and services." See also FreedomBox example project Personal information manager
https://en.wikipedia.org/wiki/Microporous%20material
A microporous material is a material containing pores with diameters less than 2 nm. Examples of microporous materials include zeolites and metal-organic frameworks. Porous materials are classified into several kinds by their pore size. The recommendations of a panel convened by the International Union of Pure and Applied Chemistry (IUPAC) are: Microporous materials have pore diameters of less than 2 nm. Mesoporous materials have pore diameters between 2 nm and 50 nm. Macroporous materials have pore diameters of greater than 50 nm. Micropores may be defined differently in other contexts. For example, in the context of porous aggregations such as soil, micropores are defined as cavities with sizes less than 30 μm. Uses in laboratories Microporous materials are often used in laboratory environments to facilitate contaminant-free exchange of gases. Mold spores, bacteria, and other airborne contaminants will become trapped, while gases are allowed to pass through the material. This allows for a sterile environment within the contained area. Other uses Microporous media are used in large-format printing applications, normally with a pigment-based ink, to maintain colour balance and life expectancy of the resultant printed image. Microporous materials are also used as high-performance insulation in applications ranging from homes to metal furnaces requiring material that can withstand more than 1000 °C. See also Characterisation of pore space in soil Nanoporous materials Conjugated microporous polymer, a type of microporous material
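A trivial sketch of the IUPAC size classification listed above (thresholds in nanometres as given in the text; the handling of exact boundary values is an assumption):

```python
def classify_pore(diameter_nm: float) -> str:
    """Classify a pore by diameter according to the IUPAC ranges quoted above."""
    if diameter_nm < 2:
        return "microporous"
    elif diameter_nm <= 50:
        return "mesoporous"
    else:
        return "macroporous"

print(classify_pore(0.7))   # zeolite-scale pore -> microporous
print(classify_pore(10))    # -> mesoporous
print(classify_pore(200))   # -> macroporous
```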
https://en.wikipedia.org/wiki/Meshfree%20methods
In the field of numerical analysis, meshfree methods are those that do not require connection between nodes of the simulation domain, i.e. a mesh, but are rather based on interaction of each node with all its neighbors. As a consequence, original extensive properties such as mass or kinetic energy are no longer assigned to mesh elements but rather to the single nodes. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. The absence of a mesh allows Lagrangian simulations, in which the nodes can move according to the velocity field. Motivation Numerical methods such as the finite difference method, finite-volume method, and finite element method were originally defined on meshes of data points. In such a mesh, each point has a fixed number of predefined neighbors, and this connectivity between neighbors can be used to define mathematical operators like the derivative. These operators are then used to construct the equations to simulate—such as the Euler equations or the Navier–Stokes equations. But in simulations where the material being simulated can move around (as in computational fluid dynamics) or where large deformations of the material can occur (as in simulations of plastic materials), the connectivity of the mesh can be difficult to maintain without introducing error into the simulation. If the mesh becomes tangled or degenerate during simulation, the operators defined on it may no longer give correct values. The mesh may be recreated during simulation (a process called remeshing), but this can also introduce error, since all the existing data points must be mapped onto a new and different set of data points. Meshfree methods are intended to remedy these problems. Meshfree methods are also useful for: Simulations where creating a useful mesh from the geometry of a complex 3D object may be especially difficult or require human assistance Simulations where nodes ma
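To make the node-based picture concrete, here is a minimal sketch in the spirit of smoothed-particle hydrodynamics, one common meshfree method: each node carries mass, and a field such as density is evaluated at a node by a kernel-weighted sum over its neighbours rather than over mesh elements. The kernel choice and particle data are illustrative assumptions, not a specific scheme from the text.

```python
import numpy as np

def gaussian_kernel(r, h):
    """Simple smoothing kernel of width h (2D normalization)."""
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def density_at_nodes(positions, masses, h):
    """Estimate density at every node as a kernel-weighted sum over all nodes.

    positions: (n, 2) array of node coordinates
    masses:    (n,) array of node masses
    """
    diffs = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)                  # pairwise distances
    weights = gaussian_kernel(dists, h)                     # neighbour weights
    return weights @ masses                                 # rho_i = sum_j m_j W(r_ij, h)

# A few randomly placed nodes with equal mass; no mesh or connectivity is needed,
# so the nodes could be moved with a velocity field (Lagrangian simulation) and
# the same formula would still apply.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(100, 2))
rho = density_at_nodes(pos, np.full(100, 0.01), h=0.1)
print(rho.mean())
```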
https://en.wikipedia.org/wiki/Signaling%20compression
In data compression, signaling compression, or SigComp, is a compression method designed especially for compressing text-based communication data such as SIP or RTSP. SigComp was originally defined in RFC 3320 and was later updated by RFC 4896. A Negative Acknowledgement Mechanism for Signaling Compression is defined in RFC 4077. The SigComp work is performed in the ROHC working group in the transport area of the IETF. Overview The SigComp specifications describe a compression scheme that sits between the application layer and the transport layer (e.g. between SIP and UDP). It is built around a virtual machine, the Universal Decompressor Virtual Machine (UDVM), which executes a specific set of commands optimized for decompression. One strong point of SigComp is that the bytecode needed to decode messages can be sent over SigComp itself, which allows any kind of compression scheme to be used, provided it is expressed as bytecode for the UDVM. Thus any SigComp-compatible device may use compression mechanisms that did not exist when it was released, without any firmware change. Additionally, some decoders may already have been standardised, so SigComp can refer to that code and it does not need to be sent over the connection. To ensure that a message is decodable, the only requirement is that the UDVM code is available; compression of messages is performed outside the virtual machine, so native code can be used. To associate state with a particular application conversation (e.g. a given SIP session), a compartment mechanism is used, so a given application may have any number of different, independent conversations while persisting the session state (as needed/specified by the compression scheme and UDVM code). General architecture
https://en.wikipedia.org/wiki/Sun%20Java%20System%20Directory%20Server
The Sun Java System Directory Server is a discontinued LDAP directory server and DSML server written in C and originally developed by Sun Microsystems. The Java System Directory Server is a component of the Java Enterprise System. Earlier iterations of Sun Java System Directory Server were known as Sun ONE Directory Server, iPlanet Directory Server, and, before that, Netscape Directory Server. Sun Java System Directory Server became Sun Directory Server Enterprise Edition and is currently known as Oracle Directory Server Enterprise Edition (ODSEE). The software was available free of charge for perpetual usage in individual, commercial, service provider, or research and instructional environments. It is still available for download at the Oracle website, the new official site for Sun products; however only the latest version (DSEE 7, rebranded as ODSEE 11.1.1.5.0) can be found in this site. Sun started developing OpenDS in Java in 2011, due to too many issues with developing Sun Java System Directory Server with the C language. The code base has not been updated since 2011. Supported Internet standards Directory Server supports the following RFCs: 2079, 2246, 2247, 2307, 2713, 2788, 2798, 2831, 2849, 2891, 3045, 3062, 3296, 3829, 3866, 4370, 4422, 4505, 4511, 4512, 4513, 4514, 4515, 4516, 4517, 4519, 4522, 4524, and 4532. Supported platforms Directory Server is supported by Sun on the following platforms: Sun Solaris 9 and 10 Operating Systems Sun Solaris 10 with Trusted Extensions Sun OpenSolaris 2009.06 Red Hat Enterprise Linux 4 and 5 SuSE Linux Enterprise Server 10 Hewlett-Packard HP-UX 11.23 (PA-RISC) Microsoft Windows Server 2003 and 2008 Standard Edition and Enterprise Edition See also OpenLDAP 389 Directory Server Oracle Identity Management Oracle Internet Directory List of LDAP software
https://en.wikipedia.org/wiki/LU%20decomposition
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition was introduced by the Polish astronomer Tadeusz Banachiewicz in 1938. To quote: "It appears that Gauss and Doolittle applied the method [of elimination] only to symmetric equations. More recent authors, for example, Aitken, Banachiewicz, Dwyer, and Crout … have emphasized the use of the method, or variations of it, in connection with non-symmetric problems … Banachiewicz … saw the point … that the basic problem is really one of matrix factorization, or “decomposition” as he called it." It is also referred to as LR decomposition (factors into left and right triangular matrices). Definitions Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular matrix U: $A = LU$. In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition looks like this: $\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} \ell_{11} & 0 & 0 \\ \ell_{21} & \ell_{22} & 0 \\ \ell_{31} & \ell_{32} & \ell_{33} \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}$. Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For example, it is easy to verify (by expanding the matrix multiplication) that $a_{11} = \ell_{11} u_{11}$. If $a_{11} = 0$, then at least one of $\ell_{11}$ and $u_{11}$ has to be zero, which implies that either L or U is singular. This is impossible if A is nonsingular (invertible). This is a procedural problem. It can be removed by simply reordering the rows of A so
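As a concrete illustration of the factorization just described, here is a short NumPy sketch of Doolittle-style LU decomposition with partial pivoting (row reordering), which sidesteps the zero-pivot problem mentioned above. It is a teaching sketch, not a production routine (libraries such as scipy.linalg.lu provide that).

```python
import numpy as np

def lu_decompose(A):
    """Return P, L, U with P @ A = L @ U, using partial pivoting.

    L is unit lower triangular, U is upper triangular, P is a permutation matrix.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    U = A
    for k in range(n - 1):
        # Pivot: bring the largest remaining entry in column k to the diagonal.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate entries below the pivot, recording the multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # fails without pivoting, since a11 = 0
P, L, U = lu_decompose(A)
print(np.allclose(P @ A, L @ U))          # True
```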
https://en.wikipedia.org/wiki/Submarine%20Command%20System
SMCS, the Submarine Command System, was first created for the United Kingdom Royal Navy's Vanguard-class submarines as a tactical information system and a torpedo weapon control system. Versions have now also been installed on all active Royal Navy submarine classes. Initial Phase: SMCS for Vanguard class With the decision in 1983 to build a new class of submarine to carry the Trident missile system, the UK Ministry of Defence (MoD) ran an open competition for the command system. Up to that point all Royal Navy (RN) ships and submarines had command systems built by Ferranti using custom-built electronics and specialised proprietary processors. In a departure from previous practice, which had favoured 'preferred contractor' policies, the competition was won by a new company called Gresham-CAP, leading a consortium of Gresham-Lion (now part of Ultra Electronics plc) and CAP Scientific. The consortium proposed a novel distributed processing system based on commercial off-the-shelf (COTS) processors, with a modular software architecture largely written in the Ada programming language. Each set of Initial Phase SMCS equipment has multiple computer nodes. At the centre of the system there is an Input/Output Node (which provides interfaces to weapons and sensors) and a Central Services Node (which holds fast numeric processors). Each central node is duplicated to create a fault-tolerant system which is dual modular redundant. The Human-Computer Interface is provided by Multi Function Consoles and some additional terminals. The dual redundant central nodes are linked to each other and to the consoles via a dual redundant fibre optic LAN. In the Initial Phase equipment fitted to the Vanguard-class boats, most processing is done by Intel 80386 single-board computers, each with its own Ada run-time environment. CAP Scientific created a complex layer of middleware to link the many processors together. At the time, SMCS was the largest Ada project yet seen. As a pioneering user of Ada, the SMCS project encount
https://en.wikipedia.org/wiki/Spectral%20energy%20distribution
A spectral energy distribution (SED) is a plot of energy versus frequency or wavelength of light (not to be confused with a 'spectrum' of flux density vs frequency or wavelength). It is used in many branches of astronomy to characterize astronomical sources. For example, in radio astronomy SEDs are used to show the emission from synchrotron radiation, free-free emission and other emission mechanisms. In infrared astronomy, SEDs can be used to classify young stellar objects. Detector for spectral energy distribution The count rates observed from a given astronomical radiation source have no simple relationship to the flux from that source, such as might be incident at the top of the Earth's atmosphere. This lack of a simple relationship is due in no small part to the complex properties of radiation detectors. These detector properties can be divided into those that merely attenuate the beam (the residual atmosphere between source and detector, absorption in the detector window when present, and the quantum efficiency of the detecting medium) and those that redistribute the beam in detected energy (such as fluorescent photon escape phenomena and the inherent energy resolution of the detector). See also Astronomical radio source Astronomical X-ray sources Background radiation Bremsstrahlung Cosmic microwave background spectral distortions Cyclotron radiation Electromagnetic radiation Synchrotron radiation Wavelength dispersive X-ray spectroscopy
https://en.wikipedia.org/wiki/Spectral%20index
In astronomy, the spectral index of a source is a measure of the dependence of radiative flux density (that is, radiative flux per unit of frequency) on frequency. Given frequency $\nu$ in Hz and radiative flux density $S_\nu$ in Jy, the spectral index $\alpha$ is given implicitly by $S_\nu \propto \nu^{\alpha}$. Note that if flux does not follow a power law in frequency, the spectral index itself is a function of frequency. Rearranging the above, we see that the spectral index is given by $\alpha(\nu) = \frac{\partial \log S_\nu(\nu)}{\partial \log \nu}$. Clearly the power law can only apply over a certain range of frequency because otherwise the integral over all frequencies would be infinite. Spectral index is also sometimes defined in terms of wavelength $\lambda$. In this case, the spectral index is given implicitly by $S_\lambda \propto \lambda^{\alpha_\lambda}$, and at a given wavelength, the spectral index may be calculated by taking the derivative $\alpha_\lambda(\lambda) = \frac{\partial \log S_\lambda(\lambda)}{\partial \log \lambda}$. The spectral index defined using $S_\lambda$, which we may call $\alpha_\lambda$, differs from the index $\alpha$ defined using $S_\nu$. The total flux between two frequencies or wavelengths is the same whether it is computed as an integral of $S_\nu$ over frequency or of $S_\lambda$ over wavelength, which implies that $\alpha_\lambda = -\alpha - 2$. The opposite sign convention is sometimes employed, in which the spectral index is given by $S_\nu \propto \nu^{-\alpha}$. The spectral index of a source can hint at its properties. For example, using the positive sign convention, the spectral index of the emission from an optically thin thermal plasma is -0.1, whereas for an optically thick plasma it is 2. Therefore, a spectral index of -0.1 to 2 at radio frequencies often indicates thermal emission, while a steep negative spectral index typically indicates synchrotron emission. It is worth noting that the observed emission can be affected by several absorption processes that affect the low-frequency emission the most; the reduction in the observed emission at low frequencies might result in a positive spectral index even if the intrinsic emission has a negative index. Therefore, it is not straightforward to associate positive spectral indices with thermal emission. Spectral index of thermal emission At radio frequencies (i.e. in the low-frequency, long-wavelength limit), where the Rayleigh–Jeans
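A minimal sketch of estimating a spectral index from two flux-density measurements, using the convention $S_\nu \propto \nu^{\alpha}$ from the text; the example numbers are made up.

```python
import math

def spectral_index(s1_jy, nu1_hz, s2_jy, nu2_hz):
    """Two-point spectral index alpha, defined by S_nu proportional to nu**alpha."""
    return math.log(s2_jy / s1_jy) / math.log(nu2_hz / nu1_hz)

# Hypothetical measurements: 10 Jy at 1.4 GHz and 3 Jy at 5 GHz.
alpha = spectral_index(10.0, 1.4e9, 3.0, 5.0e9)
print(round(alpha, 2))   # about -0.95, a steep negative index suggestive of synchrotron emission
```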
https://en.wikipedia.org/wiki/Branched%20DNA%20assay
In biology, a branched DNA assay is a signal amplification assay (as opposed to a target amplification assay) that is used to detect nucleic acid molecules. Method A branched DNA assay begins with a dish or some other solid support (e.g., a plastic dipstick). The dish is peppered with small, single stranded DNA molecules (or chains) that stick out into the solution. These are known as capture probe DNA molecules. Next, an extender DNA molecule is added. Each extender has two domains; one that hybridizes to the capture DNA molecule and one that sticks out above the surface. The purpose of the extender is two-fold. First, it creates more available surface area for target DNA molecules to bind, and second, it allows the assay to be easily adapted to detect a variety of target DNA molecules. Once the capture and extender molecules are in place and they have hybridized, the sample can be added. Target molecules in the sample will bind to the extender molecule. This results in a base peppered with capture probes, which are hybridized to extender probes, which in turn are hybridized to target molecules. At this point, signal amplification takes place. A label extender DNA molecule is added that has two domains (similar to the first extender). The label extender hybridizes to the target and to a pre-amplifier molecule. The pre-amplifier molecule has two domains. First, it binds to the label extender and second, it binds to the amplifier molecule. An example amplifier molecule is an oligonucleotide chain bound to the enzyme alkaline phosphatase. Diagrammatically, the process can be represented as Base → Capture Probe → Extender → Target → label extender → pre-amplifier → amplifier Uses and Advantages The assay can be used to detect and quantify many types of RNA or DNA target. In the assay, branched DNA is mixed with a sample to be tested. The detection is done using a non-radioactive method and does not require preamplification of the nucleic acid to be detected. The ass
https://en.wikipedia.org/wiki/Object-Oriented%20Software%20Construction
Object-Oriented Software Construction is a book by Bertrand Meyer, widely considered a foundational text of object-oriented programming. The first edition was published in 1988; the second, extensively revised and expanded edition (more than 1300 pages), in 1997. Numerous translations are available, including Dutch (first edition only), French (1+2), German (1), Italian (1), Japanese (1+2), Persian (1), Polish (2), Romanian (1), Russian (2), Serbian (2), and Spanish (2). The book has been cited thousands of times in computer science literature. The book won a Jolt award in 1994. Unless otherwise indicated, descriptions below apply to the second edition. Focus The book, often known as "OOSC", presents object technology as an answer to major issues of software engineering, with a special emphasis on addressing the software quality factors of correctness, robustness, extendibility and reusability. It starts with an examination of the issues of software quality, then introduces abstract data types as the theoretical basis for object technology and proceeds with the main object-oriented techniques: classes, objects, genericity, inheritance, Design by Contract, concurrency, and persistence. It includes extensive discussions of methodological issues. Table of contents Notation The first edition of the book used Eiffel for the examples and served as a justification of the language design choices for Eiffel. The second edition also uses Eiffel as its notation, but in an effort to separate the notation from the concepts it does not name the language until the Epilogue, on page 1162, where "Eiffel" appears as the last word. A few months after publication of the second edition, a reader posted on Usenet his discovery that the book's 36 chapters start, in succession, with the letters "E", "I", "F", "F", "E", "L", a pattern repeated 6 times. In addition, in the Appendix, titled "Epilogue, In Full Frankness Exposing the Language" (note the initials), the first letters of
https://en.wikipedia.org/wiki/Turing%20machine%20examples
The following are examples to supplement the article Turing machine. Turing's very first example The following table is Turing's very first example (Alan Turing 1937): "1. A machine can be constructed to compute the sequence 0 1 0 1 0 1..." (0 <blank> 1 <blank> 0...) (Undecidable p. 119) With regard to what actions the machine actually does, Turing (1936) (Undecidable p. 121) states the following: "This [example] table (and all succeeding tables of the same kind) is to be understood to mean that for a configuration described in the first two columns the operations in the third column are carried out successively, and the machine then goes over into the m-configuration in the final column." (Undecidable p. 121) He makes this very clear when he reduces the above table to a single instruction called "b" (Undecidable p. 120), but his instruction consists of 3 lines. Instruction "b" has three different symbol possibilities {None, 0, 1}. Each possibility is followed by a sequence of actions until we arrive at the rightmost column, where the final m-configuration is "b": As observed by a number of commentators, including Turing (1937) himself and, e.g., Post (1936), Post (1947), Kleene (1952), and Wang (1954), the Turing instructions are not atomic — further simplifications of the model can be made without reducing its computational power; see more at Post–Turing machine. As stated in the article Turing machine, Turing proposed that his table be further atomized by allowing only a single print/erase followed by a single tape movement L/R/N. He gives us this example of the first little table converted (Undecidable, p. 127): Turing's statement still implies five atomic operations. At a given instruction (m-configuration) the machine: (1) observes the tape-symbol underneath the head; (2) based on the observed symbol goes to the appropriate instruction-sequence to use; (3) prints symbol Sj or erases or does nothing; (4) moves the tape left, right or not at all; and (5) goes to the final m-configuration
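The table itself is not reproduced in this excerpt, but Turing's first machine is simple enough to simulate directly. The sketch below encodes the usual rendering of that machine (states b, c, e, f, each printing or skipping and moving right) and runs it for a few steps; the exact state names follow standard presentations and are an assumption, not taken from the text above.

```python
# A tiny simulator for Turing's first example machine, which prints
# the sequence 0 _ 1 _ 0 _ 1 ... on alternate squares, moving right forever.
# Transition table: state -> (symbol_to_print, move, next_state); the scanned
# square is always blank for this particular machine.
TABLE = {
    "b": ("0", +1, "c"),
    "c": (None, +1, "e"),   # print nothing, just move right
    "e": ("1", +1, "f"),
    "f": (None, +1, "b"),
}

def run(steps):
    tape = {}          # sparse tape: position -> symbol
    pos, state = 0, "b"
    for _ in range(steps):
        symbol, move, next_state = TABLE[state]
        if symbol is not None:
            tape[pos] = symbol
        pos += move
        state = next_state
    # Render the visited portion of the tape, blanks shown as underscores.
    return "".join(tape.get(i, "_") for i in range(0, pos))

print(run(12))   # -> 0_1_0_1_0_1_
```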
https://en.wikipedia.org/wiki/Nemesis%20%28operating%20system%29
Nemesis was an operating system that was designed by the University of Cambridge, the University of Glasgow, the Swedish Institute of Computer Science and Citrix Systems. Nemesis was conceived with multimedia uses in mind. It was designed with a small lightweight kernel, using shared libraries to perform functions that most operating systems perform in the kernel. This reduces the processing that is performed in the kernel on behalf of application processes, transferring the activity to the processes themselves and facilitating accounting for resource usage. The ISAs that Nemesis supports include x86 (Intel i486, Pentium, Pentium Pro, and Pentium II), Alpha and ARM (StrongARM SA–110). Nemesis also runs on evaluation boards (21064 and 21164). See also Exokernel Xen Kernel-wide design approaches
https://en.wikipedia.org/wiki/Noninvasive%20genotyping
Noninvasive genotyping is a modern technique for obtaining DNA for genotyping that is characterized by the indirect sampling of specimens, not requiring harm to, handling of, or even the presence of the organism of interest. Beginning in the early 1990s, with the advent of PCR, researchers have been able to obtain high-quality DNA samples from small quantities of hair, feathers, scales, or excrement. These noninvasive samples are an improvement over older allozyme and DNA sampling techniques that often required larger samples of tissue or the destruction of the studied organism. Noninvasive genotyping is widely utilized in conservation efforts, where capture and sampling may be difficult or disruptive to behavior. Additionally, in medicine, this technique is being applied in humans for the diagnosis of genetic disease and early detection of tumors. In this context, invasiveness takes on a separate definition, under which noninvasive sampling also includes simple blood samples. Conservation In conservation, noninvasive genotyping has been used to supplement traditional techniques with broadly ranging levels of success. Modern DNA amplification methods allow researchers to use fecal or hair samples collected from the field to assess basic information about the specimen, including sex or species. Despite the potential that noninvasive genotyping has in conservation genetics efforts, the efficiency of this method is in question, as field samples often suffer from degradation and contamination or are difficult to procure. For instance, a team of researchers successfully used coyote fecal samples to estimate the abundance of a population in Georgia, thereby avoiding the substantial difficulty and consequences involved in trapping and procuring samples from the animals. Medicine Fetal Genotyping The most common use of noninvasive genotyping in medicine is non-invasive prenatal diagnosis (NIPD), which provides an alternative to riskier techniques such as amniocentesis. With the
https://en.wikipedia.org/wiki/Heteroclinic%20network
In mathematics, a heteroclinic network is an invariant set in the phase space of a dynamical system. It can be thought of loosely as the union of more than one heteroclinic cycle. Heteroclinic networks arise naturally in a number of different types of applications, including fluid dynamics and population dynamics. The dynamics of trajectories near heteroclinic networks is intermittent: trajectories spend a long time performing one type of behaviour (often, close to equilibrium), before switching rapidly to another type of behaviour. This type of intermittent switching behaviour has led to several different groups of researchers using them as a way to model and understand various types of neural dynamics.
https://en.wikipedia.org/wiki/John%20H.%20Smith%20%28mathematician%29
John Howard Smith is an American mathematician and retired professor of mathematics at Boston College. He received his Ph.D. from the Massachusetts Institute of Technology in 1963, under the supervision of Kenkichi Iwasawa. In voting theory, he is known for the Smith set, the smallest nonempty set of candidates such that, in every pairwise matchup (two-candidate election/runoff) between a member and a non-member, the member is the winner by majority rule, and for the Smith criterion, a property of certain election systems in which the winner is guaranteed to belong to the Smith set. He has also made contributions to spectral graph theory and additive number theory.
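The Smith set described above can be computed directly from pairwise election results. The sketch below is one straightforward (not optimized) way to do it, assuming ranked ballots as input: it seeds the set with the candidate(s) having the most pairwise wins, which always lie in the Smith set, and then expands until every member beats every non-member.

```python
from itertools import combinations

def smith_set(candidates, ballots):
    """Return the Smith set, given ranked ballots (each ballot lists all candidates, best first)."""
    # Pairwise tallies: wins[x][y] = number of voters ranking x above y.
    wins = {x: {y: 0 for y in candidates if y != x} for x in candidates}
    for ballot in ballots:
        rank = {c: i for i, c in enumerate(ballot)}
        for x, y in combinations(candidates, 2):
            if rank[x] < rank[y]:
                wins[x][y] += 1
            else:
                wins[y][x] += 1

    def beats(x, y):
        return wins[x][y] > wins[y][x]

    # Seed with the candidates who win the most pairwise matchups, then add any
    # outsider who is not beaten by some current member, until the set is closed.
    score = {x: sum(beats(x, y) for y in candidates if y != x) for x in candidates}
    best = max(score.values())
    smith = {x for x in candidates if score[x] == best}
    changed = True
    while changed:
        changed = False
        for y in candidates:
            if y not in smith and any(not beats(x, y) for x in smith):
                smith.add(y)
                changed = True
    return smith

# Example: a three-way cycle A > B > C > A plus a weak candidate D beaten by everyone.
candidates = ["A", "B", "C", "D"]
ballots = (
    [["A", "B", "C", "D"]] * 4 +
    [["B", "C", "A", "D"]] * 3 +
    [["C", "A", "B", "D"]] * 2
)
print(sorted(smith_set(candidates, ballots)))   # ['A', 'B', 'C']
```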
https://en.wikipedia.org/wiki/Deep%20Dungeon
Deep Dungeon is a series of role-playing video games developed by HummingBirdSoft. The first two installments were released on the Family Computer Disk System by Square's label DOG; the third one was released on the regular Family Computer by Square directly and the final one by Asmik. Games in the series Madō Senki is a "dungeon crawler" presented in a first-person perspective, similar to Wizardry. Players navigate nondescript, maze-like corridors in their bid to find the princess. The game was released exclusively in Japan, but on April 15, 2006, Deep Dungeon was unofficially translated into English. Madō Senki is set in the town of Dorl. One day, monsters raided the town, stealing both the treasures and Princess Etna's soul. Despite the attempts of brave warriors to retrieve her soul, none have been successful. In the dungeon, the player is given a command list. The player can choose to attack if an enemy is in the vicinity, view allocated items, escape from battle, examine the area for items, and talk if there are people nearby. The character's effectiveness in battle is largely determined by numerical values for attacking power (AP), defensive power (AC), and health (HP). These values are determined by the character's experience level (LEVEL), which rises after the character's accumulated experience (EX) reaches a certain point. Yūshi no Monshō, also called Deep Dungeon II: Yūshi no Monshō, is the second installment of the Deep Dungeon series. According to Square Enix, it was the first 3D dungeon crawler RPG for the Famicom console. In this game, the villain Ruu has returned. The player will need to explore an eight-floor tower (consisting of four ground floors and four underground floors) to find him and defeat him. Battles were much faster-paced in this sequel. Whereas the first game could get slow because of the very high miss rate for both player and enemies, creating prolonged battle scenes, this game improved that. It also has a much higher encounter rate, a
https://en.wikipedia.org/wiki/Cistrome
In simple terms, the cistrome refers to the collection of regulatory elements of a set of genes, including transcription factor binding-sites and histone modifications. More specifically, it is "the set of cis-acting targets of a trans-acting factor on a genome-wide scale, also known as the in vivo genome-wide location of transcription factor binding-sites or histone modifications". The term cistrome is a portmanteau of cistr (from cistron) + ome (from genome). The term cistrome was coined by investigators at the Dana–Farber Cancer Institute and Harvard Medical School. Technologies such as chromatin immunoprecipitation combined with microarray analysis "ChIP-on-chip" or with massively parallel DNA sequencing "ChIP-Seq" have greatly facilitated the definition of the cistrome of transcription factors and other chromatin-associated proteins.
https://en.wikipedia.org/wiki/Antenna%20tuning%20hut
An antenna tuning hut or helix house is a small shed at the base of a longwave or mediumwave radio transmitting antenna. It contains an antenna tuner: radio equipment for coupling the power from the feedline to the antenna. Alternative names include antenna tuning house, coupling hut, and dog house. Equipment The radio frequency current from the transmitter is supplied to the antenna through a cable called the feedline. The antenna tuning hut contains a matching network made of high-wattage capacitors and inductors (coils) that in combination match the antenna's impedance to the feedline, to efficiently transfer power into the antenna. The inductors, made of large helices of wire, give the helix house its name. The powerful radio waves near the antenna can be a hazard for workers, so the interior of the antenna tuning hut is typically shielded with copper or aluminum sheeting or wire mesh, in order to reduce radiation from the tower. In operation the components can carry voltages of several hundred thousand volts. The building may also contain lightning protection devices and power transformers for aircraft warning lights on the tower. The radio transmitter that generates the radio frequency current powering the antenna is generally located away from the antenna, to prevent the powerful radio waves from interfering with the sensitive transmitter circuits. See also Antenna tuning unit Radio frequency power transmission
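To illustrate the kind of calculation such a matching network embodies, here is a simplified sketch for a purely resistive case: matching a low antenna resistance to a 50-ohm feedline with a series-inductor / shunt-capacitor L-network. Real mediumwave matching units must also handle the antenna's reactance, so the component values and frequencies below are only illustrative assumptions.

```python
import math

def l_network(r_feed_ohm, r_ant_ohm, freq_hz):
    """Series-L / shunt-C L-network matching a low antenna resistance to a higher feedline resistance."""
    if r_ant_ohm >= r_feed_ohm:
        raise ValueError("this simple topology assumes antenna resistance below feedline resistance")
    q = math.sqrt(r_feed_ohm / r_ant_ohm - 1)     # loaded Q of the network
    x_series = q * r_ant_ohm                       # reactance of the series inductor (ohms)
    x_shunt = r_feed_ohm / q                       # reactance of the shunt capacitor (ohms)
    w = 2 * math.pi * freq_hz
    return x_series / w, 1 / (w * x_shunt)         # (inductance in henries, capacitance in farads)

# Hypothetical example: a 10-ohm monopole fed by 50-ohm line at 1 MHz (mediumwave).
L, C = l_network(50.0, 10.0, 1.0e6)
print(f"L = {L*1e6:.2f} uH, C = {C*1e9:.2f} nF")   # about 3.18 uH and 6.37 nF
```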
https://en.wikipedia.org/wiki/Immunoproteomics
Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. Examples of common applications of immunoproteomics include: the isolation and mass spectrometric identification of MHC (major histocompatibility complex) binding peptides; purification and identification of protein antigens binding specific antibodies (or other affinity reagents); and comparative immunoproteomics to identify proteins and pathways modulated by a specific infectious organism, disease or toxin. The identification of proteins in immunoproteomics is carried out by gel-based, microarray-based, and DNA-based techniques, with mass spectrometry typically being the ultimate identification method. Applications Immunology Immunoproteomics has been used to increase scientific understanding of both autoimmune disease pathology and progression. Using biochemical techniques, gene and ultimately protein expression can be measured with high fidelity. With this information, the biochemical pathways causing pathology in conditions such as multiple sclerosis and Crohn's disease can potentially be elucidated. Serum antibody identification in particular has proven to be very useful as a diagnostic tool for a number of diseases in modern medicine, in large part due to the relatively high stability of serum antibodies. Immunoproteomic techniques are additionally used for the isolation of antibodies. By identifying and proceeding to sequence antibodies, scientists are able to identify potential protein targets of said antibodies. In doing so, it is possible to determine the antigen(s) responsible for a particular immune response. Identification and engineering of antibodies involved in autoimmune disease pathology may offer novel techniques in disease therapy. Drug engineering By identifying the antigens responsible for a particular immune response, it is possible to identify viable targets for novel drugs. In addition, specific antigens can
https://en.wikipedia.org/wiki/Indicator%20bacteria
Indicator bacteria are types of bacteria used to detect and estimate the level of fecal contamination of water. They are not dangerous to human health but are used to indicate the presence of a health risk. Each gram of human feces contains approximately 100 billion (10^11) bacteria. These bacteria may include species of pathogenic bacteria, such as Salmonella or Campylobacter, associated with gastroenteritis. In addition, feces may contain pathogenic viruses, protozoa and parasites. Fecal material can enter the environment from many sources including waste water treatment plants, livestock or poultry manure, sanitary landfills, septic systems, sewage sludge, pets and wildlife. If sufficient quantities are ingested, fecal pathogens can cause disease. The variety and often low concentrations of pathogens in environmental waters make them difficult to test for individually. Public agencies therefore use the presence of other more abundant and more easily detected fecal bacteria as indicators of the presence of fecal contamination. Aside from being found in fecal matter, these bacteria can also be found in oral and gut contents. Criteria for indicator organisms The US Environmental Protection Agency (EPA) lists the following criteria for an organism to be an ideal indicator of fecal contamination: the organism should be present whenever enteric pathogens are present; it should be useful for all types of water; it should have a longer survival time than the hardiest enteric pathogen; it should not grow in water; and it should be found in warm-blooded animals’ intestines. None of the types of indicator organisms that are currently in use fits all of these criteria perfectly; however, when cost is considered, the use of indicators becomes necessary. Types of indicator organisms Commonly used indicator bacteria include total coliforms, or a subset of this group, fecal coliforms, which are found in the intestinal tracts of warm blooded animal
https://en.wikipedia.org/wiki/Proper%20forcing%20axiom
In the mathematical field of set theory, the proper forcing axiom (PFA) is a significant strengthening of Martin's axiom, where forcings with the countable chain condition (ccc) are replaced by proper forcings. Statement A forcing or partially ordered set P is proper if for all regular uncountable cardinals λ, forcing with P preserves stationary subsets of [λ]^ω. The proper forcing axiom asserts that if P is proper and Dα is a dense subset of P for each α<ω1, then there is a filter G ⊆ P such that Dα ∩ G is nonempty for all α<ω1. The class of proper forcings, to which PFA can be applied, is rather large. For example, standard arguments show that if P is ccc or ω-closed, then P is proper. If P is a countable support iteration of proper forcings, then P is proper. Crucially, all proper forcings preserve ℵ1. Consequences PFA directly implies its version for ccc forcings, Martin's axiom. In cardinal arithmetic, PFA implies 2^ℵ0 = ℵ2. PFA implies any two ℵ1-dense subsets of R are isomorphic, any two Aronszajn trees are club-isomorphic, and every automorphism of the Boolean algebra P(ω)/fin is trivial. PFA implies that the Singular Cardinals Hypothesis holds. An especially notable consequence proved by John R. Steel is that the axiom of determinacy holds in L(R), the smallest inner model containing the real numbers. Another consequence is the failure of square principles and hence existence of inner models with many Woodin cardinals. Consistency strength If there is a supercompact cardinal, then there is a model of set theory in which PFA holds. The proof uses the fact that proper forcings are preserved under countable support iteration, and the fact that if κ is supercompact, then there exists a Laver function for κ. It is not yet known how much large cardinal strength comes from PFA. Other forcing axioms The bounded proper forcing axiom (BPFA) is a weaker variant of PFA which instead of arbitrary dense subsets applies only to maximal antichains of size ω1. Martin's maximum is th
https://en.wikipedia.org/wiki/Chord%20chart
A chord chart (or chart) is a form of musical notation that describes the basic harmonic and rhythmic information for a song or tune. It is the most common form of notation used by professional session musicians playing jazz or popular music. It is intended primarily for a rhythm section (usually consisting of piano, guitar, drums and bass). In these genres the musicians are expected to be able to improvise the individual notes used for the chords (the "voicing") and the appropriate ornamentation, counter melody or bassline. In some chord charts, the harmony is given as a series of chord symbols above a traditional musical staff. The rhythmic information can be very specific and written using a form of traditional notation, sometimes called rhythmic notation, or it can be completely unspecified using slash notation, allowing the musician to fill the bar with chords or fills any way they see fit (called comping). In Nashville notation the key is left unspecified on the chart by substituting numbers for chord names. This facilitates on-the-spot key changes to songs. Chord charts may also include explicit parts written in modern music notation (such as a musical riff that the song is dependent on for character), lyrics or lyric fragments, and various other information to help the musician compose and play their part. Rhythmic notation Rhythmic notation specifies the exact rhythm in which to play or comp the indicated chords. The chords are written above the staff and the rhythm is indicated in the traditional manner, though pitch is unspecified through the use of slashes placed on the center line instead of notes. This is contrasted with the less specific slash notation. Slash notation Slash notation is a form of purposefully vague musical notation which indicates or requires that an accompaniment player or players improvise their own rhythm pattern or comp according to the chord symbol given above the staff. On the staff a slash is placed on each beat (so that th
https://en.wikipedia.org/wiki/Non-standard%20model%20of%20arithmetic
In mathematical logic, a non-standard model of arithmetic is a model of first-order Peano arithmetic that contains non-standard numbers. The term standard model of arithmetic refers to the standard natural numbers 0, 1, 2, …. The elements of any model of Peano arithmetic are linearly ordered and possess an initial segment isomorphic to the standard natural numbers. A non-standard model is one that has additional elements outside this initial segment. The construction of such models is due to Thoralf Skolem (1934). Non-standard models of arithmetic exist only for the first-order formulation of the Peano axioms; for the original second-order formulation, there is, up to isomorphism, only one model: the natural numbers themselves. Existence There are several methods that can be used to prove the existence of non-standard models of arithmetic. From the compactness theorem The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem. To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol x. The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each numeral n, the axiom x > n is included. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant x interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (since a model of a set of axioms is obviously also a model of any subset of that set of axioms), we have that our extended model is also a model of the Peano axioms. The element of this model corresponding to x cannot be a standard number, because as indicated it is larger than any standard number. Using more complex methods, it is possible to build non-standard models that
https://en.wikipedia.org/wiki/Cinoxacin
Cinoxacin is a quinolone antibiotic that has been discontinued in the U.K. as well as the United States, both as a branded drug and as a generic. The marketing authorization of cinoxacin has been suspended throughout the EU. Cinoxacin was an older synthetic antimicrobial related to the quinolone class of antibiotics, with activity similar to oxolinic acid and nalidixic acid. It was commonly used thirty years ago to treat urinary tract infections in adults. There are reports that cinoxacin had also been used to treat initial and recurrent urinary tract infections and bacterial prostatitis in dogs; however, this veterinary use was never approved by the United States Food and Drug Administration (FDA). In complicated UTI, the older gyrase inhibitors such as cinoxacin are no longer indicated. History Cinoxacin is one of the original quinolone drugs, which were introduced in the 1970s and are commonly referred to as the first-generation quinolones. This first generation also included other quinolone drugs such as pipemidic acid and oxolinic acid, but it proved to be only a marginal improvement over nalidixic acid. Cinoxacin is similar chemically (and in antimicrobial activity) to oxolinic acid and nalidixic acid. Relative to nalidixic acid, cinoxacin was found to have slightly greater inhibitory and bactericidal activity. Cinoxacin was patented in 1972 and assigned to Eli Lilly. Eli Lilly obtained approval from the FDA to market cinoxacin in the United States as Cinobac on June 13, 1980. Prior to this, Cinobac was marketed in the U.K. and Switzerland in 1979. Oclassen Pharmaceuticals (Oclassen Dermatologics) commenced sales of Cinobac in the United States and Canada in September 1992, under an agreement with Eli Lilly that granted Oclassen exclusive United States and Canadian distribution rights. Oclassen promoted Cinobac primarily to urologists for the outpatient treatment of initial and recurrent urinary tract infections and prophylaxis. Oclassen Pharmaceu
https://en.wikipedia.org/wiki/Cladosporium
Cladosporium is a genus of fungi including some of the most common indoor and outdoor molds. Some species are endophytes or plant pathogens, while others parasitize fungi. Description Species produce olive-green to brown or black colonies, and have dark-pigmented conidia that are formed in simple or branching chains. Many species of Cladosporium are commonly found on living and dead plant material, including sunflowers. The spores are wind-dispersed and they are often extremely abundant in outdoor air. Indoors, Cladosporium species may grow on surfaces when moisture is present. Cladosporium fulvum, the cause of tomato leaf mould, has been an important genetic model, in that the genetics of host resistance are understood. In the 1960s, it was estimated that the genus Cladosporium contained around 500 plant-pathogenic and saprotrophic species, but this number has since increased to over 772 species. The genus is very closely related to black yeasts in the order Dothideales. Cladosporium species are often highly osmotolerant, growing easily on media containing 10% glucose or 12–17% NaCl. They rarely grow on media containing 24% NaCl or 50% glucose and are never isolated from media with 32% NaCl or greater. Most species have very fragile spore chains, making it extremely difficult to prepare a mount for microscopic observation in which the conidial chains are preserved intact. Health effects Cladosporium species are present in the human mycobiome but are rarely pathogenic to humans. They have been reported to cause infections of the skin and toenails as well as of the sinuses and lungs, with the more common symptoms including nasal congestion, sneezing, coughing, and itchy eyes. The airborne spores of Cladosporium species are significant allergens, and in large amounts they can severely affect people with asthma and other respiratory diseases. Cladosporium species produce no major mycotoxins of concern, but do produce volatile organic compounds (VOCs) associated with odours
https://en.wikipedia.org/wiki/Central%20composite%20design
In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment. After the designed experiment is performed, linear regression is used, sometimes iteratively, to obtain results. Coded variables are often used when constructing this design. Implementation The design consists of three distinct sets of experimental runs: A factorial (perhaps fractional) design in the factors studied, each having two levels; A set of center points, experimental runs whose values of each factor are the medians of the values used in the factorial portion. This point is often replicated in order to improve the precision of the experiment; A set of axial points, experimental runs identical to the centre points except for one factor, which will take on values both below and above the median of the two factorial levels, and typically both outside their range. All factors are varied in this way. Design matrix The design matrix for a central composite design experiment involving k factors is derived from a matrix, d, containing the following three different parts corresponding to the three types of experimental runs: The matrix F obtained from the factorial experiment. The factor levels are scaled so that its entries are coded as +1 and −1. The matrix C from the center points, denoted in coded variables as (0,0,0,...,0), where there are k zeros. A matrix E from the axial points, with 2k rows. Each factor is sequentially placed at ±α and all other factors are at zero. The value of α is determined by the designer; while arbitrary, some values may give the design desirable properties. This part would look like: Then d is the vertical concatenation: The design matrix X used in linear regression is the horizontal concatenation of a column of 1s (intercept), d, and all elementwise products of a pair of c
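To make the construction concrete, the following Python sketch (not from the article; the function name, the number of centre points and the choice of α are illustrative assumptions) assembles the factorial block F, the centre block C and the axial block E, stacks them into d, and builds a full quadratic regression matrix X from d:

import numpy as np
from itertools import product

def ccd_design(k, n_center=4, alpha=None):
    # Factorial part F: all 2**k combinations of the coded levels -1 and +1.
    F = np.array(list(product([-1.0, 1.0], repeat=k)))
    # Centre part C: n_center replicated centre points (0, 0, ..., 0).
    C = np.zeros((n_center, k))
    # Axial part E: 2k runs, each factor in turn at -alpha and +alpha, all others at 0.
    if alpha is None:
        alpha = np.sqrt(k)  # the "spherical" choice; the value of alpha is a design decision
    E = np.zeros((2 * k, k))
    for i in range(k):
        E[2 * i, i] = -alpha
        E[2 * i + 1, i] = alpha
    d = np.vstack([F, C, E])  # vertical concatenation of the three parts
    # Regression matrix: intercept column, d itself, pairwise products and squared columns.
    ones = np.ones((d.shape[0], 1))
    cross = [(d[:, i] * d[:, j]).reshape(-1, 1) for i in range(k) for j in range(i + 1, k)]
    X = np.hstack([ones, d] + cross + [d ** 2])
    return d, X

d, X = ccd_design(k=3)
print(d.shape, X.shape)  # (18, 3) and (18, 10) for k = 3 with four centre points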
https://en.wikipedia.org/wiki/GNOME%20Screensaver
Up until GNOME 3.5, GNOME Screensaver was the GNOME project's official screen blanking and locking framework. With the release of GNOME 3.5.5, screen locking functionality became a function of GDM and GNOME Shell by default. History GNOME Screensaver continued to be used by the GNOME Fallback mode until GNOME Fallback was deprecated with the release of GNOME 3.8. GNOME Screensaver continues to be used in the GNOME Flashback session, a continuation of the GNOME Fallback mode. In October 2014, a member of the GNOME Flashback team requested maintainer-ship of GNOME Screensaver which would allow it to officially become part of the GNOME Flashback project. On some GNOME-based Linux distributions, GNOME Screensaver was used instead of the framework that is a part of XScreenSaver. On these systems, the screen savers themselves still came from the XScreenSaver collection, GNOME Screensaver just provided the interface. The GNOME Screensaver interface was designed for improved integration with the GNOME desktop, including themeability, language support, and human interface guidelines compliance. However, it no longer runs any screensavers and is more properly referred to as a screen blanker. Compared to the front end included with XScreenSaver, GNOME Screensaver has a simplified interface but less customizability. For instance, users may not select which screensavers to select at random – either only one is selected or the program randomly selects from the whole list. In addition, the inability to configure individual screensavers and the developers' response to this issue has been criticized by some users. It also lacks a setting to control cycling through different screensavers. In GNOME 3, GNOME Screensaver was drastically simplified, supporting only screen blanking and no graphical screen savers.
https://en.wikipedia.org/wiki/Alpha%20solenoid
An alpha solenoid (sometimes also known as an alpha horseshoe or as stacked pairs of alpha helices, abbreviated SPAH) is a protein fold composed of repeating alpha helix subunits, commonly helix-turn-helix motifs, arranged in antiparallel fashion to form a superhelix. Alpha solenoids are known for their flexibility and plasticity. Like beta propellers, alpha solenoids are a form of solenoid protein domain commonly found in the proteins comprising the nuclear pore complex. They are also common in membrane coat proteins known as coatomers, such as clathrin, and in regulatory proteins that form extensive protein-protein interactions with their binding partners. Examples of alpha solenoid structures binding RNA and lipids have also been described. Terminology and classification The term "alpha solenoid" has been used somewhat inconsistently in the literature. As originally defined, alpha solenoids were composed of helix-turn-helix motifs that stacked into an open superhelix. However, protein structural classification systems have used varying terminology; the Structural Classification of Proteins (SCOP) database describes these proteins using the term "alpha alpha superhelix". The CATH database uses the term "alpha horseshoe" for these proteins, and uses "alpha solenoid" for a somewhat different and more compact structure exemplified by the peridinin-chlorophyll binding protein. Structure Alpha solenoid proteins are composed of repeating structural units containing at least two alpha helices arranged in an antiparallel orientation. Often the repeating unit is a helix-turn-helix motif, but it can be more elaborate, as in variants with an additional helix in the turn segment. Alpha solenoids can be formed by several different types of helical tandem repeats, including HEAT repeats, Armadillo repeats, tetratricopeptide (TPR) repeats, leucine-rich repeats, and ankyrin repeats. Alpha solenoids have unusual elasticity and flexibility relative to globular proteins. They ar
https://en.wikipedia.org/wiki/TSS%20%28operating%20system%29
The IBM Time Sharing System TSS/360 is a discontinued early time-sharing operating system designed exclusively for a special model of the System/360 line of mainframes, the Model 67. Made available on a trial basis to a limited set of customers in 1967, it was never officially released as a supported product by IBM. TSS pioneered a number of novel features, some of which later appeared in more popular systems such as MVS. TSS was migrated to System/370 and 303x systems, but despite its many advances and novel capabilities, TSS failed to meet expectations and was eventually canceled. TSS/370 was used as the basis for a port of UNIX to the IBM mainframe. TSS/360 also inspired the development of the TSS/8 operating system. Novel characteristics TSS/360 was one of the first implementations of tightly-coupled symmetric multiprocessing. A pair of Model 67 mainframes shared a common physical memory space, and ran a single copy of the kernel (and application) code. An I/O operation launched by one processor could end and cause an interrupt in the other. The Model 67 used a standard 360 instruction called Test and Set to implement locks on code critical sections. It also implemented virtual memory and virtual machines using position-independent code. TSS/360 included an early implementation of a "Table Driven Scheduler" a user-configured table whose columns were parameters such as current priority, working set size, and number of timeslices used to date. The kernel would refer to this table when calculating the new priority of a thread. This later appeared in systems as diverse as Honeywell CP-V and IBM z/OS. As was standard with operating system software at the time, TSS/360 customers (such as General Motors Research Laboratories) were given full access to the entire source of the operating system code and development tools. User-developed improvements and patches were frequently incorporated into the official source code. User interface TSS provides users a co
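The table-driven scheduler can be illustrated with a short sketch (Python rather than anything resembling TSS/360 code; the column names, thresholds and priority values are invented for illustration):

# Illustrative only: a user-supplied table maps observed thread behaviour to a new priority.
SCHEDULER_TABLE = [
    # (max working-set pages, max timeslices used so far) -> new priority (lower = better)
    {"max_working_set": 50,     "max_timeslices": 10,     "new_priority": 1},
    {"max_working_set": 200,    "max_timeslices": 50,     "new_priority": 2},
    {"max_working_set": 10**9,  "max_timeslices": 10**9,  "new_priority": 3},
]

def reschedule(working_set_pages, timeslices_used):
    # Scan the table top to bottom and return the first matching row's priority.
    for row in SCHEDULER_TABLE:
        if (working_set_pages <= row["max_working_set"]
                and timeslices_used <= row["max_timeslices"]):
            return row["new_priority"]
    return SCHEDULER_TABLE[-1]["new_priority"]

print(reschedule(working_set_pages=30, timeslices_used=5))    # 1: small, interactive-looking thread
print(reschedule(working_set_pages=500, timeslices_used=80))  # 3: large, CPU-bound thread

The point of the table-driven approach is that the scheduling policy lives in data supplied by the installation rather than in the kernel code, so tuning it does not require rebuilding the system.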
https://en.wikipedia.org/wiki/Honeynet%20Project
The Honeynet Project is an international security research organization that investigates the latest cyber attacks and develops open source security tools to improve Internet security by tracking hackers' behavioral patterns. History The Honeynet Project began in 1999 as a small mailing list among a group of people. The group expanded and officially dubbed itself The Honeynet Project in June 2000. The project includes dozens of active chapters around the world, including Brazil, Indonesia, Greece, India, Mexico, Iran, Australia, Ireland, and many in the United States. Project goals The Honeynet Project has three main aims: Raise awareness of the existing threats on the Internet. Conduct research covering data analysis approaches, unique security tool development, and the gathering of data about attackers and the malicious software they use. Provide the tools and techniques used by The Honeynet Project so that other organizations can benefit. Research and development The Honeynet Project volunteers collaborate on security research efforts covering data analysis approaches, security tool development, and gathering data about hackers and malicious software. The Project's research provides sensitive information regarding attackers, including their motives, communication methods, attack timelines and their actions following a system attack. This information is provided through Know Your Enemy white papers, Project blog posts, and Scan of the Month forensic challenges. The project uses unmodified computers with the same specifications, operating systems and security as those used by many companies. These production systems are put online, and the network of volunteers scans them for attacks or suspicious activity. The findings are published on the project's site for public viewing and knowledge. See also Cyber security Honeypot (computing)
https://en.wikipedia.org/wiki/Virtual%20Museum%20of%20New%20France
The Virtual Museum of New France () is a virtual museum created and managed by the Canadian Museum of History. Its purpose is to share knowledge and raise awareness of the history, culture and legacy of early French settlements in North America. The site includes interactive maps, photos, illustrations and information based on current research into New France. This encompasses the French settlements and territories that spread from Acadia in the East through the Saint Lawrence Valley, the Great Lakes region, the Ohio Valley, and south to Louisiana from the 16th to the 18th centuries. The content is written by historical and archaeological scholars and reviewed by other experts. Different sections are devoted to colonies and empires, explorers, economic activities, population, daily life and heritage. The articles in the Virtual Museum of New France cover a variety of topics pertaining to New France, including important historical figures, territorial expansion by France and competing colonial powers, immigration, social groups, slavery, religion, food, entertainment, science, medicine and governance. The site was launched in 1997 and expanded in 2011. External links Museums established in 1997 National museums of Canada New France 1997 establishments in Canada Canadian Museum of History Corporation
https://en.wikipedia.org/wiki/Periodic%20fever%20syndrome
Periodic fever syndromes are a set of disorders characterized by recurrent episodes of systemic and organ-specific inflammation. Unlike autoimmune disorders such as systemic lupus erythematosus, in which the disease is caused by abnormalities of the adaptive immune system, people with autoinflammatory diseases do not produce autoantibodies or antigen-specific T or B cells. Instead, the autoinflammatory diseases are characterized by errors in the innate immune system. The syndromes are diverse, but tend to cause episodes of fever, joint pains, skin rashes, abdominal pains and may lead to chronic complications such as amyloidosis. Most autoinflammatory diseases are genetic and present during childhood. The most common genetic autoinflammatory syndrome is familial Mediterranean fever, which causes short episodes of fever, abdominal pain, serositis, lasting less than 72 hours. It is caused by mutations in the MEFV gene, which codes for the protein pyrin. Pyrin is a protein normally present in the inflammasome. The mutated pyrin protein is thought to cause inappropriate activation of the inflammasome, leading to release of the pro-inflammatory cytokine IL-1β. Most other autoinflammatory diseases also cause disease by inappropriate release of IL-1β. Thus, IL-1β has become a common therapeutic target, and medications such as anakinra, rilonacept, and canakinumab have revolutionized the treatment of autoinflammatory diseases. However, there are some autoinflammatory diseases that are not known to have a clear genetic cause. This includes PFAPA, which is the most common autoinflammatory disease seen in children, characterized by episodes of fever, aphthous stomatitis, pharyngitis, and cervical adenitis. Other autoinflammatory diseases that do not have clear genetic causes include adult-onset Still's disease, systemic-onset juvenile idiopathic arthritis, Schnitzler syndrome, and chronic recurrent multifocal osteomyelitis. It is likely that these diseases are mul
https://en.wikipedia.org/wiki/RF%20power%20amplifier
A radio-frequency power amplifier (RF power amplifier) is a type of electronic amplifier that converts a low-power radio-frequency signal into a higher-power signal. Typically, RF power amplifiers are used in the final stage of a radio transmitter, their output driving the antenna. Design goals often include gain, power output, bandwidth, power efficiency, linearity (low signal compression at rated output), input and output impedance matching, and heat dissipation. Amplifier classes RF amplifier circuits operate in different modes, called "classes", based on how much of the cycle of the sinusoidal radio signal the amplifier (transistor or vacuum tube) is conducting current. Some classes are class A, class AB, class B, which are considered the linear amplifier classes in which the active device is used as a controlled current source, while class C is a nonlinear class in which the active device is used as a switch. The bias at the input of the active device determines the class of the amplifier. A common trade-off in power amplifier design is the trade-off between efficiency and linearity. The previously named classes become more efficient, but less linear, in the order they are listed. Operating the active device as a switch results in higher efficiency, theoretically up to 100%, but lower linearity. Among the switch-mode classes are class D, class F and class E. The class D amplifier is not often used in RF applications because the finite switching speed of the active devices and possible charge storage in saturation could lead to a large I-V product, which deteriorates efficiency. Solid state vs. vacuum tube amplifiers Modern RF power amplifiers use solid-state devices, predominantly MOSFETs (metal–oxide–semiconductor field-effect transistors). The earliest MOSFET-based RF amplifiers date back to the mid-1960s. Bipolar junction transistors were also commonly used in the past, up until they were replaced by power MOSFETs, particularly LDMOS transistors, as the
https://en.wikipedia.org/wiki/Contact%20Conference
Contact is an annual interdisciplinary conference that brings together renowned social and space scientists, science fiction writers and artists to exchange ideas, stimulate new perspectives, and encourage serious, creative speculation about humanity's future. The intent of Contact is to promote the integration of human factors into space research and policy, to explore the intersection of science and art, and to develop ethical approaches to cross-cultural contact. Since its beginnings, the Contact conference has fostered interdisciplinary inquiries into art, literature, exploration and scientific investigation. Contact was conceived by anthropologist Jim Funaro in 1979, and the first formal conference was held in 1983 in Santa Cruz, California. Twenty-six annual events have followed, several of them held at NASA Ames Research Center. In many previous years, the COTI HI project involved teams of high school students in the creation of scientifically accurate extraterrestrial beings and in simulated encounters between two such races. Many spin-off organizations have formed, both online and as far away as Japan. One such organization is the Contact Consortium, which is focused on the medium of contact in multi-user virtual worlds on the Internet. Contact has been closely allied with the SETI Institute, and its early participants created the hypothetical planet Epona as covered in the Discovery Channel documentary Natural History of an Alien. The 27th Contact conference was held on March 31 – April 2, 2012, at the SETI Institute and the Domain Hotel in Sunnyvale, California. Beginning with the 28th conference, held in Mountain View, the organization adopted a biennial schedule. The 29th conference was held on April 1–3, 2016, at the Domain Hotel, California. External links Contact Conference homepage Contact Conference semi-official archive SETI League Announces 2005 Best Ideas Awards The Epona Project Conferences in the United States Astrobiology
https://en.wikipedia.org/wiki/Hilton%27s%20law
Hilton's law, espoused by John Hilton in a series of medical lectures given in 1860–1862, is the observation that in the study of anatomy, the nerve supplying the muscles extending directly across and acting at a given joint not only supplies the muscle, but also innervates the joint and the skin overlying the muscle. This law remains applicable to anatomy. For example, the musculocutaneous nerve supplies the elbow joint of humans with pain and proprioception fibres. It also supplies coracobrachialis, biceps brachii, brachialis, and the forearm skin close to the insertion of each of those muscles. Hilton's law arises as a result of the embryological development of humans (or indeed other animals). Hilton based his law upon his extensive anatomical knowledge and clinical experiences. As with most British surgeons of his day (1805–1878), he intensely studied anatomy. The knee joint is supplied by branches from femoral nerve, sciatic nerve, and obturator nerve because all the three nerves are supplying the muscles moving the joint. These nerves not only innervate the muscles, but also the fibrous capsule, ligaments, and synovial membrane of the knee joint. Extensions of the law Hilton's law is described above. Similar observations can be made, to extend the theory; often a nerve will supply both the muscles and skin relating to a particular joint. The observation often holds true in reverse - that is to say, a nerve that supplies skin or a muscle will often supply the applicable joint. See also John Hilton (surgeon) Hilton's Line Hilton's Muscle Hilton's Pit
https://en.wikipedia.org/wiki/MYD88
Myeloid differentiation primary response 88 (MYD88) is a protein that, in humans, is encoded by the MYD88 gene. Model organisms Model organisms have been used in the study of MYD88 function. The gene was originally discovered and cloned by Dan Liebermann and Barbara Hoffman in mice. In that species it is a universal adapter protein as it is used by almost all TLRs (except TLR 3) to activate the transcription factor NF-κB. Mal (also known as TIRAP) is necessary to recruit Myd88 to TLR 2 and TLR 4, and MyD88 then signals through IRAK. It also interacts functionally with amyloid formation and behavior in a transgenic mouse model of Alzheimer's disease. A conditional knockout mouse line, called Myd88tm1a(EUCOMM)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-one tests were carried out on homozygous mutant animals, revealing one abnormality: male mutants had an increased susceptibility to bacterial infection. Function The MYD88 gene provides instructions for making a protein involved in signaling within immune cells. The MyD88 protein acts as an adapter, connecting proteins that receive signals from outside the cell to the proteins that relay signals inside the cell. In innate immunity, the MyD88 plays a pivotal role in immune cell activation through Toll-like receptors (TLRs), which belong to large group of pattern recognition receptors (PRR). In general, these receptors sense common patterns which are shared by various pathogens – Pathogen-associated molecular pattern (PAMPs), or which are produced/released during cellular damage – damage-associated molecular patterns (DAMPs). TLRs are homologous to Toll receptors, which were first described in the onthogenesis of fruit flies Drosophila, being re
https://en.wikipedia.org/wiki/Communications%20%26%20Information%20Services%20Corps
The Communications and Information Services Corps (CIS) () – formerly the Army Corps of Signals – is one of the combat support corps of the Irish Defence Forces, the military of Ireland. It is responsible for the installation, maintenance and operation of communications and information systems for the command, control and administration of the Defence Forces, and the facilitation of accurate, real-time sharing of intelligence between the Army, Naval Service and Air Corps branches at home and overseas. The CIS Corps is headquartered at McKee Barracks, Dublin, and comes under the command of an officer of Colonel rank, known as the Director of CIS Corps. Mission Formerly the Army Corps of Signals, the Communications and Information Services Corps is responsible for the development and operation of Information Technology and Telecommunications systems in support of Defence Forces tasks. It is also responsible for coordinating all communications – radio and line – and information systems, communications research and updating of communications in line with modern developments and operational requirements. The CIS Corps is tasked with utilising networking and information technologies in order to dramatically increase Defence Force operational effectiveness through the provision of timely and accurate information to the appropriate commander, along with the real time efficient sharing of information and intelligence with the Army, Naval Service and Air Corps, as well as with multinational partners involved in international peacekeeping and other actors as required. This role includes the development and maintenance of a secure nationwide Defence Forces Telecommunications Network (DFTN), which can support both protected voice and data services, and the provision and maintenance of encrypted military communications equipment for use by Defence Forces personnel at home and abroad. CIS Corps units are dispersed throughout the DF giving Communications and IT support to each
https://en.wikipedia.org/wiki/Oophagy
Oophagy ( ) sometimes ovophagy, literally "egg eating", is the practice of embryos feeding on eggs produced by the ovary while still inside the mother's uterus. The word oophagy is formed from the classical Greek (, "egg") and classical Greek (, "to eat"). In contrast, adelphophagy is the cannibalism of a multi-celled embryo. Oophagy is thought to occur in all sharks in the order Lamniformes and has been recorded in the bigeye thresher (Alopias superciliosus), the pelagic thresher (A. pelagicus), the shortfin mako (Isurus oxyrinchus) and the porbeagle (Lamna nasus) among others. It also occurs in the tawny nurse shark (Nebrius ferrugineus), and in the family Pseudotriakidae. This practice may lead to larger embryos or prepare the embryo for a predatory lifestyle. There are variations in the extent of oophagy among the different shark species. The grey nurse shark (Carcharias taurus) practices intrauterine cannibalism, the first developed embryo consuming both additional eggs and any other developing embryos. Slender smooth-hounds (Gollum attenuatus), form egg capsules which contain 30-80 ova, within which only one ovum develops; the remaining ova are ingested and their yolks stored in its external yolk sac. The embryo then proceeds to develop normally, without ingesting further eggs. Oophagy is also used as a synonym of egg predation practised by some snakes and other animals. Similarly, the term can be used to describe the destruction of non-queen eggs in nests of certain social wasps, bees, and ants. This is seen in the wasp species Polistes biglumis and Polistes humilis. Oophagy has been observed in Leptothorax acervorum and Parachartergus fraternus, where oophagy is practiced to increase energy circulation and provide more dietary protein. Polistes fuscatus use oophagy as a method to establish a dominance hierarchy; dominant females will eat the eggs of subordinate females such that they no longer produce eggs, possibly due to the unnecessary expenditure
https://en.wikipedia.org/wiki/Power%20harassment
Power harassment is a form of harassment and workplace bullying in which someone in a position of greater power uses that power to harass or bully a lower-ranking person. It includes a range of behavior from mild irritation and annoyances to serious abuses which can even involve forced activity beyond the boundaries of the job description. Prohibited in some countries, power harassment is considered a form of illegal discrimination and political and psychological abuse. Types of power harassment include physical or psychological attacks, segregation, excessive or demeaning work assignments, and intrusion upon the victim's personal life. Power harassment may combine with other forms of bias and harassment, including sexual harassment. In the context of sexual harassment, power harassment is distinguished from contra power harassment, in which the harasser is of lower rank than that of the victim, and peer harassment, in which the victim and harasser are of the same rank. The term "political power harassment" was coined by Ramona Rush in a 1993 paper on sexual harassment in academia. Because it operates to reinforce and justify an existing hierarchy, political power harassment can be difficult to assess. By country Japan Although power harassment is not unique to Japan, it has received significant attention in Japan as a policy and legal problem since the 1990s. A government survey in 2016 found that more than 30% of workers had experienced power harassment in the preceding three years. The Japanese term "power harassment" () was independently coined by Yasuko Okada of Tokoha Gakuen Junior College in 2002. The Japanese courts have applied the general compensation principle of Article 709 of the Civil Code of Japan to compensate victims of workplace bullying and power harassment. In 2019, the National Diet adopted the Power Harassment Prevention Act, which amends the Labor Policy Comprehensive Promotion Act to require employers to address power harassment. The 2
https://en.wikipedia.org/wiki/359%20%28number%29
359 (three hundred [and] fifty-nine) is the natural number following 358 and preceding 360. 359 is the 72nd prime number. In mathematics 359 is a Sophie Germain prime: 2 × 359 + 1 = 719 is prime (719 is itself also a Sophie Germain prime). It is also a safe prime, because subtracting 1 and halving it gives another prime number (179, itself also safe). Since the reversal of its digits gives 953, which is prime, it is also an emirp. 359 is an Eisenstein prime with no imaginary part and a Chen prime. It is a strictly non-palindromic number. In other fields According to the author Douglas Adams, 359 is the funniest three-digit number. Integers
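These properties are easy to verify mechanically; a small Python check (using sympy's primality test) might look like this:

from sympy import isprime

n = 359
print(isprime(n))                        # True: 359 is prime
print(isprime(2 * n + 1))                # True: 719 is prime, so 359 is a Sophie Germain prime
print(isprime((n - 1) // 2))             # True: 179 is prime, so 359 is a safe prime
print(isprime(int(str(n)[::-1])))        # True: 953 is prime, so 359 is an emirp
# Chen prime: n + 2 must be prime or a product of two primes; here 361 = 19 * 19.
print(isprime(n + 2), 19 * 19 == n + 2)  # False True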
https://en.wikipedia.org/wiki/Thunbergia%20alata
Thunbergia alata, commonly called black-eyed Susan vine, is a herbaceous perennial climbing plant species in the family Acanthaceae. It is native to Eastern Africa, and has been naturalized in other parts of the world. It is grown as an ornamental plant in gardens and in hanging baskets. The name 'Black-eyed Susan' is thought to have come from a character that figures in many traditional ballads and songs. In the Ballad of Black-eyed Susan by John Gay, Susan goes aboard a ship in-dock to ask the sailors where her lover Sweet William has gone. Black-eyed Susan is also a name given to other species of flowers in the genus Rudbeckia. Description Thunbergia alata has a vine habit, and can grow to a height of high in warmer zones, or much less as a container plant or as an annual. It has twining stems with heart or arrow-shaped leaves. The three and a half to seven and a half centimeters long and two and a half centimeters wide leaves are triangular to heart-shaped. Their edges are wavy and both surfaces are hairy. The leaf blades sit on up to six and a half centimeters long petioles, which attach at a distance of four and a half to 13 centimeters on the one to one and a quarter millimeter thick stem axis. Inflorescence The hairy, mostly orange-yellow flowers have five petals and appear throughout the growing season, which grow on up to eight and a half centimeters long inflorescence axes. They typically are warm orange with a characteristic dark spot in the centre. The central two centimeter long corolla tube is black-violet. Each of the single flowers has two triangular to oval, hairy bracts that taper towards the outside. They are 18 to 20 millimeters long and nine to ten millimeters wide. The serrated calyx is about two millimeters long and has between 15 and 17 awl-shaped lobes. The corolla tube measures around four centimeters and shows five two centimeter large corolla lobes with right-hand covering buds on the outside. The plant flowers from mid-summe
https://en.wikipedia.org/wiki/Numerical%20cognition
Numerical cognition is a subdiscipline of cognitive science that studies the cognitive, developmental and neural bases of numbers and mathematics. As with many cognitive science endeavors, this is a highly interdisciplinary topic, and includes researchers in cognitive psychology, developmental psychology, neuroscience and cognitive linguistics. This discipline, although it may interact with questions in the philosophy of mathematics, is primarily concerned with empirical questions. Topics included in the domain of numerical cognition include: How do non-human animals process numerosity? How do infants acquire an understanding of numbers (and how much is inborn)? How do humans associate linguistic symbols with numerical quantities? How do these capacities underlie our ability to perform complex calculations? What are the neural bases of these abilities, both in humans and in non-humans? What metaphorical capacities and processes allow us to extend our numerical understanding into complex domains such as the concept of infinity, the infinitesimal or the concept of the limit in calculus? Heuristics in numerical cognition Comparative studies A variety of research has demonstrated that non-human animals, including rats, lions and various species of primates have an approximate sense of number (referred to as "numerosity"). For example, when a rat is trained to press a bar 8 or 16 times to receive a food reward, the number of bar presses will approximate a Gaussian or Normal distribution with peak around 8 or 16 bar presses. When rats are more hungry, their bar-pressing behavior is more rapid, so by showing that the peak number of bar presses is the same for either well-fed or hungry rats, it is possible to disentangle time and number of bar presses. In addition, in a few species the parallel individuation system has been shown, for example in the case of guppies which successfully discriminated between 1 and 4 other individuals. Similarly, researchers have set up hi
https://en.wikipedia.org/wiki/Biomaterial
A biomaterial is a substance that has been engineered to interact with biological systems for a medical purpose, either a therapeutic (treat, augment, repair, or replace a tissue function of the body) or a diagnostic one. The corresponding field of study, called biomaterials science or biomaterials engineering, is about fifty years old. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science. Note that a biomaterial is different from a biological material, such as bone, that is produced by a biological system. Additionally, care should be exercised in defining a biomaterial as biocompatible, since it is application-specific. A biomaterial that is biocompatible or suitable for one application may not be biocompatible in another. Introduction Biomaterials can be derived either from nature or synthesized in the laboratory using a variety of chemical approaches utilizing metallic components, polymers, ceramics or composite materials. They are often used and/or adapted for a medical application, and thus comprise the whole or part of a living structure or biomedical device which performs, augments, or replaces a natural function. Such functions may be relatively passive, like being used for a heart valve, or maybe bioactive with a more interactive functionality such as hydroxy-apatite coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as a transplant material. Bioactivity The ability of an engineered biomaterial to induce a physiological response that is suppor
https://en.wikipedia.org/wiki/Kikayon
Kikayon (קִיקָיוֹן qîqāyōn) is the Hebrew name of a plant mentioned in the Biblical Book of Jonah. Origins The first use of the term kikayon is in the biblical book of Jonah, Chapter 4. In the quote below, from the Jewish Publication Society translation of 1917, the English word 'gourd' occurs where the Hebrew has kikayon. 6 And the God prepared a gourd, and made it to come up over Jonah, that it might be a shadow over his head, to deliver him from his evil. So Jonah was exceeding glad because of the gourd. 7 But God prepared a worm when the morning rose the next day, and it smote the gourd, that it withered. 8 And it came to pass, when the sun arose, that God prepared a vehement east wind; and the sun beat upon the head of Jonah, that he fainted, and requested for himself that he might die, and said: 'It is better for me to die than to live.' 9 And God said to Jonah: 'Art thou greatly angry for the gourd?' And he said: 'I am greatly angry, even unto death.' 10 And the said: 'Thou hast had pity on the gourd, for which thou hast not laboured, neither madest it grow, which came up in a night, and perished in a night; 11 and should not I have pity on Nineveh, that great city, wherein are more than sixscore thousand persons that cannot discern between their right hand and their left hand, and also much cattle?' --Jonah 4:6–11 Classification The word kikayon is only referenced in the book of Jonah and there is some question as to what kind of plant it is. Some hypotheses include a gourd and a castor oil plant (Ricinus communis). The current Hebrew usage of the word refers to the castor oil plant. A well-known argument between Jerome and Augustine concerned whether to translate kikayon as "gourd" or "ivy", although Jerome indicates that in fact the plant is neither: I have already given a sufficient answer to this in my commentary on Jonah. At present, I deem it enough to say that in that passage, where the Septuagint has gourd, and Aquila and the others have r
https://en.wikipedia.org/wiki/Wireless%20configuration%20utility
A wireless configuration utility, or wireless configuration tool, is a class of network management software that manages the activities and features of a wireless network connection. It may control the process of selecting an available access point, authenticating and associating to it, and setting up other parameters of the wireless connection. There are many wireless LAN clients available for use. Clients vary in technical aspects, support of protocols and other factors. Some clients only work with certain hardware devices, while others run only on certain operating systems. Comparison The table below compares various wireless LAN clients. See also Wireless tools for Linux
https://en.wikipedia.org/wiki/Biological%20integrity
Biological integrity is associated with how "pristine" an environment is and its function relative to the potential or original state of an ecosystem before human alterations were imposed. Biological integrity is built on the assumption that a decline in the values of an ecosystem's functions are primarily caused by human activity or alterations. The more an environment and its original processes are altered, the less biological integrity it holds for the community as a whole. If these processes were to change over time naturally, without human influence, the integrity of the ecosystem would remain intact. The integrity of the ecosystem relies heavily on the processes that occur within it because those determine what organisms can inhabit an area and the complexities of their interactions. Most of the applications of the notion of biological integrity have addressed aquatic environments, but there have been efforts to apply the concept to terrestrial environments. Determining the pristine condition of the ecosystem is in theory scientifically derived, but deciding which of the many possible states or conditions of an ecosystem is the appropriate or desirable goal is a political or policy decision and is typically the focus of policy and political disagreements. Ecosystem health is a related concept but differs from biological integrity in that the "desired condition" of the ecosystem or environment is explicitly based on the values or priorities of society. History The concept of biological integrity first appeared in the 1972 amendments to the U.S. Federal Water Pollution Control Act, also known as the Clean Water Act. The United States Environmental Protection Agency (EPA) had used the term as a way to gauge the standards to which water should be maintained, but the vocabulary instigated years of debate about the implications of not only the meaning of biological integrity, but also how it can be measured. EPA sponsored the first conference about the term in Ma
https://en.wikipedia.org/wiki/Secure%20two-party%20computation
Secure two-party computation (2PC), also known as secure function evaluation, is a sub-problem of secure multi-party computation (MPC) that has received special attention from researchers because of its close relation to many cryptographic tasks. The goal of 2PC is to create a generic protocol that allows two parties to jointly compute an arbitrary function on their inputs without sharing the value of their inputs with the opposing party. One of the best-known examples of 2PC is Yao's Millionaires' problem, in which two parties, Alice and Bob, are millionaires who wish to determine who is wealthier without revealing their wealth. Formally, Alice has wealth a, Bob has wealth b, and they wish to compute a ≥ b without revealing the values a or b. Yao's garbled circuit protocol for two-party computation only provided security against passive adversaries. One of the first general solutions for achieving security against an active adversary was introduced by Goldreich, Micali and Wigderson, who applied zero-knowledge proofs to enforce semi-honest behavior. This approach was known to be impractical for years due to high complexity overheads. However, significant improvements have been made toward applying this method in 2PC, and Abascal, Faghihi Sereshgi, Hazay, Yuval Ishai and Venkitasubramaniam gave the first efficient protocol based on this approach. Other types of 2PC protocols that are secure against active adversaries were proposed by Yehuda Lindell and Benny Pinkas; Ishai, Manoj Prabhakaran and Amit Sahai; and Jesper Buus Nielsen and Claudio Orlandi. Another solution for this problem, which explicitly works with committed inputs, was proposed by Stanisław Jarecki and Vitaly Shmatikov. Secure multi-party computation Security The security of a two-party computation protocol is usually defined through a comparison with an idealised scenario that is secure by definition. The idealised scenario involves a trusted party that collects the inputs of the two parties (mostly a client and a server) over se
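The idealised scenario can be made concrete with a toy sketch (Python; purely illustrative and in no way a secure protocol): a trusted third party receives both private inputs and reveals only the single output bit, which is exactly the behaviour a real 2PC protocol, such as one based on Yao's garbled circuits, has to emulate without any trusted party.

def trusted_party_millionaires(alice_wealth, bob_wealth):
    # Ideal functionality for Yao's millionaires' problem: output only whether
    # Alice is at least as wealthy as Bob, revealing nothing else about the inputs.
    return alice_wealth >= bob_wealth

# In the ideal world both parties send their private input to the trusted party
# and learn only the one-bit result.
print(trusted_party_millionaires(1_000_000, 750_000))  # True

A protocol is then considered secure if whatever an adversary can learn in the real protocol could also have been learned in this ideal interaction.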
https://en.wikipedia.org/wiki/Stretched%20exponential%20function
The stretched exponential function, f_β(t) = exp(−t^β), is obtained by inserting a fractional power law into the exponential function. In most applications, it is meaningful only for arguments t between 0 and +∞. With β = 1, the usual exponential function is recovered. With a stretching exponent β between 0 and 1, the graph of log f versus t is characteristically stretched, hence the name of the function. The compressed exponential function (with β > 1) has less practical importance, with the notable exception of β = 2, which gives the normal distribution. In mathematics, the stretched exponential is also known as the complementary cumulative Weibull distribution. The stretched exponential is also the characteristic function, basically the Fourier transform, of the Lévy symmetric alpha-stable distribution. In physics, the stretched exponential function is often used as a phenomenological description of relaxation in disordered systems. It was first introduced by Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor; thus it is also known as the Kohlrausch function. In 1970, G. Williams and D.C. Watts used the Fourier transform of the stretched exponential to describe dielectric spectra of polymers; in this context, the stretched exponential or its Fourier transform are also called the Kohlrausch–Williams–Watts (KWW) function. The Kohlrausch–Williams–Watts (KWW) function corresponds to the time-domain charge response of the main dielectric models, such as the Cole–Cole equation, the Cole–Davidson equation, and the Havriliak–Negami relaxation, for small time arguments. In phenomenological applications, it is often not clear whether the stretched exponential function should be used to describe the differential or the integral distribution function—or neither. In each case, one gets the same asymptotic decay, but a different power law prefactor, which makes fits more ambiguous than for simple exponentials. In a few cases, it can be shown that the asymptotic decay is a stretched exponentia
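As a quick numerical illustration (Python; the argument values are arbitrary):

import numpy as np

def kww(t, beta, tau=1.0):
    # Stretched exponential (Kohlrausch / KWW) function exp(-(t/tau)**beta).
    return np.exp(-(t / tau) ** beta)

t = np.array([0.1, 1.0, 10.0])
print(kww(t, beta=1.0))  # ordinary exponential decay
print(kww(t, beta=0.5))  # stretched: decays faster than the simple exponential at short times, slower at long times
print(kww(t, beta=2.0))  # compressed: a Gaussian in t, the case related to the normal distribution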
https://en.wikipedia.org/wiki/BioMOBY
BioMOBY is a registry of web services used in bioinformatics. It allows interoperability between biological data hosts and analytical services by annotating services with terms taken from standard ontologies. BioMOBY is released under the Artistic License. The BioMOBY project The BioMoby project began at the Model Organism Bring Your own Database Interface Conference (MOBY-DIC), held in Emma Lake, Saskatchewan on September 21, 2001. It stemmed from a conversation between Mark D Wilkinson and Suzanna Lewis during a Gene Ontology developers meeting at the Carnegie Institute, Stanford, where the functionalities of the Genquire and Apollo genome annotation tools were being discussed and compared. The lack of a simple standard that would allow these tools to interact with the myriad of data-sources required to accurately annotate a genome was a critical need of both systems. Funding for the BioMOBY project was subsequently adopted by Genome Prairie (2002-2005), Genome Alberta (2005-date), in part through Genome Canada , a not-for-profit institution leading the Canadian X-omic initiatives. There are two main branches of the BioMOBY project. One is a web-service-based approach, while the other utilizes Semantic Web technologies. This article will refer only to the Web Service specifications. The other branch of the project, Semantic Moby, is described in a separate entry. Moby The Moby project defines three Ontologies that describe biological data-types, biological data-formats, and bioinformatics analysis types. Most of the interoperable behaviours seen in Moby are achieved through the Object (data-format) and Namespace (data-type) ontologies. The MOBY Namespace Ontology is derived from the Cross-Reference Abbreviations List of the Gene Ontology project. It is simply a list of abbreviations for the different types of identifiers that are used in bioinformatics. For example, Genbank has "gi" identifiers that are used to enumerate all of their sequence rec
https://en.wikipedia.org/wiki/Yaw-rate%20sensor
A yaw-rate sensor is a gyroscopic device that measures a vehicle's yaw rate, its angular velocity around its vertical axis. The angle between the vehicle's heading and velocity is called its slip angle, which is related to the yaw rate. Types There are two types of yaw-rate sensors: the piezoelectric type and the micromechanical type. In the piezoelectric type, the sensor is a "tuning fork"-shaped structure with four piezoelectric elements, two on top and two below. When the slip angle is zero (i.e., no slip), the upper elements produce no voltage as no Coriolis force acts on them. But when cornering, the rotational movement causes the upper part of the tuning fork to leave the oscillatory plane, creating an alternating voltage (and thus an alternating current) proportional to the yaw rate and oscillatory speed. The output signal's sign depends on the direction of rotation. In the micromechanical type, the Coriolis acceleration is measured by a micromechanical capacitive acceleration sensor placed on an oscillating element. This acceleration is proportional to the product of the yaw rate and oscillatory velocity, the latter of which is maintained electronically at a constant value. Applications Yaw rate sensors are used in aircraft and electronic stability control systems in cars.
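The micromechanical measurement principle reduces to a one-line calculation (a simplified model, ignoring sensor non-idealities): the Coriolis acceleration on the oscillating mass is a_c = 2·Ω·v, so with the oscillatory velocity v held constant the yaw rate Ω follows directly from the measured acceleration. A Python sketch with invented example values:

def yaw_rate_from_coriolis(coriolis_accel_ms2, oscillatory_velocity_ms):
    # Invert a_c = 2 * omega * v to recover the yaw rate omega in rad/s.
    # Simplified model: assumes the drive velocity is known and held constant.
    return coriolis_accel_ms2 / (2.0 * oscillatory_velocity_ms)

# Example: 0.02 m/s^2 of measured Coriolis acceleration at a 0.5 m/s drive velocity
print(yaw_rate_from_coriolis(0.02, 0.5))  # 0.02 rad/s, roughly 1.15 degrees per second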
https://en.wikipedia.org/wiki/Graph%20traversal
In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal. Redundancy Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true. Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path. Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state. Graph traversal algorithms Note. — If each vertex in a graph is to be traversed by a tree-based algo
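A minimal sketch of this bookkeeping (Python; the adjacency-list representation and names are illustrative): each vertex is marked as visited when it is first reached, so it is checked at most once and the traversal terminates even on a cyclic graph.

from collections import deque

def bfs(graph, start):
    # Breadth-first traversal; `graph` maps each vertex to an iterable of neighbours.
    visited = {start}             # the "visitation" state described above
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)           # "check/update" the vertex here
        for w in graph[v]:
            if w not in visited:  # ignore vertices already explored
                visited.add(w)
                queue.append(w)
    return order

# A small cyclic graph: without the visited set, the traversal would loop forever.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a", "d"], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']

A depth-first traversal is obtained by replacing the queue with a stack (or with recursion) while keeping the same visited-set logic.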
https://en.wikipedia.org/wiki/Turing%20machine%20equivalents
A Turing machine is a hypothetical computing device, first conceived by Alan Turing in 1936. Turing machines manipulate symbols on a potentially infinite strip of tape according to a finite table of rules, and they provide the theoretical underpinnings for the notion of a computer algorithm. While none of the following models have been shown to have more power than the single-tape, one-way infinite, multi-symbol Turing-machine model, their authors defined and used them to investigate questions and solve problems more easily than they could have if they had stayed with Turing's a-machine model. Machines equivalent to the Turing machine model Turing equivalence Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power. They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true: that anything that can be "computed" can be computed by some Turing machine.) The sequential-machine models All of the following are called "sequential machine models" to distinguish them from "parallel machine models". Tape-based Turing machines Turing's a-machine model Turing's a-machine (as he called it) was left-ended, right-end-infinite. He provided symbols əə to mark the left end. A finite number of tape symbols were permitted. The instructions (if a universal machine), and the "input" and "out" were written only on "F-squares", and markers were to appear on "E-squares". In essence he divided his machine into two tapes that always moved together. The instructions appeared in a tabular form called "5-tuples" and were not executed sequentially. Single-tape machines with restricted symbols and/or restricted instructions The following models are single tape Turing machines but restricted with (i) restricted tape symbols { m
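For concreteness, here is a tiny interpreter for a plain single-tape, one-way-infinite, multi-symbol machine of the kind used as the baseline for the equivalences discussed above (Python; the state names and the example program, which appends a 1 to a unary number, are invented for illustration):

def run_turing_machine(program, tape, state="scan", blank="_", max_steps=10_000):
    # program maps (state, symbol) -> (new_symbol, move in {-1, 0, +1}, new_state).
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).rstrip(blank)
        if head == len(tape):
            tape.append(blank)            # extend the one-way-infinite tape on demand
        symbol = tape[head]
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head = max(0, head + move)        # the tape is left-ended, as in Turing's a-machine
    raise RuntimeError("step limit reached")

# Example program: move right over the 1s and write one more 1, i.e. compute n + 1 in unary.
program = {
    ("scan", "1"): ("1", +1, "scan"),     # skip over existing 1s
    ("scan", "_"): ("1", 0, "halt"),      # write a 1 on the first blank, then halt
}
print(run_turing_machine(program, "111"))  # '1111'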
https://en.wikipedia.org/wiki/Functional%20testing
In software development, functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered (unlike white-box testing). Functional software testing is conducted to evaluate the compliance of a system or component with specified functional requirements. Functional testing usually describes what the system does. Since functional testing is a type of black-box testing, the software's functionality can be tested without knowing the internal workings of the software. This means that testers do not need to know programming languages or how the software has been implemented. This, in turn, could lead to reduced developer bias (or confirmation bias) in testing since the tester has not been involved in the software's development. Functional testing does not imply that you are testing a function (method) of your module or class. Functional testing tests a slice of functionality of the whole system. Functional testing differs from system testing in that functional testing "verifies a program by checking it against ... design document(s) or specification(s)", while system testing "validate[s] a program by checking it against the published user or system requirements." The concept of incorporating testing earlier in the delivery cycle is not restricted to functional testing. Types Functional testing has many types: Sanity testing, often referred to as smoke testing and various other terms in software testing Regression testing Usability testing Six steps Functional testing typically involves six steps The identification of functions that the software is expected to perform The creation of input data based on the function's specifications The determination of output based on the function's specifications The execution of the test case The com
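A minimal illustration of this black-box flow (Python; the component under test and its specification are invented for the example): the test derives inputs and expected outputs from the specification alone and then compares them with the actual outputs, without referring to the implementation.

def leap_year(year):
    # Component under test; the tester treats this as a black box.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year_against_specification():
    # Input data and expected outputs taken from the specification, not from the code.
    cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
    for year, expected in cases:
        actual = leap_year(year)                              # execute the test case
        assert actual == expected, (year, actual, expected)   # compare actual and expected output

test_leap_year_against_specification()
print("all functional test cases passed")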
https://en.wikipedia.org/wiki/Friendly%20Floatees%20spill
Friendly Floatees are plastic bath toys (including rubber ducks) marketed by The First Years and made famous by the work of Curtis Ebbesmeyer, an oceanographer who models ocean currents on the basis of flotsam movements. Ebbesmeyer studied the movements of a consignment of 28,800 Friendly Floatees—yellow ducks, red beavers, blue turtles, and green frogs—that were washed into the Pacific Ocean in 1992. Some of the toys landed along Pacific Ocean shores, such as Hawaii. Others traveled over , floating over the site where the Titanic sank, and spent years frozen in Arctic ice before reaching the U.S. Eastern Seaboard as well as British and Irish shores, fifteen years later, in 2007. Oceanography A consignment of Friendly Floatee toys, manufactured in China for The First Years Inc., departed from Hong Kong on a container ship, the Evergreen Ever Laurel, destined for Tacoma, Washington. On 10 January 1992, during a storm in the North Pacific Ocean close to the International Date Line, twelve 40-foot (12-m) intermodal containers were washed overboard. One of these containers held 28,800 Floatees, a child's bath toy which came in a number of forms: red beavers, green frogs, blue turtles and yellow ducks. At some point, the container opened (possibly because it collided with other containers or the ship itself) and the Floatees were released. Although each toy was mounted in a cardboard housing attached to a backing card, subsequent tests showed that the cardboard quickly degraded in sea water allowing the Floatees to escape. Unlike many bath toys, Friendly Floatees have no holes in them so they do not take on water. Seattle oceanographers Curtis Ebbesmeyer and James Ingraham, who were working on an ocean surface current model, began to track their progress. The mass release of 28,800 objects into the ocean at one time offered significant advantages over the standard method of releasing 500–1000 drift bottles. The recovery rate of objects from the Pacific Ocean is typical
https://en.wikipedia.org/wiki/Stanislas%20Dehaene
Stanislas Dehaene (born May 12, 1965) is a French author and cognitive neuroscientist whose research centers on a number of topics, including numerical cognition, the neural basis of reading and the neural correlates of consciousness. As of 2017, he is a professor at the Collège de France and, since 1989, the director of INSERM Unit 562, "Cognitive Neuroimaging". Dehaene was one of ten people to be awarded the James S. McDonnell Foundation Centennial Fellowship in 1999 for his work on the "Cognitive Neuroscience of Numeracy". In 2003, together with Denis Le Bihan, Dehaene was awarded the Grand Prix scientifique de la Fondation Louis D. from the Institut de France. He was elected to the American Philosophical Society in 2010. In 2014, together with Giacomo Rizzolatti and Trevor Robbins, he was awarded the Brain Prize. Dehaene is an associate editor of the journal Cognition, and a member of the editorial board of several other journals, including NeuroImage, PLoS Biology, Developmental Science, and Neuroscience of Consciousness. Early life and education Dehaene studied mathematics at the École Normale Supérieure in Paris from 1984 to 1989. He obtained his master's degree in Applied mathematics and computer science in 1985 from the University of Paris VI. He turned to neuroscience and psychology after reading Jean-Pierre Changeux's book, L'Homme neuronal (Neuronal Man: The Biology of The Mind). Dehaene began to collaborate on computational neuronal models of human cognition, including working memory and task control, collaborations which continue to the present day. Dehaene completed his PhD in Experimental Psychology in 1989 with Jacques Mehler at the École des Hautes Études en Sciences Sociales (EHESS), Paris. Career After receiving his doctorate, Dehaene became a research scientist at INSERM in the Cognitive Sciences and Psycholinguistics Laboratory (Laboratoire de Sciences Cognitives et Psycholinguistique) directed by Mehler. He spent two years, from 1992
https://en.wikipedia.org/wiki/Elliptic%20rational%20functions
In mathematics the elliptic rational functions are a sequence of rational functions with real coefficients. Elliptic rational functions are extensively used in the design of elliptic electronic filters. (These functions are sometimes called Chebyshev rational functions, not to be confused with certain other functions of the same name). Rational elliptic functions are identified by a positive integer order n and include a parameter ξ ≥ 1 called the selectivity factor. A rational elliptic function of degree n in x with selectivity factor ξ is generally defined as: where cd(u,k) is the Jacobi elliptic cosine function. K() is a complete elliptic integral of the first kind. is the discrimination factor, equal to the minimum value of the magnitude of for . For many cases, in particular for orders of the form n = 2a3b where a and b are integers, the elliptic rational functions can be expressed using algebraic functions alone. Elliptic rational functions are closely related to the Chebyshev polynomials: Just as the circular trigonometric functions are special cases of the Jacobi elliptic functions, so the Chebyshev polynomials are special cases of the elliptic rational functions. Expression as a ratio of polynomials For even orders, the elliptic rational functions may be expressed as a ratio of two polynomials, both of order n.      (for n even) where are the zeroes and are the poles, and is a normalizing constant chosen such that . The above form would be true for even orders as well except that for odd orders, there will be a pole at x=∞ and a zero at x=0 so that the above form must be modified to read:      (for n odd) Properties The canonical properties for at for The slope at x=1 is as large as possible The slope at x=1 is larger than the corresponding slope of the Chebyshev polynomial of the same order. The only rational function satisfying the above properties is the elliptic rational function . The following properties are deri
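In the standard formulation (a reconstruction from the usual convention rather than a quotation of the source, so it should be checked against a reference), the elliptic rational function of order n with selectivity factor ξ is defined as:

```latex
% cd(u,k) is the Jacobi elliptic cosine, K(.) the complete elliptic integral of the
% first kind, and L_n(\xi) = R_n(\xi,\xi) the discrimination factor.
\[
R_n(\xi, x) \equiv \mathrm{cd}\!\left( n \, \frac{K(1/L_n)}{K(1/\xi)} \,
    \mathrm{cd}^{-1}\!\left(x, \frac{1}{\xi}\right), \frac{1}{L_n} \right),
\qquad L_n(\xi) = R_n(\xi, \xi).
\]
```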
https://en.wikipedia.org/wiki/Humouse
A humouse is an immunodeficient mouse reconstituted with a human immune system, also generally known as humanised mouse. Although conventional mouse models have allowed for an increased understanding of mammalian immune systems, this knowledge cannot necessarily be directly applied to humans due to biological differences between the two species. Humice could theoretically be used as novel pre-clinical models of the human immune system, with uses including assessing vaccine efficacy. See also Immunology Knockout mouse
https://en.wikipedia.org/wiki/Simulation%20noise
Simulation noise is a function that creates a divergence-free vector field. This signal can be used in artistic simulations to increase the perception of extra detail. The function can be calculated in three dimensions by dividing the space into a regular lattice grid. Each edge is associated with a random value, indicating a rotational component of material revolving around the edge. By following rotating material into and out of faces, one can quickly sum the flux passing through each face of the lattice. Flux values at lattice faces are then interpolated to create a field value for all positions. Perlin noise is the earliest form of lattice noise, which has become very popular in computer graphics. Perlin noise is not suited for simulation because it is not divergence-free. Noises based on lattices, such as simulation noise and Perlin noise, are often calculated at different frequencies and summed together to form band-limited fractal signals. Other approaches developed later use vector calculus identities to produce divergence-free fields, such as "Curl-Noise", suggested by Robert Bridson, and "Divergence-Free Noise", due to Ivan DeWolf. These often require calculation of lattice noise gradients, which sometimes are not readily available. A naive implementation would call a lattice noise function several times to calculate its gradient, resulting in more computation than is strictly necessary. Unlike these noises, simulation noise has a geometric rationale in addition to its mathematical properties. It simulates vortices scattered in space to produce its pleasing aesthetic.
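A short numerical sketch of the simpler curl-based approach mentioned above (not of the flux-summing simulation-noise construction itself): in 2D, taking the perpendicular gradient of any smooth scalar potential yields a divergence-free field by construction. The potential here is a made-up sum of sines standing in for a true lattice noise.

```python
import numpy as np

def potential(x, y):
    # Smooth scalar potential; a real implementation would evaluate a lattice
    # noise such as Perlin noise here instead of this stand-in.
    return np.sin(1.7 * x) * np.cos(2.3 * y) + 0.5 * np.sin(3.1 * x + 1.3 * y)

def curl_noise_2d(x, y, eps=1e-4):
    # 2D "curl" of a scalar potential: v = (d(psi)/dy, -d(psi)/dx).
    # Any field of this form has zero divergence.
    dpsi_dx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpsi_dy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return dpsi_dy, -dpsi_dx

# Spot-check: the numerical divergence of the field should be ~0 everywhere.
x, y, h = 0.37, 1.42, 1e-3
vx_p, _ = curl_noise_2d(x + h, y)
vx_m, _ = curl_noise_2d(x - h, y)
_, vy_p = curl_noise_2d(x, y + h)
_, vy_m = curl_noise_2d(x, y - h)
divergence = (vx_p - vx_m) / (2 * h) + (vy_p - vy_m) / (2 * h)
print(f"numerical divergence ~ {divergence:.2e}")
```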
https://en.wikipedia.org/wiki/The%20Web%20Conference
The ACM Web Conference (formerly known as International World Wide Web Conference, abbreviated as WWW) is a yearly international academic conference on the topic of the future direction of the World Wide Web. The first conference of many was held and organized by Robert Cailliau in 1994 at CERN in Geneva, Switzerland. The conference has been organized by the International World Wide Web Conference Committee (IW3C2), also founded by Robert Cailliau and colleague Joseph Hardin, every year since. In 2020, the Web Conference series became affiliated with the Association for Computing Machinery (ACM), where it is supported by ACM SIGWEB. The conference's location rotates among North America, Europe, and Asia and its events usually span a period of five days. The conference aims to provide a forum in which "key influencers, decision makers, technologists, businesses and standards bodies" can both present their ongoing work, research, and opinions as well as receive feedback from some of the most knowledgeable people in the field. The web conference series is aimed at providing a global forum for discussion and debate in regard to the standardization of its associated technologies and the impact of said technologies on society and culture. Developers, researchers, internet users as well as commercial ventures and organizations come together at the conference to discuss the latest advancements of the Web and its evolving uses and trends, such as the development and popularization of the eTV and eBusiness. The conferences usually include a variety of events, such as tutorials and workshops, as well as the main conference and special dedications of space in memory of the history of the Web and specific notable events. The conferences are organized by the IW3C2 in collaboration with the World Wide Web Consortium (W3C), Local Organizing Committees, and Technical Program Committees. History Robert Cailliau, a founder of the World Wide Web himself, lobbied inside CERN and at
https://en.wikipedia.org/wiki/Kalman%E2%80%93Yakubovich%E2%80%93Popov%20lemma
The Kalman–Yakubovich–Popov lemma is a result in system analysis and control theory which states: Given a number , two n-vectors B, C and an n x n Hurwitz matrix A, if the pair is completely controllable, then a symmetric matrix P and a vector Q satisfying exist if and only if Moreover, the set is the unobservable subspace for the pair . The lemma can be seen as a generalization of the Lyapunov equation in stability theory. It establishes a relation between a linear matrix inequality involving the state space constructs A, B, C and a condition in the frequency domain. The Kalman–Popov–Yakubovich lemma was first formulated and proved in 1962 by Vladimir Andreevich Yakubovich, where it was stated for the strict frequency inequality. The case of nonstrict frequency inequality was published in 1963 by Rudolf E. Kálmán. In that paper the relation to solvability of the Lur’e equations was also established. Both papers considered scalar-input systems. The constraint on the control dimensionality was removed in 1964 by Gantmakher and Yakubovich and independently by Vasile Mihai Popov. Extensive reviews of the topic can be found in and in Chapter 3 of. Multivariable Kalman–Yakubovich–Popov lemma Given with for all and controllable, the following are equivalent: for all there exists a matrix such that and The corresponding equivalence for strict inequalities holds even if is not controllable.
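In the usual single-input notation (a reconstruction, not a quotation of the source, and worth checking against a reference), the matrix conditions and the frequency-domain inequality of the scalar lemma read:

```latex
% With \gamma > 0, A Hurwitz and (A, B) completely controllable,
% a symmetric P and a vector Q satisfying
\begin{gather*}
 A^{\top} P + P A = -\, Q Q^{\top}, \qquad P B - C = \sqrt{\gamma}\, Q
\end{gather*}
% exist if and only if
\begin{gather*}
 \gamma + 2\,\operatorname{Re}\!\left[ C^{\top} (i\omega I - A)^{-1} B \right] \ge 0
 \qquad \text{for all } \omega \in \mathbb{R}.
\end{gather*}
```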
https://en.wikipedia.org/wiki/Pragmatic%20theory%20of%20information
The pragmatic theory of information is derived from Charles Sanders Peirce's general theory of signs and inquiry. Peirce explored a number of ideas about information throughout his career. One set of ideas is about the "laws of information" having to do with the logical properties of information. Another set of ideas about "time and thought" have to do with the dynamic properties of inquiry. All of these ideas contribute to the pragmatic theory of inquiry. Peirce set forth many of these ideas very early in his career, periodically returning to them on scattered occasions until the end, and they appear to be implicit in much of his later work on the logic of science and the theory of signs, but he never developed their implications to the fullest extent. The 20th century thinker Ernst Ulrich and his wife Christine von Weizsäcker reviewed the pragmatics of information; their work is reviewed by Gennert. Overview The pragmatic information content is the information content received by a recipient; it is focused on the recipient and defined in contrast to Claude Shannon's information definition, which focuses on the message. The pragmatic information measures the information received, not the information contained in the message. Pragmatic information theory requires not only a model of the sender and how it encodes information, but also a model of the receiver and how it acts on the information received. The determination of pragmatic information content is a precondition for the determination of the value of information. Claude Shannon and Warren Weaver completed the viewpoint on information encoding in the seminal paper by Shannon A Mathematical Theory of Communication, with two additional viewpoints (B and C): A. How accurately can the symbols that encode the message be transmitted ("the technical problem")? B. How precisely do the transmitted symbols convey the desired meaning("the semantics problem")? C. How effective is the received message in changin
https://en.wikipedia.org/wiki/Media%20type
A media type (formerly known as a MIME type) is a two-part identifier for file formats and format contents transmitted on the Internet. Their purpose is somewhat similar to file extensions in that they identify the intended data format. The Internet Assigned Numbers Authority (IANA) is the official authority for the standardization and publication of these classifications. Media types were originally defined in Request for Comments (MIME) Part One: Format of Internet Message Bodies, published in November 1996 as a part of the MIME (Multipurpose Internet Mail Extensions) specification, for denoting the type of email message content and attachments; hence the original name, MIME type. Media types are also used by other internet protocols such as HTTP and document file formats such as HTML, for similar purposes. Naming A media type consists of a type and a subtype, which is further structured into a tree. A media type can optionally define a suffix and parameters: As an example, an HTML file might be designated text/html; charset=UTF-8. In this example, text is the type, html is the subtype, and charset=UTF-8 is an optional parameter indicating the character encoding. Types, subtypes, and parameter names are case-insensitive. Parameter values are usually case-sensitive, but may be interpreted in a case-insensitive fashion depending on the intended use. Types The "type" part defines the broad use of the media type. As of November 1996, the registered types were: , , , , , and . By December 2020, the registered types included the foregoing, plus , , and . An unofficial top-level type in common use is . Subtypes A subtype typically consists of a media format, but it may or must also contain other content, such as a tree prefix, producer, product or suffix, according to the different rules in registration trees. All media types should be registered using the IANA registration procedures. For the efficiency and flexibility of the media type registration process, different structures of subtypes can be registe
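For a concrete look at type/subtype pairs in practice, the following minimal Python sketch uses the standard-library mimetypes module to map file names to media types; the file names are arbitrary examples, and the exact output depends on the platform's type tables.

```python
import mimetypes

# guess_type returns a (media_type, encoding) pair based on the file name's
# extension; the encoding slot (e.g. gzip) is separate from MIME parameters.
for name in ("index.html", "photo.jpeg", "notes.txt", "archive.tar.gz"):
    media_type, encoding = mimetypes.guess_type(name)
    print(f"{name:15s} -> {media_type} (encoding: {encoding})")

# Typical output (platform tables may vary slightly):
#   index.html      -> text/html (encoding: None)
#   photo.jpeg      -> image/jpeg (encoding: None)
#   notes.txt       -> text/plain (encoding: None)
#   archive.tar.gz  -> application/x-tar (encoding: gzip)
```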
https://en.wikipedia.org/wiki/Thioredoxin%20fold
The thioredoxin fold is a protein fold common to enzymes that catalyze disulfide bond formation and isomerization. The fold is named for the canonical example thioredoxin and is found in both prokaryotic and eukaryotic proteins. It is an example of an alpha/beta protein fold that has oxidoreductase activity. The fold's spatial topology consists of a four-stranded antiparallel beta sheet sandwiched between three alpha helices. The strand topology is 2134 with 3 antiparallel to the rest. Sequence conservation Despite sequence variability in many regions of the fold, thioredoxin proteins share a common active site sequence with two reactive cysteine residues: Cys-X-Y-Cys, where X and Y are often but not necessarily hydrophobic amino acids. The reduced form of the protein contains two free thiol groups at the cysteine residues, whereas the oxidized form contains a disulfide bond between them. Disulfide bond formation Different thioredoxin fold-containing proteins vary greatly in their reactivity and in the pKa of their free thiols, which derives from the ability of the overall protein structure to stabilize the activated thiolate. Although the structure is fairly consistent among proteins containing the thioredoxin fold, the pKa is extremely sensitive to small variations in structure, especially in the placement of protein backbone atoms near the first cysteine. Examples Human proteins containing this domain include: DNAJC10 ERP70 GLRX3 P4HB; PDIA2; PDIA3; PDIA4; PDIA5; PDIA6 (P5); PDILT QSOX1; QSOX2 STRF8 TXN; TXN2; TXNDC1; TXNDC10; TXNDC11; TXNDC13; TXNDC14; TXNDC15; TXNDC16; TXNDC2; TXNDC3; TXNDC4; TXNDC5; TXNDC6; TXNDC8; TXNL1; TXNL3
https://en.wikipedia.org/wiki/Microsoft%20Forefront
Microsoft Forefront is a discontinued family of line-of-business security software by Microsoft Corporation. Microsoft Forefront products are designed to help protect computer networks, network servers (such as Microsoft Exchange Server and Microsoft SharePoint Server) and individual devices. As of 2015, the only actively developed Forefront product is Forefront Identity Manager. Components Forefront includes the following products: Identity Manager: State-based identity management software product, designed to manage users' digital identities, credentials and groupings throughout the lifecycle of their membership of an enterprise computer system Rebranded System Center Endpoint Protection: A business antivirus software product that can be controlled over the network, formerly known as Forefront Endpoint Protection, Forefront Client Security and Client Protection. Exchange Online Protection: A software as a service version of Forefront Protect for Exchange Server: Instead of installing a security program on the server, the customer re-routes its email traffic to the Microsoft online service before receiving them. Discontinued Threat Management Gateway: Discontinued server product that provides three functions: Routing, firewall and web cache. Formerly called Internet Security and Acceleration Server or ISA Server. Unified Access Gateway: Discontinued server product that protects network assets by encrypting all inbound access request from authorized users. Supports Virtual Private Networks (VPN) and DirectAccess. Formerly called Intelligent Application Gateway. Server Management Console: Discontinued web-based application that enables management of multiple instances of Protection for Exchange, Protection for SharePoint and Microsoft Antigen from a single interface. Protection for Exchange: A discontinued software product that detects viruses, spyware, and spam by integrating multiple scanning engines from security partners in a single solution to prot
https://en.wikipedia.org/wiki/NO%20CARRIER
NO CARRIER (capitalized) is a text message transmitted from a modem to its attached device (typically a computer), indicating the modem is not (or no longer) connected to a remote system. NO CARRIER is a response message that is defined in the Hayes command set. Due to the popularity of Hayes modems during the heyday of dial-up connectivity, most other modem manufacturers supported the Hayes command set. For this reason, the NO CARRIER message was ubiquitously understood to mean that one was no longer connected to a remote system. Carrier tone A carrier tone is an audio carrier signal used by two modems to suppress echo cancellation and establish a baseline frequency for communication. When the answering modem detects a ringtone on the phone line, it picks up that line and starts transmitting a carrier tone. If it does not receive data from the calling modem within a set amount of time, it disconnects the line. The calling modem waits for the tone after it dials the phone line before it initiates data transmission. If it does not receive a carrier tone within a set amount of time, it will disconnect the phone line and issues the NO CARRIER message. The actual data is transmitted from the answering modem to the calling modem via modulation of the carrier. Practical meaning The NO CARRIER message is issued by a modem for any of the following reasons: A dial (ATD) or answer (ATA) command did not result in a successful connection to another modem, and the reason wasn't that the line was BUSY (a separately defined message). A dial or answer command was aborted while in progress. The abort can be triggered by the computer receiving a keypress to abort or the computer dropping the Data Terminal Ready (DTR) signal to hang up. A previously established data connection has ended (either at the attached computer's command, or as a result of being disconnected from the remote end), and the modem has now gone from the data mode to the command mode. Current use As modems
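As a sketch of how the result code is seen by software in practice, the snippet below dials out through a Hayes-compatible modem and watches for the response. It assumes the third-party pyserial package, a modem attached at the hypothetical port /dev/ttyUSB0, and a placeholder phone number; a real setup will differ.

```python
import serial  # third-party "pyserial" package

# Hypothetical port, speed and number for illustration only.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=60) as modem:
    modem.write(b"ATD5551234\r")  # Hayes dial command
    while True:
        raw = modem.readline()
        if not raw:  # read timed out with no response
            print("No response from modem.")
            break
        line = raw.decode(errors="replace").strip()
        if line == "NO CARRIER":
            print("No carrier: the dial failed or the connection was dropped.")
            break
        if line.startswith("CONNECT"):
            print(f"Connected: {line}")
            break
        if line == "BUSY":
            print("Remote line busy.")
            break
```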
https://en.wikipedia.org/wiki/Alliance%20for%20Aging%20Research
The Alliance for Aging Research is a non-profit organization based in Washington, D.C., that promotes medical research to improve the human experience of aging. Founded in 1986 by Daniel Perry, the Alliance also advocates and implements health education for consumers and health professionals. The Alliance is governed by a board of directors. Susan Peschin is the chief executive officer and president. Activities Policy Main policy areas include aging research funding, FDA funding, stem cell research funding, and improving health care for older Americans. The Alliance holds congressional briefings to increase awareness of such diseases and conditions as osteoporosis, Alzheimer's disease, oral care and diabetes. Coalitions The Alliance also serves on several coalitions and committees including Friends of the National Institute on Aging (NIA), the Alliance for a Stronger FDA, Coalition for the Advancement of Medical Research (CAMR), Partnership to Fight Chronic Disease, and the National Coalition on Mental Health and Aging. White House Conference on Aging The Alliance has been a part of the once-a-decade White House Conference on Aging, helping the President and Congress adopt resolutions to make aging research a national priority. Task Force on Aging Research Funding The Alliance collaborates with many patient and advocacy organizations on the annual Task Force on Aging Research Funding, a call to action to Congress and other national policymakers. ACT-AD Coalition ACT-AD (Accelerate Cure/ Treatments for Alzheimer's Disease) is a coalition of more than 50 organizations working to accelerate the development of treatments and a cure for Alzheimer's disease. Programs The Alliance produces materials focused on healthy aging and chronic disease, particularly for the Baby Boomer population. The Alliance has developed resources on the following topics: Age-related macular degeneration, Alzheimer's disease and caregivers, osteoporosis, heart disease, Parkinson's
https://en.wikipedia.org/wiki/Elxsi
Elxsi Corporation was a minicomputer manufacturing company established in the late 1970s in Silicon Valley, US, along with a host of competitors (Trilogy Systems, Sequent, Convex Computer). The Elxsi processor was an Emitter Coupled Logic (ECL) design that featured a 50-nanosecond clock, a 25-nanosecond back panel bus, IEEE floating-point arithmetic and a 64-bit architecture. It allowed multiple processors to communicate over a common bus called the Gigabus, believed to be the first company to do so. The operating system was a message-based operating system called EMBOS. The Elxsi CPU was a microcoded design, allowing custom instructions to be coded into microcode. History Elxsi was founded in 1979 by Joe Rizzi (previously a manager at Intersil) and Thampy Thomas (who would go on to found NexGen Microsystems). It is believed that Elxsi was the first startup founded by an Indian in Silicon Valley. Much of the architecture of the Elxsi machine was designed by former Stanford University professors Len Shar and Balasubrimanian Kumar. Another key contributor to the design was Harold (Mac) McFarland, who was also a key designer on the team that created the PDP-11. George Taylor (on the IEEE standard committee and a student of UC Berkeley Professor William Kahan) provided a key design for the IEEE floating-point unit. Elxsi was bought out by Gene Amdahl in 1985 with money that was leftover from the Trilogy venture. Venture investors in Elxsi included Tata Group (India) and Arthur Rock. In 1989, however, Elxsi left the computer business because of the general shift away from the use of mainframes in the global computer industry and the advent of the personal computer. The Tata Group kept the name Tata Elxsi but it now belongs to the Tata group of companies. The original Elxsi Corporation, however, remained in business as a going concern. In 1989, the company sold its computer maintenance business to National Computer Systems. In 1991, the company entered two entire
https://en.wikipedia.org/wiki/Beta%20diversity
In ecology, beta diversity (β-diversity or true beta diversity) is the ratio between regional and local species diversity. The term was introduced by R. H. Whittaker together with the terms alpha diversity (α-diversity) and gamma diversity (γ-diversity). The idea was that the total species diversity in a landscape (γ) is determined by two different things: the mean species diversity at the local level (α) and the differentiation among local sites (β). Other formulations for beta diversity include "absolute species turnover", "Whittaker's species turnover" and "proportional species turnover". Whittaker proposed several ways of quantifying differentiation, and subsequent generations of ecologists have invented more. As a result, there are now many defined types of beta diversity. Some use beta diversity to refer to any of several indices related to compositional heterogeneity. Confusion is avoided by using distinct names for other formulations. Beta diversity as a measure of species turnover overemphasizes the role of rare species as the difference in species composition between two sites or communities is likely reflecting the presence and absence of some rare species in the assemblages. Beta diversity can also be a measure of nestedness, which occurs when species assemblages in species-poor sites are a subset of the assemblages in more species-rich sites. Moreover, pairwise beta diversity are inadequate in building all biodiversity partitions (some partitions in a Venn diagram of 3 or more sites cannot be expressed by alpha and beta diversity). Consequently, some macroecological and community patterns cannot be fully expressed by alpha and beta diversity. Due to these two reasons, a new way of measuring species turnover, coined Zeta diversity (ζ-diversity), has been proposed and used to connect all existing incidence-based biodiversity patterns. Types Whittaker beta diversity Gamma diversity and alpha diversity can be calculated directly from species inventory
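A minimal numerical illustration of Whittaker's multiplicative formulation (gamma as total regional richness, alpha as mean local richness, beta = gamma / alpha), using three hypothetical sites with made-up species lists:

```python
# Hypothetical presence/absence data: species observed at each of three sites.
sites = {
    "site_1": {"A", "B", "C"},
    "site_2": {"B", "C", "D"},
    "site_3": {"C", "D", "E", "F"},
}

gamma = len(set.union(*sites.values()))                    # regional richness
alpha = sum(len(s) for s in sites.values()) / len(sites)   # mean local richness
beta = gamma / alpha                                       # Whittaker's beta

print(f"gamma = {gamma}, alpha = {alpha:.2f}, beta = {beta:.2f}")
# gamma = 6, alpha = 3.33, beta = 1.80
```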
https://en.wikipedia.org/wiki/Ochrea
An ochrea (Latin ocrea, greave or protective legging), also spelled ocrea, is a plant structure formed of stipules fused into a sheath surrounding the stem, and is typically found in the Polygonaceae. In palms it denotes an extension of the leaf sheath beyond the petiole insertion.
https://en.wikipedia.org/wiki/Clarke%20Medal
The Clarke Medal is awarded by the Royal Society of New South Wales, the oldest learned society in Australia and the Southern Hemisphere, for distinguished work in the Natural sciences. The medal is named in honour of the Reverend William Branwhite Clarke, one of the founders of the Society and was to be "awarded for meritorious contributions to Geology, Mineralogy and Natural History of Australasia, to be open to men of science, whether resident in Australasia or elsewhere". It is now awarded annually for distinguished work in the Natural Sciences (geology, botany and zoology) done in the Australian Commonwealth and its territories. Each discipline is considered in rotation every three years. Recipients Source: Royal Society of New South Wales 1878: Richard Owen (Zoology) 1879: George Bentham (Botany) 1880: Thomas Huxley (Palaeontology) 1881: Frederick McCoy (Palaeontology) 1882: James Dwight Dana (Geology) 1883: Ferdinand von Mueller (Botany) 1884: Alfred Richard Cecil Selwyn (Geology) 1885: Joseph Dalton Hooker (Botany) 1886: Laurent-Guillaume de Koninck (Palaeontology) 1887: Sir James Hector (Geology) 1888: Julian Tenison Woods (Geology) 1889: Robert L. J. Ellery (Astronomy) 1890: George Bennett (Zoology) 1891: Frederick Hutton (Geology) 1892: William Turner Thiselton-Dyer (Botany) 1893: Ralph Tate (Botany and Geology) 1895: Joint Award: Robert Logan Jack (Geology) and Robert Etheridge, Jr. (Palaeontology) 1896: Augustus Gregory (Exploration) 1900: John Murray (Oceanography) 1901: Edward John Eyre (Exploration) 1902: Frederick Manson Bailey (Botany) 1903: Alfred William Howitt (Anthropology) 1907: Walter Howchin (Geology) 1909: Walter Roth (Anthropology) 1912: William Harper Twelvetrees (Geology) 1914: Arthur Smith Woodward (Palaeontology) 1915: William Aitcheson Haswell (Zoology) 1917: Edgeworth David (Geology) 1918: Leonard Rodway (Botany) 1920: Joseph Edmund Carne (Geology) 1921: Joseph James Fletcher (Biology) 1922: Richa
https://en.wikipedia.org/wiki/Encircled%20energy
In optics, encircled energy is a measure of concentration of energy in an image, or projected laser at a given range. For example, if a single star is brought to its sharpest focus by a lens giving the smallest image possible with that given lens (called a point spread function or PSF), calculation of the encircled energy of the resulting image gives the distribution of energy in that PSF. Encircled energy is calculated by first determining the total energy of the PSF over the full image plane, then determining the centroid of the PSF. Circles of increasing radius are then created at that centroid and the PSF energy within each circle is calculated and divided by the total energy. As the circle increases in radius, more of the PSF energy is enclosed, until the circle is sufficiently large to completely contain all the PSF energy. The encircled energy curve thus ranges from zero to one. A typical criterion for encircled energy (EE) is the radius of the PSF at which either 50% or 80% of the energy is encircled. This is a linear dimension, typically in micrometers. When divided by the lens or mirror focal length, this gives the angular size of the PSF, typically expressed in arc-seconds when specifying astronomical optical system performance. Encircled energy is also used to quantify the spreading of a laser beam at a given distance. All laser beams spread due to the necessarily limited aperture of the optical system projecting the beam. As in star image PSF's, the linear spreading of the beam expressed as encircled energy is divided by the projection distance to give the angular spreading. An alternative to encircled energy is ensquared energy, typically used when quantifying image sharpness for digital imaging cameras using pixels. See also Point spread function Airy disc
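The calculation described above (total energy, centroid, then the cumulative fraction of energy inside circles of growing radius) translates directly into a few lines of NumPy. The Gaussian spot below is only a stand-in for a measured or modelled PSF.

```python
import numpy as np

def encircled_energy(psf, radii):
    """Fraction of total PSF energy inside circles of the given radii (pixels),
    centred on the intensity-weighted centroid of the PSF."""
    psf = np.asarray(psf, dtype=float)
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total
    cx = (xs * psf).sum() / total
    r = np.hypot(ys - cy, xs - cx)
    return np.array([psf[r <= radius].sum() / total for radius in radii])

# Stand-in PSF: a symmetric Gaussian spot on a 101 x 101 pixel grid.
ys, xs = np.indices((101, 101))
sigma = 4.0
psf = np.exp(-((xs - 50) ** 2 + (ys - 50) ** 2) / (2 * sigma ** 2))

radii = np.arange(1, 21)
ee = encircled_energy(psf, radii)
r50 = radii[np.searchsorted(ee, 0.5)]   # first radius enclosing 50% of the energy
r80 = radii[np.searchsorted(ee, 0.8)]   # first radius enclosing 80% of the energy
print(f"EE50 ~ {r50} px, EE80 ~ {r80} px")
```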
https://en.wikipedia.org/wiki/Blunt-jawed%20elephantnose
The blunt-jawed elephantnose or wormjawed mormyrid (Campylomormyrus tamandua) is a species of elephantfish. It is found in rivers in West and Middle Africa. It is brown or black with a long elephant-like snout with the mouth located near the tip. Its diet consists of worms, fish, and insects. See also List of freshwater aquarium fish species
https://en.wikipedia.org/wiki/Merton%27s%20portfolio%20problem
Merton's portfolio problem is a problem in continuous-time finance and in particular intertemporal portfolio choice. An investor must choose how much to consume and must allocate their wealth between stocks and a risk-free asset so as to maximize expected utility. The problem was formulated and solved by Robert C. Merton in 1969 both for finite lifetimes and for the infinite case. Research has continued to extend and generalize the model to include factors like transaction costs and bankruptcy. Problem statement The investor lives from time 0 to time T; their wealth at time T is denoted WT. He starts with a known initial wealth W0 (which may include the present value of wage income). At time t he must choose what amount of his wealth to consume, ct, and what fraction of wealth to invest in a stock portfolio, πt (the remaining fraction 1 − πt being invested in the risk-free asset). The objective is where E is the expectation operator, u is a known utility function (which applies both to consumption and to the terminal wealth, or bequest, WT), ε parameterizes the desired level of bequest, ρ is the subjective discount rate, and is a constant which expresses the investor's risk aversion: the higher the gamma, the more reluctance to own stocks. The wealth evolves according to the stochastic differential equation where r is the risk-free rate, (μ, σ) are the expected return and volatility of the stock market and dBt is the increment of the Wiener process, i.e. the stochastic term of the SDE. The utility function is of the constant relative risk aversion (CRRA) form: Consumption cannot be negative: ct ≥ 0, while πt is unrestricted (that is borrowing or shorting stocks is allowed). Investment opportunities are assumed constant, that is r, μ, σ are known and constant, in this (1969) version of the model, although Merton allowed them to change in his intertemporal CAPM (1973). Solution Somewhat surprisingly for an optimal control problem, a closed-form s
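In the usual notation (a reconstruction rather than a quotation of the source; the weighting of the bequest term in particular follows one common convention), the finite-horizon objective, the wealth dynamics, the CRRA utility and the well-known constant optimal stock fraction are:

```latex
\begin{align*}
 &\max_{\{c_t,\,\pi_t\}}\; E\!\left[\int_0^T e^{-\rho t}\, u(c_t)\, dt
   + \varepsilon\, e^{-\rho T} u(W_T)\right],
 \qquad u(x) = \frac{x^{1-\gamma}}{1-\gamma},\\
 &dW_t = \bigl[(r + \pi_t(\mu - r))\,W_t - c_t\bigr]\,dt + \pi_t \sigma W_t\, dB_t,\\
 &\pi^{*} = \frac{\mu - r}{\gamma \sigma^{2}}
 \quad \text{(the constant optimal stock fraction in the closed-form solution).}
\end{align*}
```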
https://en.wikipedia.org/wiki/Programming%20by%20permutation
Programming by permutation, sometimes called "programming by accident" or "shotgunning", is an approach to software development wherein a programming problem is solved by iteratively making small changes (permutations) and testing each change to see if it behaves as desired. This approach sometimes seems attractive when the programmer does not fully understand the code and believes that one or more small modifications may result in code that is correct. This tactic is not productive when: There is lack of easily executed automated regression tests with significant coverage of the codebase: a series of small modifications can easily introduce new undetected bugs into the code, leading to a "solution" that is even less correct than the starting point Without Test Driven Development it is rarely possible to measure, by empirical testing, whether the solution will work for all or significant part of the relevant cases No Version Control System is used (for example GIT, Mercurial or SVN) or it is not used during iterations to reset the situation when a change has no visible effect many false starts and corrections usually occur before a satisfactory endpoint is reached in the worst case, the original state of the code may be irretrievably lost Programming by permutation gives little or no assurance about the quality of the code produced—it is the polar opposite of formal verification. Programmers are often compelled to program by permutation when an API is insufficiently documented. This lack of clarity drives others to copy and paste from reference code which is assumed to be correct, but was itself written as a result of programming by permutation. In some cases where the programmer can logically explain that exactly one out of a small set of variations must work, programming by permutation leads to correct code (which then can be verified) and makes it unnecessary to think about the other (wrong) variations. Example For example, the following code sample i
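A tiny, contrived Python illustration of the tactic: a programmer unsure of slice semantics shuffles through index combinations until one case "works", instead of reasoning that s[start:start + length] is the slice wanted.

```python
s = "programming by permutation"

# Goal: extract the word "by" (starts at index 12, length 2).
start, length = 12, 2

# Permutation-style guesses, each "tested" against the single known case:
print(s[start:length])              # '' - guess 1 fails, tweak it
print(s[start - 1:length + 1])      # '' - guess 2 fails, tweak again
print(s[start:start + length + 1])  # 'by ' - close, keep fiddling
print(s[start:start + length])      # 'by' - "works", so it ships

# The last guess happens to be correct, but nothing here establishes that it
# works beyond this one input; understanding slice semantics (or testing many
# cases) would.
```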
https://en.wikipedia.org/wiki/Armed%20Forces%20Radiobiology%20Research%20Institute
The Armed Forces Radiobiology Research Institute (AFRRI) is an American triservice research laboratory in Bethesda, Maryland chartered by Congress in 1960 and formally established in 1961. It conducts research in the field of radiobiology and related matters which are essential to the operational and medical support of the U.S. Department of Defense (DoD) and the U.S. military services. AFRRI provides services and performs cooperative research with other federal and civilian agencies and institutions. History Department of Defense (DoD) interest in the health effects of exposure to radiological agents (radiobiology), born in the wake of the Manhattan Project, motivated a 1958 Bureau of Medicine and Surgery proposal that a bionuclear research facility be established to study such issues. On June 8, 1960, Public Law 86-500 authorized the construction of such a facility, including a laboratory and vivarium under the Defense Atomic Support Agency (DASA, now the Defense Threat Reduction Agency (DTRA)); on December 2, 1960, DASA and the surgeons general of the Army, Navy, and Air Force approved a charter for the Armed Forces Radiobiology Research Institute (AFFRI). The institute was formally established on May 12, 1961, by DoD Directive 5154.16 as a joint agency of the Army, Navy, and Air Force under the command and administrative control of the Office of the Secretary of Defense (OSD). Research at AFRRI began in January 1962, although the laboratory became fully operational only in September 1963. AFFRI included a Training, Research, Isotopes, General Atomics (TRIGA) Mark F nuclear reactor (uniquely allowing studies of nuclear weapon radiation characteristics facilities), laboratory space, and an animal facility. A high-dose cobalt-60 facility, 54-megaelectron volt (54,000,000 electron volt) linear accelerator (LINAC), and low-level cobalt-60 irradiation facility were later added. In July 1964, AFRRI was moved to DASA, and the Chief of DASA became ex officio chair of
https://en.wikipedia.org/wiki/Anthrax%20toxin
Anthrax toxin is a three-protein exotoxin secreted by virulent strains of the bacterium, Bacillus anthracis—the causative agent of anthrax. The toxin was first discovered by Harry Smith in 1954. Anthrax toxin is composed of a cell-binding protein, known as protective antigen (PA), and two enzyme components, called edema factor (EF) and lethal factor (LF). These three protein components act together to impart their physiological effects. Assembled complexes containing the toxin components are endocytosed. In the endosome, the enzymatic components of the toxin translocate into the cytoplasm of a target cell. Once in the cytosol, the enzymatic components of the toxin disrupts various immune cell functions, namely cellular signaling and cell migration. The toxin may even induce cell lysis, as is observed for macrophage cells. Anthrax toxin allows the bacteria to evade the immune system, proliferate, and ultimately kill the host animal. Research on anthrax toxin also provides insight into the generation of macromolecular assemblies, and on protein translocation, pore formation, endocytosis, and other biochemical processes. Bacillus anthracis virulence factors Anthrax is a disease caused by Bacillus anthracis, a spore-forming, Gram positive, rod-shaped bacterium (Fig. 1). The lethality of the disease is caused by the bacterium's two principal virulence factors: (i) the polyglutamic acid capsule, which is anti-phagocytic, and (ii) the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: (i) protective antigen (PA), (ii) edema factor (EF), and (iii) lethal factor (LF). Mechanism of action Anthrax toxin is an A-B toxin. Each individual anthrax toxin protein is nontoxic. Toxic symptoms are not observed when these proteins are injected individually into laboratory animals. The co-injection of PA and EF causes edema, and the co-injection of PA and LF is lethal. The former combination is called edema toxin, and the latte
https://en.wikipedia.org/wiki/Grimm%27s%20conjecture
In number theory, Grimm's conjecture (named after Carl Albert Grimm, 1 April 1926 – 2 January 2018) states that to each element of a set of consecutive composite numbers one can assign a distinct prime that divides it. It was first published in American Mathematical Monthly, 76(1969) 1126-1128. Formal statement If n + 1, n + 2, …, n + k are all composite numbers, then there are k distinct primes p_i such that p_i divides n + i for 1 ≤ i ≤ k. Weaker version A weaker, though still unproven, version of this conjecture states: If there is no prime in the interval [n + 1, n + k], then the product (n + 1)(n + 2)⋯(n + k) has at least k distinct prime divisors. See also Prime gap
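The statement lends itself to a direct computational check: for a run of consecutive composites, search for a system of distinct prime divisors by backtracking. A small, self-contained sketch (trial-division factoring, fine for small n):

```python
def prime_divisors(m):
    """Distinct prime divisors of m by trial division."""
    divisors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            divisors.add(d)
            m //= d
        d += 1
    if m > 1:
        divisors.add(m)
    return divisors

def distinct_prime_assignment(n, k):
    """Try to assign a distinct prime p_i dividing n + i for 1 <= i <= k."""
    candidates = [sorted(prime_divisors(n + i)) for i in range(1, k + 1)]

    def extend(i, used):
        if i == k:
            return []
        for p in candidates[i]:
            if p not in used and (rest := extend(i + 1, used | {p})) is not None:
                return [p] + rest
        return None  # backtrack

    return extend(0, frozenset())

# Example: 90..96 are seven consecutive composites (89 and 97 are prime).
print(distinct_prime_assignment(89, 7))   # -> [2, 7, 23, 31, 47, 5, 3]
```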
https://en.wikipedia.org/wiki/Coleman%E2%80%93Weinberg%20potential
The Coleman–Weinberg model represents quantum electrodynamics of a scalar field in four dimensions. The Lagrangian for the model is where the scalar field is complex, is the electromagnetic field tensor, and the covariant derivative containing the electric charge of the electromagnetic field. Assume that is nonnegative. Then if the mass term is tachyonic, there is a spontaneous breaking of the gauge symmetry at low energies, a variant of the Higgs mechanism. On the other hand, if the squared mass is positive, the vacuum expectation of the field is zero. At the classical level the latter is true also if . However, as was shown by Sidney Coleman and Erick Weinberg, even if the renormalized mass is zero, spontaneous symmetry breaking still happens due to the radiative corrections (this introduces a mass scale into a classically conformal theory - the model has a conformal anomaly). The same can happen in other gauge theories. In the broken phase the fluctuations of the scalar field will manifest themselves as a naturally light Higgs boson - in fact too light to explain the electroweak symmetry breaking in the minimal model, being much lighter than the vector bosons. There are non-minimal models that give more realistic scenarios. Variations of this mechanism have also been proposed for hypothetical spontaneously broken symmetries, including supersymmetry. Equivalently one may say that the model possesses a first-order phase transition as a function of . The model is the four-dimensional analog of the three-dimensional Ginzburg–Landau theory used to explain the properties of superconductors near the phase transition. The three-dimensional version of the Coleman–Weinberg model governs the superconducting phase transition, which can be both first- and second-order, depending on the ratio of the Ginzburg–Landau parameter , with a tricritical point near which separates type I from type II superconductivity. Historically, the order of the superconductin
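In a common normalization (a reconstruction; the coefficient of the quartic term in particular varies by convention and should be checked against the source), the scalar QED Lagrangian of the model reads:

```latex
\[
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
              + (D_{\mu}\phi)^{*} (D^{\mu}\phi)
              - m^{2}\, \phi^{*}\phi
              - \tfrac{\lambda}{6} (\phi^{*}\phi)^{2},
\qquad D_{\mu} = \partial_{\mu} - i e A_{\mu}.
\]
```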
https://en.wikipedia.org/wiki/Extranuclear%20inheritance
Extranuclear inheritance or cytoplasmic inheritance is the transmission of genes that occur outside the nucleus. It is found in most eukaryotes and is commonly known to occur in cytoplasmic organelles such as mitochondria and chloroplasts or from cellular parasites like viruses or bacteria. Organelles Mitochondria are organelles which function to transform energy as a result of cellular respiration. Chloroplasts are organelles which function to produce sugars via photosynthesis in plants and algae. The genes located in mitochondria and chloroplasts are very important for proper cellular function. The mitochondrial DNA and other extranuclear types of DNA replicate independently of the DNA located in the nucleus, which is typically arranged in chromosomes that only replicate one time preceding cellular division. The extranuclear genomes of mitochondria and chloroplasts however replicate independently of cell division. They replicate in response to a cell's increasing energy needs which adjust during that cell's lifespan. Since they replicate independently, genomic recombination of these genomes is rarely found in offspring, contrary to nuclear genomes in which recombination is common. Mitochondrial diseases are inherited from the mother, not from the father. Mitochondria with their mitochondrial DNA are already present in the egg cell before it gets fertilized by a sperm. In many cases of fertilization, the head of the sperm enters the egg cell; leaving its middle part, with its mitochondria, behind. The mitochondrial DNA of the sperm often remains outside the zygote and gets excluded from inheritance. Parasites Extranuclear transmission of viral genomes and symbiotic bacteria is also possible. An example of viral genome transmission is perinatal transmission. This occurs from mother to fetus during the perinatal period, which begins before birth and ends about 1 month after birth. During this time viral material may be passed from mother to child in the bloodst
https://en.wikipedia.org/wiki/The%20Michael%20J.%20Fox%20Foundation
The Michael J. Fox Foundation for Parkinson's Research aims to find a cure for Parkinson's disease (PD) founded in 2000 by Michael J. Fox. It concentrates on funding research and ensuring the development of improved therapies for people with Parkinson's. History Established in 2000 by Canadian actor Michael J. Fox, the foundation has since become the largest non-profit funder of Parkinson's disease research in the world, with more than $1 billion of research projects to date. In 2010, the Foundation launched the first large-scale clinical study on evolution biomarkers of the disease at a cost of $45 million over five years. Research funding The Foundation works towards "translational" research—the work of translating basic scientific discoveries into simple treatments with definition to benefit the estimated five million people living with Parkinson's disease today. The Foundation drives progress by awarding grants to ensure that the most promising research avenues are thoroughly funded, explored and carried forward toward pharmacy shelves. The Foundation's four annually recurring Pipeline Programs aim to speed research along the drug development pipeline. The Pipeline Programs include: Rapid Response Innovation Awards quickly support high-risk, high-reward projects with little to no existing preliminary data, but potential to significantly impact our understanding or treatment of PD (an Edmond J. Safra Core Program for PD Research). Target Validation Awards provide support for work demonstrating whether modulation of a novel biological target has impact in a PD-relevant pre-clinical model — an essential step to the development of potential targeted therapies (an Edmond J. Safra Core Program for PD Research). Clinical Intervention Awards support clinical testing of promising PD therapies that may significantly and fundamentally improve treatment of PD (an Edmond J. Safra Core Program for PD Research). Therapeutics Development Initiative, an industry-exclus
https://en.wikipedia.org/wiki/Economic%20credentialing
Economic credentialing is a term of disapproval used by the American Medical Association (AMA). The association defines the term as "the use of economic criteria unrelated to quality of care or professional competence in determining a physician's qualifications for initial or continuing hospital medical staff membership or privileges." Traditionally, physicians applied for hospital staff membership based on education, medical licensure and a record of quality care. Privileges are requests to perform certain procedures or use certain skills based on training and experience. For example, an obstetrician and a family practitioner might request privileges for both routine deliveries and caesarean sections. Typically an obstetrician could demonstrate enough experience and be granted those privileges. The FP might obtain both procedures or be restricted to routine deliveries only, or none at all, based on hospital policy. As medical costs have increased and reimbursement has declined or been stagnant, both hospitals and physicians have come under increasing financial pressure. One response by physicians has been the formation of specialty hospitals or diagnostic centers with physician ownership. Some hospitals have seen this as a threat to their economic interests and have denied or revoked membership and privileges of the physician owners. External links AMA statement on economic credentialing Medical terminology
https://en.wikipedia.org/wiki/Dog%20odor
Dogs, as with all mammals, have natural odors. Natural dog odor can be unpleasant to dog owners, especially when dogs are kept inside the home, as some people are not used to being exposed to the natural odor of a non-human species living in proximity to them. Dogs may also develop unnatural odors as a result of skin disease or other disorders or may become contaminated with odors from other sources in their environment. Healthy odors All natural dog odors are most prominent near the ears and from the paw pads. Dogs naturally produce secretions, the function of which is to produce scents allowing for individual animal recognition by dogs and other species in the scent-marking of territory. Dogs only produce sweat on areas not covered with fur, such as the nose and paw pads, unlike humans who sweat almost everywhere. However, they do have sweat glands, called apocrine glands, associated with every hair follicle on the body. The exact function of these glands is not known, but they may produce pheromones or chemical signals for communication with other dogs. It is believed that these sweat secretions produce an individual odor signal that is recognizable by other dogs. Dogs also have sweat glands on their noses. These are eccrine glands. When these glands are active, they leave the nose and paw pads slightly moist and help these specialized skin features maintain their functional properties. The odor associated with dog paw pads is much more noticeable on dogs with moist paw pads than on those with dry pads. Dogs also have numerous apocrine glands in their external ear canals. In this location, they are referred to as ceruminous glands. The ear canals also have numerous sebaceous glands. Together, these two sets of glands produce natural ear wax, or cerumen. Micro-organisms live naturally in this material and give the ears a characteristic slightly yeasty odor, even when healthy. When infected, the ears can give off a strong disagreeable smell. It is no
https://en.wikipedia.org/wiki/Interactive%20skeleton-driven%20simulation
Interactive skeleton-driven simulation (or Interactive skeleton-driven dynamic deformations) is a scientific computer simulation technique used to approximate realistic physical deformations of dynamic bodies in real time. It involves using elastic dynamics and mathematical optimizations to decide the body shapes during motion and interaction with forces. It has various applications within realistic simulations for medicine, 3D computer animation and virtual reality. Background Methods for simulating deformation, such as changes of shapes, of dynamic bodies involve intensive calculations, and several models have been developed. Some of these are known as free-form deformation, skeleton-driven deformation, dynamic deformation and anatomical modelling. Skeletal animation is well known in computer animation and 3D character simulation. Because of the computational intensity of the simulation, few interactive systems are available which can realistically simulate dynamic bodies in real time. Being able to interact with such a realistic 3D model would mean that calculations would have to be performed within the constraints of a frame rate which would be acceptable via a user interface. Recent research has been able to build on previously developed models and methods to provide sufficiently efficient and realistic simulations. Potential applications of the technique are broad, from mimicking human facial expressions for simulating a human actor in real time to modelling other cell organisms. Using skeletal constraints and parameterized force to calculate deformations also has the benefit of matching how a single cell has a shaping skeleton, as well as how a larger living organism might have an internal bone skeleton - such as the vertebrae. The generalized external body force simulation makes elasticity calculations more efficient, and means real-time interactions are possible. Basic theory There are several components to such a simulation system: a polygon mesh
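The full method combines elastic dynamics with skeletal control and is beyond a short snippet, but the underlying kinematic idea of skeleton-driven deformation can be sketched with plain linear blend skinning: each mesh vertex is moved by a weighted blend of its bones' transforms. The 2D example below, with made-up vertices, bone poses and weights, illustrates only that kinematic layer, not the dynamic (elastic) model.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two "bones", each a 2D rotation plus translation (made-up poses).
bone_transforms = [
    (rotation(0.0), np.array([0.0, 0.0])),   # bone 0: identity
    (rotation(0.6), np.array([1.0, 0.0])),   # bone 1: bent at the "joint"
]

# A few mesh vertices and their per-bone skinning weights (rows sum to 1).
vertices = np.array([[0.2, 0.1], [0.9, 0.0], [1.5, 0.1]])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

def skin(vertices, weights, bone_transforms):
    # Linear blend skinning: v' = sum_i w_i (R_i v + t_i)
    deformed = np.zeros_like(vertices)
    for i, (R, t) in enumerate(bone_transforms):
        deformed += weights[:, i:i + 1] * (vertices @ R.T + t)
    return deformed

print(skin(vertices, weights, bone_transforms))
```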
https://en.wikipedia.org/wiki/Bur
A bur (also spelled burr) is a seed or dry fruit or infructescence that has hooks or teeth. The main function of the bur is to spread the seeds of the bur plant, often through epizoochory. The hooks of the bur are used to latch onto fur or fabric, enabling the bur which contain seeds to be transported to another location for dispersal. Another use for the spines and hooks are physical protection against herbivores. Their ability to stick to animals and fabrics has shaped their reputation as bothersome. Some other forms of diaspores, such as the stems of certain species of cactus also are covered with thorns and may function as burs. Bur-bearing plants, such as Tribulus terrestris and Xanthium species, are often single-stemmed when growing in dense groups, but branch and spread when growing singly. The number of burs per fruit along with the size and shape can vary largely between different bur plants. Function Containing seeds, burs spread through catching on the fur of passing animals (epizoochory) or machinery as well as by being transported together with water, gravel and grain. The hooks or teeth generally cause irritation, and some species commonly cause gross injury to animals, or expensive damage to clothing or to vehicle tires. Burs serve the plants that bear them in two main ways. Firstly, burs are spinescent and tend to repel some herbivores, much as other spines and prickles do. Secondly, plants with burs rely largely on living agents to disperse their seeds; their burs are mechanisms of seed dispersal by epizoochory (dispersal by attaching to the outside of animals). Spinescent plants repel herbivores mechanically by wounding the herbivore's mouth or digestive system. Moreover, burs' mechanical defence can work alongside the color of the bur that can visually warn off herbivores. Most epizoochorous burs attach to hair on the body or legs of the host animal, but a special class of epizoochorous bur is known as the trample-bur (or trample-bu
https://en.wikipedia.org/wiki/Minor%20physical%20anomalies
Minor physical anomalies (MPAs) are relatively minor (typically painless and, in themselves, harmless) congenital physical abnormalities consisting of features such as low-set ears, single transverse palmar crease, telecanthus, micrognathism, macrocephaly, hypotonia and furrowed tongue. While MPAs may have a genetic basis, they might also be caused by factors in the fetal environment: anoxia, bleeding, or infection. MPAs have been linked to disorders of pregnancy and are thought by some to be a marker for insults to the fetal neural development towards the end of the first trimester. Thus, in the neurodevelopmental literature, they are seen as indirect indications of interferences with brain development. MPAs have been studied in autism, Down syndrome, and in schizophrenia. A 2008 meta-analysis found that MPAs are significantly increased in the autistic population. A 1998 study found that 60% of its schizophrenic sample and 38% of their siblings had 6 or more MPAs (especially in the craniofacial area), while only 5% of the control group showed that many. The most often cited MPA, high arched palate, is described in articles as a microform of a cleft palate. Cleft palates are partly attributable to hypoxia. The vaulted palate caused by nasal obstruction and consequent mouth breathing, without the lateralising effect of the tongue, can produce hypoxia at night. Other MPAs are reported only sporadically. Capillary malformation is induced by RASA1 mutation and can be changed by hypoxia. A study in the American Journal of Psychiatry by Trixler et al.: found hemangiomas to be highly significant in schizophrenia. Exotropia is reported as having low correlation and high significance as well. It can be caused by perinatal hypoxia. See also Incidentaloma Fluctuating asymmetry Developmental theory of crime
https://en.wikipedia.org/wiki/Log%20area%20ratio
Log area ratios (LAR) can be used to represent reflection coefficients (another form for linear prediction coefficients) for transmission over a channel. While not as efficient as line spectral pairs (LSPs), log area ratios are much simpler to compute. Let r_k be the kth reflection coefficient of a filter; the kth LAR is: A_k = log((1 + r_k) / (1 - r_k)). Use of log area ratios has now been mostly replaced by line spectral pairs, but older codecs, such as GSM-FR, use LARs. See also Line spectral pairs Lossy compression algorithms
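A direct numerical reading of the definition, with the conversion in both directions (the natural logarithm is used here; some codec specifications use a base-10 or piecewise-linear approximation instead):

```python
import math

def reflection_to_lar(r):
    # Log area ratio of a reflection coefficient r, with |r| < 1.
    return math.log((1 + r) / (1 - r))

def lar_to_reflection(a):
    # Inverse mapping; equivalent to tanh(a / 2).
    return (math.exp(a) - 1) / (math.exp(a) + 1)

r = 0.8
a = reflection_to_lar(r)
print(a, lar_to_reflection(a))   # ~2.197, then 0.8 again (round trip)
```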
https://en.wikipedia.org/wiki/Blue%E2%80%93white%20screen
The blue–white screen is a screening technique that allows for the rapid and convenient detection of recombinant bacteria in vector-based molecular cloning experiments. This method of screening is usually performed using a suitable bacterial strain, but other organisms such as yeast may also be used. DNA of transformation is ligated into a vector. The vector is then inserted into a competent host cell viable for transformation, which are then grown in the presence of X-gal. Cells transformed with vectors containing recombinant DNA will produce white colonies; cells transformed with non-recombinant plasmids (i.e. only the vector) grow into blue colonies. Background Molecular cloning is one of the most commonly used procedures in molecular biology. A gene of interest may be inserted into a plasmid vector via ligation, and the plasmid is then transformed into Escherichia coli cells. However, not all the plasmids transformed into cells may contain the desired gene insert, and checking each individual colony for the presence of the insert is time-consuming. Therefore, a method for the detection of the insert would be useful for making this procedure less time- and labor-intensive. One of the early methods developed for the detection of insert is blue–white screening which allows for identification of successful products of cloning reactions through the colour of the bacterial colony. The method is based on the principle of α-complementation of the β-galactosidase gene. This phenomenon of α-complementation was first demonstrated in work done by Agnes Ullmann in the laboratory of François Jacob and Jacques Monod, where the function of an inactive mutant β-galactosidase with deleted sequence was shown to be rescued by a fragment of β-galactosidase in which that same sequence, the α-donor peptide, is still intact. Langley et al. showed that the mutant non-functional β-galactosidase was lacking in part of its N-terminus with its residues 11—41 deleted, but it may be
https://en.wikipedia.org/wiki/Crawl%20ratio
Crawl ratio is a term used in the automotive world to describe the highest overall gear ratio that a vehicle can achieve. The gear ratio (also known as the speed ratio) of a gear train is defined as the ratio of the angular velocity of the input gear to the angular velocity of the output gear, so a higher gear ratio implies a larger speed reduction, i.e. the input speed is reduced more at the output. The highest gear ratio is obtained in either first gear or reverse gear, but only first gear is typically considered when discussing crawl ratio. Confusingly, although a better crawl ratio is achieved with a higher gear ratio, it is common to describe a better crawl ratio as a “lower crawl ratio” rather than a “higher crawl ratio”, because it is used for driving at lower speeds. The crawl ratio is aptly named because when a vehicle is driven in the lowest gear (i.e. first gear), it moves the slowest (i.e. crawl speed) at a given engine rpm, and thus produces the highest output torque (i.e. crawl torque) at the road wheels due to conservation of power. Since the crawl ratio represents the total reduction in speed from the engine to the road wheels, it is determined by combining the contributions of the elements along the entire drive train, including the transmission and the differential, both of which typically introduce a certain amount of speed reduction, as illustrated in the sketch below. Since a lower crawl ratio (higher gear ratio) implies a larger output torque at the road wheels, it is desirable for vehicles that need to pull large loads, climb steep inclines, or drive over obstacles on the road or terrain, such as rocks, which is sometimes referred to as rock crawling. Therefore, crawl ratios are most often discussed for large SUVs, trucks and off-road vehicles. Note that tire size (or dimensions of the road wheels) does not affect the gear ratio of a vehicle, and thus using a different size tire on the same vehicle does not affect the torq
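As a rough illustration of how the drive-train stages combine, the following Python sketch multiplies the reduction of each stage to obtain the overall crawl ratio. The three-stage breakdown (transmission first gear, transfer-case low range, axle/differential ratio) and the sample numbers are assumptions chosen for illustration, not figures from any particular vehicle.

```python
def crawl_ratio(first_gear, axle_ratio, transfer_case_low=1.0):
    """Overall engine-to-wheel speed reduction in the lowest gear.

    The total ratio is the product of the reductions of each stage of the
    drive train: transmission first gear, an optional transfer-case low
    range, and the axle (differential) final drive.
    """
    return first_gear * transfer_case_low * axle_ratio

# Illustrative, made-up numbers for an off-road vehicle:
# 4.71:1 first gear, 2.72:1 transfer-case low range, 3.73:1 axle ratio.
print(crawl_ratio(first_gear=4.71, axle_ratio=3.73, transfer_case_low=2.72))  # ≈ 47.8:1
```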
https://en.wikipedia.org/wiki/Compression%20%28geology%29
In geology, the term compression refers to a set of stresses directed toward the center of a rock mass. Compressive strength refers to the maximum compressive stress that can be applied to a material before failure occurs. When the maximum compressive stress is in a horizontal orientation, thrust faulting can occur, resulting in the shortening and thickening of that portion of the crust. When the maximum compressive stress is vertical, a section of rock will often fail in normal faults, horizontally extending and vertically thinning a given layer of rock. Compressive stresses can also result in folding of rocks. Because of the large magnitudes of lithostatic stress in tectonic plates, tectonic-scale deformation is always subjected to net compressive stress. Compressive stresses can result in a number of different features at varying scales, most notably folds and thrust faults. See also Gravitational compression
https://en.wikipedia.org/wiki/Submental%20artery
The submental artery is the largest branch of the facial artery in the neck. It first runs forward under the mouth, then turns upward upon reaching the chin. Anatomy Origin The submental artery is the largest branch of the facial artery in the neck. It arises from the facial artery just as the facial artery exits the submandibular gland. Course and distribution The artery passes anterior-ward upon the mylohyoid muscle, coursing inferior to the body of the mandible and deep to the digastric muscle. Here, the artery supplies adjacent muscles and skin; it also forms anastomoses with the sublingual artery and with the mylohyoid branch of the inferior alveolar artery. Upon reaching the chin, the artery turns superior-ward at the mandibular symphysis to pass over the mandible before dividing into a superficial branch and a deep branch; the two terminal branches are distributed to the chin and lower lip, and form anastomoses with the inferior labial and mental arteries. Branches The superficial branch passes between the integument and depressor labii inferioris, and anastomoses with the inferior labial artery. The deep branch runs between the muscle and the bone, supplies the lip, and anastomoses with the inferior labial artery and the mental branch of the inferior alveolar artery. Additional images