https://en.wikipedia.org/wiki/Mask%20data%20preparation
|
Mask data preparation (MDP), also known as layout post processing, is the procedure of translating a file containing the intended set of polygons from an integrated circuit layout into a set of instructions that a photomask writer can use to generate a physical mask. Typically, amendments and additions to the chip layout are performed to convert the physical layout into data for mask production.
Mask data preparation requires an input file which is in a GDSII or OASIS format, and produces a file that is in a proprietary format specific to the mask writer.
MDP procedures
Although historically converting the physical layout into data for mask production was relatively simple, more recent MDP flows involve several steps:
Chip finishing, which includes custom designations and structures to improve manufacturability of the layout. Examples of the latter are a seal ring and filler structures.
Producing a reticle layout with test patterns and alignment marks.
Layout-to-mask preparation that enhances layout data with graphics operations and adjusts the data to mask production devices. This step includes resolution enhancement technologies (RET), such as optical proximity correction (OPC) or inverse lithography technology (ILT).
Special considerations must also be made in each of these steps to mitigate the negative effects of the enormous amounts of data they can produce; too much data can prevent the mask writer from creating a mask in a reasonable amount of time.
Mask Fracturing
MDP usually involves mask fracturing where complex polygons are translated into simpler shapes, often rectangles and trapezoids, that can be handled by the mask writing hardware. Because mask fracturing is such a common procedure within the whole MDP, the term fracture, used as a noun, is sometimes used inappropriately in place of the term mask data preparation. The term fracture does however accurately describe that sub-proc
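As a toy illustration of fracturing, the sketch below splits a simple rectilinear polygon into axis-aligned rectangles by horizontal slab decomposition. Production MDP tools use far more sophisticated trapezoidal fracturing and emit vendor-specific formats; the function below is only a conceptual sketch with an invented interface:

```python
def fracture(poly):
    """Fracture a simple rectilinear polygon (vertex list, no holes) into
    axis-aligned rectangles using horizontal slab decomposition."""
    n = len(poly)
    # collect vertical edges as (x, ylo, yhi)
    vedges = []
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if x1 == x2:
            vedges.append((x1, min(y1, y2), max(y1, y2)))
    # distinct vertex y-coordinates define the horizontal slabs
    ys = sorted({y for _, ylo, yhi in vedges for y in (ylo, yhi)})
    rects = []
    for ylo, yhi in zip(ys, ys[1:]):
        mid = (ylo + yhi) / 2
        # vertical edges crossing this slab, paired left/right, bound the interior
        xs = sorted(x for x, elo, ehi in vedges if elo < mid < ehi)
        for xl, xr in zip(xs[0::2], xs[1::2]):
            rects.append((xl, ylo, xr, yhi))
    return rects

# An L-shaped polygon fractures into two rectangles
rects = fracture([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)])
```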
|
https://en.wikipedia.org/wiki/Real-time%20polymerase%20chain%20reaction
|
A real-time polymerase chain reaction (real-time PCR, or qPCR when used quantitatively) is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR). It monitors the amplification of a targeted DNA molecule during the PCR (i.e., in real time), not at its end, as in conventional PCR. Real-time PCR can be used quantitatively and semi-quantitatively (i.e., above/below a certain amount of DNA molecules).
Two common methods for the detection of PCR products in real-time PCR are (1) non-specific fluorescent dyes that intercalate with any double-stranded DNA and (2) sequence-specific DNA probes consisting of oligonucleotides that are labelled with a fluorescent reporter, which permits detection only after hybridization of the probe with its complementary sequence.
The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines propose that the abbreviation qPCR be used for quantitative real-time PCR and that RT-qPCR be used for reverse transcription–qPCR. The acronym "RT-PCR" commonly denotes reverse transcription polymerase chain reaction and not real-time PCR, but not all authors adhere to this convention.
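In practice, the quantitative use of qPCR often reduces to cycle-threshold (Ct) arithmetic. The sketch below shows the widely used comparative 2^(-ΔΔCt) (Livak) method; the Ct values are hypothetical and the calculation assumes roughly 100% amplification efficiency (the amplicon doubles each cycle):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative quantification by the 2^-ddCt method.
    Ct = cycle at which fluorescence crosses the detection threshold;
    lower Ct means more starting template."""
    d_ct_sample = ct_target_sample - ct_ref_sample     # normalise to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                 # compare to control condition
    return 2.0 ** (-dd_ct)

# Target crosses threshold 2 cycles earlier (relative to the reference gene)
# in the sample than in the control: a 4-fold higher expression.
ratio = fold_change(25.0, 20.0, 27.0, 20.0)
```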
Background
Cells in all organisms regulate gene expression by turnover of gene transcripts (single-stranded RNA): the amount of an expressed gene in a cell can be measured by the number of copies of an RNA transcript of that gene present in a sample. In order to robustly detect and quantify gene expression from small amounts of RNA, amplification of the gene transcript is necessary. The polymerase chain reaction (PCR) is a common method for amplifying DNA; for RNA-based PCR the RNA sample is first reverse-transcribed to complementary DNA (cDNA) with reverse transcriptase.
In order to amplify small amounts of DNA, the same methodology is used as in conventional PCR using a DNA template, at least one pair of specific primers, deoxyribonucleotide triphosphates, a suitable buffer solution and a thermo
|
https://en.wikipedia.org/wiki/Fillet%20%28mechanics%29
|
In mechanical engineering, a fillet is a rounding of an interior or exterior corner of a part designed in CAD. A bevelled (angled) corner, by contrast, is called a "chamfer". Fillet geometry on an interior corner is concave, whereas a fillet on an exterior corner is convex; exterior fillets are typically referred to as "rounds". Fillets commonly appear on welded, soldered, or brazed joints.
Depending on its geometric modelling kernel, different CAD software products may provide different fillet functionality. Usually fillets can be quickly added to parts in a 3D solid modeller by picking the edges of interest and invoking the function. Smooth edges connecting two simple flat features are generally simple for a computer to create and fast for a human user to specify. The word is pronounced "fill-et", like the fillet in picture framing. Once these features are included in the CAD design of a part, they are often manufactured automatically using computer numerical control.
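The geometry a CAD kernel solves when filleting a planar corner can be sketched directly: given the corner point, the two edge directions and a radius, the tangent points and arc centre follow from the corner's half-angle. The function name and interface below are invented for this illustration:

```python
import math

def fillet_corner(p, a, b, r):
    """Return (tangent point 1, tangent point 2, arc centre) for filleting
    the corner at p with radius r, where the two edges run from p toward
    points a and b."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    lu = math.hypot(ux, uy); ux, uy = ux / lu, uy / lu   # unit direction to a
    vx, vy = b[0] - p[0], b[1] - p[1]
    lv = math.hypot(vx, vy); vx, vy = vx / lv, vy / lv   # unit direction to b
    theta = math.acos(ux * vx + uy * vy)                 # corner opening angle
    t = r / math.tan(theta / 2)                          # corner-to-tangency distance
    d = r / math.sin(theta / 2)                          # corner-to-centre distance
    bx, by = ux + vx, uy + vy                            # bisector direction
    lb = math.hypot(bx, by)
    centre = (p[0] + d * bx / lb, p[1] + d * by / lb)
    t1 = (p[0] + t * ux, p[1] + t * uy)
    t2 = (p[0] + t * vx, p[1] + t * vy)
    return t1, t2, centre

# Right-angle corner at the origin, radius 1: tangency at (1,0) and (0,1),
# arc centre at (1,1).
t1, t2, c = fillet_corner((0, 0), (5, 0), (0, 5), 1.0)
```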
Applications
Stress concentration is a problem in load-bearing mechanical parts; it is reduced by employing fillets at points and along lines of expected high stress. The fillets distribute the stress over a broader area and effectively make the parts more durable and capable of bearing larger loads.
For considerations in aerodynamics, fillets are employed to reduce interference drag where aircraft components such as wings, struts, and other surfaces meet one another.
For manufacturing, concave corners are sometimes filleted to allow the use of round-tipped end mills to cut out an area of a material. This has a cycle time benefit if the round mill is simultaneously being used to mill complex curved surfaces.
Radii are used to eliminate sharp edges that can be easily damaged or that can cause injury when the part is handled.
Terminology
Different design packages use different names for the same operations.
A
|
https://en.wikipedia.org/wiki/Eco-Management%20and%20Audit%20Scheme
|
The Eco-Management and Audit Scheme (EMAS) is a voluntary environmental management instrument, which was developed in 1993 by the European Commission. It enables organisations to assess, manage and continuously improve their environmental performance. The scheme is globally applicable and open to all types of private and public organisations. In order to register with EMAS, organisations must meet the requirements of the EU EMAS Regulation. Currently, more than 4,600 organisations and more than 7,900 sites are EMAS registered.
Regulation: structure
The EU EMAS Regulation entails 52 Articles and 8 Annexes:
Chapter I: General provisions
Chapter II: Registration of organisations
Chapter III: Obligations of registered organisations
Chapter IV: Rules applicable to Competent Bodies
Chapter V: Environmental verifiers
Chapter VI: Accreditation and Licensing Bodies
Chapter VII: Rules applicable to Member States
Chapter VIII: Rules applicable to the Commission
Chapter IX: Final provisions
Annex I: Environmental review
Annex II: Environmental management system requirements (based on EN ISO 14001:2004) and additional issues to be addressed by organisations implementing EMAS
Annex III: Internal environmental audit
Annex IV: Environmental reporting
Annex V: EMAS logo
Annex VI: Information requirements for registration
Annex VII: Environmental verifier's declaration on verification and validation activities
Annex VIII: Correlation table (EMAS II/EMAS III)
Although EMAS is an official EU Regulation, it is binding only for organisations which voluntarily decide to implement the scheme. The EMAS Regulation includes the environmental management system requirements of the international standard for environmental management, ISO 14001, and additional requirements for EMAS registered organisations such as employee engagement, ensuring legal compliance or the publication of an environmental statement. Because of its additional requirements, EMAS is known as the pr
|
https://en.wikipedia.org/wiki/Actinomycetia
|
The Actinomycetia are a class of bacteria.
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
Acidothermales Sen et al. 2014
Actinomycetales Buchanan 1917 (Approved Lists 1980)
Actinopolysporales Goodfellow and Trujillo 2015
Bifidobacteriales Stackebrandt et al. 1997
Catenulisporales Donadio et al. 2015
Cryptosporangiales Nouioui et al. 2018
Frankiales Sen et al. 2014
Geodermatophilales Sen et al. 2014
Glycomycetales Labeda 2015
Jatrophihabitantales Salam et al. 2020
Jiangellales Tang et al. 2015
Kineosporiales Kämpfer 2015
Micrococcales Prévot 1940 (Approved Lists 1980)
Micromonosporales Genilloud 2015
Mycobacteriales Janke 1924 (Approved Lists 1980)
Nakamurellales Sen et al. 2014
Propionibacteriales (Rainey et al. 1997) Patrick and McDowell 2015
Pseudonocardiales Labeda and Goodfellow 2015
Sporichthyales Nouioui et al. 2018
Streptomycetales Cavalier-Smith 2002
Streptosporangiales Goodfellow 2015
Phylogeny
See also
List of bacteria genera
List of bacterial orders
|
https://en.wikipedia.org/wiki/Reynolds%20transport%20theorem
|
In differential calculus, the Reynolds transport theorem (also known as the Leibniz–Reynolds transport theorem), or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics.
Consider integrating f = f(\mathbf{x}, t) over the time-dependent region \Omega(t) that has boundary \partial\Omega(t), then taking the derivative with respect to time:

\frac{d}{dt} \int_{\Omega(t)} f \, dV .

If we wish to move the derivative into the integral, there are two issues: the time dependence of f, and the introduction of and removal of space from \Omega due to its dynamic boundary. Reynolds transport theorem provides the necessary framework.
General form
Reynolds transport theorem can be expressed as follows:

\frac{d}{dt} \int_{\Omega(t)} f \, dV = \int_{\Omega(t)} \frac{\partial f}{\partial t} \, dV + \int_{\partial\Omega(t)} \left( \mathbf{v}_b \cdot \mathbf{n} \right) f \, dA ,

in which \mathbf{n} is the outward-pointing unit normal vector, \mathbf{x} is a point in the region and is the variable of integration, dV and dA are volume and surface elements at \mathbf{x}, and \mathbf{v}_b is the velocity of the area element (not the flow velocity). The function f may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used.
Form for a material element
In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If \Omega(t) is a material element then there is a velocity function \mathbf{v} = \mathbf{v}(\mathbf{x}, t), and the boundary elements obey

\mathbf{v}_b \cdot \mathbf{n} = \mathbf{v} \cdot \mathbf{n} .

This condition may be substituted to obtain:

\frac{d}{dt} \int_{\Omega(t)} f \, dV = \int_{\Omega(t)} \frac{\partial f}{\partial t} \, dV + \int_{\partial\Omega(t)} \left( \mathbf{v} \cdot \mathbf{n} \right) f \, dA .
A special case
If we take \Omega to be constant with respect to time, then \mathbf{v}_b = \mathbf{0} and the identity reduces to

\frac{d}{dt} \int_{\Omega} f \, dV = \int_{\Omega} \frac{\partial f}{\partial t} \, dV ,

as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.)
Interpretation and reduction to one dimension
The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose f is independent of y and
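In one dimension the theorem reduces to the Leibniz integral rule, d/dt ∫_{a(t)}^{b(t)} f(x,t) dx = ∫_{a(t)}^{b(t)} ∂f/∂t dx + f(b(t),t) b'(t) − f(a(t),t) a'(t), which is easy to check numerically. The sketch below compares both sides for a hypothetical integrand and moving boundary (these example functions are not from the article):

```python
def integral(f, a, b, n=10000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f  = lambda x, t: x * t   # integrand (example choice)
ft = lambda x, t: x       # its partial derivative with respect to t
a  = lambda t: 0.0        # lower bound (constant here)
b  = lambda t: t          # moving upper bound

def lhs(t, eps=1e-5):
    """d/dt of the integral, by central finite difference."""
    F = lambda s: integral(lambda x: f(x, s), a(s), b(s))
    return (F(t + eps) - F(t - eps)) / (2 * eps)

def rhs(t, eps=1e-5):
    """Bulk term plus boundary flux terms of the Leibniz rule."""
    bulk = integral(lambda x: ft(x, t), a(t), b(t))
    bdot = (b(t + eps) - b(t - eps)) / (2 * eps)
    adot = (a(t + eps) - a(t - eps)) / (2 * eps)
    return bulk + f(b(t), t) * bdot - f(a(t), t) * adot
```

Here the integral is t^3/2, so both sides should equal 3t^2/2.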
|
https://en.wikipedia.org/wiki/Applied%20Physics%20Letters
|
Applied Physics Letters is a weekly peer-reviewed scientific journal that is published by the American Institute of Physics. Its focus is rapid publication and dissemination of new experimental and theoretical papers regarding applications of physics in all disciplines of science, engineering, and modern technology. Additionally, there is an emphasis on fundamental and new developments which lay the groundwork for fields that are rapidly evolving.
The journal was established in 1962. The editor-in-chief is physicist Lesley F. Cohen of Imperial College London.
Abstracting and indexing
This journal is indexed in the following databases:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
Science Citation Index Expanded
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.0.
|
https://en.wikipedia.org/wiki/Tamga
|
A tamga or tamgha was an abstract seal or stamp used by Eurasian nomads and by cultures influenced by them. The tamga was normally the emblem of a particular tribe, clan or family. They were common among the Eurasian nomads throughout Classical Antiquity and the Middle Ages. As clan and family identifiers, the collection and systematic comparison of tamgas is regarded as providing insight into relations between families, individuals and ethnic groups in the steppe territory.
Similar tamga-like symbols were sometimes adopted by sedentary peoples adjacent to the Pontic–Caspian steppe both in Eastern Europe and Central Asia.
It has been speculated that Turkic tamgas represent one of the sources of the Old Turkic script of the 6th–10th centuries, but since the mid-20th century this hypothesis has been widely rejected as unverifiable.
Tamgas in the steppe tradition
Ancient origins
Tamgas originate in pre-historic times, but their exact usage and development cannot be continuously traced over time. There are, however, symbols represented in rock art that are referred to as tamgas or tamga-like. If they serve to record the presence of individuals at a particular place, they may be functionally equivalent with medieval tamgas.
In the later phases of the Bosporan Kingdom, the ruling dynasty applied personal tamgas, composed of a fragment representing the family and a fragment representing the individual king, apparently in continuation of steppe traditions and in an attempt to consolidate sedentary and nomadic factions within the kingdom.
Turkic peoples
According to Clauson (1972, p. 504f.), Common Turkic tamga means "originally a `brand' or mark of ownership placed on horses, cattle, and other livestock; it became at a very early date something like a European coat of arms or crest, and as such appears at the head of several Türkü and many O[ld] Kir[giz] funerary monuments".
Among modern Turkic peoples, the tamga is a design identifying property or ca
|
https://en.wikipedia.org/wiki/Service%20Data%20Objects
|
Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture.
Relation to other technologies
Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET.
Design
Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them.
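The ideas in this design (a rooted tree of properties, breadth- and depth-first traversal, and change summaries that record old values) can be sketched in Python. This is not the SDO API, only an illustration of the concepts using plain dictionaries:

```python
from collections import deque

def depth_first(node):
    """Yield (property, value) pairs of a nested-dict 'data object', depth-first."""
    for key, val in node.items():
        if isinstance(val, dict):
            yield from depth_first(val)
        else:
            yield key, val

def breadth_first(root):
    """Yield (property, value) pairs level by level."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for key, val in node.items():
            if isinstance(val, dict):
                queue.append(val)
            else:
                yield key, val

class ChangeSummary:
    """Records each property's old value on first change, so a receiver
    can compute the delta without re-reading the whole graph."""
    def __init__(self):
        self.old_values = {}
    def set(self, obj, key, new_value):
        self.old_values.setdefault(key, obj.get(key))
        obj[key] = new_value

# A tiny dynamic data object (hypothetical purchase order)
po = {"id": 1, "customer": {"name": "Ada"}}
cs = ChangeSummary()
cs.set(po, "id", 2)
```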
Developers
The specification has been developed since April 2007 by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia and Software AG within the OASIS Member Section Open CSA. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal industry group.
Implementations
The following SDO products are available:
Rogue Wave Software HydraSDO
Xcalia (for Java and .Net)
Oracle (Data Service Integrator)
IBM (Virtual XML Garden)
IBM (WebSphere Process Server)
There are open source implementations of SDO from:
The Eclipse Persistence Services Project (EclipseLink)
The Apache Tuscany project for Java and C++
The fcl-sdo library included with
|
https://en.wikipedia.org/wiki/Lin%20Hsin%20Hsin
|
Lin Hsin Hsin is an IT inventor, artist, poet and composer from Singapore, deeply rooted in mathematics and information technology.
Early life and education
Lin was born in Singapore. She graduated in mathematics from the University of Singapore and received a postgraduate degree in computer science from Newcastle University, England. She studied music and art in Singapore, printmaking at the University of Ulster, papermaking in Ogawamachi, Japan and paper conservation at the University of Melbourne Conservation Services.
Career
Lin is a digital native. She builds paradigm-shifting, patent-grade inventions, and pens her IT vision in computing, poems, and paintings, reportedly some 20 years ahead of its time.
In 1976, Lin painted "Distillation of an Apple", an oil painting claimed to visualise the construction and use of the Apple computer 7 days before the birth of the Apple computer. In 1977, she painted "The Computer as Architect", an oil painting depicting the power of the computer in architecture. Lin claimed she had never seen nor used a computer-aided design (CAD) system prior to her painting, although commercial CAD systems had been available since the early 1970s.
1988 March: organized the 1st artificial intelligence conference in Singapore
1991 February 1: poem titled "Cellular Phone Galore" predicted the mobile phone and cellular network before the 2G GSM launch of 27 March 1991 (pp. 54–55, "from time to time")
1992: wanted to build a multimedia museum (letter to the National Computer Board, Singapore)
1993 February: predicted the Y2K bug while building a ten-year forecasting model on an IBM i486 PC (Journal of the Asia Pacific Economic Conference (APEC), 1999)
1993 August 21: poem titled "Online Intimacy", on online dating services (p. 235, "Sunny Side Up")
1993 August 23: poem titled "Till Bankrupt Do Us Part", on online shopping and e-commerce (p. 241, "Sunny Side Up")
1994 May, painted "Voices of the Future" – oil painting depicted the wireless and mobile entertainment futu
|
https://en.wikipedia.org/wiki/Knight%20Tyme
|
Knight Tyme is a computer game released for the ZX Spectrum, Amstrad CPC, Commodore 64 and MSX compatibles in 1986. It was published by Mastertronic as part of their Mastertronic Added Dimension label. Two versions of the ZX Spectrum release were published: a full version for the 128K Spectrum (which was published first) and a cut-down version for the 48K Spectrum that removed the music, some graphics and some locations (which was published later).
It was programmed by David Jones and is the third game in the Magic Knight series. The in-game music was written by David Whittaker on the C64 version and Rob Hubbard on the Spectrum and Amstrad versions. Graphics were by Ray Owen.
Plot
Having rescued his friend Gimbal the wizard from a self-inflicted white-out spell, the Magic Knight finds himself transported into the far future aboard the starship USS Pisces. Magic Knight must find a way back to his own time, with the help of the Tyme Guardians, before he is apprehended by the Paradox Police. On board the USS Pisces, the Magic Knight is first not recognized at all by the crew of the ship, and must create an ID Card, which he receives a template of from Derby IV, the ship's main computer. After getting his ID completed, he then takes command of the ship, first arriving at Starbase 1 to refuel the ship. After refueling, the Magic Knight collects the pieces of the Golden Sundial from Monopole, Retreat and Outpost. Returning to the ship with all the pieces of the sundial, he discovers that a time machine has appeared inside the USS Pisces to take him back to his own time.
Gameplay
Gameplay is very similar to Knight Tyme's predecessor, Spellbound. Once again, the game's wide range of commands are carried out using "Windimation", a system whereby text commands are carried out through choosing options in command windows.
The importance of watching Magic Knight's energy level and keeping him from harm is rather different this time around. Whilst Spellbound required the pl
|
https://en.wikipedia.org/wiki/Instituto%20de%20Biologia%20Molecular%20e%20Celular
|
The Institute of Molecular and Cell Biology (IBMC - Instituto de Biologia Molecular e Celular) in Porto, Portugal, was founded in the 1990s as a multidisciplinary research institution in the fields of genetic diseases, infectious diseases and immunology, neuroscience, stress and structural biology.
Most of its investigators are University of Porto faculty, and many also work at the university's two teaching hospitals, as well as at other national biomedical and environmental research institutions, other public and private universities, and a few companies. Bial, a well-known Portuguese pharmaceutical company headquartered in the Porto region, is one of those associated companies.
Its first director and co-founder was Alexandre Quintanilha.
See also
Science and technology in Portugal
External links
Official homepage
|
https://en.wikipedia.org/wiki/Signal%20integrity
|
Signal integrity or SI is a set of measures of the quality of an electrical signal. In digital electronics, a stream of binary values is represented by a voltage (or current) waveform. However, digital signals are fundamentally analog in nature, and all signals are subject to effects such as noise, distortion, and loss. Over short distances and at low bit rates, a simple conductor can transmit this with sufficient fidelity. At high bit rates and over longer distances or through various mediums, various effects can degrade the electrical signal to the point where errors occur and the system or device fails. Signal integrity engineering is the task of analyzing and mitigating these effects. It is an important activity at all levels of electronics packaging and assembly, from internal connections of an integrated circuit (IC), through the package, the printed circuit board (PCB), the backplane, and inter-system connections. While there are some common themes at these various levels, there are also practical considerations, in particular the interconnect flight time versus the bit period, that cause substantial differences in the approach to signal integrity for on-chip connections versus chip-to-chip connections.
Some of the main issues of concern for signal integrity are ringing, crosstalk, ground bounce, distortion, signal loss, and power supply noise.
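Ringing, for example, arises from reflections at impedance discontinuities. A minimal sketch with hypothetical impedance values tallies the successive reflections of a step edge between a mismatched source and load on a lossless line (a lattice-diagram calculation):

```python
def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient at a termination on a z0 line."""
    return (z_load - z0) / (z_load + z0)

def bounce(v_step, z0, z_src, z_load, n=4):
    """Load-end voltage after each of the first n wave arrivals for a step
    of height v_step driven through source impedance z_src into a lossless
    line of characteristic impedance z0 terminated in z_load."""
    g_l = reflection_coefficient(z_load, z0)   # load reflection coefficient
    g_s = reflection_coefficient(z_src, z0)    # source reflection coefficient
    v_launch = v_step * z0 / (z0 + z_src)      # divider at the source end
    v_load, wave = 0.0, v_launch
    levels = []
    for _ in range(n):
        v_load += wave * (1 + g_l)             # incident plus reflected at load
        levels.append(v_load)
        wave *= g_l * g_s                      # round trip back to the load
    return levels

# 1 V step, 50-ohm line, strong mismatches at both ends: the load voltage
# overshoots, undershoots, and rings toward the DC value z_load/(z_src+z_load).
levels = bounce(1.0, 50.0, 10.0, 1000.0, n=60)
```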
History
Signal integrity primarily involves the electrical performance of the wires and other packaging structures used to move signals about within an electronic product. Such performance is a matter of basic physics and as such has remained relatively unchanged since the inception of electronic signaling. The first transatlantic telegraph cable suffered from severe signal integrity problems, and analysis of the problems yielded many of the mathematical tools still used today to analyze signal integrity problems, such as the telegrapher's equations. Products as old as the Western Electric crossbar telephone
|
https://en.wikipedia.org/wiki/Transmission%20disequilibrium%20test
|
The transmission disequilibrium test (TDT) was proposed by Spielman, McGinnis and Ewens (1993) as a family-based association test for the presence of genetic linkage between a genetic marker and a trait. It is an application of McNemar's test.
A distinctive feature of the TDT is that it will detect genetic linkage only in the presence of genetic association.
While genetic association can be caused by population structure, genetic linkage will not be affected, which makes the TDT robust to the presence of population structure.
The case of trios: one affected child per family
Description of the test
We first describe the TDT in the case where families consist of trios (two parents and one affected child). Our description follows the notations used in Spielman, McGinnis & Ewens (1993).
The TDT measures the over-transmission of an allele from heterozygous parents to affected offspring.
The n affected offspring have 2n parents. These can be represented by the transmitted and the non-transmitted alleles, M1 and M2, at some genetic locus. Summarizing the data in a 2 by 2 table gives:

                        Non-transmitted allele
  Transmitted allele    M1       M2       Total
  M1                    a        b        a + b
  M2                    c        d        c + d
  Total                 a + c    b + d    2n
The derivation of the TDT shows that one should only use the heterozygous parents (total number b+c).
The TDT tests whether the proportions b/(b+c) and c/(b+c) are compatible with probabilities (0.5, 0.5).
This hypothesis can be tested using a binomial (asymptotically chi-square) test with one degree of freedom:

\chi^2 = \frac{(b - c)^2}{b + c} .
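A minimal sketch of the computation, using only the transmission counts b and c from heterozygous parents (the p-value uses the normal approximation, so it assumes reasonably large counts):

```python
import math

def tdt(b, c):
    """TDT statistic and asymptotic two-sided p-value.
    b = transmissions of allele M1, c = transmissions of allele M2,
    counted over heterozygous parents only."""
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(Z^2 > s) = erfc(sqrt(s/2))
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# 60 vs 40 transmissions: chi-square = 400/100 = 4.0, p just under 0.05
stat, p = tdt(60, 40)
```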
Outline of the test derivation
A derivation of the test consists of using a population genetics model to obtain the expected proportions for the quantities b and c in the table above. In particular, one can show that under nearly all disease models the expected proportions of b and c are identical. This result motivates the use of a binomial (asymptotically chi-square) test to test whether these proportions are equal.
On the other hand, one can also show that under such models the cell proportions in the table are not in general equal to the products of the corresponding marginal probabilities. A rewording of this statement wou
|
https://en.wikipedia.org/wiki/Poly-Bernoulli%20number
|
In mathematics, poly-Bernoulli numbers, denoted B_n^{(k)}, were defined by M. Kaneko as

\frac{\mathrm{Li}_k(1 - e^{-x})}{1 - e^{-x}} = \sum_{n=0}^{\infty} B_n^{(k)} \frac{x^n}{n!} ,

where Li_k is the polylogarithm. The B_n^{(1)} are the usual Bernoulli numbers.
Moreover, a generalization of the poly-Bernoulli numbers with parameters a, b, c has been defined by means of a similar generating function, where Li is again the polylogarithm.
Kaneko also gave two combinatorial formulas:

B_n^{(k)} = \sum_{m=0}^{n} (-1)^{m+n} \frac{m! \, S(n,m)}{(m+1)^k} ,

B_n^{(-k)} = \sum_{m=0}^{\min(n,k)} (m!)^2 \, S(n+1, m+1) \, S(k+1, m+1) ,

where S(n,m) is the number of ways to partition a size-n set into m non-empty subsets (the Stirling number of the second kind).
A combinatorial interpretation is that the poly-Bernoulli numbers of negative index enumerate the set of n by k (0,1)-matrices uniquely reconstructible from their row and column sums. Also it is the number of open tours by a biased rook on a board (see A329718 for definition).
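The Stirling-number formula and the matrix interpretation can be cross-checked numerically. The sketch below computes B_n^{(k)} exactly with rational arithmetic (the formula used is the Stirling-sum formula above) and brute-forces the count of uniquely-reconstructible (0,1)-matrices for small sizes:

```python
from fractions import Fraction
from functools import lru_cache
from itertools import product
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, m):
    """S(n, m): partitions of an n-set into m non-empty blocks."""
    if n == m:
        return 1
    if n == 0 or m == 0:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def poly_bernoulli(n, k):
    """B_n^{(k)} via the Stirling-sum formula; works for negative k too."""
    return sum(Fraction((-1) ** (m + n) * factorial(m) * stirling2(n, m))
               * Fraction(m + 1) ** (-k)
               for m in range(n + 1))

def unique_01_matrices(n, k):
    """Brute-force count of n-by-k (0,1)-matrices that are uniquely
    determined by their row and column sums."""
    groups = {}
    for bits in product((0, 1), repeat=n * k):
        rows = [bits[i * k:(i + 1) * k] for i in range(n)]
        key = (tuple(map(sum, rows)), tuple(map(sum, zip(*rows))))
        groups[key] = groups.get(key, 0) + 1
    return sum(1 for size in groups.values() if size == 1)
```

For example, of the sixteen 2-by-2 (0,1)-matrices, only the two permutation matrices share their row and column sums, leaving 14 uniquely reconstructible ones, in agreement with B_2^{(-2)} = 14.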
The Poly-Bernoulli number satisfies the following asymptotic:
For a positive integer n and a prime number p, the poly-Bernoulli numbers satisfy

B_n^{(-p)} \equiv 2^n \pmod{p} ,

which can be seen as an analog of Fermat's little theorem. Further, the equation

B_x^{(-n)} + B_y^{(-n)} = B_z^{(-n)}
has no solution for integers x, y, z, n > 2; an analog of Fermat's Last Theorem.
Moreover, just as the Euler numbers are an analogue of the Bernoulli numbers, there is an analogue of the poly-Bernoulli numbers known as the poly-Euler numbers.
See also
Bernoulli numbers
Stirling numbers
Gregory coefficients
Bernoulli polynomials
Bernoulli polynomials of the second kind
Stirling polynomials
|
https://en.wikipedia.org/wiki/Fiber%20to%20the%20x
|
Fiber to the x (FTTX; also spelled "fibre") or fiber in the loop is a generic term for any broadband network architecture using optical fiber to provide all or part of the local loop used for last mile telecommunications. As fiber optic cables are able to carry much more data than copper cables, especially over long distances, copper telephone networks built in the 20th century are being replaced by fiber.
FTTX is a generalization for several configurations of fiber deployment, arranged into two groups: FTTP/FTTH/FTTB (Fiber laid all the way to the premises/home/building) and FTTC/N (fiber laid to the cabinet/node, with copper wires completing the connection).
Residential areas already served by balanced pair distribution plant call for a trade-off between cost and capacity. The closer the fiber head, the higher the cost of construction and the higher the channel capacity. In places not served by metallic facilities, little cost is saved by not running fiber to the home.
Fiber to the x is the key method used to drive next-generation access (NGA), which describes a significant upgrade to the broadband available by making a step change in speed and quality of the service. This is typically thought of as asymmetrical with a download speed of 24 Mbit/s plus and a fast upload speed.
Ofcom have defined super-fast broadband as "broadband products that provide a maximum download speed that is greater than 24 Mbit/s - this threshold is commonly considered to be the maximum speed that can be supported on current generation (copper-based) networks."
A similar network called a hybrid fiber-coaxial (HFC) network is used by cable television operators but is usually not synonymous with "fiber in the loop", although similar advanced services are provided by the HFC networks. Fixed wireless and mobile wireless technologies such as Wi-Fi, WiMAX and 3GPP Long Term Evolution (LTE) are an alternative for providing Internet access.
Definitions
The telecommunications industry differe
|
https://en.wikipedia.org/wiki/Unified%20Display%20Interface
|
Unified Display Interface (UDI) was a digital video interface specification released in 2006 which was based on Digital Visual Interface (DVI). It was intended to be a lower cost implementation while providing compatibility with existing High-Definition Multimedia Interface (HDMI) and DVI displays. Unlike HDMI, which is aimed at high-definition multimedia consumer electronics devices such as television monitors and DVD players, UDI was specifically targeted towards computer monitor and video card manufacturers and did not support the transfer of audio data. A contemporary rival standard, DisplayPort, gained significant industry support starting in 2007 and the UDI specification was abandoned shortly thereafter without having released any products.
Development
On December 20, 2005, the UDI Special Interest Group (UDI SIG) was announced, along with a tentative specification called version 0.8. The group, which worked on refining the specification and promoting the interface, was led by Intel and included Apple Computer, LG, NVIDIA, Samsung, and Silicon Image Inc.
The announcement of UDI lagged the DisplayPort standard by a few months, which had been unveiled by the Video Electronics Standards Association (VESA) in May 2005. DisplayPort was being developed by a rival consortium including ATI Technologies, Samsung, NVIDIA, Dell, Hewlett-Packard, and Molex. Fundamentally, DisplayPort transmits video in packets of data, while the preceding DVI and HDMI standards transmit raw video as a digital signal; UDI took an approach closer to DVI/HDMI. The UDI specification 1.0 was finalized and released in July 2006. The differences between UDI and HDMI were kept to a minimum since both specifications were designed for long-term compatibility. Again, UDI lagged DisplayPort by a few months, which had released its finalized version 1.0 specification in May 2006.
The group changed its title in late 2006 from "special interest group" to "working group" and contemporary pres
|
https://en.wikipedia.org/wiki/Lamium%20amplexicaule
|
Lamium amplexicaule, commonly known as common henbit, or greater henbit, is a species of Lamium native to Europe, Asia and northern Africa.
It is a low-growing annual plant growing to tall, with soft, finely hairy stems. The leaves are opposite, rounded, diameter, with a lobed margin. The flowers are pink to purple, long. The specific name refers to the amplexicaul leaves (leaves grasping the stem).
Description
Henbit is an annual herb with a sprawling habit and short, erect, squarish, lightly hairy stems. It grows to a height of about . The leaves are in opposite pairs, often with long internodes. The lower leaves are stalked and the upper ones stalkless, often fused, and clasping the stems. The blades are hairy and kidney-shaped, with rounded teeth. The flowers are relatively large and form a few-flowered terminal spike with axillary whorls. The calyx is regular with five lobes and closes up after flowering. The corolla is purplish-red, fused into a tube long. The upper lip is convex, long and the lower lip has three lobes, two small side ones and a larger central one long. There are four stamens, two long and two short. The gynoecium has two fused carpels and the fruit is a four-chambered schizocarp.
This plant flowers very early in the spring even in northern areas, and for most of the winter and the early spring in warmer locations such as the Mediterranean region. At times of year when there are not many pollinating insects, the flowers self-pollinate.
Distribution and habitat
Henbit dead-nettle is probably native to the Mediterranean region but has since spread around the world. It is found growing in open areas, gardens, fields and meadows.
It propagates freely by seed and can become a key part of a meadow ecosystem; sometimes entire fields will be reddish-purple with its flowers before spring ploughing. Where common, it is an important nectar and pollen plant for bees, especially honeybees, helping to start the spring build-up.
It is wi
|
https://en.wikipedia.org/wiki/Botanical%20Society%20of%20Britain%20and%20Ireland
|
The Botanical Society of Britain and Ireland (BSBI) is a scientific society for the study of flora, plant distribution and taxonomy relating to Great Britain, Ireland, the Channel Islands and the Isle of Man. The society was founded as the Botanical Society of London in 1836, and became the Botanical Society of the British Isles, eventually changing to its current name in 2013. It includes both professional and amateur members and is the largest organisation devoted to botany in the British Isles. Its history is recounted in David Allen's book The Botanists.
The society publishes handbooks and journals, conducts national surveys and training events, and hosts conferences. It also awards grants and bursaries, sets professional standards (with Field Identification Skills Certificates (FISCs)), and works in an advisory capacity for governments and NGOs.
The society is managed by a council of elected members, and is a Registered Charity in England & Wales (212560) and Scotland (SC038675).
Publications
The BSBI has produced three atlases covering the distribution of vascular plants in the British Isles. The third atlas, Atlas 2020, was published in March 2023.
It publishes a newsletter, BSBI News (ISSN 0309-930X), that is distributed to members three times a year and is available online.
The BSBI published a scientific periodical, New Journal of Botany (formerly Watsonia), which was discontinued in 2017. The journal had a north-western European scope covering vascular plants, their taxonomy, biosystematics, ecology, distribution and conservation, as well as topics of a more general or historical nature. It has been replaced by an online journal, British and Irish Botany.
The society produced the Atlas of the British Flora in 2002, the Vice-county Census Catalogue of the Vascular Plants of Great Britain in 2003, and publishes the BSBI Handbooks series.
Handbook series
The following Handbooks have been produced, with more promised for the future.
Sedges. (3rd edit
|
https://en.wikipedia.org/wiki/Eigenvector%20centrality
|
In graph theory, eigenvector centrality (also called eigencentrality or prestige score) is a measure of the influence of a node in a network. Relative scores are assigned to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes. A high eigenvector score means that a node is connected to many nodes who themselves have high scores.
Google's PageRank and the Katz centrality are variants of the eigenvector centrality.
Using the adjacency matrix to find eigenvector centrality
For a given graph G := (V, E) with |V| vertices, let A = (a_{v,t}) be the adjacency matrix, i.e. a_{v,t} = 1 if vertex v is linked to vertex t, and a_{v,t} = 0 otherwise. The relative centrality score, x_v, of vertex v can be defined as:

x_v = (1/λ) Σ_{t ∈ M(v)} x_t = (1/λ) Σ_{t ∈ V} a_{v,t} x_t

where M(v) is the set of neighbors of v and λ is a constant. With a small rearrangement this can be rewritten in vector notation as the eigenvector equation

A x = λ x
In general, there will be many different eigenvalues λ for which a non-zero eigenvector solution exists. However, the additional requirement that all the entries in the eigenvector be non-negative implies (by the Perron–Frobenius theorem) that only the greatest eigenvalue results in the desired centrality measure. The v-th component of the related eigenvector then gives the relative centrality score of the vertex v in the network. The eigenvector is only defined up to a common factor, so only the ratios of the centralities of the vertices are well defined. To define an absolute score, one must normalise the eigenvector, e.g. such that the sum over all vertices is 1 or the total number of vertices n. Power iteration is one of many eigenvalue algorithms that may be used to find this dominant eigenvector. Furthermore, this can be generalized so that the entries in A can be real numbers representing connection strengths, as in a stochastic matrix.
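Power iteration as described above can be sketched in a few lines. The 4-node undirected example graph below is an illustrative assumption, not taken from the text.

```python
# Power iteration for eigenvector centrality.
# The 4-node undirected example graph is an illustrative assumption.
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
]
n = len(A)
x = [1.0] * n  # start from the all-ones vector
for _ in range(100):
    # multiply by A: each vertex sums the scores of its neighbours
    x = [sum(A[v][t] * x[t] for t in range(n)) for v in range(n)]
    s = sum(x)
    x = [xi / s for xi in x]  # normalise so the scores sum to 1

print([round(xi, 3) for xi in x])  # vertex 1, with the most neighbours, scores highest
```

Repeated multiplication by A amplifies the component along the dominant eigenvector, which the Perron–Frobenius theorem guarantees is non-negative; the per-step normalisation only fixes the arbitrary common factor mentioned above.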
Normalized eigenvector centrality scoring
Google's PageRank is based on the normalized eigenvector
|
https://en.wikipedia.org/wiki/Garde%20manger
|
A garde manger (French) is a cool, well-ventilated area where cold dishes (such as salads, hors d'œuvres, appetizers, canapés, pâtés, and terrines) are prepared and other foods are stored under refrigeration. The person in charge of this area is known as the "garde manger" or "pantry chef". Larger hotels and restaurants may have garde manger staff to perform additional duties, such as creating decorative elements of buffet presentation like ice carving and edible centerpieces.
History
The term originated in pre-Revolutionary France, where large, wealthy households designated a kitchen manager to supervise the use and storage of large amounts of foodstuffs. The term literally means 'keeping to eat'.
The term is also related to the cold rooms inside castles and manor houses where the food was stored. These food storage areas were usually located in the lower levels, since the cool basement-like environment was ideal for storing food. These cold storage areas developed over time into the modern cold kitchen.
Most merchants who worked outside noble manors at this time were associated with a guild, an association of persons of the same trade formed for their mutual aid and protection. Guilds would develop training programs for their members, thereby preserving their knowledge and skills. Charcutiers was the name of a guild that prepared and sold cooked items made from pigs. Through this organization, the methods of preparing hams, bacon, sausages, pâtés, and terrines were preserved. When the guild system was abolished in 1791 following the French Revolution of 1789, garde mangers took on the responsibility for tasks that had formerly been performed by charcutiers, who had difficulty competing with the versatile garde mangers due to the limited range of skills involved.
The position of "butcher" first developed as a specialty within the garde manger kitchen. As both the cost of and demand for animal meats increased, more space was required for the task of fabricating and portioning the raw meats. This increased need for space was due not only
|
https://en.wikipedia.org/wiki/Cauchy%20condensation%20test
|
In mathematics, the Cauchy condensation test, named after Augustin-Louis Cauchy, is a standard convergence test for infinite series. For a non-increasing sequence f(n) of non-negative real numbers, the series Σ_{n=1}^∞ f(n) converges if and only if the "condensed" series Σ_{n=0}^∞ 2^n f(2^n) converges. Moreover, if they converge, the sum of the condensed series is no more than twice as large as the sum of the original.
Estimate
The Cauchy condensation test follows from the stronger estimate,

Σ_{n=1}^∞ f(n) ≤ Σ_{n=0}^∞ 2^n f(2^n) ≤ 2 Σ_{n=1}^∞ f(n),

which should be understood as an inequality of extended real numbers. The essential thrust of a proof follows, patterned after Oresme's proof of the divergence of the harmonic series.
To see the first inequality, the terms of the original series are rebracketed into runs whose lengths are powers of two, and then each run is bounded above by replacing each term by the largest term in that run. That term is always the first one, since by assumption the terms are non-increasing.
To see the second inequality, these two series are again rebracketed into runs of power of two length, but "offset" as shown below, so that the run of which begins with lines up with the end of the run of which ends with , so that the former stays always "ahead" of the latter.
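The two-sided estimate can be checked numerically; the choice f(n) = 1/n² below is an illustrative assumption, any non-increasing non-negative sequence would do.

```python
# Numerical check of the condensation estimate for f(n) = 1/n**2:
#   sum f(n)  <=  sum 2**n * f(2**n)  <=  2 * sum f(n)
f = lambda n: 1.0 / n**2

# partial sum of the original series up to N = 2**20
original = sum(k and f(k) for k in range(1, 2**20 + 1))
# matching partial sum of the condensed series, n = 0..20
condensed = sum(2**n * f(2**n) for n in range(21))

assert original <= condensed <= 2 * original
print(original, condensed)  # roughly 1.6449 (pi**2/6) and 2.0
```

Both series converge here: the condensed one is geometric, Σ 2^n / 4^n = Σ 2^{-n} = 2, and its sum indeed lands between the original sum and twice it.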
Integral comparison
The "condensation" transformation f(n) → 2^n f(2^n) recalls the integral variable substitution x → e^x yielding ∫ f(x) dx → ∫ f(e^x) e^x dx.

Pursuing this idea, the integral test for convergence gives us, in the case of monotone f, that Σ_{n=1}^∞ f(n) converges if and only if ∫_1^∞ f(x) dx converges. The substitution x → 2^x yields the integral (log 2) ∫ 2^x f(2^x) dx, and applying the integral test to the condensed series Σ_{n=0}^∞ 2^n f(2^n) compares that series with the same integral. Therefore, Σ_{n=1}^∞ f(n) converges if and only if Σ_{n=0}^∞ 2^n f(2^n) converges.
Examples
The test can be useful for series where n appears as in a denominator in f. For the most basic example of this sort, the harmonic series Σ_{n=1}^∞ 1/n is transformed into the series Σ_{n=0}^∞ 2^n · (1/2^n) = Σ 1, which clearly diverges.
As a more complex example, take

f(n) = 1 / (n^a (log n)^b).

Here the series definitely converges for a > 1, and diverges for a < 1. When a = 1, the condensation
|
https://en.wikipedia.org/wiki/Design%20for%20manufacturability
|
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product so as to facilitate the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.
Depending on various types of manufacturing processes there are set guidelines for DFM practices. These DFM guidelines help to precisely define various tolerances, rules and common manufacturing checks related to DFM.
While DFM is applicable to the design process, a similar concept called DFSS (design for Six Sigma) is also practiced in many organizations.
For printed circuit boards (PCB)
In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage.
Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving.
As manufacturing companies evolve and automate more and more stages of their processes, these processes tend to become cheaper. DFM is usually used to reduce these costs. For example, if a process can be done automatically by machines (e.g. SMT component placement and soldering), it is likely to be cheaper than doing the same work by hand.
For integrated circuits (IC)
Achieving high-yielding designs in state-of-the-art VLSI technology has become an extremely challenging t
|
https://en.wikipedia.org/wiki/Caustic%20%28optics%29
|
In optics, a caustic or caustic network is the envelope of light rays which have been reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. Therefore, in the photo to the right, caustics can be seen as patches of light or their bright edges. These shapes often have cusp singularities.
Explanation
Concentration of light, especially sunlight, can burn. The word caustic, in fact, comes from the Greek καυστός, burnt, via the Latin causticus, burning.
A common situation where caustics are visible is when light shines on a drinking glass. The glass casts a shadow, but also produces a curved region of bright light. In ideal circumstances (including perfectly parallel rays, as if from a point source at infinity), a nephroid-shaped patch of light can be produced. Rippling caustics are commonly formed when light shines through waves on a body of water.
Another familiar caustic is the rainbow. Scattering of light by raindrops causes different wavelengths of light to be refracted into arcs of differing radius, producing the bow.
Computer graphics
In computer graphics, most modern rendering systems support caustics. Some of them even support volumetric caustics. This is accomplished by raytracing the possible paths of a light beam, accounting for the refraction and reflection. Photon mapping is one implementation of this. Volumetric caustics can also be achieved by volumetric path tracing. Some computer graphic systems work by "forward ray tracing", wherein photons are modeled as coming from a light source and bouncing around the environment according to rules. Caustics are formed in the regions where sufficient photons strike a surface, causing it to be brighter than the average area in the scene. "Backward ray tracing" works in the reverse manner, beginnin
|
https://en.wikipedia.org/wiki/Rubber%20diode
|
In electronics, a rubber diode or V_BE multiplier is a bipolar junction transistor circuit that serves as a voltage reference. It consists of one transistor and two resistors, and the reference voltage across the circuit is determined by the selected resistor values and the base-to-emitter voltage (V_BE) of the transistor. The circuit behaves as a voltage divider, but with the voltage across the base-emitter resistor determined by the forward base-emitter junction voltage.
It is commonly used in the biasing of push-pull output stages of amplifiers, where one benefit is thermal compensation: the temperature-dependent variations in the multiplier's V_BE, approximately -2.2 mV/°C, can be made to match variations occurring in the V_BE of the power transistors by mounting its transistor on the same heat sink. In this context, it is sometimes called a bias servo.
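As a sketch of the divider relation (the component values and the 0.65 V drop below are illustrative assumptions, not from the text): with R1 from collector to base and R2 from base to emitter, and base current neglected, both resistors carry the same current, so the voltage across the whole circuit is the base-emitter voltage scaled by the divider ratio.

```python
# V_BE multiplier sketch: neglecting base current, R1 and R2 carry the same
# current, so V_ref = V_BE * (1 + R1 / R2).  All values are assumed.
V_BE = 0.65              # volts, typical silicon base-emitter drop (assumed)
R1, R2 = 4700.0, 2200.0  # ohms: R1 collector-to-base, R2 base-to-emitter

V_ref = V_BE * (1 + R1 / R2)
# The -2.2 mV/degC drift of V_BE is multiplied by the same ratio:
tempco = -2.2e-3 * (1 + R1 / R2)

print(f"V_ref = {V_ref:.2f} V, tempco = {tempco * 1e3:.1f} mV/degC")
```

This scaling of the temperature coefficient is exactly what lets the multiplier track several series-connected junctions in a push-pull output stage.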
|
https://en.wikipedia.org/wiki/Circulant%20graph
|
In graph theory, a circulant graph is an undirected graph acted on by a cyclic group of symmetries which takes any vertex to any other vertex. It is sometimes called a cyclic graph, but this term has other meanings.
Equivalent definitions
Circulant graphs can be described in several equivalent ways:
The automorphism group of the graph includes a cyclic subgroup that acts transitively on the graph's vertices. In other words, the graph has a graph automorphism, which is a cyclic permutation of its vertices.
The graph has an adjacency matrix that is a circulant matrix.
The vertices of the graph can be numbered from 0 to n - 1 in such a way that, if some two vertices numbered x and (x + d) mod n are adjacent, then every two vertices numbered z and (z + d) mod n are adjacent.
The graph can be drawn (possibly with crossings) so that its vertices lie on the corners of a regular polygon, and every rotational symmetry of the polygon is also a symmetry of the drawing.
The graph is a Cayley graph of a cyclic group.
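The adjacency-matrix and symmetry characterizations above can be checked directly on a small example; the connection set below (n = 8, S = {1, 2}) is an illustrative assumption.

```python
# Adjacency matrix of the circulant graph C_8(1, 2): vertex i is adjacent to
# vertex j when (i - j) mod n or (j - i) mod n lies in the connection set S.
n, S = 8, {1, 2}
A = [[1 if ((i - j) % n in S or (j - i) % n in S) else 0 for j in range(n)]
     for i in range(n)]

# The matrix is circulant: each row is the previous row rotated right by one.
for i in range(1, n):
    assert A[i] == A[i - 1][-1:] + A[i - 1][:-1]

# The rotation v -> (v + 1) mod n preserves adjacency, i.e. it is the cyclic
# automorphism acting transitively on the vertices.
assert all(A[i][j] == A[(i + 1) % n][(j + 1) % n]
           for i in range(n) for j in range(n))
print("circulant checks passed")
```

Because each entry depends only on (i - j) mod n, building the matrix from any single row and rotating it is equivalent; the assertions confirm both equivalent definitions at once.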
Examples
Every cycle graph is a circulant graph, as is every crown graph with vertices.
The Paley graph of order q (where q is a prime number congruent to 1 modulo 4) is a graph in which the vertices are the numbers from 0 to q - 1 and two vertices are adjacent if their difference is a quadratic residue modulo q. Since the presence or absence of an edge depends only on the difference modulo q of two vertex numbers, any Paley graph is a circulant graph.
Every Möbius ladder is a circulant graph, as is every complete graph. A complete bipartite graph is a circulant graph if it has the same number of vertices on both sides of its bipartition.
If two numbers m and n are relatively prime, then the m × n rook's graph (a graph that has a vertex for each square of an m × n chessboard and an edge for each two squares that a chess rook can move between in a single move) is a circulant graph. This is because its symmetries include as a subgroup the cyclic group C_mn = C_m × C_n. More generally, in this case, the tensor product of graphs
|
https://en.wikipedia.org/wiki/Disintegrin
|
Disintegrins are a family of small proteins (45–84 amino acids in length) from viper venoms that function as potent inhibitors of both platelet aggregation and integrin-dependent cell adhesion.
Operation
Disintegrins work by countering the blood clotting steps, inhibiting the clumping of platelets. They interact with the beta-1 and beta-3 families of integrin receptors. Integrins are cell receptors involved in cell–cell and cell–extracellular matrix interactions, serving as the final common pathway leading to aggregation via formation of platelet–platelet bridges, which are essential in thrombosis and haemostasis. Disintegrins contain an RGD (Arg-Gly-Asp) or KGD (Lys-Gly-Asp) sequence motif that binds specifically to integrin IIb-IIIa receptors on the platelet surface, thereby blocking the binding of fibrinogen to the receptor–glycoprotein complex of activated platelets. Disintegrins act as receptor antagonists, inhibiting aggregation induced by ADP, thrombin, platelet-activating factor and collagen. The role of disintegrins in preventing blood coagulation renders them of medical interest, particularly with regard to their use as anti-coagulants.
Types of disintegrin
Disintegrins from different snake species have been characterised: albolabrin, applagin, barbourin, batroxostatin, bitistatin, obtustatin, schistatin, echistatin, elegantin, eristicophin, flavoridin, halysin, kistrin, mojastin (Crotalus scutulatus), rubistatin (Crotalus ruber), tergeminin, salmosin, tzabcanin (Crotalus simus tzabcan) and triflavin.
Disintegrins are split into 5 classes: small, medium, large, dimeric, and snake venom metalloproteinases.
Small Disintegrins: 49-51 amino acids, 4 disulfide bonds
Medium Disintegrins: 70 amino acids, 6 disulfide bonds
Large Disintegrins: 84 amino acids, 7 disulfide bonds
Dimeric Disintegrins: 67 amino acids, 4 intra-chain disulfide bonds
Snake Venom Metalloproteinases: 100 amino acids, 8 disulfide bond
Evolution of disintegrin family
Disintegrins evolved via g
|
https://en.wikipedia.org/wiki/Operation%20Windmill
|
Operation Windmill (OpWml) was the United States Navy's Second Antarctica Developments Project, an exploration and training mission to Antarctica in 1947–1948. This operation was a follow-up to the First Antarctica Development Project known as Operation Highjump. The expedition was commanded by Commander Gerald L. Ketchum, USN, and the flagship of Task Force 39 was the icebreaker USS Burton Island.
Missions during Operation Windmill varied, including supply activities, helicopter reconnaissance of ice floes, scientific surveys, underwater demolition surveys, and convoy exercises. Malcolm Davis collected live animals, such as penguins and leopard seals, for zoological studies.
The icebreaker USS Edisto (AG-89) sailed on 1 November 1947 for the Panama Canal to rendezvous with the Burton Island for the expedition.
See also
List of Antarctic expeditions
Military activity in the Antarctic
|
https://en.wikipedia.org/wiki/Quest%20International
|
Quest International was a major producer of flavors, fragrances and food ingredients with sales of £560 million in 2005 before its acquisition by rival Givaudan. Quest created and marketed flavours and fragrance concepts and solutions for the fast-moving consumer goods industries. With operations in 31 countries, Quest made ingredients for foods, snacks, beverages, personal care, fine fragrances, and home hygiene products.
Quest Flavours and Food Ingredients was headquartered in Naarden, Netherlands; Quest Fragrances was based in Ashford, Kent, UK.
Major competitors included Firmenich, Givaudan, International Flavors and Fragrances and Symrise.
History
Some highlights in the 100-year history of Quest include -
1905 N.V. Chemische Fabriek "Naarden" is established. It is to become widely known simply as "Naarden", the name of the nearby ancient fortress town. In English, the name is Chemical Factory Naarden (CFN). The company starts operations with 14 employees at the site of a former sugar beet factory, making glycerine for South Africa (glycerine was used in a wide range of industries, including explosives, paints, food and drinks). The company soon hits financial problems and for the next few years survives by distilling caraway seeds and other materials for essential oils. The manufacture of aromatic chemicals, from which essences and perfume compounds emerged, becomes the basis of the future industry on which the company was to flourish and become today's Quest International.
1908 Willem A van Dorp joins as manager of CFN at the age of 26. He and his son - also named W A van Dorp - were to have a huge impact on the future of the company by adopting more modern approaches to chemistry.
1910 CFN exhibited at a Brussels fair with an ornate display case of products.
1914 At the outbreak of the first world war, CFN produced 5,000 tons of glycerine a year that was to be exported to Britain, which took over the existing contract to South Africa because of the war. Hollan
|
https://en.wikipedia.org/wiki/Fast%20fission
|
Fast fission is fission that occurs when a heavy atom absorbs a high-energy neutron, called a fast neutron, and splits. Most fissionable materials need thermal neutrons, which move more slowly.
Fast reactors vs. thermal reactors
Fast neutron reactors use fast fission to produce energy, unlike most nuclear reactors. In a conventional reactor, a moderator is needed to slow down the neutrons so that they are more likely to fission atoms. A fast neutron reactor uses fast neutrons, so it does not use a moderator. Moderators may absorb a lot of neutrons in a thermal reactor, and fast fission produces a higher average number of neutrons per fission, so fast reactors have better neutron economy making a plutonium breeder reactor possible. However, a fast neutron reactor must use relatively highly enriched uranium or plutonium so that the neutrons have a better chance of fissioning atoms.
Fissionable but not fissile
Some atoms, notably uranium-238, do not usually undergo fission when struck by slow neutrons, but do split when struck with neutrons of high enough energy. The fast neutrons produced in a hydrogen bomb by fusion of deuterium and tritium have even higher energy than the fast neutrons produced in a nuclear reactor. This makes it possible to increase the yield of any given fusion weapon by the simple expedient of adding layers of cheap natural (or even depleted) uranium. Fast fission of uranium-238 provides a large part of the explosive yield, and fallout, in many designs of hydrogen bomb.
Differences in fission product yield
A graph of fission product yield against the mass number of the fission fragments has two pronounced but fairly flat peaks, at around 90 to 100, and 130 to 140. With thermal neutrons, yields of fission products with mass between the peaks, such as 113mCd, 119mSn, 121mSn, 123Sn, 125Sb, 126Sn, and 127Sb are very low.
The higher the energy of the state that undergoes nuclear fission, the more likely a symmetric fission is, hence as the neutron
|
https://en.wikipedia.org/wiki/The%20Fourth%20Dimension%20%28book%29
|
The Fourth Dimension: Toward a Geometry of Higher Reality (1984) is a popular mathematics book by Rudy Rucker, a Silicon Valley professor of mathematics and computer science. It provides a popular presentation of set theory and four dimensional geometry as well as some mystical implications. A foreword is provided by Martin Gardner and the 200+ illustrations are by David Povilaitis.
The Fourth Dimension: Toward a Geometry of Higher Reality was reprinted in 1985 as the paperback The Fourth Dimension: A Guided Tour of the Higher Universes. It was again reprinted in paperback in 2014 by Dover Publications with its original subtitle.
Like other Rucker books, The Fourth Dimension is dedicated to Edwin Abbott Abbott, author of the novella Flatland.
Synopsis
The Fourth Dimension teaches readers about the concept of a fourth spatial dimension. Several analogies are made to Flatland; in particular, Rucker compares how a square in Flatland would react to a cube in Spaceland to how a cube in Spaceland would react to a hypercube from the fourth dimension.
The book also includes multiple puzzles.
Reception
Kirkus Reviews called it "animated, often amusing", and a "rare treat", but noted that the book eventually leaves mathematical topics behind to focus instead on "mysticism of the all-is-one-one-is-all thinking of an Ouspensky." The Quarterly Review of Biology declared it to be "nice", and "at times (...) enchanting", comparing it to The Tao of Physics.
See also
Hiding in the Mirror, a similar book by Lawrence M. Krauss.
|
https://en.wikipedia.org/wiki/Conservas%20Ramirez
|
Ramirez & Cia (Filhos), SA is a Portuguese producer of canned fish products, such as tuna and sardines with tomato sauce. It also produces other foodstuffs such as canned salads. Manuel Guerreiro Ramirez, great-grandson of the founder Sebastian Ramirez, was the owner until his death in 2022.
Profile and history
The company was founded in 1853. As the first canned fish undertaking in the country, the Vila Real de Santo António plant in the southern Portuguese region of Algarve was the cradle of the sector in Portugal. Applying the principle discovered by Nicholas Appert, later drawn up into the theories of Louis Pasteur, it revolutionized the concept of food conservation in Portugal. At the start of the 20th century, exportation of products began, and the first sardine steam ship, named Nossa Senhora da Encarnação, was launched in 1908. By the 2000s, with 250 employees, it had plants in Leça da Palmeira and Peniche and sold 40 million cans a year, exporting 45% of its production, which is commercialized under its own brands.
Internationalization
Created in 1853, Ramirez is present in the Portuguese market and in 50 international markets, with a range of 55 canned fish varieties and sixteen brands.
Ramirez's internationalization process began at the end of the 19th century, creating brands such as Cocagne in the Benelux countries (exported since 1906). Tomé in the Philippines, Canada, and the United States, Al-Fares in the Arab world, and Gabriel in South Africa are all brands of the Portuguese company. Exportation of Ramirez's canned fish products continues to grow in the markets of Austria, Spain, Belgium, the Netherlands, Luxembourg, France, Brazil, England, Switzerland, South Africa, Canada, USA, Venezuela, Angola, Mozambique, Germany, Israel, Japan, China, and Australia.
Quality certification
Ramirez possesses its own laboratory and complies with the HACCP system. The Ramirez quality control system has been ratified by control departments such
|
https://en.wikipedia.org/wiki/Lead%20castle
|
A lead castle, also called a lead cave or a lead housing, is a structure composed of lead to provide shielding against gamma radiation in a variety of applications in the nuclear industry and other activities which use ionizing radiation.
Applications
Shielding of radioactive materials
Castles are widely used to shield radioactive "sources" (see notes) and radioactive materials, either in the laboratory or plant environment. The purpose of the castle is to shield people from gamma radiation. Lead will not efficiently attenuate neutrons. If an experiment or pilot plant is to be observed, a viewing window of lead glass may be used to give gamma shielding but allow visibility.
Shielding of radiation detectors
Plant radiation detectors that are operating in a high ambient gamma background are sometimes shielded to prevent the background swamping the detector. Such a detector may be looking for alpha and beta particles, and gamma radiation will affect this.
Laboratory or health physics detectors, even if remote from nuclear operations, may require shielding if very low levels of radiation are to be detected. This is the case with, for instance, a scintillation counter measuring low levels of contamination on a swab or sample.
Construction
The castle can be made from individual bricks; usually with interlocking chevron edges to prevent "shine paths" of direct radiation through the gaps. They can also be made from lead produced in bespoke shapes by machining or casting. Such an example would be the annular ring castle commonly used for shielding scintillation counters.
A typical lead brick weighs about ten kilograms. Lead castles can be made of hundreds of bricks and weigh thousands of kilograms, so the floor must be able to withstand a heavy load. It is best to set up on a floor designed to carry the weight, or in the basement of a building built on a concrete slab. If the castle is not put directly on such a floor, it will require a suitably strong structure
|
https://en.wikipedia.org/wiki/Vx32
|
The Vx32 virtual extension environment is an application-level virtual machine implemented as an ordinary user-mode library and designed to run native x86 code. Applications can link with and use Vx32 in order to create safe, OS-independent execution environments, in which to run untrusted plug-ins or other extensions written in any language that compiles to x86 code.
From the host processor's viewpoint, plug-ins running under the Vx32 virtual machine monitor run in the context of the application process itself, but the Vx32 library uses dynamic recompilation to prevent the "guest" plug-in code from accessing memory or jumping to instructions outside its designated sandbox. The Vx32 library redirects any system calls the plug-in makes to the application itself rather than to the host operating system, thereby giving the application exclusive control over the API and security environment in which the plug-in code executes.
Vx32 thus provides an application extension facility comparable in function to the Java virtual machine (JVM) or the Common Language Runtime (CLR), but with less overhead and with the ability to run code written in any language, safe or unsafe. Vx32's primary disadvantage is that it is more difficult to make it run on non-x86 host processors.
Criticism
Critics of Vx32 have pointed to some disadvantages:
Vx32 is closely tied to the IA-32 instruction set, which makes it difficult to use on non-x86 architectures
The IA-32e (AMD64) mode cannot be used by guests (the host can still run in 64-bit mode), because of the use of segmentation which is inherent to Vx32's design
External links
The Vx32 Virtual Extension Environment
Vx32: Lightweight User-level Sandboxing on the x86 - Paper presented at USENIX 2008
9vx - A port of Plan 9 from Bell Labs to vx32.
vx32 for Win32
Virtualization software
Virtualization software for Linux
X86 emulators
|
https://en.wikipedia.org/wiki/Radeon%20R300%20series
|
The R300 GPU, introduced in August 2002 and developed by ATI Technologies, is its third generation of GPU used in Radeon graphics cards. This GPU features 3D acceleration based upon Direct3D 9.0 and OpenGL 2.0, a major improvement in features and performance compared to the preceding R200 design. R300 was the first fully Direct3D 9-capable consumer graphics chip. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs.
The first graphics card using the R300 to be released was the Radeon 9700. It was the first time that ATI marketed its GPU as a Visual Processing Unit (VPU). R300 and its derivatives would form the basis for ATI's consumer and professional product lines for over 3 years.
The integrated graphics processor based upon R300 is the Xpress 200.
Development
ATI had held the lead for a while with the Radeon 8500 but Nvidia retook the performance crown with the launch of the GeForce 4 Ti line. A new high-end refresh part, the 8500XT (R250) was supposedly in the works, ready to compete against NVIDIA's high-end offerings, particularly the top line Ti 4600. Pre-release information listed a 300 MHz core and RAM clock speed for the R250 chip. ATI, perhaps mindful of what had happened to 3dfx when they took focus off their Rampage processor, abandoned it in favor of finishing off their next-generation R300 card. This proved to be a wise move, as it enabled ATI to take the lead in development for the first time instead of trailing NVIDIA. The R300, with its next-generation architecture giving it unprecedented features and performance, would have been superior to any R250 refresh.
The R3xx chip was designed by ATI's West Coast team (formerly ArtX Inc.), and the first product to use it was the Radeon 9700 PRO (internal ATI code name: R300; internal ArtX codename: Khan), launched in August 2002. The architecture of R300 was quite different from its predecessor, Radeon 8500 (R200), in nearly every way. The core of 9700 P
|
https://en.wikipedia.org/wiki/Champ%20%28folklore%29
|
In American folklore, Champ or Champy is the name of a lake monster said to live in Lake Champlain, a -long body of fresh water shared by New York and Vermont, with a portion extending into Quebec, Canada. The legend of the monster is considered a draw for tourism in the Burlington, Vermont and Plattsburgh, New York areas.
History of the legend
Over the years, there have been more than 300 reported sightings of Champ.
The original story is related to Iroquois legends of giant snakes, which the Mohawk named Onyare'kowa.
French cartographer Samuel de Champlain, the founder of Québec and the lake's namesake, is often claimed to be the first European to have sighted Champ, in 1609. The earliest source for this claim is the summer 1970 issue of the magazine Vermont Life. The magazine quoted Champlain as having documented a "20-foot serpent thick as a barrel, and a head like a horse." There is no evidence that Champlain ever said this, although he did document large fish:
The 1878 translation of his journals clarifies that Chaoufaou refers to gar (or gar pike), specifically Lepisosteus osseus (the longnose gar).
An 1819 report in the Plattsburgh Republican, entitled "Cape Ann Serpent on Lake Champlain", reports a "Capt. Crum" sighting an enormous serpentine monster. Crum estimated the monster to have been about 187 feet long and approximately two hundred yards away from him. Despite the great distance, he claimed to have witnessed it being followed by "two large Sturgeon and a Bill-fish" and was able to see that it had three teeth and eyes the color of peeled onions. He also described the monster as having "a belt of red" around its neck and a white star on its forehead.
In 1883, Sheriff Nathan H. Mooney claimed that he had seen a water serpent about "20 rods" (the equivalent of 110 yards in length) from where he was on the shore. He claimed that he was so close that he could see "round white spots inside its mouth" and that "the creature appeared to be about 25 to 30
|
https://en.wikipedia.org/wiki/Shift%20rule
|
The shift rule is a mathematical rule for sequences and series.
Here n and k are natural numbers.
For sequences, the rule states that if (a_n) is a sequence, then it converges if and only if the shifted sequence (a_{n+k}) also converges, and in this case both sequences converge to the same limit.
For series, the rule states that the series Σ_{n=1}^∞ a_n converges to a number if and only if the shifted series Σ_{n=k}^∞ a_n converges.
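In symbols, the two statements can be written as follows (a standard formulation, with k a fixed natural number):

```latex
\lim_{n\to\infty} a_n = L \iff \lim_{n\to\infty} a_{n+k} = L,
\qquad
\sum_{n=1}^{\infty} a_n \text{ converges} \iff \sum_{n=k}^{\infty} a_n \text{ converges}.
```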
|
https://en.wikipedia.org/wiki/Conidiation
|
Conidiation is a biological process in which filamentous fungi reproduce asexually by means of spores. Rhythmic conidiation is the most obvious output of fungal circadian rhythms. Neurospora species are most often used to study this rhythmic conidiation. Physical stimuli, such as light exposure and mechanical injury to the mycelium, trigger conidiation; however, conidiogenesis itself is a holistic response determined by the cell's metabolic state, as influenced by the environment and endogenous biological rhythms.
See also
Conidium
|
https://en.wikipedia.org/wiki/Webisode
|
A webisode (portmanteau of "web" and "episode") is an episode of a series that is distributed as part of a web series or on streaming television. It is available either for download or for streaming, as opposed to first airing on broadcast or cable television. The format can be used as a preview, a promotion, part of a collection of shorts, or a commercial. A webisode may or may not have been broadcast on TV; what defines it is its online distribution on the web, including through video-sharing web sites such as Vimeo or YouTube. While there is no set standard for length, most webisodes are relatively short, ranging from 3 to 15 minutes. A webisode is a single web episode, but collectively webisodes form a web series. The term was first included in Merriam-Webster's Collegiate Dictionary in 2009.
History
Webisodes have become increasingly common in the post-broadcast era, in which audiences are drifting away from free-to-air television. The post-broadcast era has been shaped by new media formats such as the Internet, and contemporary trends indicate that the Internet has become the dominant mechanism for accessing media content. In 2012, the Nielsen Company reported that the number of American households with television access had diminished for the second straight year, showing that viewers are transitioning away from broadcast television. The post-broadcast era is best characterized by a complex mediascape that broadcast television cannot sustain; in its wake, the popularity of webisodes has expanded because the Internet combines interpersonal communication and multimedia elements alongside entertainment programming.
These original web series are a means to monetize this transitional audience and produce new celebrities, both independently on the web and working in accordance to the previous m
|
https://en.wikipedia.org/wiki/Open%20Transport%20Network
|
Open Transport Network (OTN) is a flexible private communication network based on fiber optic technology, manufactured by OTN Systems.
It is a networking technology used in vast, private networks with a great diversity of communication requirements, such as subway systems, pipelines, the mining industry, tunnels and the like. It permits all kinds of applications (video images, different forms of speech and data traffic, information for process management and the like) to be sent transparently over a practically unlimited distance. The system is a mix of Transmission and Access NE, communicating over an optical fiber. The communication protocols include serial protocols (e.g. RS232) as well as telephony (POTS/ISDN), audio, Ethernet, video and video-over-IP (via M-JPEG, MPEG2/4, H.264 or DVB).
Open Transport Network is a brand name and is not to be confused with Optical Transport Network.
Concept
The basic building block of OTN is called a node. It is a 19" frame that houses and interconnects the building blocks that produce the OTN functionality. Core building blocks are the power supply and the optical ring adapter, called BORA (Broadband Optical Ring Adapter). The remaining node space can be configured with up to 8 (different) layer 1 interfaces as required.
OTN nodes are interconnected using pluggable optical fibers in a dual counterrotating ring topology. The primary ring consists of fibers carrying data from node to node in one direction, the secondary ring runs parallel with the primary ring but carries data in the opposite direction. Under normal circumstances, only one ring carries active data. If a failure is detected in this data path, the secondary ring is activated. This hot standby topology results in a 1 + 1 path redundancy. The switchover mechanism is hardware based and results in ultrafast (50ms) switchover without service loss.
Virtual bidirectional point-to-point or point-to-multipoint connections (se
|
https://en.wikipedia.org/wiki/EDA%20database
|
An EDA database is a database specialized for the purpose of electronic design automation. These application specific databases are required because general purpose databases have historically not provided enough performance for EDA applications.
In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database, and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such as parameterized cells and user scripts). On top of the database are built the algorithmic engines within the tool (such as timing, placement, routing, or simulation engines), and the highest level represents the applications built from these component blocks, such as floorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such as Verilog and GDSII.
Mature design databases
Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools as well as proprietary EDA tools developed by the CAD groups of major electronics companies.
IBM, Hewlett-Packard, SDA Systems and ECAD (now Cadence Design Systems), High Level Design Systems, and many other companies developed EDA specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pur
|
https://en.wikipedia.org/wiki/Algebra%20of%20communicating%20processes
|
The algebra of communicating processes (ACP) is an algebraic approach to reasoning about concurrent systems. It is a member of the family of mathematical theories of concurrency known as process algebras or process calculi. ACP was initially developed by Jan Bergstra and Jan Willem Klop in 1982, as part of an effort to investigate the solutions of unguarded recursive equations. More so than the other seminal process calculi (CCS and CSP), the development of ACP focused on the algebra of processes, and sought to create an abstract, generalized axiomatic system for processes, and in fact the term process algebra was coined during the research that led to ACP.
Informal description
ACP is fundamentally an algebra, in the sense of universal algebra. This algebra is a way to describe systems in terms of algebraic process expressions that define compositions of other processes, or of certain primitive elements.
Primitives
ACP uses instantaneous, atomic actions (a, b, c, ...) as its primitives. Some actions have special meaning, such as the action δ, which represents deadlock or stagnation, and the action τ, which represents a silent action (an abstracted action that has no specific identity).
Algebraic operators
Actions can be combined to form processes using a variety of operators. These operators can be roughly categorized as providing a basic process algebra, concurrency, and communication.
Choice and sequencing – the most fundamental of the algebraic operators are the alternative operator (+), which provides a choice between actions, and the sequencing operator (·), which specifies an ordering on actions. So, for example, the process
(a + b) · c
first chooses to perform either a or b, and then performs action c. How the choice between a and b is made does not matter and is left unspecified. Note that alternative composition is commutative but sequential composition is not (because time flows forward).
Concurrency – to allow the description of concurrency, ACP provides the merge and left-merge
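The trace behaviour of the choice and sequencing operators above can be sketched with a toy interpreter (purely illustrative: the term encoding and function names are hypothetical, and the concurrency and communication operators are omitted):

```python
from itertools import product

def traces(p):
    """Enumerate the finite traces of a toy process term.
    Terms: ('act', name) | ('alt', p, q) | ('seq', p, q)."""
    if p[0] == 'act':
        return {(p[1],)}
    if p[0] == 'alt':                      # alternative composition: either branch may run
        return traces(p[1]) | traces(p[2])
    if p[0] == 'seq':                      # sequential composition: concatenate traces
        return {s + t for s, t in product(traces(p[1]), traces(p[2]))}
    raise ValueError(p)

# (a + b) . c : first perform a or b, then c
expr = ('seq', ('alt', ('act', 'a'), ('act', 'b')), ('act', 'c'))
print(sorted(traces(expr)))   # [('a', 'c'), ('b', 'c')]
```

The same toy model also shows that alternative composition is commutative while sequential composition is not: swapping the branches of an `alt` leaves the trace set unchanged, while swapping the operands of a `seq` generally does not.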
|
https://en.wikipedia.org/wiki/Nord-100
|
The Nord-100 was a 16-bit minicomputer series made by Norsk Data, introduced in 1979. It shipped with the Sintran III operating system, and the architecture was based on, and backward compatible with, the Nord-10 line.
The Nord-100 was originally named the Nord-10/M (M for Micro) as a bit sliced OEM processor. The board was laid out, finished, and tested when they realized that the central processing unit (CPU) was far faster than the Nord-10/S. The result was that all the marketing material for the new NORD-10/M was discarded, the board was rechristened the Nord-100, and extensively advertised as the successor of the Nord-10 line. Later, in an effort to internationalize their line, the machine was renamed ND-100.
Performance
CPU
The ND-100 line used a custom processor, and like the PDP-11 line, the CPU decided the name of the computer.
Nord-100/CE, Commercial Extended, with decimal arithmetic instructions (The decimal instruction set was later renamed CX)
ND-110, incrementally improved ND-100
ND-110/CX, an ND-110 with decimal instructions
ND-120/CX, full redesign
The ND-100 line was machine-instruction compatible with the Nord-10 line, except for some extended instructions, all in supervisor mode, mostly used by the operating system. Like most processors of its time, the native bit grouping was octal, despite the 16-bit word length.
The ND-100 series had a microcoded CPU, with downloadable microcode, and was considered a complex instruction set computer (CISC) processor.
ND-100
The ND-100 was implemented using medium-scale integration (MSI) logic and bit-slice processors.
The ND-100 was frequently sold together with a memory management unit card, the MMS. The combined power use of these boards was 90 watts. The boards would usually occupy slots 2 and 3, for the CPU and MMS, respectively. Slot 1 was reserved for the Tracer, a hardware debugger system.
ND-100/CE
The CE stood for Commercial Extended. The processor was upgraded by replacing the microcod
|
https://en.wikipedia.org/wiki/ND-500
|
The ND-500 was a 32-bit superminicomputer delivered in 1981 by Norsk Data. It relied on an ND-100 to do housekeeping tasks and run the OS, SINTRAN III. A configuration could feature up to four ND-500 CPUs in a shared-memory configuration.
Hardware implementations
The ND-500 architecture lived through four distinct implementations. Each implementation was sold under a variety of different model numbers.
ND also sold multiprocessor configurations, naming them ND-580/n and ND-590/n, where n represented the number of CPUs in a given configuration: 2, 3, or 4.
ND-500/1
Sold as the ND-500, ND-520, ND-540, and ND-560.
ND-500/2
Sold as the ND-570, ND-570/CX, and ND-570/ACX.
ND-505
A 28-bit version of the ND-500 machine. Pins were snipped on the backplane, removing its status as a superminicomputer, allowing it to legally pass through the CoCom embargo.
Samson
Sold as the ND-5200, ND-5400, ND-5500, ND-5700, and ND-5800. The ND-120 CPU line, which constituted the ND-100 side of most ND-5000 computers, was named Delilah. As the 5000 line progressed in speed, the dual-arch ND-100/500 configuration increasingly became bottlenecked by all input/output (I/O) having to go through the ND-100.
Rallar
Sold as the ND-5830 and ND-5850. The Rallar processor consisted of two main VLSI gate arrays, KUSK (En: Jockey) and GAMP (En: Horse).
Software
LED was a programmer's source-code editor by Norsk Data running on the ND-500 computers running Sintran III. It featured automatic indenting, pretty-printing of source code, and integration with the compiler environment. It was sold as an advanced alternative to PED. Several copies exist, and it is installed on the NODAF public access ND-5700.
In 1982–83, Logica PLC in London undertook a project, on behalf of ND, to port the Unix Berkeley Software Distribution (BSD) 4.2 to the ND-500. A C compiler from Luleå University College in northern Sweden was used. The goal was to port BSD Unix to the ND-500 and use the ND-100 running Sintran-III as t
|
https://en.wikipedia.org/wiki/Anonymous%20pipe
|
In computer science, an anonymous pipe is a simplex FIFO communication channel that may be used for one-way interprocess communication (IPC). An implementation is often integrated into the operating system's file IO subsystem. Typically a parent program opens anonymous pipes, and creates a new process that inherits the other ends of the pipes, or creates several new processes and arranges them in a pipeline.
Full-duplex (two-way) communication normally requires two anonymous pipes.
Pipelines are supported in most popular operating systems, from Unix and DOS onwards, and are created using the "|" character in many shells.
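As an illustration of the shell's "|" mechanism, a two-stage pipeline equivalent to `echo hello | tr a-z A-Z` can also be built programmatically. This sketch uses Python's subprocess module and assumes a Unix-like system with `tr` available:

```python
import subprocess

# Stage 1: produce data on stdout.
p1 = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)

# Stage 2: its stdin is the read end of stage 1's anonymous pipe.
p2 = subprocess.Popen(['tr', 'a-z', 'A-Z'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)

p1.stdout.close()            # so p2 sees EOF once p1 exits
out, _ = p2.communicate()
print(out.decode().strip())  # HELLO
```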
Unix
Pipelines are an important part of many traditional Unix applications and support for them is well integrated into most Unix-like operating systems. Pipes are created using the pipe system call, which creates a new pipe and returns a pair of file descriptors referring to the read and write ends of the pipe. Many traditional Unix programs are designed as filters to work with pipes.
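The system call described above can be exercised directly from Python, whose os.pipe is a thin wrapper around it; as a sketch, both ends of the pipe are used within a single process here:

```python
import os

r, w = os.pipe()             # returns (read_fd, write_fd)
os.write(w, b'hello, pipe')  # data is buffered by the kernel
os.close(w)                  # closing the write end lets the reader see EOF
data = os.read(r, 1024)
os.close(r)
print(data)                  # b'hello, pipe'
```

In a real filter pipeline the two descriptors would be inherited by different processes, typically via fork and exec.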
Microsoft Windows
Like many other device IO and IPC facilities in the Windows API, anonymous pipes are created and configured with API functions that are specific to the IO facility. In this case CreatePipe is used to create an anonymous pipe with separate handles for the read and write ends of the pipe. Read and write IO operations on the pipe are performed with the standard IO facility API functions ReadFile and WriteFile.
On Microsoft Windows, reads and writes to anonymous pipes are always blocking. In other words, a read from an empty pipe will cause the calling thread to wait until at least one byte becomes available or an end-of-file is received as a result of the write handle of the pipe being closed. Likewise, a write to a full pipe will cause the calling thread to wait until space becomes available to store the data being written. Reads may return with fewer than the number of bytes requested (also called a short read).
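A short read is easy to demonstrate with the POSIX analogue of these calls (a sketch; the Windows anonymous-pipe semantics described above match it in this respect):

```python
import os

r, w = os.pipe()
os.write(w, b'abc')          # only 3 bytes are available in the pipe
chunk = os.read(r, 1024)     # ask for up to 1024 bytes
print(len(chunk))            # 3 -> a "short read"
os.close(r)
os.close(w)
```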
New processes can inherit hand
|
https://en.wikipedia.org/wiki/Multi-core%20processor
|
A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores (for example, dual-core or quad-core), each of which reads and executes program instructions. The instructions are ordinary CPU instructions (such as add, move data, and branch) but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP) or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core.
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. big.LITTLE have heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core count goes up to even dozens, and for specialized chips over 10,000, and in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements total in addition to h
|
https://en.wikipedia.org/wiki/Alpha%20Microsystems
|
Alpha Microsystems, Inc., often shortened to Alpha Micro, was an American computer company founded in 1977 in Costa Mesa, California, by John French, Dick Wilcox and Bob Hitchcock. During the dot-com boom, the company changed its name to AlphaServ, then NQL Inc., reflecting its pivot toward being a provider of Internet software. However, the company soon reverted to its original Alpha Microsystems name after the dot-com bubble burst.
Products
The first Alpha Micro computer was the S-100 AM-100, based upon the WD16 microprocessor chipset from Western Digital. As of 1982, AM-100/L and the AM-1000 were based on the Motorola 68000 and succeeding processors, though Alpha Micro swapped several addressing lines to create byte-ordering compatibility with their earlier processor.
Early peripherals included standard computer terminals (such models as Soroc, Hazeltine 1500, and Wyse WY50), Fortran punch card readers, 100 baud rate acoustic coupler modems (later upgraded to 300 baud modems), and 10 MB CDC Hawk hard drives with removable disk packs.
The company's primary claim to fame was selling inexpensive minicomputers that provided multi-user power using a proprietary operating system called AMOS (Alpha Micro Operating System). The operating system on the 68000 machines was called AMOS/L. The operating system had major similarities to the operating system of the DEC DECsystem-10. This may not be coincidental; legend has it that the founders based their operating system on "borrowed" source code from DEC, and DEC, perceiving the same, unsuccessfully tried to sue Alpha Micro over the similarities in 1984.
As Motorola stopped developing their 68000 product, Alpha Micro started to move to the x86 CPU family, used in common PCs. This was initially done with the Falcon cards, allowing standard DOS and later Windows-based PCs to run AMOS applications on the 68000-series CPU on the Falcon card. The work done on AMPC became the fo
|
https://en.wikipedia.org/wiki/%E2%86%92
|
→ or -> may refer to:
one of the arrow symbols, characters of Unicode
one of the arrow keys, on a keyboard
→, >, representing the assignment operator in various programming languages
->, a pointer operator in C and C++, where a->b is synonymous with (*a).b (except when -> or * has been overloaded in C++)
→, goto in the APL programming language
→, representing the direction of a chemical reaction in a chemical equation
→, representing the set of all mathematical functions that map from one set to another in set theory
→, representing a material implication in logic
→, representing morphism in category theory
→, representing a vector in physics and mathematics
the relative direction of right or forward
→, a notation of Conway chained arrow notation for very large integers
"Due to" (and other meanings), in medical notation
the button that starts playback of a recording on a media player
See also
Arrow (disambiguation)
↑ (disambiguation)
↓ (disambiguation)
← (disambiguation)
"Harpoons":
↼
↽
↾
↿
⇀
⇁
⇂
⇃
⇋
⇌
Logic symbols
|
https://en.wikipedia.org/wiki/Congruum
|
In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares.
That is, if x², y², and z² (for integers x, y, and z) are three square numbers that are equally spaced apart from each other, then the spacing between them, y² − x² (equivalently z² − y²), is called a congruum.
The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua. It can be formalized as a Diophantine equation: find integers x, y, and z such that
y² − x² = z² − y².
When this equation is satisfied, both sides of the equation equal the congruum.
Fibonacci solved the congruum problem by finding a parameterized formula for generating all congrua, together with their associated arithmetic progressions. According to this formula, each congruum is four times the area of a Pythagorean triangle. Congrua are also closely connected with congruent numbers: every congruum is a congruent number, and every congruent number is a congruum multiplied by the square of a rational number.
Examples
As an example, the number 96 is a congruum because it is the difference between adjacent squares in the sequence 4, 100, and 196 (the squares of 2, 10, and 14 respectively).
The first few congrua are:
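The missing list can be recovered by brute-force search over three-term arithmetic progressions of squares; this sketch also checks Fibonacci's parameterization, under the commonly stated form that the congruum with parameters m > n is 4mn(m² − n²):

```python
from math import isqrt

def congrua(limit):
    """Congrua up to `limit`: spacings d such that x^2, x^2 + d,
    and x^2 + 2d are all perfect squares for some integer x."""
    found = set()
    y = 2
    while 2 * y - 1 <= limit:      # smallest possible spacing for middle square y^2
        for x in range(1, y):
            d = y * y - x * x      # spacing between the first two squares
            if d > limit:
                continue
            z2 = y * y + d         # candidate third square
            if isqrt(z2) ** 2 == z2:
                found.add(d)
        y += 1
    return sorted(found)

print(congrua(600))                # starts 24, 96, 120, ...
m, n = 3, 1
print(4 * m * n * (m * m - n * n)) # 96, the congruum of the example above
```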
History
The congruum problem was originally posed in 1225, as part of a mathematical tournament held by Frederick II, Holy Roman Emperor, and answered correctly at that time by Fibonacci, who recorded his work on this problem in his Book of Squares.
Fibonacci was already aware that it is impossible for a congruum to itself be a square, but did not give a satisfactory proof of this fact. Geometrically, this means that it is not possible for the pair of legs of a Pythagorean triangle to be the leg and hypotenuse of another Pythagorean triangle. A proof was eventually given by Pierre de Fermat, and the result is now known as Fermat's right triangle theorem. Fermat also conjectured, and Leonhard Euler proved, that there is no sequence of four squares in
|
https://en.wikipedia.org/wiki/Differential%20entropy
|
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, but rather just assumed it was the correct continuous analogue of discrete entropy; it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy (described here) is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.
In terms of measure theory, the differential entropy of a probability measure is the negative relative entropy from that measure to the Lebesgue measure, where the latter is treated as if it were a probability measure, despite being unnormalized.
Definition
Let X be a random variable with a probability density function f whose support is a set 𝒳. The differential entropy h(X) or h(f) is defined as
h(X) = −∫_𝒳 f(x) log f(x) dx.
For probability distributions which do not have an explicit density function expression, but do have an explicit quantile function expression Q(p), h(Q) can be defined in terms of the derivative of Q(p), i.e. the quantile density function q(p) = dQ(p)/dp, as
h(Q) = ∫₀¹ log q(p) dp.
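As a standard worked example (stated here as a sketch): for a uniform distribution on the interval [0, a], the density is f(x) = 1/a, so

```latex
h(X) = -\int_0^a \frac{1}{a} \log\frac{1}{a}\, dx = \log a ,
```

which, unlike discrete entropy, is negative whenever a < 1, and which grows by log 1000 if the same quantity is re-expressed in units a thousand times smaller.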
As with its discrete analog, the units of differential entropy depend on the base of the logarithm, which is usually 2 (i.e., the units are bits). See logarithmic units for logarithms taken in different bases. Related concepts such as joint, conditional differential entropy, and relative entropy are defined in a similar fashion. Unlike the discrete analog, the differential entropy has an offset that depends on the units used to measure . For example, the differential entropy of a quantity measured in millimeters will be more than the same quantity measured in meters; a dimensionless quantity will have differential entropy of more than the s
|
https://en.wikipedia.org/wiki/Transylvania%20lottery
|
In mathematical combinatorics, the Transylvania lottery is a lottery in which players select three numbers from 1 to 14 for each ticket, and then three numbers are drawn at random. A ticket wins if at least two of its numbers match the drawn ones. The problem asks how many tickets a player must buy in order to be certain of winning.
An upper bound can be given using the Fano plane, with a collection of 14 tickets in two sets of seven. Each set of seven tickets uses every line of a Fano plane, with points labelled 1 to 7 in the first set and 8 to 14 in the second.
At least two of the three randomly chosen numbers must be in one Fano plane set, and any two points on a Fano plane are on a line, so there will be a ticket in the collection containing those two numbers. There is a (6/13)*(5/12)=5/26 chance that all three randomly chosen numbers are in the same Fano plane set. In this case, there is a 1/5 chance that they are on a line, and hence all three numbers are on one ticket, otherwise each of the three pairs are on three different tickets.
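The covering argument can be checked exhaustively; this sketch builds the 14 tickets from the 7 lines of a Fano plane (one common labelling of the points) and verifies that every possible draw is matched:

```python
from itertools import combinations

# Lines of a Fano plane on points 1..7 (one standard labelling:
# every pair of points lies on exactly one line).
FANO = [(1, 2, 3), (1, 4, 5), (1, 6, 7),
        (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]

# 14 tickets: one Fano plane on 1..7, another shifted onto 8..14.
tickets = [set(line) for line in FANO] + [{p + 7 for p in line} for line in FANO]

# Every draw of three numbers from 1..14 must share at least two
# numbers with some ticket (pigeonhole puts two draw numbers in one
# half; any two points of a Fano plane lie on a line).
draws = list(combinations(range(1, 15), 3))
assert all(any(len(set(draw) & t) >= 2 for t in tickets) for draw in draws)
print('all', len(draws), 'draws covered')  # all 364 draws covered
```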
See also
Combinatorial design
Lottery Wheeling
|
https://en.wikipedia.org/wiki/IStock
|
iStock is an online royalty-free, international microstock photography provider based in Calgary, Alberta, Canada. The firm offers millions of photos, illustrations, clip art, videos and audio tracks. Artists, designers and photographers worldwide contribute their work to iStock collections in return for royalties. Nearly half a million new photos, illustrations, videos and audio files are added each month.
History
The company was founded by Bruce Livingstone in May 2000, as iStockphoto, a free stock imagery website supported by Livingstone's web development firm, Evolvs Media. iStock pioneered the crowd-sourced stock industry and became the original source for user-generated stock photos, vectors and illustrations, and video clips. It began charging money in 2001 and quickly became profitable.
On February 9, 2006, the firm was acquired by Getty Images for $50 million USD. Livingstone promised that the site would continue "functioning independently with the benefits of Getty Images, yet, very importantly for them and us, autonomy."
On September 18, 2006, the site experienced the first benefits of the new ownership: a controlled-vocabulary keyword taxonomy borrowed from Getty Images.
iStockpro, a more expensive version of iStockphoto that was never as popular and became redundant after the acquisition by Getty Images, was closed.
On April 1, 2008, Getty Images disclosed, as part of its agreement to be sold to a private equity firm, that iStockphoto's revenue in 2007 was $71.9 million USD of which $20.9 million (29%) was paid to contributors.
Founder and CEO Livingstone left iStockphoto in 2009. He went on to co-found competitor Stocksy United in 2013.
In 2013, iStockphoto was rebranded as iStock by Getty Images, removing the word 'photo' to convey that the company offers stock media other than just photography, such as vector illustrations, audio, and video.
In 2020, iStock began offering weekly complimentary stock photos from it
|
https://en.wikipedia.org/wiki/Game%20server
|
A game server (also sometimes referred to as a host) is a server which is the authoritative source of events in a multiplayer video game. The server transmits enough data about its internal state to allow its connected clients to maintain their own accurate version of the game world for display to players. It also receives and processes each player's input.
Types
Dedicated server
Dedicated servers simulate game worlds without supporting direct input or output, except that required for their administration. Players must connect to the server with separate client programs in order to see and interact with the game.
The foremost advantage of dedicated servers is their suitability for hosting in professional data centers, with all of the reliability and performance benefits that entails. Remote hosting also eliminates the low-latency advantage that would otherwise be held by any player who hosts and connects to a server from the same machine or local network.
Dedicated servers cost money to run, however. Cost is sometimes met by a game's developers (particularly on consoles) and sometimes by clan groups, but in either case, the public is reliant on third parties providing servers to connect to. For this reason, most games which use dedicated servers also provide listen server support. Players of these games will oftentimes host servers for the public and their clans, either by hosting a server instance from their own hardware, or by renting from a game server hosting provider.
Listen server
Listen servers run in the same process as a game client. They otherwise function like dedicated servers, but typically have the disadvantage of having to communicate with remote players over the residential internet connection of the hosting player. Performance is also reduced by the simple fact that the machine running the server is also generating an output image. Furthermore, listen servers grant anyone playing on them directly a large latency advantage over other players an
|
https://en.wikipedia.org/wiki/Loop%20maintenance%20operations%20system
|
The Loop Maintenance Operations System (LMOS) is a telephone company trouble ticketing system to coordinate repairs of local loops (telephone lines). When a problem is reported by a subscriber, it is filed and relayed through the Cross Front End, which is a link from the CRSAB (Centralized Repair Service Answering Bureau) to the LMOS network. The trouble report is then sent to the Front End via the Datakit network, where a Basic Output Report is requested (usually by a screening agent or lineman). The BOR provides line information including past trouble history and MLT (Mechanized Loop Testing) tests. As LMOS is responsible for trouble reports, analysis, and similar related functions, MLT does the actual testing of customer loops. MLT hardware is located in the Repair Service Bureau. Test trunks connect MLT hardware to the telephone exchanges or wire centers, which in turn connect with the subscriber loops.
The LMOS database is a proprietary file system, designed with 11 access methods (variable index, index, hash tree, fixed partition file, etc.). These access methods are highly tuned for the various pieces of data used by LMOS.
LMOS, which was first brought on line as a mainframe application in the 1970s, was one of the first telephone company operations support systems to be ported to the UNIX operating system. The first port of LMOS was to Digital Equipment Corporation's PDP 11/70 machines and was completed in 1981. Later versions used VAX-11/780s. Today, LMOS runs on HP-UX 11i systems.
|
https://en.wikipedia.org/wiki/Video%20Encoded%20Invisible%20Light
|
Video Encoded Invisible Light (VEIL) is a technology for encoding low-bandwidth digital data bitstream in video signal, developed by VEIL Interactive Technologies. VEIL is compatible with multiple formats of video signals, including PAL, SECAM, and NTSC. The technology is based on a steganographically encoded data stream in the luminance of the videosignal.
A recent application of VEIL, the VEIL Rights Assertion Mark (VRAM or V-RAM) is a copy-restriction signal that can be used to ask devices to apply DRM technology. This has been seen as analogous to the broadcast flag. It is also known as "CGMS-A plus VEIL" and "broadcast flag on steroids."
There are two versions of VEIL on the market:
VEIL-I, or VEIL 1, has a raw speed of 120 bits per second. It is used for unidirectional communication (TV→devices) with simple devices or toys, and to deliver coupons with TV advertising. It manipulates the luminance of the video signal in ways difficult for the human eye to perceive.
VEIL-II, or VEIL 2, has speed of 7200-bit/s and is one of the technologies of choice for interactive television, as it allows communication with VEIL servers through devices equipped with backchannels. VEIL-II-capable set-top boxes can communicate with other devices via WiFi, Bluetooth, or other short-range wireless technologies. VEIL 2 manipulates the average luminance of the alternate lines of the signal, where one is slightly raised and the other one is slightly lowered (or vice versa), encoding a bit in every pair of lines.
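The line-pair scheme described above can be illustrated with a toy encoder and decoder. This is a loose sketch of the idea only, assuming a flat luminance field and an arbitrary offset `DELTA`; it does not model the actual VEIL bitstream, its 16-chip PN sequences, or its scan-line numbering.

```python
# Toy sketch of the line-pair luminance idea (not the VEIL spec).
DELTA = 2  # per-line luminance offset; real systems keep this imperceptible

def encode_bits(lines, bits):
    """lines: per-line average luminance values, two lines per bit."""
    out = list(lines)
    for k, bit in enumerate(bits):
        a, b = 2 * k, 2 * k + 1
        if bit:
            out[a] += DELTA
            out[b] -= DELTA
        else:
            out[a] -= DELTA
            out[b] += DELTA
    return out

def decode_bits(received):
    """Assumes the two lines of a pair carry nearly equal picture
    luminance, so their difference is dominated by the embedded offset."""
    return [1 if received[2 * k] > received[2 * k + 1] else 0
            for k in range(len(received) // 2)]

field = [100.0] * 8          # a flat patch of 8 scan lines
payload = [1, 0, 1, 1]
assert decode_bits(encode_bits(field, payload)) == payload
```

Comparing the two lines of a pair against each other, rather than against an absolute threshold, is what lets the embedded data survive on top of ordinary picture content.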
The symbols (groups of 4 data bits) transmitted by VEIL-II system are encoded as "PN sequences", sequences of 16 "chips". Groups of 4 chips are encoded in pairs of lines. Each line pair is split to 4 parts, where the luminance is raised or lowered (correspondingly vice versa in the other line). In NTSC, 4-bit symbols are encoded in groups of 8 scan lines. With 224 lines per field this equals 112 bits per field, or 7200 bits per second of broadcast. VEIL-II uses scan lines 34 to
|
https://en.wikipedia.org/wiki/Laplace%20expansion
|
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an $n \times n$ matrix $B$ as a weighted sum of minors, which are the determinants of some $(n-1) \times (n-1)$ submatrices of $B$. Specifically, for every $i$, the Laplace expansion along the $i$th row is the equality
$$\det(B) = \sum_{j=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j},$$
where $b_{i,j}$ is the entry of the $i$th row and $j$th column of $B$, and $m_{i,j}$ is the determinant of the submatrix obtained by removing the $i$th row and the $j$th column of $B$. Similarly, the Laplace expansion along the $j$th column is the equality
$$\det(B) = \sum_{i=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j}.$$
(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)
The term $(-1)^{i+j} m_{i,j}$ is called the cofactor of $b_{i,j}$ in $B$.
The Laplace expansion is often useful in proofs, as in, for example, allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to Gaussian elimination.
Examples
Consider the matrix
$$B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.$$
The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:
$$|B| = 1 \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1(-3) - 2(-6) + 3(-3) = 0.$$
Laplace expansion along the second column yields the same result:
$$|B| = -2 \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2(-6) + 5(-12) - 8(-6) = 0.$$
It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
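The expansion along the first row translates directly into a short recursive routine. This is a didactic sketch, O(n!) in time, so it is only suitable for small matrices, exactly as the efficiency remark above warns:

```python
def minor(matrix, i, j):
    """Submatrix of `matrix` with row i and column j removed (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(matrix) if k != i]

def det(matrix):
    """Determinant via Laplace expansion along the first row.
    Runs in O(n!) time, so it is didactic rather than practical."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum((-1) ** j * matrix[0][j] * det(minor(matrix, 0, j))
               for j in range(len(matrix)))

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0: this matrix is singular
```

For matrices of any real size, Gaussian elimination (O(n³)) is the practical choice.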
Proof
Suppose is an n × n matrix and For clarity we also label the entries of that compose its minor matrix as
for
Consider the terms in the expansion of that have as a factor. Each has the form
for some permutation with , and a unique and evidently related permutation which selects the same minor entries as . Similarly each choice of determines a corresponding i.e. the correspondence is a bijection between and
Using Cauchy's two-line notation, the explicit relation between and can be written as
wh
|
https://en.wikipedia.org/wiki/Logic%20simulation
|
Logic simulation is the use of simulation software to predict the behavior of digital circuits described in hardware description languages. Simulation can be performed at varying degrees of physical abstraction, such as at the transistor level, gate level, register-transfer level (RTL), electronic system-level (ESL), or behavioral level.
Use in verification
Logic simulation may be used as part of the verification process in designing hardware.
Simulations have the advantage of providing a familiar look and feel to the user in that it is constructed from the same language and symbols used in design. By allowing the user to interact directly with the design, simulation is a natural way for the designer to get feedback on their design.
Length of simulation
The level of effort required to debug and then verify the design is proportional to the maturity of the design. That is, early in the design's life, bugs and incorrect behavior are usually found quickly. As the design matures, the simulation will require more time and resources to run, and errors will take progressively longer to be found. This is particularly problematic when simulating components for modern-day systems; every component that changes state within a single clock cycle of the design will require several host clock cycles to simulate.
A straightforward approach to this issue may be to emulate the circuit on a field-programmable gate array instead. Formal verification can also be explored as an alternative to simulation, although a formal proof is not always possible or convenient.
A prospective way to accelerate logic simulation is using distributed and parallel computations.
To help gauge the thoroughness of a simulation, tools exist for assessing code coverage, functional coverage, finite state machine (FSM) coverage, and many other metrics.
Event simulation versus cycle simulation
Event simulation allows the design to contain simple timing information – the delay needed for a signal to travel from one plac
|
https://en.wikipedia.org/wiki/Functional%20verification
|
Functional verification is the task of verifying that the logic design conforms to specification. Functional verification attempts to answer the question "Does this proposed design do what is intended?" This is complex and takes the majority of time and effort (up to 70% of design and development time) in most large electronic system design projects. Functional verification is a part of more encompassing design verification, which, besides functional verification, considers non-functional aspects like timing, layout and power.
Background
Although the number of transistors has increased exponentially according to Moore's law, the number of engineers and the time taken to produce designs have increased only linearly. As design complexity increases, the number of coding errors increases as well. Most errors in logic coding come from careless coding (12.7%), miscommunication (11.4%), and microarchitecture challenges (9.3%). Thus, electronic design automation (EDA) tools were produced to keep pace with the complexity of transistor design. Languages such as Verilog and VHDL were introduced together with the EDA tools.
Functional verification is very difficult because of the sheer volume of possible test-cases that exist in even a simple design. Frequently there are more than 10^80 possible tests to comprehensively verify a design – a number that is impossible to achieve in a lifetime. This effort is equivalent to program verification, and is NP-hard or even worse – and no solution has been found that works well in all cases. However, it can be attacked by many methods. None of them are perfect, but each can be helpful in certain circumstances:
Logic simulation simulates the logic before it is built.
Simulation acceleration applies special purpose hardware to the logic simulation problem.
Emulation builds a version of system using programmable logic. This is expensive, and still much slower than the real hardware, but orders of magnitude faster than simu
|
https://en.wikipedia.org/wiki/Solar%20panel
|
A solar panel is a device that converts sunlight into electricity by using photovoltaic (PV) cells. PV cells are made of materials that generate electrons when exposed to light. The electrons flow through a circuit and produce direct current (DC) electricity, which can be used to power various devices or be stored in batteries. Solar panels are also known as solar cell panels, solar electric panels, or PV modules.
Solar panels are usually arranged in groups called arrays or systems. A photovoltaic system consists of one or more solar panels, an inverter that converts DC electricity to alternating current (AC) electricity, and sometimes other components such as controllers, meters, and trackers. A photovoltaic system can be used to provide electricity for off-grid applications, such as remote homes or cabins, or to feed electricity into the grid and earn credits or payments from the utility company. This is called a grid-connected photovoltaic system.
Some advantages of solar panels are that they use a renewable and clean source of energy, reduce greenhouse gas emissions, and lower electricity bills. Some disadvantages are that they depend on the availability and intensity of sunlight, require cleaning, and have high initial costs. Solar panels are widely used for residential, commercial, and industrial purposes, as well as for space and transportation applications.
History
In 1839, the ability of some materials to create an electrical charge from light exposure was first observed by the French physicist Edmond Becquerel. Though these early photovoltaic devices were too inefficient to power even simple electric devices, they were used as instruments to measure light.
The observation by Becquerel was not replicated again until 1873, when the English electrical engineer Willoughby Smith discovered that the charge could be caused by light hitting selenium. After this discovery, William Grylls Adams and Richard Evans Day published "The action of light on selenium" in 1876,
|
https://en.wikipedia.org/wiki/Ostwald%20ripening
|
Ostwald ripening is a phenomenon observed in solid solutions and liquid sols that involves the change of an inhomogeneous structure over time, in that small crystals or sol particles first dissolve and then redeposit onto larger crystals or sol particles.
Dissolution of small crystals or sol particles and the redeposition of the dissolved species on the surfaces of larger crystals or sol particles was first described by Wilhelm Ostwald in 1896. For colloidal systems, Ostwald ripening is also found in water-in-oil emulsions, while flocculation is found in oil-in-water emulsions.
Mechanism
This thermodynamically-driven spontaneous process occurs because larger particles are more energetically favored than smaller particles. This stems from the fact that molecules on the surface of a particle are energetically less stable than the ones in the interior.
Consider a cubic crystal of atoms: all the atoms inside are bonded to 6 neighbours and are quite stable, but atoms on the surface are only bonded to 5 neighbors or fewer, which makes these surface atoms less stable. Large particles are more energetically favorable since, continuing with this example, more atoms are bonded to 6 neighbors and fewer atoms are at the unfavorable surface. As the system tries to lower its overall energy, molecules on the surface of a small particle (energetically unfavorable, with only 3 or 4 or 5 bonded neighbors) will tend to detach from the particle and diffuse into the solution.
Kelvin's equation describes the relationship between the radius of curvature and the chemical potential difference between the surface and the inner volume:
$$\Delta\mu(r) = \frac{2 \gamma V}{r},$$
where $\Delta\mu$ corresponds to the chemical potential, $\gamma$ to the surface tension, $V$ to the atomic volume and $r$ to the radius of the particle.
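Combining the curvature term with the concentration form of the chemical potential gives the familiar exponential dependence of equilibrium solubility on particle radius (the Ostwald–Freundlich relation). The sketch below uses illustrative, not measured, material parameters:

```python
import math

def solubility_ratio(surface_tension, atomic_volume, radius, temperature):
    """Ostwald-Freundlich relation c(r)/c_inf = exp(2*gamma*V/(kB*T*r)):
    equilibrium solubility rises as the particle radius shrinks, which is
    what moves material from small particles to large ones."""
    KB = 1.380649e-23  # Boltzmann constant, J/K
    return math.exp(2 * surface_tension * atomic_volume
                    / (KB * temperature * radius))

# Illustrative (not measured) parameters for a metal-like solid at 298 K:
gamma, v_atom = 1.0, 1.7e-29   # J/m^2, m^3
small = solubility_ratio(gamma, v_atom, 2e-9, 298)    # 2 nm particle
large = solubility_ratio(gamma, v_atom, 50e-9, 298)   # 50 nm particle
# small >> large > 1: the 2 nm particle dissolves in favor of the 50 nm one
```

The strong radius dependence is why ripening is most dramatic for nanometer-scale particles and nearly negligible for micron-scale ones.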
If liquid and solid phases are in equilibrium, the chemical potential of an ideal solution can also be expressed as a function of the solute's concentration:
$$\Delta\mu = k_B T \ln\left(\frac{c_r}{c_\infty}\right),$$
where $k_B$ corresponds to the Boltzmann constant, $T$ to the temperature and $c_r/c_\infty$ to the s
|
https://en.wikipedia.org/wiki/Salisbury%20screen
|
The Salisbury screen is a way of reducing the reflection of radio waves from a surface. It was one of the first concepts in radar absorbent material, an aspect of "stealth technology", used to prevent enemy radar detection of military vehicles. It was first applied to ship radar cross section (RCS) reduction. The Salisbury screen was invented by American engineer Winfield Salisbury in the early 1940s (see patent filing date). The patent was delayed because of wartime security.
Method of operation
Salisbury screens operate on the same principle as optical antireflection coatings used on the surface of camera lenses and glasses to prevent them from reflecting light. The easiest to understand Salisbury screen design consists of three layers: a ground plane which is the metallic surface that needs to be concealed, a lossless dielectric of a precise thickness (a quarter of the wavelength of the radar wave to be absorbed), and a thin lossy (resistive) screen.
When the radar wave strikes the front surface of the dielectric, it is split into two waves.
One wave is reflected from the lossy surface screen. The second wave passes into the dielectric layer, is reflected from the metal surface, and passes back out of the dielectric into the air.
The extra distance the second wave travels causes it to be 180° out of phase with the first wave by the time it emerges from the dielectric surface.
When the second wave reaches the surface, the two waves combine and cancel each other out due to the phenomenon of interference. Therefore, there is no wave energy reflected back to the radar receiver.
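The quarter-wavelength condition fixes the spacer thickness for a given radar frequency. A minimal sketch, assuming normal incidence and a simple real permittivity (the function name and defaults are ours, chosen for illustration):

```python
def salisbury_thickness(frequency_hz, relative_permittivity=1.0):
    """Quarter-wavelength spacer thickness for a Salisbury screen at a
    given frequency (idealized: normal incidence, lossless dielectric)."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    wavelength_in_dielectric = c / (frequency_hz * relative_permittivity ** 0.5)
    return wavelength_in_dielectric / 4

# Tuned to a 10 GHz (X-band) radar with an air gap: about 7.5 mm
t = salisbury_thickness(10e9)
```

Because the thickness is tied to one wavelength, a classic Salisbury screen is inherently narrowband: it absorbs well only near its design frequency.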
To understand the cancellation of the waves requires an understanding of the concept of interference. When two electromagnetic waves that are coherent and are traveling in the same space interact, they combine to form a single resultant wave. If the two waves are "in phase" so their peaks coincide, they add, and the output intensity is the sum of the two waves' intensities. However, if t
|
https://en.wikipedia.org/wiki/Backup%20rotation%20scheme
|
A backup rotation scheme is a system of backing up data to computer media (such as tapes) that minimizes, by re-use, the number of media used. The scheme determines how and when each piece of removable storage is used for a backup job and how long it is retained once it has backup data stored on it. Different techniques have evolved over time to balance data retention and restoration needs with the cost of extra data storage media. Such a scheme can be quite complicated if it takes incremental backups, multiple retention periods, and off-site storage into consideration.
Schemes
First in, first out
A first in, first out (FIFO) backup scheme saves new or modified files onto the "oldest" media in the set, i.e. the media that contain the oldest and thus least useful previously backed up data. Performing a daily backup onto a set of 14 media, the backup depth would be 14 days. Each day, the oldest media would be inserted when performing the backup. This is the simplest rotation scheme and is usually the first to come to mind.
This scheme has the advantage that it retains the longest possible tail of daily backups. It can be used when archived data is unimportant (or is retained separately from the short-term backup data) and data before the rotation period is irrelevant.
However, this scheme suffers from the possibility of data loss: suppose, an error is introduced into the data, but the problem is not identified until several generations of backups and revisions have taken place. Thus when the error is detected, all the backup files contain the error. It would then be useful to have at least one older version of the data, as it would not have the error.
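Under FIFO rotation, choosing the medium to overwrite reduces to the backup's day number modulo the media count. A minimal sketch (the function name is ours):

```python
from datetime import date, timedelta

def fifo_medium(media_count, start, today):
    """Index of the medium to overwrite under FIFO rotation: with one
    backup per day and N media, day d reuses medium d mod N, so the set
    always holds the most recent N daily backups."""
    return (today - start).days % media_count

start = date(2024, 1, 1)
# With 14 tapes, day 5 uses tape 5, and day 14 wraps back to tape 0:
assert fifo_medium(14, start, start + timedelta(days=5)) == 5
assert fifo_medium(14, start, start + timedelta(days=14)) == 0
```

The wrap-around at day 14 is exactly the weakness described above: anything older than the rotation period is gone.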
Grandfather-father-son
Grandfather-father-son backup (GFS) is a common rotation scheme for backup media, in which there are three or more backup cycles, such as daily, weekly and monthly. The daily backups are rotated on a 3-month basis using a FIFO system as above. The weekly backups are similarly rotated on
|
https://en.wikipedia.org/wiki/Ply%20%28game%20theory%29
|
In two-or-more-player sequential games, a ply is one turn taken by one of the players. The word is used to clarify what is meant when one might otherwise say "turn".
The word "turn" can be a problem since it means different things in different traditions. For example, in standard chess terminology, one move consists of a turn by each player; therefore a ply in chess is a half-move. Thus, after 20 moves in a chess game, 40 plies have been completed—20 by white and 20 by black. In the game of Go, by contrast, a ply is the normal unit of counting moves; so for example to say that a game is 250 moves long is to imply 250 plies.
In poker with n players, the word "street" is used for a full betting round consisting of n plies; each dealt card may sometimes also be called a "street". For instance, in heads-up Texas hold'em a street consists of 2 plies, with possible plays being check/raise/call/fold: the first by the player at the big blind, and the second by the dealer, who posts the small blind; and there are 4 streets: preflop, flop, turn, river, the latter 3 corresponding to community cards. The terms "half-street" and "half-street game" are sometimes used to describe, respectively, a single bet in a heads-up game, and a simplified heads-up poker game where only a single player bets.
The word "ply" used as a synonym for "layer" goes back to the 15th century. Arthur Samuel first used the term in its game-theoretic sense in his seminal paper on machine learning in checkers in 1959, but with a slightly different meaning: the "ply", in Samuel's terminology, is actually the depth of analysis ("Certain expressions were introduced which we will find useful. These are: Ply, defined as the number of moves ahead, where a ply of two consists of one proposed move by the machine and one anticipated reply by the opponent").
In computing, the concept of a ply is important because one ply corresponds to one level of the game tree. The Deep Blue chess computer which d
|
https://en.wikipedia.org/wiki/Virtual%20Object%20System
|
The Virtual Object System (VOS) is a computer software technology for creating distributed object systems. The sites hosting Vobjects are typically linked by a computer network, such as a local area network or the Internet. Vobjects may send messages to other Vobjects over these network links (remotely) or within the same host site (locally) to perform actions and synchronize state. In this way, VOS may also be called an object-oriented remote procedure call system. In addition, Vobjects may have a number of directed relations to other Vobjects, which allows them to form directed graph data structures.
VOS is patent free, and its implementation is Free Software. The primary application focus of VOS is general purpose, multiuser, collaborative 3D virtual environments or virtual reality. The primary designer and author of VOS is Peter Amstutz.
External links
Interreality.org official site
Groupware
Distributed computing architecture
|
https://en.wikipedia.org/wiki/Christos%20Papadimitriou
|
Christos Charilaos Papadimitriou (; born August 16, 1949) is a Greek theoretical computer scientist and the Donovan Family Professor of Computer Science at Columbia University.
Education
Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in electrical engineering. He then pursued graduate studies at Princeton University, where he received his Ph.D. in electrical engineering and computer science in 1976 after completing a doctoral dissertation titled "The complexity of combinatorial optimization problems."
Career
Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, UCSD, University of California, Berkeley and is currently the Donovan Family Professor of Computer Science at Columbia University.
Papadimitriou co-authored a paper on pancake sorting with Bill Gates, then a Harvard undergraduate. Papadimitriou recalled "Two years later, I called to tell him our paper had been accepted to a fine math journal. He sounded eminently disinterested. He had moved to Albuquerque, New Mexico to run a small company writing code for microprocessors, of all things. I remember thinking: 'Such a brilliant kid. What a waste.'" The company was Microsoft.
Papadimitriou co-authored "The Complexity of Computing a Nash Equilibrium" with his students Constantinos Daskalakis and Paul W. Goldberg, for which they received the 2008 Kalai Game Theory and Computer Science Prize from the Game Theory Society for "the best paper at the interface of game theory and computer science", in particular "for its key conceptual and technical contributions"; and the Outstanding Paper Prize from the Society for Industrial and Applied Mathematics.
In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery and in 2002 he was awarded the Knuth Prize. Also in 2002, he became a member of the U.S. National Academy of Engineering for contributions to complexity theor
|
https://en.wikipedia.org/wiki/Home%20network
|
A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.
Origin
IPv4 address exhaustion has forced most Internet service providers to grant only a single WAN-facing IP address for each residential account. Multiple devices within a residence or small office are provisioned with internet access by establishing a local area network (LAN) for the local devices with IP addresses reserved for private networks. A network router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation.
Infrastructure devices
Certain devices on a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices home-dwellers more directly interact with. Unlike their data center counterparts, these "networking" devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible:
A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Routers are often incorporated into these devices for additional convenience.
A router establishes network layer connectivity between a WAN and the home network. It also performs the key function of network address translation that allows independently add
|
https://en.wikipedia.org/wiki/Centrosymmetry
|
In crystallography, a centrosymmetric point group contains an inversion center as one of its symmetry elements. In such a point group, for every point (x, y, z) in the unit cell there is an indistinguishable point (-x, -y, -z). Such point groups are also said to have inversion symmetry. Point reflection is a similar term used in geometry.
Crystals with an inversion center cannot display certain properties, such as the piezoelectric effect.
The following space groups have inversion symmetry: the triclinic space group 2, the monoclinic 10-15, the orthorhombic 47-74, the tetragonal 83-88 and 123-142, the trigonal 147, 148 and 162-167, the hexagonal 175, 176 and 191-194, the cubic 200-206 and 221-230.
Point groups lacking an inversion center (non-centrosymmetric) can be polar, chiral, both, or neither.
A polar point group is one whose symmetry operations leave more than one common point unmoved. A polar point group has no unique origin because each of those unmoved points can be chosen as one. One or more unique polar axes could be made through two such collinear unmoved points. Polar crystallographic point groups include 1, 2, 3, 4, 6, m, mm2, 3m, 4mm, and 6mm.
A chiral (often also called enantiomorphic) point group is one containing only proper (often called "pure") rotation symmetry. No inversion, reflection, roto-inversion or roto-reflection (i.e., improper rotation) symmetry exists in such point group. Chiral crystallographic point groups include 1, 2, 3, 4, 6, 222, 422, 622, 32, 23, and 432. Chiral molecules such as proteins crystallize in chiral point groups.
The remaining non-centrosymmetric crystallographic point groups 4̄, 4̄2m, 6̄, 6̄m2, 4̄3m are neither polar nor chiral.
See also
Centrosymmetric matrix
Rule of mutual exclusion
|
https://en.wikipedia.org/wiki/Minkowski%20functional
|
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space.
If $K$ is a subset of a real or complex vector space $X$, then the Minkowski functional or gauge of $K$ is defined to be the function $p_K : X \to [0, \infty]$, valued in the extended real numbers, defined by
$$p_K(x) := \inf \{ r > 0 : x \in r K \},$$
where the infimum of the empty set is defined to be positive infinity $\infty$ (which is not a real number, so that $p_K(x)$ would then not be real-valued).
The set $K$ is often assumed/picked to have properties, such as being an absorbing disk in $X$, that guarantee that $p_K$ will be a real-valued seminorm on $X$.
In fact, every seminorm $p$ on $X$ is equal to the Minkowski functional (that is, $p = p_K$) of any subset $K$ of $X$ satisfying $\{x \in X : p(x) < 1\} \subseteq K \subseteq \{x \in X : p(x) \leq 1\}$ (where all three of these sets are necessarily absorbing in $X$ and the first and last are also disks).
Thus every seminorm (which is a function defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a set with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm).
These relationships between seminorms, Minkowski functionals, and absorbing disks are a major reason why Minkowski functionals are studied and used in functional analysis.
In particular, through these relationships, Minkowski functionals allow one to "translate" certain properties of a subset of $X$ into certain properties of a function on $X$.
The Minkowski functional is always non-negative (meaning $p_K \geq 0$).
This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values.
However, $p_K$ might not be real-valued since for any given $x \in X$, the value $p_K(x)$ is a real number if and only if $\{r > 0 : x \in r K\}$ is not empty.
Consequently, $K$ is usually assumed to have properties (such as being absorbing in $X$, for instance) that will guarantee that $p_K$ is real-valued.
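As a concrete instance of the definition, take $K$ to be the closed unit ball of a normed space $(X, \|\cdot\|)$; the Minkowski functional then recovers the norm itself:

```latex
% K = {x in X : ||x|| <= 1}, so for r > 0:  x in rK  iff  ||x|| <= r
p_K(x) = \inf \{ r > 0 : x \in r K \}
       = \inf \{ r > 0 : \|x\| \leq r \}
       = \|x\| .
```

This is the sense in which the Minkowski functional "recovers a notion of distance" from a set: the unit ball determines the norm, and more general absorbing disks determine seminorms the same way.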
Definition
Let be a subset of a r
|
https://en.wikipedia.org/wiki/Mortgage%20constant
|
Mortgage constant, also called "mortgage capitalization rate", is the capitalization rate for debt. It is usually computed monthly by dividing the monthly payment by the mortgage principal. An annualized mortgage constant can be found by multiplying the monthly constant by 12 or by dividing the annual debt service by the mortgage principal.
A mortgage constant is a rate that appraisers determine for use in the band of investment approach. It is also used in conjunction with the debt-coverage ratio that many commercial bankers use.
The mortgage constant is commonly denoted as Rm. The Rm is higher than the interest rate for an amortizing loan because the Rm includes consideration of the principal as well as the interest. The Rm could be lower than the interest for a negatively amortizing loan.
Formula
$$R_m = \frac{i/m}{1 - (1 + i/m)^{-n}} \times m$$
Where:
i = Annual interest rate
n = Total number of months required to pay off the loan.
m = Number of payment months in a year (12).
example:
(0.055/12)/(1-(1/(POWER(1+(0.055/12),360))))*12 for MS Excel
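The same computation as the Excel example can be sketched in Python (the function name is ours, chosen for illustration):

```python
def mortgage_constant(annual_rate, months, payments_per_year=12):
    """Annualized mortgage constant Rm: the payment per unit of loan
    principal for a fully amortizing loan, expressed per year."""
    i = annual_rate / payments_per_year          # periodic interest rate
    periodic = i / (1 - (1 + i) ** -months)      # payment per unit principal
    return periodic * payments_per_year

# 5.5% annual rate, 30-year (360-month) amortization, as in the Excel example:
rm = mortgage_constant(0.055, 360)   # about 0.0681, i.e. 6.81% per year
```

Note that the result exceeds the 5.5% interest rate, as expected for an amortizing loan, since the constant includes repayment of principal as well as interest.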
|
https://en.wikipedia.org/wiki/Strike%20and%20dip
|
In geology, strike and dip is a measurement convention used to describe the plane orientation or attitude of a planar geologic feature. A feature's strike is the azimuth of an imagined horizontal line across the plane, and its dip is the angle of inclination (or depression angle) measured downward from horizontal. They are used together to measure and document a structure's characteristics for study or for use on a geologic map. A feature's orientation can also be represented by dip and dip direction, using the azimuth of the dip rather than the strike value. Linear features are similarly measured with trend and plunge, where "trend" is analogous to dip direction and "plunge" is the dip angle.
Strike and dip are measured using a compass and a clinometer. A compass is used to measure the feature's strike by holding the compass horizontally against the feature. A clinometer measures the feature's dip by recording the inclination perpendicular to the strike. These can be done separately, or together using a tool such as a Brunton transit or a Silva compass.
Any planar feature can be described by strike and dip, including sedimentary bedding, fractures, faults, joints, cuestas, igneous dikes and sills, metamorphic foliation and fabric, etc. Observations about a structure's orientation can lead to inferences about certain parts of an area's history, such as movement, deformation, or tectonic activity.
Elements
When measuring or describing the attitude of an inclined feature, two quantities are needed. The angle the slope descends, or dip, and the direction of descent, which can be represented by strike or dip direction.
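Under the right-hand rule convention (one of several in use), the dip direction lies 90° clockwise from the strike azimuth, which makes converting between the two representations trivial. A sketch under that assumption:

```python
def dip_direction_from_strike(strike_deg):
    """Dip direction assuming the right-hand rule convention, in which
    the bed dips 90 degrees clockwise from the strike azimuth. Other
    conventions exist; this one is chosen only for illustration."""
    return (strike_deg + 90) % 360

assert dip_direction_from_strike(0) == 90     # N-striking bed dips east
assert dip_direction_from_strike(350) == 80
```

Field notes should always state which convention is in use, since strike/dip pairs are ambiguous without it.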
Dip
Dip is the inclination of a given feature, and is measured from the steepest angle of descent of a tilted bed or feature relative to a horizontal plane. True dip is always perpendicular to the strike. It is written as a number (between 0° and 90°) indicating the angle in degrees below horizontal. It can be accompanied with the rough direction o
|
https://en.wikipedia.org/wiki/%C5%98e%C5%BE
|
Řež () is a village and administrative part of Husinec in the Central Bohemian Region of the Czech Republic.
Řež is the site of a nuclear research centre and a chemical factory. In August 2002 there was a serious flood which damaged the site.
Řež has a railway connection on the Prague–Kralupy nad Vltavou line. The stop is located on the opposite (left) bank of the Vltava River and is accessible by a pedestrian bridge.
On 19 June 2022 the highest ever temperature during the month of June in the Czech Republic was recorded here at 39.0 °C.
Further reading
"40 Years On: Rez Institute Underpins Czech Programme." Nuclear Engineering International, no. 491 (1995): 46.
|
https://en.wikipedia.org/wiki/Christmas%20pickle
|
The Christmas pickle is a German-American Christmas tradition. A decoration in the shape of a pickle is hidden on a Christmas tree, with the finder receiving either a reward or good fortune for the next year. A number of different origin stories are attributed to the tradition, including one placing it in Germany. That theory has since been discounted, and it is now thought to be a German-American tradition created in the late 19th century. The New York Times reported that in a YouGov poll of 2,057 Germans, 91% were unaware of the legend.
Description
In the tradition, an ornamental pickle is placed on a Christmas tree as one of the Christmas decorations. On Christmas morning, the first person to find the pickle on the tree would receive an extra present from Santa Claus or would be said to have a year of good fortune.
Berrien Springs, Michigan, which billed itself as the Christmas pickle capital of the world, held a pickle parade from 1992 until 2005. The Pickle Festival and parade returned in 2021 after a 16-year hiatus.
Origins
This tradition is commonly believed by Americans to come from Germany, where it would be referred to as a Weihnachtsgurke, but this is probably apocryphal. It has been suggested that the Christmas pickle may have developed in the 1890s, coinciding with the importation of glass Christmas tree decorations from Germany. Woolworths was the first company to import these types of decorations into the United States in 1890, and glass-blown decorative vegetables were imported from France from 1892 onwards. Despite the evidence showing that the tradition did not originate in Germany, the concept of the Christmas pickle has since been imported from the United States, and the ornaments are now on sale in the country traditionally associated with them.
One suggested origin has been that the tradition came from Camp Sumter during the American Civil War. The Bavarian-born Private John C. Lower had enlisted in the 103rd Pennsylvan
|
https://en.wikipedia.org/wiki/Ethoxyquin
|
Ethoxyquin (EMQ) is a quinoline-based antioxidant used as a food preservative in certain countries and originally to control scald on pears after harvest (under commercial names such as "Stop-Scald"). It is used as a preservative in some pet foods to slow the development of rancidity of fats. Ethoxyquin is also used in some spices to prevent color loss due to oxidation of the natural carotenoid pigments.
Regulation
Ethoxyquin was developed by Monsanto in the 1950s. It was first registered as a pesticide in 1965, as an antioxidant to deter scald in pears through post-harvest indoor application via a drench and/or impregnated wrap.
As an antioxidant to control the browning of pears, ethoxyquin is approved in the United States and in the European Union.
In the United States, it is approved for use as an animal feed additive and is limited as a food additive to use only in the spices chili powder, paprika, and ground chili. Ethoxyquin is not permitted for use as a food additive in Australia or the European Union.
Ethoxyquin is allowed in the fishing industry in Norway and France as a feed stabilizer, and so is commonly used in food pellets fed to farmed salmon.
Norway made this practice illegal when the EU suspended authorization in 2017 and, in accordance with the suspension, applied a transition period that allowed the sale of feed containing ethoxyquin until 31 December 2019; after this date it was illegal to sell such feed, and feed containing ethoxyquin had to be used by 20 June 2020.
Ethoxyquin is used in pellets fed to chickens on chicken farms.
In 2017 the EU suspended authorization for use as a feed additive, with various dates between 2017 and 2019 for final allowance of sale of goods so that alternatives may be phased in.
Safety
Some speculation exists that ethoxyquin in pet foods might be responsible for multiple health problems. To date, the U.S. Food and Drug Administration has only found a verifiable
|
https://en.wikipedia.org/wiki/Angiogenin
|
Angiogenin (ANG), also known as ribonuclease 5, is a small 123-amino-acid protein that in humans is encoded by the ANG gene. Angiogenin is a potent stimulator of new blood vessel formation through the process of angiogenesis. Ang hydrolyzes cellular RNA, resulting in modulated levels of protein synthesis, and interacts with DNA, causing a promoter-like increase in the expression of rRNA. Ang is associated with cancer and neurological disease through angiogenesis and through activating gene expression that suppresses apoptosis.
Function
Angiogenin is a key protein implicated in angiogenesis during normal and tumor growth. Angiogenin interacts with endothelial and smooth muscle cells, resulting in cell migration, invasion, proliferation, and the formation of tubular structures. Ang binds to actin of both smooth muscle and endothelial cells to form complexes that activate proteolytic cascades, which upregulate the production of proteases and plasmin that degrade the laminin and fibronectin layers of the basement membrane. Degradation of the basement membrane and extracellular matrix allows the endothelial cells to penetrate and migrate into the perivascular tissue. Signal transduction pathways activated by Ang interactions at the cellular membrane of endothelial cells activate extracellular signal-regulated kinase 1/2 (ERK1/2) and protein kinase B/Akt. Activation of these proteins leads to invasion of the basement membrane and cell proliferation associated with further angiogenesis. The most important step in the angiogenesis process is the translocation of Ang to the cell nucleus. Once Ang has been translocated to the nucleus, it enhances rRNA transcription by binding to the CT-rich (CTCTCTCTCTCTCTCTCCCTC) angiogenin binding element (ABE) within the upstream intergenic region of rDNA, which subsequently activates other angiogenic factors that induce angiogenesis.
However, angiogenin is unique among the many proteins that are involved in angiogenesis in that it is also an enzyme with an amino
|
https://en.wikipedia.org/wiki/Theory%20%28mathematical%20logic%29
|
In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is usually a subset that is called "the set of axioms" of the theory, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
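The idea of a deductively closed set of theorems can be sketched concretely. The toy encoding below (a single inference rule, modus ponens, over plain strings) is invented for illustration and is not a standard formalism:

```python
# Close a set of axioms under modus ponens: from "p" and "p->q", derive "q".
# Iterating until a fixed point makes the result deductively closed.
def closure(axioms):
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for formula in list(theorems):
            if "->" in formula:
                antecedent, consequent = formula.split("->", 1)
                if antecedent in theorems and consequent not in theorems:
                    theorems.add(consequent)
                    changed = True
    return theorems

print(sorted(closure({"a", "a->b", "b->c"})))  # ['a', 'a->b', 'b', 'b->c', 'c']
```

Here {"a", "a->b", "b->c"} plays the role of the axioms, and the closure contains the derived theorems "b" and "c" as well.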
General theories (as expressed in formal language)
When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate.
The construction of a theory begins by specifying a definite non-empty conceptual class , the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory—to distinguish them from other statements that may be derived from them.
A theory is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to a theory are called its elementary theorems and are said to be true. In this way, a theory can be seen as a way of designating a subset of the elementary statements that contains only statements that are true.
This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to the theory itself. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is and, for that matter, what an "honest person" is under the theory in question.
Subtheories and extensions
A theory is
|
https://en.wikipedia.org/wiki/ArchiveGrid
|
ArchiveGrid is a collection of over five million archival material descriptions, including MARC records from WorldCat and finding aids harvested from the web. It contains archival collections held by thousands of libraries, museums, historical societies, and archives. Contribution to the system is available to any institution. Most of the contributions are from institutions based in the United States, but many other countries are represented, including Canada, Australia, and the United Kingdom. ArchiveGrid is associated with OCLC Research and helps to advance their goals of making archival collections and materials easier to find. ArchiveGrid has been described as "the ultimate destination for searching through family histories, political papers, and historical records held in archives around the world."
History
Research Libraries Group (RLG) was founded in 1974 by three universities (Columbia, Harvard, and Yale) and The New York Public Library. In 1998, RLG launched the RLG Archival Resources database, which offered online access to the holdings of archival collections. RLG began to redesign the database in 2004 in order to make it more useful for researchers. As a result of this redesign, RLG launched ArchiveGrid in March 2006. As a result of a grant, ArchiveGrid was freely accessible until May 31, 2006.
RLG/OCLC Partnership
In 2006, the RLG and the Online Computer Library Center, Inc. (OCLC) announced the combining of the two organizations. RLG Programs was formed on July 1, 2006 and became part of the OCLC Programs and Research division. ArchiveGrid was offered as an OCLC subscription-based discovery service from 2006 until it was discontinued in 2012. In 2009, RLG Programs became known as RLG Partnership. The OCLC Research Library Partnership replaced the RLG Partnership in 2011. The five-year period of successfully integrating the RLG Partnership into OCLC was completed 30 June 2011. In 2012, ArchiveGrid became a free system, while remaining a part of the
|
https://en.wikipedia.org/wiki/Lead%20shielding
|
Lead shielding refers to the use of lead as a form of radiation protection to shield people or objects from radiation so as to reduce the effective dose. Lead can effectively attenuate certain kinds of radiation because of its high density and high atomic number; principally, it is effective at stopping gamma rays and x-rays.
Operation
Lead's high density results from the combination of its high atomic number with relatively short bond lengths and a small atomic radius. The high atomic number means that more electrons are needed to maintain a neutral charge, and the short bond lengths and small atomic radius mean that many atoms can be packed into a given lead structure.
Because of lead's density and large number of electrons, it is well suited to scattering x-rays and gamma rays. These rays consist of photons, a type of boson, which impart energy to electrons when they come into contact with them. Without a lead shield, the electrons within a person's body would be affected, which could damage their DNA. When radiation passes through lead, lead's electrons absorb and scatter the energy. Eventually, though, the lead will degrade from the energy to which it is exposed. However, lead is not effective against all types of radiation. High-energy electrons (including beta radiation) incident on lead may create bremsstrahlung radiation, which is potentially more dangerous to tissue than the original radiation. Furthermore, lead is not a particularly effective absorber of neutron radiation.
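The shielding effect described above is commonly modeled as exponential attenuation, I = I0·exp(−μx). A minimal sketch follows, where the attenuation coefficient is an illustrative placeholder rather than a tabulated value for lead at any particular photon energy:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    # Fraction of incident photons passing through the shield: exp(-mu * x)
    return math.exp(-mu_per_cm * thickness_cm)

def half_value_layer(mu_per_cm):
    # Thickness that halves the transmitted intensity
    return math.log(2) / mu_per_cm

mu = 0.7  # hypothetical linear attenuation coefficient, 1/cm
hvl = half_value_layer(mu)
print(round(transmitted_fraction(mu, hvl), 6))      # 0.5
print(round(transmitted_fraction(mu, 2 * hvl), 6))  # 0.25
```

Each additional half-value layer halves the transmitted intensity again, which is why shield thickness is often specified in half-value layers.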
Types
Lead is used for shielding in x-ray machines, nuclear power plants, labs, medical facilities, military equipment, and other places where radiation may be encountered. There is great variety in the types of shielding available both to protect people and to shield equipment and experiments. In gamma-spectroscopy for example, lead castles are constructed to shield the probe from environmental radiation. Personal shielding includes lead aprons (such as the familiar garment used d
|
https://en.wikipedia.org/wiki/Clarence%20F.%20Stephens
|
Clarence Francis Stephens (July 24, 1917 – March 5, 2018) was the ninth African American to receive a Ph.D. in mathematics. He is credited with inspiring students and faculty at SUNY Potsdam to form the most successful United States undergraduate mathematics degree programs in the past century. Stephens was recognized by Mathematically Gifted & Black as a Black History Month 2018 Honoree.
Early life
The fifth of six children, he was orphaned at the age of eight. For his early education, he attended Harbison Agricultural and Industrial Institute, a boarding school for African-Americans in Irmo, South Carolina under Dean R. W. Bouleware and later President Rev John G. Porter.
Stephens graduated from Johnson C. Smith University in 1938 with a B.S. degree in mathematics. He received his M.S. (1939) and his Ph.D. (1944) from the University of Michigan. He was the ninth African American to receive a Ph.D. in mathematics, for a thesis on Non-Linear Difference Equations Analytic in a Parameter under James Nyswander.
After serving in the U.S. Navy (1942–1946) as a Teaching Specialist, Dr. Stephens joined the mathematics faculty of Prairie View A&M University. The next year (1947) he was invited to join the mathematics faculty at Morgan State University.
From research to teaching
As a Mathematical Association of America (MAA) biography explains, “Dr. Stephens' focus was on being a research mathematician, so he accepted the position in part because he would be near a research library at Johns Hopkins University. While at Morgan State University, Dr. Stephens became appalled at what a poor job was being done in general to teach and inspire students to learn mathematics. He changed his focus from being a researcher to achieving excellence, with desirable results, in teaching mathematics.”
In 1953, he received a one-year Ford Fellowship to study at the Institute for Advanced Study in Princeton, New Jersey.
Dr. Stephens remained at Morgan State until 1962, where he is credited with
|
https://en.wikipedia.org/wiki/Aluminium%20gallium%20phosphide
|
Aluminium gallium phosphide (AlGaP), a phosphide of aluminium and gallium, is a semiconductor material. It is an alloy of aluminium phosphide and gallium phosphide. It is used to manufacture light-emitting diodes that emit green light.
See also
Aluminium gallium indium phosphide
External links
Light-Emitting Diode - An Introduction, Structure, and Applications of LEDs
Aluminium compounds
Gallium compounds
Phosphides
III-V semiconductors
III-V compounds
Zincblende crystal structure
|
https://en.wikipedia.org/wiki/Gallium%20arsenide%20phosphide
|
Gallium arsenide phosphide (GaAs1−xPx) is a semiconductor material, an alloy of gallium arsenide and gallium phosphide. It exists in various composition ratios, indicated in its formula by the fraction x.
Gallium arsenide phosphide is used for manufacturing red, orange and yellow light-emitting diodes. It is often grown on gallium phosphide substrates to form a GaP/GaAsP heterostructure. In order to tune its electronic properties, it may be doped with nitrogen (GaAsP:N).
See also
Gallium arsenide
Gallium indium arsenide antimonide phosphide
Gallium phosphide
Indium gallium arsenide phosphide
Indium gallium phosphide
|
https://en.wikipedia.org/wiki/Lambert%20quadrilateral
|
In geometry, a Lambert quadrilateral (also known as Ibn al-Haytham–Lambert quadrilateral), is a quadrilateral in which three of its angles are right angles. Historically, the fourth angle of a Lambert quadrilateral was of considerable interest since if it could be shown to be a right angle, then the Euclidean parallel postulate could be proved as a theorem. It is now known that the type of the fourth angle depends upon the geometry in which the quadrilateral exists. In hyperbolic geometry the fourth angle is acute, in Euclidean geometry it is a right angle and in elliptic geometry it is an obtuse angle.
A Lambert quadrilateral can be constructed from a Saccheri quadrilateral by joining the midpoints of the base and summit of the Saccheri quadrilateral. This line segment is perpendicular to both the base and summit and so either half of the Saccheri quadrilateral is a Lambert quadrilateral.
Lambert quadrilateral in hyperbolic geometry
In hyperbolic geometry, for a Lambert quadrilateral AOBF in which the angles at A, O, and B are right, the angle at F (opposite O) is acute, and the curvature is −1, relations between the sides and the acute angle can be expressed in terms of hyperbolic functions.
Examples
See also
Non-Euclidean geometry
Notes
|
https://en.wikipedia.org/wiki/Depolarization-induced%20suppression%20of%20inhibition
|
Depolarization-induced suppression of inhibition is the classical and original electrophysiological example of endocannabinoid function in the central nervous system. Prior to the demonstration that depolarization-induced suppression of inhibition depended on cannabinoid CB1 receptor function, there was no way of producing an in vitro endocannabinoid-mediated effect.
Depolarization-induced suppression of inhibition is classically produced in a brain slice experiment (i.e. a 300-400 µm slice of brain, with intact axons and synapses) where a single neuron is "depolarized" (the normal −70 mV potential across the neuronal membrane is reduced, usually to −30 to 0 mV) for a period of 1 to 10 seconds. After the depolarization, inhibitory GABA-mediated neurotransmission is reduced. This has been demonstrated to be caused by the release of endogenous cannabinoids from the depolarized neuron, which diffuse to nearby neurons and bind and activate CB1 receptors, which act presynaptically to reduce neurotransmitter release.
History
Depolarization-induced suppression of inhibition was discovered in 1992 by Vincent et al. (1992), working in Purkinje cells of the cerebellum, and was then confirmed in the hippocampus by Pitler and Alger (1992).
These groups were studying the responses of large pyramidal projection neurons to GABA, the main inhibitory neurotransmitter in the central nervous system. GABA is typically released by small interneurons in many regions of the brain, where its job is to inhibit the activity of primary neurons, such as the CA1 pyramidal neurons of the hippocampus or the Purkinje cells of the cerebellum. Activation of GABA receptors on these cells, whether they are ionotropic or metabotropic, typically results in the influx of chloride ions into that target cell. This build-up of negative charge from the chloride ions results in the hyperpolarization of the target cell, making it less likely to fire an action potential. Accordingly, any ionic current that
|
https://en.wikipedia.org/wiki/NetCDF%20Operators
|
NCO (netCDF Operators) is a suite of programs designed to facilitate manipulation and analysis of self-describing data stored in the netCDF format.
Program Suite
ncap2 netCDF arithmetic processor
ncatted netCDF attribute editor
ncbo netCDF binary operator (includes addition, multiplication and others)
ncclimo netCDF climatology generator
nces netCDF ensemble statistics
ncecat netCDF ensemble concatenator
ncflint netCDF file interpolator
ncks netCDF kitchen sink
ncpdq netCDF permute dimensions quickly, pack data quietly
ncra netCDF record averager
ncrcat netCDF record concatenator
ncremap netCDF remapper
ncrename netCDF renamer
ncwa netCDF weighted averager
|
https://en.wikipedia.org/wiki/Nigoda
|
In Jain cosmology, the Nigoda is a realm in which the lowest forms of invisible life reside in endless numbers, without any hope of release by self-effort. Jain scriptures describe nigodas as microorganisms living in large clusters, having only one sense and a very short life, and said to pervade each and every part of the universe, even the tissues of plants and the flesh of animals. The Nigoda exists in contrast to the Supreme Abode, located at the Siddhashila (top of the universe), where liberated souls exist in omniscient and eternal bliss. According to Jain tradition, it is said that when a human being achieves liberation (Moksha), or if a human is born as a Nigoda due to karma, another from the Nigoda is given the potential of self-effort and hope.
Characteristics
The life in Nigoda is that of a sub-microscopic organism possessing only one sense, i.e., of touch.
Notes
|
https://en.wikipedia.org/wiki/Pseudo-range%20multilateration
|
Pseudo-range multilateration, often simply multilateration (MLAT) when in context, is a technique for determining the position of an unknown point, such as a vehicle, based on measurement of the times of arrival (TOAs) of energy waves traveling between the unknown point and multiple stations at known locations. When the waves are transmitted by the vehicle, MLAT is used for surveillance; when the waves are transmitted by the stations, MLAT is used for navigation (hyperbolic navigation). In either case, the stations' clocks are assumed synchronized but the vehicle's clock is not.
Prior to computing a solution, the common time of transmission (TOT) of the waves is unknown to the receiver(s), either on the vehicle (one receiver, navigation) or at the stations (multiple receivers, surveillance). Consequently, the waves' times of flight (TOFs), which are the ranges of the vehicle from the stations divided by the wave propagation speed, are also unknown. Each pseudo-range is the corresponding TOA multiplied by the propagation speed, with the same arbitrary constant added (representing the unknown TOT).
In navigation applications, the vehicle is often termed the "user"; in surveillance applications, the vehicle may be termed the "target". For a mathematically exact solution, the ranges must not change during the period the signals are received (between first and last to arrive at a receiver). Thus, for navigation, an exact solution requires a stationary vehicle; however, multilateration is often applied to the navigation of moving vehicles whose speed is much less than the wave propagation speed.
If d is the number of physical dimensions being considered (thus, the number of vehicle coordinates sought) and m is the number of signals received (thus, the number of TOAs measured), it is required that m ≥ d + 1. Then, the fundamental set of measurement equations is:
TOAs (m measurements) = TOFs (d unknown variables embedded in m expressions) + TOT (one unknown variable replicated m times).
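A numerical sketch of this system for two dimensions and four stations, with invented station positions and a normalized propagation speed; it recovers the vehicle position and the unknown TOT offset from the pseudo-ranges by Gauss-Newton iteration:

```python
import numpy as np

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
c = 1.0    # wave propagation speed (normalized)
tot = 2.5  # common time of transmission, unknown to the solver

# Pseudo-range = TOA * c = range + c * TOT (the same offset in every measurement)
pseudo = np.linalg.norm(stations - true_pos, axis=1) + c * tot

# Solve for (x, y, b) with b = c * TOT, via Gauss-Newton iteration
est = np.array([5.0, 5.0, 0.0])
for _ in range(50):
    diff = est[:2] - stations
    ranges = np.linalg.norm(diff, axis=1)
    residual = ranges + est[2] - pseudo
    jacobian = np.hstack([diff / ranges[:, None], np.ones((len(stations), 1))])
    est = est - np.linalg.solve(jacobian.T @ jacobian, jacobian.T @ residual)

print(np.round(est, 6))  # recovers position (3, 4) and offset 2.5
```

With four measurements and two coordinates plus the TOT offset (three unknowns), the condition m ≥ d + 1 is satisfied with one measurement to spare.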
Processing is usually required to extr
|
https://en.wikipedia.org/wiki/International%20Union%20of%20Biochemistry%20and%20Molecular%20Biology
|
The International Union of Biochemistry and Molecular Biology (IUBMB) is an international non-governmental organisation concerned with biochemistry and molecular biology. Formed in 1955 as the International Union of Biochemistry (IUB), the union presently has 79 member countries and regions (as of 2020). The Union is devoted to promoting research and education in biochemistry and molecular biology throughout the world, and gives particular attention to localities where the subject is still in its early development.
History
The first Congress of Biochemistry was held in 1949 in Cambridge, UK, and was inspired by the German-born British biochemist Sir Hans Adolf Krebs as a means of bringing back together biochemists whose collaboration had been interrupted by World War II. At the time, biochemistry was blossoming as a discipline and was seeking its own recognition as a Union within the International Council for Science (ICSU). The Congress was a first step toward recognizing biochemistry as a separate discipline and entity. At the final session of this congress, the International Committee of Biochemistry was set up, with 20 members from 14 countries, with the goal of obtaining from the ICSU ‘recognition as the international body representative of biochemistry, with a view to the formal constitution of an International Union of Biochemistry as soon as possible’. Discussions continued over the next few years, and by the third Congress of Biochemistry, which took place in Brussels in 1955, the International Union of Biochemistry (IUB) was formed and officially admitted to the ICSU. In 1991, the IUB changed its name to the International Union of Biochemistry and Molecular Biology (IUBMB).
Members
The IUBMB unites biochemists and molecular biologists in 75 countries that belong to the IUBMB as an Adhering Body or Associate Adhering Body represented by a biochemical society, a national research council or an academy of sciences. It also represents the regional organizations, Federatio
|
https://en.wikipedia.org/wiki/Acer%20PICA
|
The M6100 PICA is a system logic chipset designed by Acer Laboratories and introduced in 1993. PICA stands for Performance-enhanced Input-output and CPU Architecture. It was based on the Jazz architecture developed by Microsoft and supported the MIPS Technologies R4000 or R4400 microprocessors. The chipset was designed for computers that run Windows NT, and therefore used ARC firmware to boot Windows NT. The chipset consisted of six chips: a CPU and secondary cache controller, a buffer, an I/O cache and bus controller, a memory controller, and two data buffers.
PICA was used by Acer in its Formula 4000 personal workstation, which NEC sold under the OEM name RISCstation Image.
|
https://en.wikipedia.org/wiki/Roller%20Coaster%20DataBase
|
Roller Coaster DataBase (RCDB) is a roller coaster and amusement park database begun in 1996 by Duane Marden. It has grown to feature statistics and pictures of over 10,000 roller coasters from around the world.
Publications that have mentioned RCDB include The New York Times, Los Angeles Times, Toledo Blade, Orlando Sentinel, Time, Forbes, Mail & Guardian, and Chicago Sun-Times.
History
RCDB was started in 1996 by Duane Marden, a computer programmer from Brookfield, Wisconsin. The website runs on web servers in Marden's basement and at a location in St. Louis.
Content
Each roller coaster entry includes any of the following information for the ride: current amusement park location, type, status (existing, standing but not operating (SBNO), defunct), opening date, make/model, cost, capacity, length, height, drop, number of inversions, speed, duration, maximum vertical angle, trains, and special notes. Entries may also feature reader-contributed photos and/or press releases.
The site also categorizes the rides into special orders, including a list of the tallest coasters, a list of the fastest coasters, a list of the most inversions on a coaster, a list of the parks with the most inversions, etc., each sortable by steel, wooden, or both. Each roller coaster entry links back to a page which lists all of that park's roller coasters, past and present, and includes a brief history and any links to fan web pages saluting the park.
Languages
The site is available in ten languages: English, German, French, Spanish, Dutch, Portuguese, Italian, Swedish, Japanese and Simplified Chinese.
|
https://en.wikipedia.org/wiki/Markovian%20discrimination
|
In the Markov-model framework of probability theory, Markovian discrimination in spam filtering is a method used in CRM114 and other spam filters to model the statistical behavior of spam and nonspam more accurately than simple Bayesian methods do. A simple Bayesian model of written text contains only the dictionary of legal words and their relative probabilities. A Markovian model adds the relative transition probabilities that, given one word, predict what the next word will be. It is based on the theory of Markov chains by Andrey Markov, hence the name. In essence, a Bayesian filter works on single words alone, while a Markovian filter works on phrases or entire sentences.
There are two types of Markov models: the visible Markov model and the hidden Markov model (HMM).
The difference is that with a visible Markov model, the current word is considered to contain the entire state of the language model, while a hidden Markov model hides the state and presumes only that the current word is probabilistically related to the actual internal state of the language.
For example, in a visible Markov model the word "the" should predict the following word with accuracy, while in a hidden Markov model the entire prior text implies the actual state and predicts the following words, but does not actually guarantee that state or prediction. Since the latter case is what is encountered in spam filtering, hidden Markov models are almost always used. In particular, because of storage limitations, the specific type of hidden Markov model called a Markov random field is particularly applicable, usually with a clique size of between four and six tokens.
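The contrast between the two approaches can be made concrete with a toy token list (the corpus below is invented; real filters use large training sets and smoothing): a Bayesian-style score uses single-word probabilities, while a Markovian-style score also uses word-to-word transition probabilities.

```python
from collections import Counter

tokens = "win free money win free prize claim free money".split()

unigrams = Counter(tokens)                  # single-word counts (Bayesian-style)
bigrams = Counter(zip(tokens, tokens[1:]))  # adjacent-pair counts (Markovian-style)

def unigram_prob(word):
    return unigrams[word] / len(tokens)

def transition_prob(prev_word, word):
    # P(word | prev_word): how often prev_word is followed by word
    return bigrams[(prev_word, word)] / unigrams[prev_word]

print(round(unigram_prob("free"), 4))  # 0.3333
print(transition_prob("win", "free"))  # 1.0; "win" is always followed by "free"
```

The unigram model knows only that "free" is frequent; the transition model additionally captures that "win free" is a characteristic phrase, which is the extra signal a Markovian filter exploits.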
See also
Maximum-entropy Markov model
|
https://en.wikipedia.org/wiki/Gummel%E2%80%93Poon%20model
|
The Gummel–Poon model is a model of the bipolar junction transistor. It was first described in an article published by Hermann Gummel and H. C. Poon at Bell Labs in 1970.
The Gummel–Poon model and modern variants of it are widely used in popular circuit simulators such as SPICE. A significant effect that the Gummel–Poon model accounts for is the variation of the transistor's forward and reverse current gains with the direct-current level. When certain parameters are omitted, the Gummel–Poon model reduces to the simpler Ebers–Moll model.
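For context on the simpler limiting case, a sketch of the forward-active Ebers–Moll collector-current relation, IC ≈ IS·(exp(VBE/VT) − 1); the saturation current and thermal voltage below are illustrative placeholders, not fitted device parameters:

```python
import math

def collector_current(vbe, i_s=1e-15, v_t=0.02585):
    # Ideal-diode form of the forward-active collector current
    return i_s * (math.exp(vbe / v_t) - 1.0)

# Roughly a decade of collector current per ~60 mV of base-emitter voltage
ratio = collector_current(0.71) / collector_current(0.65)
print(round(ratio, 1))  # about 10
```

The Gummel–Poon model refines this picture by, among other things, letting the effective current gain vary with the bias level, which the constant-parameter form above cannot capture.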
Model parameters
Spice Gummel–Poon model parameters
See also
Gummel plot
|
https://en.wikipedia.org/wiki/Global%20Sea%20Level%20Observing%20System
|
Established in 1985, the Global Sea Level Observing System (GLOSS) is an Intergovernmental Oceanographic Commission (IOC) program whose purpose is to measure sea level globally for long-term climate change studies. The program's purpose has changed since the 2004 Indian Ocean earthquake, and the program now also collects real-time measurements of sea level. The project is upgrading the more than 290 stations it runs so that they can send real-time data via satellite to newly established national tsunami centres. It is also fitting the stations with solar panels so they can continue to operate even if the mains power supply is interrupted by severe weather. The Global Sea Level Observing System does not compete with Deep-ocean Assessment and Reporting of Tsunamis (DART), as most GLOSS transducers are located close to land masses while DART's transducers are far out in the ocean.
The concept for GLOSS was proposed to the IOC by oceanographers David Pugh and Klaus Wyrtki in order to develop the Permanent Service for Mean Sea Level (PSMSL) data bank. The PSMSL states that "GLOSS provides oversight and coordination for global and regional sea level networks in support of, and with direction from, the oceanographic and climate research communities."
The Global Sea Level Observing System utilizes 290 tide gauge stations across over 90 countries and territories to achieve global coverage.
The research provided by GLOSS is important for many purposes, including research into sea level change and ocean circulation; coastal protection during events such as storm surges; flood warning and tsunami monitoring; tide tables for port operations, fishermen, and recreation; and the definition of datums for national or state boundaries.
GLOSS Core Network
The operation and maintenance of the GLOSS Core Network fulfills a range of research and operational requirements for the GLOSS Network. The goal of this network is to be 100% effective. Each gauge that is placed may di
|
https://en.wikipedia.org/wiki/Transistor%20model
|
Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
Models for device design
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator. "Geometry" does not mean only readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), and dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and are used to inform the process designer about any necessary process improvements. Once the process gets close
|
https://en.wikipedia.org/wiki/Lucas%20cell
|
A Lucas cell is a type of scintillation counter. It is used to acquire a gas sample, filter out the radioactive particulates through a special filter and then count the radioactive decay. The inside of the gas chamber is coated with ZnS(Ag) - a chemical that emits light when struck by alpha particles. A photomultiplier tube at the top of the chamber counts the photons and sends the count to a data logger.
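The count-to-activity conversion behind such a counter can be sketched as follows. The efficiency and count figures are hypothetical calibration values for illustration, not measurements from any real cell:

```python
def activity_bq(counts, live_time_s, efficiency):
    """Convert net photon counts logged from the photomultiplier into an
    activity estimate in becquerels (decays per second).

    efficiency: overall counting efficiency of the cell (0..1),
                an assumed calibration value for illustration.
    """
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return counts / (live_time_s * efficiency)

# e.g. 9000 counts over a 600 s counting interval at 75% efficiency
estimate = activity_bq(9000, 600, 0.75)  # -> 20.0 Bq
```

In practice the efficiency would come from calibrating the cell against a source of known activity (see the counting efficiency article linked below).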
Radon measurement
A Lucas cell can be used to measure radon gas concentrations.
Radon itself is an inert gas. Its danger lies in the fact that it undergoes radioactive decay. The radon decay products may lodge in the lungs and bombard them with alpha and beta particles, thus increasing the risk of lung cancer.
See also
Geiger counter
Counting efficiency
|
https://en.wikipedia.org/wiki/Dogs%20%28Fouling%20of%20Land%29%20Act%201996
|
The Dogs (Fouling of Land) Act 1996 is an Act of the Parliament of the United Kingdom. The purpose of the Act was to create a criminal offence if a dog defecates at any time on designated land and a person who is in charge of the dog at that time fails to remove the faeces from the land forthwith.
It was repealed by the Clean Neighbourhoods and Environment Act 2005, section 65, and replaced by similar legislation in the same act. The Act applied only in England and Wales; dog fouling was not regulated in Scotland until the passing of the Dog Fouling (Scotland) Act 2003.
Some exemptions are in place for land beside a major road, agricultural land or forestry. Local authorities were to be responsible for policing the Act, and are able to appoint officers to enforce the regulations. Conviction would lead to a fine.
See also
Dogs Act
|
https://en.wikipedia.org/wiki/Stress%20migration
|
Stress migration is a failure mechanism that often occurs in integrated circuit metallization (aluminum, copper). Voids form as a result of vacancy migration driven by the hydrostatic stress gradient. Large voids may lead to an open circuit or an unacceptable resistance increase that impedes IC performance. Stress migration is often referred to as stress voiding, stress-induced voiding, or SIV.
High temperature processing of copper dual damascene structures leaves the copper with a large tensile stress due to a mismatch in coefficient of thermal expansion of the materials involved. The stress can relax with time through the diffusion of vacancies leading to the formation of voids and ultimately open circuit failures.
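The residual stress left by the thermal-expansion mismatch can be estimated with the standard biaxial thin-film formula, σ = E·Δα·ΔT / (1 − ν). The material constants below are rough textbook values for copper on oxide, used only for illustration:

```python
def thermal_mismatch_stress(e_modulus_pa, nu, delta_alpha_per_k, delta_t_k):
    """Biaxial thermal stress in a film constrained by its surroundings:
        sigma = E * d_alpha * d_T / (1 - nu)
    Parameter values passed in below are illustrative, not measured data.
    """
    return e_modulus_pa * delta_alpha_per_k * delta_t_k / (1.0 - nu)

# Copper line: E ~ 120 GPa, nu ~ 0.34, d_alpha ~ 13e-6 /K (Cu vs. SiO2),
# cooled by ~375 K from processing temperature; result is of order 1 GPa.
sigma = thermal_mismatch_stress(120e9, 0.34, 13e-6, 375.0)
```

A large positive (tensile) σ of this order is what drives the vacancy diffusion and void growth described above.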
|
https://en.wikipedia.org/wiki/Clue%20cell
|
Clue cells are epithelial cells of the vagina that get their distinctive stippled appearance by being covered with bacteria. The etymology behind the term "clue" cell derives from the original research article from Gardner and Dukes describing the characteristic cells. The name was chosen for its brevity in describing the sine qua non of bacterial vaginosis.
They are a medical sign of bacterial vaginosis, particularly that caused by Gardnerella vaginalis, a group of Gram-variable bacteria. This bacterial infection is characterized by a foul, fishy-smelling, thin gray vaginal discharge and an increase in vaginal pH from around 4.5 to over 5.5.
|
https://en.wikipedia.org/wiki/Pharyngeal%20groove
|
A pharyngeal groove (or branchial groove, or pharyngeal cleft) is made up of ectoderm unlike its counterpart the pharyngeal pouch on the endodermal side.
The first pharyngeal groove produces the external auditory meatus (ear canal). The rest (2, 3, and 4) are overlapped by the growing 2nd pharyngeal arch, and form the floor of the depression termed the cervical sinus, which opens ventrally, and is finally obliterated.
See also
Branchial cleft cyst
Collaural fistula
|
https://en.wikipedia.org/wiki/Pharyngeal%20pouch%20%28embryology%29
|
In the embryonic development of vertebrates, pharyngeal pouches form on the endodermal side between the pharyngeal arches. The pharyngeal grooves (or clefts) form the lateral ectodermal surface of the neck region to separate the arches.
Specific pouches
First pouch
The endoderm lines the future auditory tube (pharyngotympanic or Eustachian tube), middle ear, mastoid antrum, and inner layer of the tympanic membrane. Derivatives of this pouch are supplied by the mandibular nerve.
Second pouch
Contributes to the middle ear and the palatine tonsils; supplied by the facial nerve.
Third pouch
The third pouch possesses dorsal and ventral wings. Derivatives of the dorsal wings include the inferior parathyroid glands, while the ventral wings fuse to form the cytoreticular cells of the thymus. The main nerve supply to the derivatives of this pouch is cranial nerve IX, the glossopharyngeal nerve.
Fourth pouch
Derivatives include:
the superior parathyroid glands and the ultimobranchial body, which forms the parafollicular C-cells of the thyroid gland.
Musculature and cartilage of the larynx (along with the sixth pharyngeal arch).
The nerve supplying these derivatives is the superior laryngeal nerve.
Fifth pouch
Rudimentary structure, becomes part of the fourth pouch contributing to thyroid C-cells.
Sixth pouch
The fourth and sixth pouches contribute to the formation of the musculature and cartilage of the larynx. Nerve supply is by the recurrent laryngeal nerve.
See also
Pharyngeal arch (often called branchial arch although this is more specifically a fish structure)
DiGeorge syndrome
List of human cell types derived from the germ layers
|
https://en.wikipedia.org/wiki/Posterior%20inferior%20cerebellar%20artery
|
The posterior inferior cerebellar artery (PICA) is the largest branch of the vertebral artery. It is one of the three main arteries that supply blood to the cerebellum, a part of the brain. Blockage of the posterior inferior cerebellar artery can result in a type of stroke called lateral medullary syndrome.
Supply
The PICA supplies blood to the medulla oblongata; the choroid plexus and tela choroidea of the fourth ventricle; the tonsils; the inferior vermis, and the inferior parts of the cerebellum.
Course
It winds backward around the upper part of the medulla oblongata, passing between the origins of the vagus nerve and the accessory nerve, over the inferior cerebellar peduncle to the undersurface of the cerebellum, where it divides into two branches.
The medial branch continues backward to the notch between the two hemispheres of the cerebellum; while the lateral supplies the under surface of the cerebellum, as far as its lateral border, where it anastomoses with the anterior inferior cerebellar and the superior cerebellar branches of the basilar artery.
Branches from this artery supply the choroid plexus of the fourth ventricle.
Clinical significance
A disrupted blood supply to the posterior inferior cerebellar artery due to a thrombus or embolus can result in a stroke and lead to lateral medullary syndrome. Severe occlusion of this artery or of the vertebral arteries could also lead to Horner's syndrome.
|
https://en.wikipedia.org/wiki/Melvin%20Hochster
|
Melvin Hochster (born August 2, 1943) is an American mathematician working in commutative algebra. He is currently the Jack E. McLaughlin Distinguished University Professor Emeritus of Mathematics at the University of Michigan.
Education
Hochster attended Stuyvesant High School, where he was captain of the Math Team, and received a B.A. from Harvard University. While at Harvard, he was a Putnam Fellow in 1960. He earned his Ph.D. in 1967 from Princeton University, where he wrote a dissertation under Goro Shimura characterizing the prime spectra of commutative rings.
Career
He held positions at the University of Minnesota and Purdue University before joining the faculty at Michigan in 1977.
Hochster's work is primarily in commutative algebra, especially the study of modules over local rings. He has established classic theorems concerning Cohen–Macaulay rings, invariant theory and homological algebra. For example, the Hochster–Roberts theorem states that the invariant ring of a linearly reductive group acting on a regular ring is Cohen–Macaulay. His best-known work is on the homological conjectures, many of which he established for local rings containing a field, thanks to his proof of the existence of big Cohen–Macaulay modules and his technique of reduction to prime characteristic. His most recent work on tight closure, introduced in 1986 with Craig Huneke, has found unexpected applications throughout commutative algebra and algebraic geometry.
He has had more than 40 doctoral students, and the Association for Women in Mathematics has pointed out his outstanding role in mentoring women students pursuing a career in mathematics. He served as the chair of the department of Mathematics at the University of Michigan from 2008 to 2017.
Awards
Hochster shared the 1980 Cole Prize with Michael Aschbacher, received a Guggenheim Fellowship in 1981, and has been a member of both the National Academy of Sciences and the American Academy of Arts and Sciences since 19
|