https://en.wikipedia.org/wiki/Journal%20of%20Bioscience%20and%20Bioengineering
The Journal of Bioscience and Bioengineering is a monthly peer-reviewed scientific journal. The editor-in-chief is Noriho Kamiya (Kyushu University). It is published by The Society for Biotechnology, Japan, and distributed outside Japan by Elsevier. It was founded in 1923 as a Japanese-language journal and took its current title in 1999. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.015.
https://en.wikipedia.org/wiki/Big%20Jake%20%28horse%29
Big Jake (March 2001 – June 2021) was a red flaxen Belgian gelding noted for his extreme height. According to Guinness World Records, Big Jake broke the record for the world's tallest living horse when he was measured in 2010, and he held that record for the remainder of his life. After Sampson (foaled 1846 in Toddington Mills, Bedfordshire, England), he is the second-tallest horse on record. Big Jake was born in 2001 in the U.S. state of Nebraska, weighing considerably more than is typical for his breed. His parents were normal-sized, and he was tall as a foal, but not exceptionally so. Big Jake was purchased by a relative of his eventual owner Jerry Gilbert, who took ownership when it became apparent that the horse would become very large and require special accommodation. Gilbert kept Big Jake at Smokey Hollow Farm, near Poynette, Wisconsin, feeding him two to three buckets of grain and a whole bale of hay daily. His stall was almost twice the size of a regular horse's, and he was transported in semi-trailers due to his size. Big Jake competed in draft horse showing competitions before retiring in 2013, and made regular appearances at the Wisconsin State Fair. Visitors to the farm were offered barn tours, which included meeting Big Jake. Big Jake's death was announced by Smokey Hollow Farm on June 27, 2021, with Gilbert's wife stating that the death had taken place approximately two weeks prior but declining to give the media an exact date. Jerry Gilbert hailed Big Jake as a "gentle giant", and stated that he intended to keep his stall empty as a memorial.
https://en.wikipedia.org/wiki/Nichols%20plot
The Nichols plot is a plot used in signal processing and control design, named after American engineer Nathaniel B. Nichols. Use in control design Given a transfer function $G(s)$, with the closed-loop transfer function defined as $M(s) = \frac{G(s)}{1 + G(s)}$, the Nichols plot displays $20 \log_{10} |G(j\omega)|$ versus $\arg G(j\omega)$ in degrees. Loci of constant $|M(j\omega)|$ and $\arg M(j\omega)$ are overlaid to allow the designer to obtain the closed-loop transfer function directly from the open-loop transfer function. Thus, the frequency $\omega$ is the parameter along the curve. This plot may be compared to the Bode plot, in which the two inter-related graphs - $20 \log_{10} |G(j\omega)|$ versus $\log_{10} \omega$ and $\arg G(j\omega)$ versus $\log_{10} \omega$ - are plotted. In feedback control design, the plot is useful for assessing the stability and robustness of a linear system. This application of the Nichols plot is central to the quantitative feedback theory (QFT) of Horowitz and Sidi, which is a well-known method for robust control system design. In most cases, $\arg G(j\omega)$ refers to the phase of the system's response. Although similar to a Nyquist plot, a Nichols plot is plotted in a Cartesian coordinate system while a Nyquist plot is plotted in a polar coordinate system. See also Hall circles Bode plot Nyquist plot Transfer function
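As a rough illustration (a minimal sketch of my own; the plant $G(s) = 6/(s(s+1)(s+2))$ and all names are arbitrary choices), the open-loop gain-phase curve can be computed and plotted directly:

    import numpy as np
    import matplotlib.pyplot as plt

    K = 6.0
    omega = np.logspace(-2, 2, 1000)                # frequency is the parameter along the curve
    s = 1j * omega
    G = K / (s * (s + 1) * (s + 2))                 # open-loop frequency response G(jw)
    gain_db = 20 * np.log10(np.abs(G))              # |G(jw)| in decibels
    phase_deg = np.degrees(np.unwrap(np.angle(G)))  # arg G(jw) in degrees

    plt.plot(phase_deg, gain_db)                    # Nichols plot: gain (dB) versus phase (deg)
    plt.xlabel("open-loop phase (degrees)")
    plt.ylabel("open-loop gain (dB)")
    plt.grid(True)
    plt.show()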
https://en.wikipedia.org/wiki/Context-aware%20pervasive%20systems
Context-aware computing refers to a general class of mobile systems that can sense their physical environment and adapt their behavior accordingly. Three important aspects of context are: where you are; who you are with; and what resources are nearby. Although location is a primary capability, location-awareness does not necessarily capture things of interest that are mobile or changing. Context-awareness, in contrast, is used more generally to include nearby people, devices, lighting, noise level, network availability, and even the social situation, e.g., whether you are with your family or a friend from school. History The concept emerged from ubiquitous computing research at Xerox PARC and elsewhere in the early 1990s. The term 'context-aware' was first used by Schilit and Theimer in their 1994 paper Disseminating Active Map Information to Mobile Hosts, where they describe a model of computing in which users interact with many different mobile and stationary computers, and classify a context-aware system as one that can adapt according to its location of use, the collection of nearby people and objects, and the changes to those objects over time. See also Ambient intelligence Context awareness Differentiated service (design pattern) Locative Media
https://en.wikipedia.org/wiki/Invariant%20%28mathematics%29
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class. Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects. Examples A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting. An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change. The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication. Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios a
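To make the distance example concrete (a small worked check of my own): for real numbers x and y and constants a and b, $$|(x + a) - (y + a)| = |x - y|, \qquad |bx - by| = |b|\,|x - y|,$$ so distance is invariant under adding the same quantity to both numbers but, unless $|b| = 1$, not under multiplying both by b.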
https://en.wikipedia.org/wiki/Recursive%20filter
In signal processing, a recursive filter is a type of filter which reuses one or more of its outputs as an input. This feedback typically results in an unending impulse response (commonly referred to as infinite impulse response (IIR)), characterised by either exponentially growing, decaying, or sinusoidal signal output components. However, a recursive filter does not always have an infinite impulse response. Some implementations of the moving average filter are recursive filters, but with a finite impulse response. Non-recursive filter example: y[n] = 0.5x[n − 1] + 0.5x[n]. Recursive filter example: y[n] = 0.5y[n − 1] + 0.5x[n]. Examples of recursive filters Kalman filter
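A short Python sketch of my own contrasting the two example filters via their impulse responses:

    import numpy as np

    x = np.zeros(16)
    x[0] = 1.0  # unit impulse

    # Non-recursive (FIR): y[n] = 0.5 x[n-1] + 0.5 x[n]
    y_fir = np.zeros_like(x)
    for n in range(len(x)):
        y_fir[n] = 0.5 * (x[n - 1] if n > 0 else 0.0) + 0.5 * x[n]

    # Recursive (IIR): y[n] = 0.5 y[n-1] + 0.5 x[n]; the feedback term
    # makes the impulse response decay forever instead of cutting off.
    y_iir = np.zeros_like(x)
    for n in range(len(x)):
        y_iir[n] = 0.5 * (y_iir[n - 1] if n > 0 else 0.0) + 0.5 * x[n]

    print(y_fir[:5])  # [0.5, 0.5, 0, 0, 0]      -> finite impulse response
    print(y_iir[:5])  # [0.5, 0.25, 0.125, ...]  -> exponentially decaying, never exactly zero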
https://en.wikipedia.org/wiki/Pregnancy%20over%20age%2050
Pregnancy over the age of 50 has become possible for more women due to advances in assisted reproductive technology, in particular egg donation. Typically, a woman's fecundity ends with menopause, which, by definition, is 12 consecutive months without having had any menstrual flow at all. During perimenopause, the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman. A woman's individual level of fertility can be tested through a variety of methods. In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55. The oldest recorded mother to date conceived at the age of 73. According to statistics from the Human Fertilisation and Embryology Authority, in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs). Maria del Carmen Bousada de Lara formerly held the record of oldest verified mother; she was aged 66 years 358 days when she gave birth to twins; she was 130 days older than Adriana Iliescu, who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. The oldest verified mother to conceive naturally (listed currently in Guinness World Records) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997. Erramatti Mangamma currently holds the record for being the oldest living mother, having given birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad, India. She delivered twin baby girls, making her also the oldest mother to give birth to twins. The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar, India, who gave birth to a baby boy at age 72 through in-vitro fertilisation. Age considerations Menopause typically occurs between 44 and 58 years of age. DNA testi
https://en.wikipedia.org/wiki/List%20of%20limits
This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x. Limits for general functions Definitions of limits and related concepts $\lim_{x \to c} f(x) = L$ if and only if for every $\varepsilon > 0$ there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - L| < \varepsilon$. This is the (ε, δ)-definition of limit. The limit superior and limit inferior of a sequence are defined as $\limsup_{n \to \infty} x_n = \lim_{n \to \infty} \left( \sup_{m \geq n} x_m \right)$ and $\liminf_{n \to \infty} x_n = \lim_{n \to \infty} \left( \inf_{m \geq n} x_m \right)$. A function, $f(x)$, is said to be continuous at a point, c, if $\lim_{x \to c} f(x) = f(c)$. Operations on a single known limit If $\lim_{x \to c} f(x) = L$ then: $\lim_{x \to c} [f(x) \pm a] = L \pm a$; $\lim_{x \to c} a f(x) = aL$; $\lim_{x \to c} \frac{1}{f(x)} = \frac{1}{L}$ if L is not equal to 0; $\lim_{x \to c} f(x)^n = L^n$ if n is a positive integer; $\lim_{x \to c} f(x)^{1/n} = L^{1/n}$ if n is a positive integer, and if n is even, then L > 0. In general, if g(x) is continuous at L and $\lim_{x \to c} f(x) = L$ then $\lim_{x \to c} g(f(x)) = g(L)$. Operations on two known limits If $\lim_{x \to c} f(x) = L_1$ and $\lim_{x \to c} g(x) = L_2$ then: $\lim_{x \to c} [f(x) \pm g(x)] = L_1 \pm L_2$; $\lim_{x \to c} f(x) g(x) = L_1 L_2$; $\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{L_1}{L_2}$ if $L_2 \neq 0$. Limits involving derivatives or infinitesimal changes In these limits, the infinitesimal change is often denoted $\Delta x$ or $h$. If $f(x)$ is differentiable at $x$, $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x)$. This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x: $\lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} = f'(g(x))\, g'(x)$. This is the chain rule. $\lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h} = f'(x) g(x) + f(x) g'(x)$. This is the product rule. If $f$ and $g$ are differentiable on an open interval containing c, except possibly c itself, and $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0$ or $\pm\infty$, L'Hôpital's rule can be used: $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$. Inequalities If $f(x) \leq g(x)$ for all x in an interval that contains c, except possibly c itself, and the limits of $f(x)$ and $g(x)$ both exist at c, then $\lim_{x \to c} f(x) \leq \lim_{x \to c} g(x)$. If $\lim_{x \to c} g(x) = \lim_{x \to c} h(x) = L$ and $g(x) \leq f(x) \leq h(x)$ for all x in an open interval that contains c, except possibly c itself, then $\lim_{x \to c} f(x) = L$. This is known as the squeeze theorem. This applies even in the cases that f(x) and g(x) take on different values at c, or are discontinuous at c. Polynomials and functions of the form xa Polynomials in x $\lim_{x \to c} x^n = c^n$ if n is a positive integer. In general, if $p(x)$ is a polynomial then, by the continuity of polynomials, $\lim_{x \to c} p(x) = p(c)$. This is also true for rational functions, as they are continuous on their domains. Functions of the form xa $\lim_{x \to c} x^a = c^a$. In particular, $\lim_{x \to \infty} x^a = \infty$ for $a > 0$. In particular, $\lim_{x \to \infty} x^{-a} = 0$ for $a > 0$. Exponential functions Functions of the form ag(x) $\lim_{x \to c} a^{g(x)} = a^{\lim_{x \to c} g(x)}$, due to the continuity of $a^x$. Functions of the form xg(x) Functions of the form f(x)g(x) $\lim_{x \to 0^+} x^x = 1$. This limit can be deriv
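As a worked instance of L'Hôpital's rule from the list above (example of my own choosing): since $\sin x$ and $x$ both tend to 0 and are differentiable near 0, $$\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = \cos 0 = 1.$$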
https://en.wikipedia.org/wiki/ClearSpeed
ClearSpeed Technology Ltd was a semiconductor company, formed in 2002 to develop enhanced SIMD processors for use in high-performance computing and embedded systems. Based in Bristol, UK, the company has been selling its processors since 2005. Its current 192-core CSX700 processor was released in 2008, but a lack of sales has forced the company to downsize and it has since delisted from the London stock exchange. Products The CSX700 processor consists of two processing arrays, each with 96 processing elements. The processing elements each contain a 32/64-bit floating point multiplier, a 32/64-bit floating point adder, 6 KB of SRAM, an integer arithmetic logic unit, and a 16-bit integer multiply–accumulate unit. It currently sells its CSX700 processor on a PCI Express expansion card with 2 GB of memory, called the Advance e710. The card is supplied with the ClearSpeed Software Development Kit and application libraries. Related multi-core architectures include Ambric, PicoChip, Cell BE, Texas Memory Systems, and GPGPU stream processors such as AMD FireStream and Nvidia Tesla. ClearSpeed competes with AMD and Nvidia in the hardware acceleration market, where computationally intensive applications offload tasks to the accelerator. As of 2009, only the ClearSpeed e710 performs 64-bit arithmetic at its peak computational rate. History In November 2003 ClearSpeed demonstrated the CS301, with 64 processing elements running at 200 MHz, and peak 25.6 FP32 GFLOPS. In June 2005 ClearSpeed demonstrated the CSX600, with 96 processing elements running at 210 MHz, capable of 40 GFLOPS. In September 2005 John Gustafson joined ClearSpeed as CTO of high performance computing. In November 2005 ClearSpeed made its first significant sale of CSX600 processors to the Tokyo Institute of Technology using X620 Advance cards. In November 2006 ClearSpeed X620 Advance cards helped place the Tsubame cluster 7th in the TOP500 list of supercomputers. The cards continue to be used in 2009.
https://en.wikipedia.org/wiki/Girsanov%20theorem
In probability theory, the Girsanov theorem tells how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it tells how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying. History Results of this type were first proved by Cameron and Martin in the 1940s and by Igor Girsanov in 1960. They have been subsequently extended to more general classes of process, culminating in the general form of Lenglart (1977). Significance Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P-semimartingale is a Q-semimartingale. Statement of theorem We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model. Let $\{W_t\}$ be a Wiener process on the Wiener probability space $\{\Omega, \mathcal{F}, P\}$. Let $X_t$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}^W_t\}$; we assume that the usual conditions have been satisfied. Given an adapted process $X_t$, define $Z_t = \mathcal{E}(X)_t$, where $\mathcal{E}(X)$ is the stochastic exponential of X with respect to W, i.e. $\mathcal{E}(X)_t = \exp\left(X_t - \frac{1}{2}[X]_t\right)$, and $[X]_t$ denotes the quadratic variation of the process X. If $Z_t$ is a martingale then a probability measure Q can be defined on $\{\Omega, \mathcal{F}\}$ such that we have the Radon–Nikodym derivative $\frac{dQ}{dP}\big|_{\mathcal{F}_t} = Z_t$. Then for each t the measure Q restricted to the unaugmented sigma fields $\mathcal{F}^W_t$ is equivalent to P restricted to $\mathcal{F}^W_t$. Furthermore, if $Y_t$ is a local martingale under P then the process $\tilde{Y}_t = Y_t - [Y, X]_t$ is a Q local martingale on the filtered probability space $\{\Omega, \mathcal{F}, \{\mathcal{F}^W_t\}, Q\}$. Corollary If X is a continuous process and W is Brownian motion under measure P then $\tilde{W}_t = W_t - [W, X]_t$ is Brownian motion under Q.
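As a hedged numerical illustration (my own construction, not from the article): under P, $W_T \sim N(0, T)$; taking $X_t = -\theta W_t$ gives $Z_T = \exp(-\theta W_T - \tfrac{1}{2}\theta^2 T)$, and under the resulting Q the process W acquires drift $-\theta$, so $E_Q[W_T] = -\theta T$. A Monte Carlo check in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, T, n = 0.7, 2.0, 1_000_000

    W_T = rng.normal(0.0, np.sqrt(T), n)             # terminal values sampled under P
    Z_T = np.exp(-theta * W_T - 0.5 * theta**2 * T)  # Radon-Nikodym derivative dQ/dP

    print(np.mean(Z_T))        # ~1.0: Z is a martingale with E_P[Z_T] = 1
    print(np.mean(Z_T * W_T))  # ~ -theta*T = -1.4: E_Q[W_T] obtained by reweighting P-samples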
https://en.wikipedia.org/wiki/List%20of%20dualities
Mathematics In mathematics, a duality, generally speaking, translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. Alexander duality Alvis–Curtis duality Artin–Verdier duality Beta-dual space Coherent duality De Groot dual Dual abelian variety Dual basis in a field extension Dual bundle Dual curve Dual (category theory) Dual graph Dual group Dual object Dual pair Dual polygon Dual polyhedron Dual problem Dual representation Dual q-Hahn polynomials Dual q-Krawtchouk polynomials Dual space Dual topology Dual wavelet Duality (optimization) Duality (order theory) Duality of stereotype spaces Duality (projective geometry) Duality theory for distributive lattices Dualizing complex Dualizing sheaf Eckmann–Hilton duality Esakia duality Fenchel's duality theorem Hodge dual Jónsson–Tarski duality Lagrange duality Langlands dual Lefschetz duality Local Tate duality Opposite category Poincaré duality Twisted Poincaré duality Poitou–Tate duality Pontryagin duality S-duality (homotopy theory) Schur–Weyl duality Series-parallel duality Serre duality Spanier–Whitehead duality Stone's duality Tannaka–Krein duality Verdier duality Grothendieck local duality Philosophy and religion Dualism (philosophy of mind) Epistemological dualism Dualistic cosmology Soul dualism Yin and yang Engineering Duality (electrical circuits) Duality (mechanical engineering) Observability/Controllability in control theory Physics Complementarity (physics) Dual resonance model Duality (electricity and magnetism) Englert–Greenberger duality relation Holographic duality Kramers–Wannier duality Mirror symmetry 3D mirror symmetry Montonen–Olive duality Mysterious duality (M-theory) Seiberg duality String duality S-duality T-duality U-duality Wave–par
https://en.wikipedia.org/wiki/Outline%20of%20logic
Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics and to a lesser extent computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify the correct (or valid) and incorrect (or fallacious) inferences. Logicians study the criteria for the evaluation of arguments. Foundations of logic Philosophy of logic Analytic-synthetic distinction Antinomy A priori and a posteriori Definition Description Entailment Identity (philosophy) Inference Logical form Logical implication Logical truth Logical consequence Name Necessity Material conditional Meaning (linguistic) Meaning (non-linguistic) Paradox  (list) Possible world Presupposition Probability Quantification Reason Reasoning Reference Semantics Strict conditional Syntax (logic) Truth Truth value Validity Branches of logic Affine logic Alethic logic Aristotelian logic Boolean logic Buddhist logic Bunched logic Categorical logic Classical logic Computability logic Deontic logic Dependence logic Description logic Deviant logic Doxastic logic Epistemic logic First-order logic Formal logic Free logic Fuzzy logic Higher-order logic Infinitary logic Informal logic Intensional logic Intermediate logic Interpretability logic Intuitionistic logic Linear logic Many-valued logic Mathematical logic Metalogic Minimal logic Modal logic Non-Aristotelian logic Non-classical logic Noncommutative logic Non-monotonic logic Ordered logic Paraconsistent logic Philosophical logic Predicate logic Propositional logic P
https://en.wikipedia.org/wiki/List%20of%20formulas%20in%20elementary%20geometry
This is a short list of some common mathematical shapes and figures and the formulas that describe them. Two-dimensional shapes Three-dimensional shapes This is a list of volume formulas of basic shapes: Cone – $\frac{1}{3}\pi r^2 h$, where $r$ is the base's radius and $h$ is the cone's height; Cube – $a^3$, where $a$ is the side's length; Cuboid – $abc$, where $a$, $b$, and $c$ are the sides' lengths; Cylinder – $\pi r^2 h$, where $r$ is the base's radius and $h$ is the cylinder's height; Ellipsoid – $\frac{4}{3}\pi abc$, where $a$, $b$, and $c$ are the semi-axes' lengths; Sphere – $\frac{4}{3}\pi r^3$, where $r$ is the radius; Parallelepiped – $abc\sqrt{1 + 2\cos\alpha\cos\beta\cos\gamma - \cos^2\alpha - \cos^2\beta - \cos^2\gamma}$, where $a$, $b$, and $c$ are the sides' lengths and $\alpha$, $\beta$, and $\gamma$ are the angles between pairs of sides; Prism – $Bh$, where $B$ is the base's area and $h$ is the prism's height; Pyramid – $\frac{1}{3}Bh$, where $B$ is the base's area and $h$ is the pyramid's height; Tetrahedron – $\frac{\sqrt{2}}{12}a^3$, where $a$ is the side's length. Sphere The basic quantities describing a sphere (meaning a 2-sphere, a 2-dimensional surface inside 3-dimensional space) will be denoted by the following variables: $r$ is the radius, $C$ is the circumference (the length of any one of its great circles), $A$ is the surface area, $V$ is the volume. Surface area: $A = 4\pi r^2$. Volume: $V = \frac{4}{3}\pi r^3$. Radius: $r = \frac{C}{2\pi} = \sqrt{\frac{A}{4\pi}} = \left(\frac{3V}{4\pi}\right)^{1/3}$. Circumference: $C = 2\pi r$. See also
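A quick numeric check of the sphere formulas (a trivial sketch of my own):

    import math

    def sphere_volume(r: float) -> float:
        return 4.0 / 3.0 * math.pi * r**3

    def sphere_radius_from_volume(v: float) -> float:
        # inverts V = (4/3) pi r^3
        return (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

    v = sphere_volume(2.0)
    print(v)                              # ~33.51
    print(sphere_radius_from_volume(v))   # 2.0, recovering the radius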
https://en.wikipedia.org/wiki/Honeywell%20JetWave
Honeywell's JetWave is a piece of satellite communications hardware produced by Honeywell that enables global in-flight internet connectivity. Its connectivity is provided using Inmarsat's GX Aviation network. The JetWave platform is used in business and general aviation, as well as by defense and commercial airline users. History In 2012, Honeywell announced it would provide Inmarsat with the hardware for its GX Ka-band in-flight connectivity network. The Ka-band (pronounced either "kay-ay band" or "ka band") is a portion of the microwave part of the electromagnetic spectrum defined as frequencies in the range 27.5 to 31 gigahertz (GHz). In satellite communications, the Ka-band allows higher bandwidth communication. In 2017, after five years and more than 180 flight hours of testing, JetWave was launched as part of GX Aviation with Lufthansa Group. Honeywell's JetWave was the exclusive terminal hardware option for the Inmarsat GX Aviation network; however, the exclusivity clause in that contract has expired. In July 2019, the United States Air Force selected Honeywell's JetWave satcom system for 70 of its C-17 Globemaster III cargo planes. In December 2019, it was reported that six AirAsia aircraft had been fitted with Inmarsat's GX Aviation Ka-band connectivity system, with fleetwide implementation across AirAsia's Airbus A320 and A330 models slated for 2020, requiring installation of JetWave atop AirAsia's fuselages. Today, Honeywell's JetWave hardware is installed on over 1,000 aircraft worldwide. In August 2021, the Civil Aviation Administration of China approved a validation of Honeywell's MCS-8420 JetWave satellite connectivity system for Airbus A320 aircraft. In December 2021, Honeywell, SES, and Hughes Network Systems demonstrated multi-orbit high-speed airborne connectivity for military customers using Honeywell's JetWave MCX terminal with a Hughes HM-series modem, and SES satellites in both medium Earth orbit (MEO) and geostationary orbit (GEO). T
https://en.wikipedia.org/wiki/Spectrogram
A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays. Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing, seismology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals. A spectrogram can be generated by an optical spectrometer, by a bank of band-pass filters, by Fourier transform, or by a wavelet transform (in which case it is also known as a scaleogram or scalogram). A spectrogram is usually depicted as a heat map, i.e., as an image with the intensity shown by varying the colour or brightness. Format A common format is a graph with two geometric dimensions: one axis represents time, and the other axis represents frequency; a third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity or colour of each point in the image. There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes the spectrogram is drawn as a waterfall plot, where the amplitude is represented by the height of a 3D surface instead of colour or intensity. The frequency and amplitude axes can be either linear or logarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably in decibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships. Generation Spectrograms of light may be created directly using an optical spectrometer over time. Spectrograms may be created from a time-domain signal in one of two ways: approximated as a filterbank that results from a series of band-pass filter
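The Fourier-transform route is the common one in software; a minimal sketch (the signal, sample rate, and window sizes are arbitrary choices of mine) using SciPy's short-time Fourier analysis:

    import numpy as np
    from scipy.signal import spectrogram
    import matplotlib.pyplot as plt

    fs = 8000                                      # sample rate, Hz
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * (200 + 400 * t) * t)    # synthetic chirp sweeping upward

    # 256-sample windows with 75% overlap; Sxx[i, j] is power at frequency f[i], time tt[j]
    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)

    plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12))  # log amplitude as a dB heat map
    plt.xlabel("time (s)")
    plt.ylabel("frequency (Hz)")
    plt.show()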
https://en.wikipedia.org/wiki/List%20of%20microorganisms%20used%20in%20food%20and%20beverage%20preparation
List of useful microorganisms used in preparation of food and beverage See also Fermentation (food) Food microbiology
https://en.wikipedia.org/wiki/Measurement%20Studio
NI Measurement Studio is a set of test and measurement components built by National Instruments that integrates into the Microsoft Visual Studio environment. It includes extensive support for accessing instrumentation hardware: drivers and abstraction layers for many different types of instruments and buses are included or are available for inclusion. Measurement Studio includes a suite of analysis functions, including curve fitting, spectral analysis, fast Fourier transforms (FFT), digital filters, and visualization. It also includes the ability to share variables and pass data over the internet with network shared variables. History Measurement Studio was introduced in February 2000 by National Instruments to combine its text-based programming tools, specifically LabWindows/CVI, ComponentWorks++, and ComponentWorks. Measurement Studio 7.0 adopted support for .NET and allowed native .NET controls and classes to integrate into Visual Studio. As of Measurement Studio 8.0.1, support for Visual Studio 2005 and the .NET 2.0 framework has been included, with support for Windows Vista first adopted in version 8.1.1. The current version of Measurement Studio drops support for multiple earlier versions of Visual Studio, including 2008, 2005, .NET 2003 and 6.0. Measurement Studio includes a variety of examples to illustrate how common GPIB, VISA, DAQmx, analysis, and DataSocket applications can be built. Related software National Instruments also offers a product called LabVIEW, which offers many of the test, measurement and control capabilities of Measurement Studio. National Instruments also offers LabWindows/CVI as an alternative for ANSI C programmers. See also Dataflow programming Virtual instrumentation Comparison of numerical analysis software Fourth-generation programming language
https://en.wikipedia.org/wiki/Census%20of%20Antarctic%20Marine%20Life
The Census of Antarctic Marine Life (CAML) is a field project of the Census of Marine Life that researches the marine biodiversity of Antarctica, how it is affected by climate change, and how this change is altering the ecosystem of the Southern Ocean. The program started in 2005 as a 5-year initiative with the scientific goal of studying the evolution of life in Antarctic waters, determining how this has influenced the diversity of the present biota, and using these observations to predict how it might respond to future change. Advances in technology have also made it possible to observe the reproduction and development of this biodiversity, giving further insight into the characteristics that allow it to flourish in the polar environment. CAML collected its data from 18 Antarctic research vessels during the International Polar Year; the data are freely accessible at the Scientific Committee on Antarctic Research Marine Biodiversity Information Network (SCAR-MarBIN). The Register of Antarctic Marine Species has 9,350 verified species (16,500 taxa) in 17 phyla, from microbes to whales. For 1,500 species the DNA barcode is available. The information from CAML is a robust baseline against which future change may be measured.
https://en.wikipedia.org/wiki/Shridhar%20Chillal
Shridhar Chillal (born 29 January 1937) is an Indian man from the city of Pune, who held the world record for the longest fingernails ever recorded on a single hand, with a combined length of 909.6 centimeters (358.1 inches). Chillal's longest single nail is his thumbnail, measuring 197.8 centimeters (77.87 inches). He stopped cutting his nails in 1952. Although proud of his record-breaking nails, Chillal has faced increasing difficulties due to the weight of his fingernails, including disfigurement of his fingers and loss of function in his left hand. He claims that nerve damage to his left arm from the nails' immense weight has also caused deafness in his left ear. Chillal has appeared in films and television displaying his nails, such as Jackass 2.5. On 11 July 2018, Chillal had his fingernails cut with a power tool at the Ripley's Believe It or Not! museum in New York City, where the nails will be put on display. A technician wearing protective gear cut the nails during a "nail clipping ceremony". See also Lee Redmond, who held the record for the longest fingernails on both hands.
https://en.wikipedia.org/wiki/Carbon%20nanotubes%20in%20interconnects
In nanotechnology, carbon nanotube interconnects refer to the proposed use of carbon nanotubes in the interconnects between the elements of an integrated circuit. Carbon nanotubes (CNTs) can be thought of as single atomic layer graphite sheets rolled up to form seamless cylinders. Depending on the direction in which they are rolled, CNTs can be semiconducting or metallic. Metallic carbon nanotubes have been identified as a possible interconnect material for future technology generations and as a replacement for copper interconnects. Electron transport can remain ballistic over long nanotube lengths, on the order of 1 μm, enabling CNTs to carry very high currents (up to a current density of 10⁹ A∙cm⁻²) with essentially no heating, due to their nearly one-dimensional electronic structure. Despite the current saturation in CNTs at high fields, such effects can be mitigated by encapsulated nanowires. Carbon nanotubes for interconnect applications in integrated chips have been studied since 2001; however, the extremely attractive performance of individual tubes is difficult to reach when they are assembled in the large bundles necessary to make real vias or lines in integrated chips. Two proposed approaches to overcome the limitations to date are either to make the very tiny local connections that will be needed in future advanced chips, or to make carbon-metal composite structures that are compatible with existing microelectronic processes. Hybrid interconnects that employ CNT vias in tandem with copper interconnects may offer advantages in reliability and thermal management. In 2016, the European Union funded a four million euro project over three years to evaluate the manufacturability and performance of composite interconnects employing both CNT and copper interconnects. The project, named CONNECT (CarbON Nanotube compositE InterconneCTs), involves the joint efforts of seven European research and industry partners on fabrication techniques and processes to enable reliable carbon nanotubes for
https://en.wikipedia.org/wiki/Additive%20combinatorics
Additive combinatorics is an area of combinatorics in mathematics. One major area of study in additive combinatorics is inverse problems: given that the size of the sumset A + B is small, what can we say about the structures of $A$ and $B$? In the case of the integers, the classical Freiman's theorem provides a partial answer to this question in terms of multi-dimensional arithmetic progressions. Another typical problem is to find a lower bound for $|A + B|$ in terms of $|A|$ and $|B|$. This can be viewed as an inverse problem with the given information that $|A + B|$ is sufficiently small and the structural conclusion is then of the form that either $A$ or $B$ is the empty set; however, in the literature, such problems are sometimes considered to be direct problems as well. Examples of this type include the Erdős–Heilbronn Conjecture (for a restricted sumset) and the Cauchy–Davenport Theorem. The methods used for tackling such questions often come from many different fields of mathematics, including combinatorics, ergodic theory, analysis, graph theory, group theory, and linear algebraic and polynomial methods. History of additive combinatorics Although additive combinatorics is a fairly new branch of combinatorics (in fact the term additive combinatorics was coined by Terence Tao and Van H. Vu in their book in the 2000s), an extremely old problem, the Cauchy–Davenport theorem, is one of the most fundamental results in this field. Cauchy–Davenport theorem Suppose that A and B are finite subsets of the cyclic group $\mathbb{Z}/p\mathbb{Z}$ for a prime $p$; then the following inequality holds: $|A + B| \geq \min\{|A| + |B| - 1,\, p\}$. Vosper's theorem Now that we have the inequality for the cardinality of the sumset $A + B$, it is natural to ask the inverse problem, namely: under what conditions on $A$ and $B$ does equality hold? Vosper's theorem answers this question. Suppose that $|A|, |B| \geq 2$ and $|A + B| = |A| + |B| - 1 \leq p - 2$ (that is, barring edge cases); then $A$ and $B$ are arithmetic progressions with the same difference. This illustrates the structures that are often studied in additive combinatorics: the combinatorial structure of a
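A brute-force check of the Cauchy–Davenport bound for a small prime (an illustrative script of my own, not from the article):

    from itertools import combinations

    p = 7
    universe = range(p)

    def sumset(A, B):
        return {(a + b) % p for a in A for b in B}

    # exhaustively verify |A+B| >= min(|A|+|B|-1, p) over all nonempty subsets of Z/7Z
    for ka in range(1, p + 1):
        for kb in range(1, p + 1):
            for A in combinations(universe, ka):
                for B in combinations(universe, kb):
                    assert len(sumset(A, B)) >= min(ka + kb - 1, p)
    print("Cauchy-Davenport bound holds for all subsets of Z/7Z")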
https://en.wikipedia.org/wiki/Barcode%20reader
A barcode reader or barcode scanner is an optical scanner that can read printed barcodes and decode the data contained in the barcode, sending it to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor for translating optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that can analyse the barcode's image data provided by the sensor and send the barcode's content to the scanner's output port. Types of barcode scanners Technology Barcode readers can be differentiated by technologies as follows: Pen-type readers Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light, so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded. Laser scanners Laser scanners direct the laser beam back and forth across the barcode. As with the pen-type reader, a photodiode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern, and the photodiode receive circuitry is designed to detect only signals with the same modulated pattern. CCD readers (also known as LED scanners) Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the r
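As a rough illustration of the width-measurement step described for pen-type readers (a toy sketch of my own; real decoders must also normalize for scan speed and map the widths onto symbology rules):

    def waveform_to_widths(samples, threshold=0.5):
        """Threshold a sampled photodiode waveform and run-length encode it
        into alternating (is_bar, width_in_samples) runs."""
        runs = []
        prev = samples[0] < threshold   # low reflectance = dark bar
        width = 0
        for s in samples:
            is_bar = s < threshold
            if is_bar == prev:
                width += 1
            else:
                runs.append((prev, width))
                prev, width = is_bar, 1
        runs.append((prev, width))
        return runs

    # toy trace: high values = reflective space, low values = dark bar
    trace = [0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.9, 0.9]
    print(waveform_to_widths(trace))
    # [(False, 2), (True, 3), (False, 1), (True, 2), (False, 2)]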
https://en.wikipedia.org/wiki/Generic%20Array%20Logic
The Generic Array Logic (also known as GAL and sometimes as gate array logic) device was an innovation of the PAL and was invented by Lattice Semiconductor. The GAL was an improvement on the PAL because one device type was able to take the place of many PAL device types or could even have functionality not covered by the original range of PAL devices. Its primary benefit, however, was that it was erasable and re-programmable, making prototyping and design changes easier for engineers. A similar device called a PEEL (programmable electrically erasable logic) was introduced by the International CMOS Technology (ICT) corporation. See also Programmable logic device (PLD) Complex programmable logic device (CPLD) Erasable programmable logic device (EPLD) GAL22V10
https://en.wikipedia.org/wiki/Modified%20Wigner%20distribution%20function
Note: the Wigner distribution function is abbreviated here as WD rather than WDF as used at Wigner distribution function. A modified Wigner distribution function is a variation of the Wigner distribution function (WD) with reduced or removed cross-terms. The Wigner distribution (WD) was first proposed for corrections to classical statistical mechanics in 1932 by Eugene Wigner. The Wigner distribution function, or Wigner–Ville distribution (WVD) for analytic signals, also has applications in time-frequency analysis. The Wigner distribution gives better auto-term localisation compared to the smeared-out spectrogram (SP). However, when applied to a signal with multiple frequency components, cross-terms appear due to its quadratic nature. Several methods have been proposed to reduce the cross-terms. For example, in 1994 L. Stankovic proposed a novel technique, now mostly referred to as the S-method, resulting in the reduction or removal of cross-terms. The concept of the S-method is a combination of the spectrogram and the pseudo Wigner distribution (PWD), the windowed version of the WD. The original WD, the spectrogram, and the modified WDs all belong to Cohen's class of bilinear time-frequency representations: $C_x(t, f) = \iint W_x(\theta, \nu)\, \Pi(t - \theta, f - \nu)\, d\theta\, d\nu$, where $\Pi(t, f)$ is Cohen's kernel function, which is often a low-pass function, and normally serves to mask out the interference in the original Wigner representation. Mathematical definition Wigner distribution $W_x(t, f) = \int_{-\infty}^{\infty} x(t + \tau/2)\, x^*(t - \tau/2)\, e^{-j2\pi f\tau}\, d\tau$. Cohen's kernel function: $\Pi(t, f) = \delta(t)\, \delta(f)$. Spectrogram $SP_x(t, f) = |ST_x(t, f)|^2$, where $ST_x$ is the short-time Fourier transform of $x$. Cohen's kernel function: the WD of the window function itself. This can be verified by applying the convolution property of the Wigner distribution function. The spectrogram cannot produce interference since it is a positive-valued quadratic distribution. Modified form I This form cannot solve the cross-term problem, but it can solve the problem of two components whose time difference is larger than the window size B. Modified form II Modified form III (Pseudo L-Wign
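To make the cross-term phenomenon concrete, here is a naive discrete Wigner–Ville sketch of my own (not an implementation of any particular modified form); for two tones it shows the auto-terms plus an oscillating cross-term midway between them:

    import numpy as np

    def wigner_ville(x):
        """Naive discrete Wigner-Ville distribution. Rows index time n;
        column k corresponds to frequency k/(2N) cycles/sample, because
        the lag variable advances in steps of two samples."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        W = np.empty((N, N))
        for n in range(N):
            mmax = min(n, N - 1 - n)              # lags limited by the signal edges
            kernel = np.zeros(N, dtype=complex)
            for m in range(-mmax, mmax + 1):
                kernel[m % N] = x[n + m] * np.conj(x[n - m])
            W[n] = np.fft.fft(kernel).real
        return W

    N, f0, f1 = 128, 0.10, 0.20
    n = np.arange(N)
    x = np.exp(2j * np.pi * f0 * n) + np.exp(2j * np.pi * f1 * n)
    W = wigner_ville(x)
    # Auto-terms near bins 2*f0*N and 2*f1*N; the cross-term sits midway at
    # (f0 + f1)*N and oscillates in time, which the modified forms suppress.
    print(np.argsort(np.abs(W[N // 2]))[-3:])     # dominant bins near 26, 38, 51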
https://en.wikipedia.org/wiki/VyOS
VyOS is an open source network operating system based on Debian. VyOS provides a free routing platform that competes directly with other commercially available solutions from well-known network providers. Because VyOS runs on standard amd64 systems, it can be used as a router and firewall platform for cloud deployments. History After Brocade Communications stopped development of the Vyatta Core Edition of the Vyatta Routing software, a small group of enthusiasts in 2013 took the last Community Edition and worked on building an open-source fork to live on in place of the end-of-life VC. Features BGP (IPv4 and IPv6), OSPF (v2 and v3), RIP and RIPng, policy-based routing. IPsec, VTI, VXLAN, L2TPv3, L2TP/IPsec and PPTP servers, tunnel interfaces (GRE, IPIP, SIT), OpenVPN in client, server, or site-to-site modes, WireGuard. Stateful firewall, zone-based firewall, all types of source and destination NAT (one to one, one to many, many to many). DHCP and DHCPv6 server and relay, IPv6 RA, DNS forwarding, TFTP server, web proxy, PPPoE access concentrator, NetFlow/sFlow sensor, QoS. VRRP for IPv4 and IPv6, ability to execute custom health checks and transition scripts; ECMP, stateful load balancing. Built-in versioning. Releases VyOS version 1.0.0 (Hydrogen) was released on December 22, 2013. On October 9, 2014, version 1.1.0 (Helium) was released. Both of these releases were based on Debian 6.0 (Squeeze), and were available as 32-bit and 64-bit images for both physical and virtual machines. On January 28, 2019, version 1.2.0 (Crux) was released. Version 1.2.0 is based on Debian 8 (Jessie). While versions 1.0 and 1.1 were named after elements, a new naming scheme based on constellations is used from version 1.2. VMware support The VyOS OVA image for VMware was released with the February 3, 2014 maintenance release. It allows a convenient setup of VyOS on a VMware platform and includes all of the VMware tools and paravirtual
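As a flavour of the configuration CLI, a minimal sketch of a LAN router with source NAT (interface names and addresses are illustrative, and the syntax follows the 1.x command set):

    configure
    set interfaces ethernet eth0 address dhcp
    set interfaces ethernet eth1 address '192.168.0.1/24'
    set nat source rule 100 outbound-interface 'eth0'
    set nat source rule 100 source address '192.168.0.0/24'
    set nat source rule 100 translation address masquerade
    commit
    save
    exit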
https://en.wikipedia.org/wiki/Lobachevsky%20%28song%29
"Lobachevsky" is a humorous song by Tom Lehrer, referring to the mathematician Nikolai Ivanovich Lobachevsky. According to Lehrer, the song is "not intended as a slur on [Lobachevsky's] character" and the name was chosen "solely for prosodic reasons". In the introduction, Lehrer describes the song as an adaptation of a routine that Danny Kaye did to honor the Russian actor Constantin Stanislavski. (The Danny Kaye routine is sung from the perspective of a famous Russian actor who learns and applies Stanislavski's secret to method acting: "Suffer.") Lehrer sings the song from the point of view of an eminent Russian mathematician who learns from Lobachevsky that plagiarism is the secret of success in mathematics ("only be sure always to call it please 'research'"). The narrator later uses this strategy to get a paper published ahead of a rival, then to write a book and earn a fortune selling the movie rights. Lehrer wrote that he did not know Russian. In the song he quotes two "book reviews" in Russian; the first is a long sentence that he then translates succinctly as "It stinks". The second, a different but equally long sentence, is also translated as "It stinks." The actual text of these sentences bear no relation to academics: the first phrase quotes Mussorgsky's "Song of the Flea": The second references a Russian joke: [the bathroom]. The song was first performed as part of The Physical Revue, a 1951–1952 musical revue by Lehrer and a few other professors. It is track 6 on Songs by Tom Lehrer, which was re-released as part of Songs & More Songs by Tom Lehrer and The Remains of Tom Lehrer. In this early version, Ingrid Bergman is named to star in the role of "the Hypotenuse" in The Eternal Triangle, a film purportedly based on the narrator's book. It was recorded again for Revisited (Tom Lehrer album), with Brigitte Bardot as the Hypotenuse. A third recording is included in Tom Lehrer Discovers Australia (And Vice Versa), a live album recorded in Australia, f
https://en.wikipedia.org/wiki/DOPIPE
DOPIPE parallelism is a method to perform loop-level parallelism by pipelining the statements in a loop. Pipelined parallelism may exist at different levels of abstraction, such as loops, functions and algorithmic stages. The extent of parallelism depends upon the programmer's ability to make the best use of this concept. It also depends upon factors like identifying and separating independent tasks and executing them in parallel. Background The main purpose of employing loop-level parallelism is to search for, split, and convert sequential tasks of a program into parallel tasks without any prior information about the algorithm. Parts of data that are recurring and consume a significant amount of execution time are good candidates for loop-level parallelism. Some common applications of loop-level parallelism are found in mathematical analysis that uses multiple-dimension matrices which are iterated in nested loops. There are different kinds of parallelization techniques, used on the basis of data storage overhead, degree of parallelization, and data dependencies. Some of the known techniques are DOALL, DOACROSS and DOPIPE. DOALL: This technique is used where we can parallelize each iteration of the loop without any interaction between the iterations. Hence, the overall run-time gets reduced from N * T (for a serial processor, where T is the execution time for each iteration) to only T (since all the N iterations are executed in parallel). DOACROSS: This technique is used wherever there is a possibility of data dependencies. Hence, we parallelize tasks in such a manner that all the data-independent tasks are executed in parallel, but the dependent ones are executed sequentially. There is a degree of synchronization used to sync the dependent tasks across parallel processors. Description DOPIPE is a pipelined parallelization technique that is used in programs where each element produced during each iteration is consumed in a later iteration. The followin
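A rough DOPIPE-style sketch of my own (not from the article): the loop body a[i] = f(i); b[i] = g(a[i]) is split into two stages on separate workers, with a queue forwarding each a[i] from the first stage to the second:

    import threading, queue

    N = 8
    q = queue.Queue()
    b = [None] * N

    def f(i):            # stage-1 work (stand-in computation)
        return i * i

    def g(v):            # stage-2 work consumes stage 1's product
        return v + 1

    def stage1():
        for i in range(N):
            q.put((i, f(i)))     # produce a[i], hand it down the pipeline
        q.put(None)              # sentinel: no more items

    def stage2():
        while (item := q.get()) is not None:
            i, a_i = item
            b[i] = g(a_i)        # consume a[i] one pipeline step later

    t1, t2 = threading.Thread(target=stage1), threading.Thread(target=stage2)
    t1.start(); t2.start(); t1.join(); t2.join()
    print(b)   # [1, 2, 5, 10, 17, 26, 37, 50]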
https://en.wikipedia.org/wiki/Shriek%20map
In category theory, a branch of mathematics, certain unusual functors are denoted $f_!$ and $f^!$, with the exclamation mark used to indicate that they are exceptional in some way. They are thus accordingly sometimes called shriek maps, with "shriek" being slang for an exclamation mark, though other terms are used, depending on context. Usage Shriek notation is used in two senses: To distinguish a functor $f_!$ from a more usual functor $f_*$, or $f^!$ from $f^*$, according as it is covariant or contravariant. To indicate a map that goes "the wrong way" – a functor that has the same objects as a more familiar functor, but behaves differently on maps and has the opposite variance. For example, it has a pull-back where one expects a push-forward. Examples In algebraic geometry, these arise in image functors for sheaves, particularly Verdier duality, where $f_!$ is a "less usual" functor. In algebraic topology, these arise particularly in fiber bundles, where they yield maps that have the opposite of the usual variance. They are thus called wrong way maps, Gysin maps, as they originated in the Gysin sequence, or transfer maps. A fiber bundle $\pi\colon E \to B$, with base space B, fiber F, and total space E, has, like any other continuous map of topological spaces, a covariant map on homology $\pi_*\colon H_*(E) \to H_*(B)$ and a contravariant map on cohomology $\pi^*\colon H^*(B) \to H^*(E)$. However, it also has a covariant map on cohomology, $\pi_!\colon H^*(E) \to H^{*-\dim F}(B)$, corresponding in de Rham cohomology to "integration along the fiber", and a contravariant map on homology, $\pi^!\colon H_*(B) \to H_{*+\dim F}(E)$, corresponding in de Rham cohomology to "pointwise product with the fiber". The composition of the "wrong way" map with the usual map gives a map from the homology of the base to itself, analogous to a unit/counit of an adjunction; compare also Galois connection. These can be used in understanding and proving the product property for the Euler characteristic of a fiber bundle.
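For example, the product property mentioned above states, under mild finiteness hypotheses on the bundle (recalled here for concreteness; the article does not spell it out): $$\chi(E) = \chi(F)\,\chi(B).$$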
https://en.wikipedia.org/wiki/MAVLink
MAVLink or Micro Air Vehicle Link is a protocol for communicating with small unmanned vehicles. It is designed as a header-only message-marshaling library. MAVLink was first released early 2009 by Lorenz Meier under the LGPL license. Applications It is used mostly for communication between a ground control station (GCS) and unmanned vehicles, and in the inter-communication of the subsystems of the vehicle. It can be used to transmit the orientation of the vehicle, its GPS location and its speed. Packet structure In version 1.0, a packet consists of a start-of-frame byte, the payload length, a sequence number, the system and component IDs of the sender, the message ID, the payload itself, and a two-byte checksum. Version 2 expands this structure with incompatibility and compatibility flag bytes, a message ID widened to three bytes, and an optional 13-byte signature. CRC field To ensure message integrity, a cyclic redundancy check (CRC) is calculated over every message and stored in the last two bytes. Another function of the CRC field is to ensure the sender and receiver both agree on the message that is being transferred. It is computed using an ITU X.25/SAE AS-4 hash of the bytes in the packet, excluding the start-of-frame indicator (so 6+n+1 bytes are evaluated, where the extra +1 is the seed value). Additionally, a seed value is appended to the end of the data when computing the CRC. The seed is generated with every new message set of the protocol, and it is hashed in a similar way as the packets from each message's specification. Systems using the MAVLink protocol can use a precomputed array for this purpose. The CRC algorithm of MAVLink has been implemented in many languages, like Python and Java. Messages The payload of the packets described above consists of MAVLink messages. Every message is identifiable by the ID field in the packet, and the payload contains the data of the message. An XML document in the MAVLink source has the definition of the data stored in this payload. Below is the message with ID 24 extracted from the XML document. <message id="24" name="GPS_RAW_INT"> <description>The global position, as returned by the Global Positioning System (GPS). This is NOT the global position estimate of
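For reference, the checksum described above is the CRC-16/MCRF4XX variant of the ITU X.25 CRC (init 0xFFFF, no final XOR); a small sketch of the byte-wise accumulation (the frame bytes and CRC_EXTRA value below are placeholders, not real message data):

    def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
        """Accumulate the X.25 CRC over data, byte by byte."""
        for byte in data:
            tmp = byte ^ (crc & 0xFF)
            tmp = (tmp ^ (tmp << 4)) & 0xFF
            crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
        return crc

    frame_bytes = b"..."   # header (after the start byte) plus payload
    crc_extra = 0x00       # per-message seed byte; the real value comes from the message definition
    checksum = x25_crc(frame_bytes + bytes([crc_extra]))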
https://en.wikipedia.org/wiki/Cyphal
Cyphal is a lightweight protocol designed for reliable intra-vehicle communications using various communications transports, originally developed for CAN bus but targeting various network types in subsequent revisions. OpenCyphal is an open-source project that aims to provide MIT-licensed implementations of the Cyphal protocol. The project was known as UAVCAN (Uncomplicated Application-level Vehicular Computing and Networking) prior to rebranding in March 2022. History The first RFC broadly outlining the general ideas that would later form the core design principles of Cyphal (branded UAVCAN at the time) was published in early 2014. It was a response to the perceived lack of adequate technology that could facilitate robust real-time intra-vehicular data exchange between distributed components of modern intelligent vehicles (primarily unmanned aircraft). Since the original RFC, the protocol has been through three major design iterations, which culminated in the release of the first long-term stable revision in 2020 (six years later), labelled UAVCAN v1.0. In the meantime, the protocol has been deployed in numerous diverse systems including unmanned aerial vehicles, spacecraft, underwater robots, racing cars, general robotic systems, and micromobility vehicles. In 2022, the protocol was rebranded as Cyphal. Cyphal is positioned by its developers as a highly deterministic, safety-oriented alternative to high-level publish-subscribe frameworks such as DDS or the computation graph of ROS, which is sufficiently compact and simple to be usable in deeply embedded high-integrity applications. Cyphal has been shown to be usable with bare metal microcontrollers equipped with as little as 32K ROM and 8K RAM. The protocol is open and can be reused freely without approval or licensing fees. The development of the core standard and its reference implementations is conducted in an open manner, coordinated via the public discussion forum. As of 2020, the project is supported by sev
https://en.wikipedia.org/wiki/Fibre%20Channel%20frame
In computer networking, a Fibre Channel frame is the frame of the Fibre Channel protocol. The basic building blocks of an FC connection are the frames. They contain the information to be transmitted (payload), the addresses of the source and destination ports, and link control information. Frames are broadly categorized as data frames and link control frames. Data frames may be used as Link_Data frames and Device_Data frames; link control frames are classified as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The primary function of the fabric is to receive the frames from the source port and route them to the destination port. It is the FC-2 layer's responsibility to break the data to be transmitted into frame-sized pieces and to reassemble the frames. Each frame begins and ends with a frame delimiter. The frame header immediately follows the start-of-frame (SOF) delimiter. The frame header is used to control link applications, control device protocol transfers, and detect missing or out-of-order frames. Optional headers may contain further link control information. A payload field of up to 2112 bytes contains the information to be transferred from a source N_Port to a destination N_Port. The 4-byte cyclic redundancy check (CRC) precedes the end-of-frame (EOF) delimiter. The CRC is used to detect transmission errors. The maximum total frame length is 2148 bytes. Between successive frames, a sequence of at least six primitives must be transmitted, sometimes called the interframe gap.
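As an illustrative sketch (my own; the field packing follows the standard 24-byte FC-2 header layout, and all values are arbitrary examples):

    import struct

    def fc_header(r_ctl, d_id, cs_ctl, s_id, f_type, f_ctl,
                  seq_id, df_ctl, seq_cnt, ox_id, rx_id, parameter):
        """Pack the six 32-bit words of an FC-2 frame header."""
        word0 = (r_ctl << 24) | d_id           # R_CTL + 24-bit destination port ID
        word1 = (cs_ctl << 24) | s_id          # CS_CTL + 24-bit source port ID
        word2 = (f_type << 24) | f_ctl         # TYPE + 24-bit frame control
        word3 = (seq_id << 24) | (df_ctl << 16) | seq_cnt
        word4 = (ox_id << 16) | rx_id          # originator / responder exchange IDs
        return struct.pack(">6I", word0, word1, word2, word3, word4, parameter)

    hdr = fc_header(0x06, 0x0101EF, 0, 0x010200, 0x08, 0x290000,
                    1, 0, 0, 0x1234, 0xFFFF, 0)
    assert len(hdr) == 24   # plus up to 2112 payload bytes and the 4-byte CRC, framed by SOF/EOF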
https://en.wikipedia.org/wiki/N-topological%20space
In mathematics, an N-topological space is a set equipped with N arbitrary topologies. If τ1, τ2, ..., τN are N topologies defined on a nonempty set X, then the N-topological space is denoted by (X, τ1, τ2, ..., τN). For N = 1, the structure is simply a topological space. For N = 2, the structure becomes a bitopological space, introduced by J. C. Kelly. Example Let X = {x1, x2, ..., xn} be any finite set. Suppose Ar = {x1, x2, ..., xr}. Then the collection τ1 = {∅, A1, A2, ..., An = X} will be a topology on X. If τ1, τ2, ..., τm are m such topologies (chain topologies) defined on X, then the structure (X, τ1, τ2, ..., τm) is an m-topological space.
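A minimal concrete instance (my own illustration): taking the indiscrete and the discrete topology on a two-point set gives a bitopological space, $$X = \{a, b\}, \qquad \tau_1 = \{\varnothing, X\}, \qquad \tau_2 = \{\varnothing, \{a\}, \{b\}, X\}.$$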
https://en.wikipedia.org/wiki/Network%20search%20engine
Computer networks are connected together to form larger networks such as campus networks, corporate networks, or the Internet. Routers are network devices that may be used to connect these networks (e.g., a home network connected to the network of an Internet service provider). When a router interconnects many networks or handles much network traffic, it may become a bottleneck and cause network congestion (i.e., traffic loss). A number of techniques have been developed to prevent such problems. One of them is the network search engine (NSE), also known as network search element. This special-purpose device helps a router perform one of its core and repeated functions very fast: address lookup. Besides routing, NSE-based address lookup is also used to keep track of network service usage for billing purposes, or to look up patterns of information in the data passing through the network for security reasons. Network search engines are often available as ASIC chips to be interfaced with the network processor of the router. Content-addressable memory and tries are two techniques commonly used when implementing NSEs.
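A minimal binary-trie longest-prefix-match sketch of my own, the kind of lookup an NSE accelerates in hardware:

    class TrieNode:
        def __init__(self):
            self.children = [None, None]
            self.next_hop = None           # set when a routing prefix ends here

    def bits(addr: int, length: int):
        return ((addr >> (31 - i)) & 1 for i in range(length))

    def insert(root, prefix: int, plen: int, next_hop: str):
        node = root
        for b in bits(prefix, plen):
            if node.children[b] is None:
                node.children[b] = TrieNode()
            node = node.children[b]
        node.next_hop = next_hop

    def lookup(root, addr: int):
        node, best = root, None
        for b in bits(addr, 32):
            if node.next_hop is not None:
                best = node.next_hop       # remember the longest match seen so far
            node = node.children[b]
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

    root = TrieNode()
    insert(root, 0x0A000000, 8, "eth0")    # 10.0.0.0/8
    insert(root, 0x0A010000, 16, "eth1")   # 10.1.0.0/16
    print(lookup(root, 0x0A010203))        # "eth1": longest prefix wins
    print(lookup(root, 0x0A7F0001))        # "eth0"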
https://en.wikipedia.org/wiki/Service%20account
A service account or application account is a digital identity used by an application software or service to interact with other applications or the operating system. They are often used for machine-to-machine (M2M) communication, for example by application programming interfaces (APIs). The service account may be a privileged identity within the context of the application. Updating passwords Local service accounts can interact with various components of the operating system, which makes coordination of password changes difficult. In practice, this causes passwords for service accounts to rarely be changed, which poses a considerable security risk for an organization. Some types of service accounts do not have a password. Wide access Service accounts are often used by applications for access to databases, for running batch jobs or scripts, or for accessing other applications. Such privileged identities often have extensive access to an organization's underlying data stores lying in applications or databases. Passwords for such accounts are often built into and saved in plain text files, a vulnerability which may be replicated across several servers to provide fault tolerance for applications. This vulnerability poses a significant risk for an organization since the application often hosts the type of data which is interesting to advanced persistent threats. Service accounts are non-personal digital identities and can be shared. Misuse Google Cloud lists several possibilities for misuse of service accounts: Privilege escalation: Someone impersonates the service account. Spoofing: Someone impersonates the service account to hide their identity. Non-repudiation: Someone performs actions with a service account in cases where it is not possible to trace the actions back to the abuser. Information disclosure: Unauthorized persons extract information about infrastructure, applications or processes. See also Kerberos Service Account, a service account in Ker
https://en.wikipedia.org/wiki/Degranulation
Degranulation is a cellular process that releases antimicrobial, cytotoxic, or other molecules from secretory vesicles called granules found inside some cells. It is used by several different cells involved in the immune system, including granulocytes (neutrophils, basophils, and eosinophils) and mast cells. It is also used by certain lymphocytes such as natural killer (NK) cells and cytotoxic T cells, whose main purpose is to destroy invading microorganisms. Mast cells Degranulation in mast cells is part of an inflammatory response, and substances such as histamine are released. Granules from mast cells mediate processes such as "vasodilation, vascular homeostasis, innate and adaptive immune responses, angiogenesis, and venom detoxification." Antigens interact with IgE molecules already bound to high affinity Fc receptors on the surface of mast cells to induce degranulation, via the activation of tyrosine kinases within the cell. The mast cell releases a mixture of compounds, including histamine, proteoglycans, serotonin, and serine proteases from its cytoplasmic granules. Eosinophils In a similar mechanism, activated eosinophils release preformed mediators such as major basic protein, and enzymes such as peroxidase, following interaction between their Fc receptors and IgE molecules that are bound to large parasites like helminths. Neutrophils Degranulation in neutrophils can occur in response to infection, and the resulting granules are released in order to protect against tissue damage. Excessive degranulation of neutrophils, sometimes triggered by bacteria, is associated with certain inflammatory disorders, such as asthma and septic shock. Four kinds of granules exist in neutrophils that display differences in content and regulation. Secretory vesicles are the most likely to release their contents by degranulation, followed by gelatinase granules, specific granules, and azurophil granules. Cytotoxic T cells and NK cells Cytotoxic T cells and NK cells rele
https://en.wikipedia.org/wiki/Atomic%20and%20molecular%20astrophysics
Atomic astrophysics is concerned with performing atomic physics calculations that will be useful to astronomers and using atomic data to interpret astronomical observations. Atomic physics plays a key role in astrophysics as astronomers' only information about a particular object comes through the light that it emits, and this light arises through atomic transitions. Molecular astrophysics, developed into a rigorous field of investigation by theoretical astrochemist Alexander Dalgarno beginning in 1967, concerns the study of emission from molecules in space. There are 110 currently known interstellar molecules. These molecules have large numbers of observable transitions. Lines may also be observed in absorption—for example the highly redshifted lines seen against the gravitationally lensed quasar PKS1830-211. High energy radiation, such as ultraviolet light, can break the molecular bonds which hold atoms in molecules. In general, then, molecules are found in cool astrophysical environments. The most massive objects in our galaxy are giant clouds of molecules and dust known as giant molecular clouds. In these clouds, and smaller versions of them, stars and planets are formed. One of the primary fields of study of molecular astrophysics is star and planet formation. Molecules may be found in many environments, however, from stellar atmospheres to those of planetary satellites. Most of these locations are relatively cool, and molecular emission is most easily studied via photons emitted when the molecules make transitions between low rotational energy states. One molecule, composed of the abundant carbon and oxygen atoms, and very stable against dissociation into atoms, is carbon monoxide (CO). The wavelength of the photon emitted when the CO molecule falls from its lowest excited state to its zero energy, or ground, state is 2.6 mm, corresponding to a frequency of 115 gigahertz. This frequency is a thousand times higher than typical FM radio frequencies. At these high frequencies, molecules in th
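The relation between the quoted wavelength and frequency of the CO line can be checked directly from λ = c/ν. A small Python sketch (the 115.271 GHz rest frequency of the CO J=1→0 transition is a standard reference value, not taken from this article):

```python
# Wavelength of the CO J=1->0 rotational line from its rest frequency.
c = 299_792_458.0        # speed of light, m/s
freq_hz = 115.271e9      # CO J=1->0 rest frequency, ~115 GHz

wavelength_mm = c / freq_hz * 1000
print(f"{wavelength_mm:.2f} mm")                 # ~2.60 mm

# Comparison with a typical ~100 MHz FM radio carrier, as in the text:
print(f"~{freq_hz / 100e6:.0f}x an FM carrier")  # ~1153x
```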
https://en.wikipedia.org/wiki/Trillium%20Digital%20Systems
Trillium Digital Systems, Inc. developed and licensed standards-based communications source code software to telecommunications equipment manufacturers for the wireless, broadband, Internet and telephone network infrastructure. Trillium was an early company to license source code. The Trillium Digital Systems business entity no longer exists, but the Trillium communications software is still developed and licensed. Trillium software is used in the network infrastructure as well as associated service platforms, clients and devices. Company history Trillium Trillium was founded in February 1988 in Los Angeles, California. The co-founders were Jeff Lawrence and Larisa Chistyakov. Giorgio Propersi joined in September 1989. The initial capitalization of Trillium when it was incorporated was $1,000. The name Trillium came about because of a mistake. Jeff and Larisa asked for company name suggestions from family and friends. Someone suggested a character named Trillian from the book The Hitchhiker's Guide to the Galaxy by Douglas Adams. They thought the suggestion was supposed to be trillium, a flower in the lily family. They liked the sound and symbolism of the name Trillium so they used it. Trillium was started as a consulting company. Its first consulting jobs were to develop communications software for bisynchronous, asynchronous and multiprotocol PAD products. Consulting continued through the end of 1990. While consulting, the co-founders decided there was an opportunity to develop and license portable source code software for communications protocols. Towards the end of 1990 Trillium became focused on developing its own products. Source code is written in a symbolic language (e.g., the C programming language) and is run through a compiler to generate binary code that can run on a particular microprocessor. Communications systems have a variety of hardware and software architectures, use a variety of microprocessors and use a variety of software development environments. It
https://en.wikipedia.org/wiki/The%20Bridges%20Organization
The Bridges Organization is an organization that was founded in Kansas, United States, in 1998 with the goal of promoting interdisciplinary work in mathematics and art. The Bridges Conference is an annual conference on connections between art and mathematics. The conference features papers, educational workshops, an art exhibition, a mathematical poetry reading, and a short movie festival. List of Bridges conferences
https://en.wikipedia.org/wiki/Bauer%20maximum%20principle
Bauer's maximum principle is the following theorem in mathematical optimization: Any function that is convex and continuous, and defined on a set that is convex and compact, attains its maximum at some extreme point of that set. It is attributed to the German mathematician Heinz Bauer. Bauer's maximum principle immediately implies the analogous minimum principle: Any function that is concave and continuous, and defined on a set that is convex and compact, attains its minimum at some extreme point of that set. Since a linear function is simultaneously convex and concave, it satisfies both principles, i.e., it attains both its maximum and its minimum at extreme points. Bauer's maximum principle has applications in various fields, for example, differential equations and economics.
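In symbols, a standard formalization of the statement above (stated here for a compact convex subset of a locally convex space, a common setting for the theorem) reads:

```latex
% Bauer's maximum principle, formalized.
% K: a nonempty, compact, convex subset of a locally convex space X;
% f: a convex, continuous real-valued function on K;
% ext(K): the set of extreme points of K.
\[
  \exists\, x_0 \in \operatorname{ext}(K) \quad \text{such that} \quad
  f(x_0) \;=\; \max_{x \in K} f(x).
\]
```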
https://en.wikipedia.org/wiki/Alpha%20strike%20%28engineering%29
Alpha strike is a term referring to the event when an alpha particle, a composite charged particle composed of two protons and two neutrons, enters a computer and modifies the data or operation of a component in the computer. Alpha strikes can disturb the silicon substrate of the transistors in a computer through their electronic stopping power, causing the transistor to flip states if the charge imparted by the strike crosses a critical threshold (QCrit). This, in turn, can corrupt the information stored by that transistor and create a cascading effect on the operation of the component that encases it. History The first widely recognized radiation-generated error in a computer was the appearance of random errors in the Intel 4k 2107 DRAM in the late 1970s. This problem was investigated by Timothy C. May and Murray H. Woods, who reported in a seminal 1979 paper that the errors were caused by alpha decay from trace amounts of uranium and thorium in the ceramic packaging surrounding the chip. Since then, there have been multiple incidents of computer errors due to radiation, including error reports from computers onboard spacecraft, corrupted data from voting machines, and crashes on computers onboard aircraft. According to a study from Hughes Aircraft Company, anomalies in satellite communication attributed to galactic cosmic radiation are on the order of (3.1×10−3) transistors per year. This rate is an estimate of the number of noticeable cascading errors in communication between satellites per satellite. Modern impact Alpha strikes are limiting the computing capabilities of computers onboard high-altitude vehicles as the energy an alpha particle imparts on the transistors of a computer is far more consequential for smaller transistors. As a result, computers with smaller transistors and higher computing capability are more prone to errors and crashes than computers with larger transistors. One potential solution for optimizing the performance of computers onboard sp
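The threshold behaviour described above can be captured in a toy model: an upset occurs when the charge an alpha particle deposits on a circuit node exceeds that node's critical charge. The sketch below is purely illustrative; the QCrit and deposited-charge numbers are placeholders, not device data:

```python
# Toy single-event-upset model: a stored bit flips when the deposited
# charge exceeds the node's critical charge (QCrit), in femtocoulombs.
def bit_flips(deposited_fc: float, qcrit_fc: float) -> bool:
    return deposited_fc > qcrit_fc

# Smaller transistors store less charge, so QCrit shrinks and the same
# strike becomes more likely to corrupt data (all values hypothetical).
strike_fc = 5.0
for node, qcrit in [("large, older transistor", 30.0),
                    ("small, modern transistor", 2.0)]:
    result = "upset" if bit_flips(strike_fc, qcrit) else "no upset"
    print(f"{node}: {result}")
```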
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Bacon%20number
A person's Erdős–Bacon number is the sum of one's Erdős number—which measures the "collaborative distance" in authoring academic papers between that person and Hungarian mathematician Paul Erdős—and one's Bacon number—which represents the number of links, through roles in films, by which the person is separated from American actor Kevin Bacon. The lower the number, the closer a person is to Erdős and Bacon, which reflects a small world phenomenon in academia and entertainment. To have a defined Erdős–Bacon number, it is necessary to have both appeared in a film and co-authored an academic paper, although this in and of itself is not sufficient as one's co-authors must have a known chain leading to Paul Erdős, and one's film must have actors eventually leading to Kevin Bacon. Academic scientists Mathematician Daniel Kleitman has an Erdős–Bacon number of 3. He co-authored papers with Erdős and has a Bacon number of 2 via Minnie Driver in Good Will Hunting; Driver and Bacon appeared together in Sleepers. Like Kleitman, mathematician Bruce Reznick has co-authored a paper with Erdős and has a Bacon number of 2, via Roddy McDowall in the film Pretty Maids All in a Row, giving him an Erdős–Bacon number of 3 as well. Physicist Nicholas Metropolis has an Erdős number of 2, and also a Bacon number of 2, giving him an Erdős–Bacon number of 4. Metropolis and Richard Feynman both worked on the Manhattan Project at Los Alamos Laboratory. Via Metropolis, Feynman has an Erdős number of 3 and, from having appeared in the film Anti-Clock alongside Tony Tang, Feynman also has a Bacon number of 3. Richard Feynman thus has an Erdős–Bacon number of 6. Theoretical physicist Stephen Hawking has an Erdős–Bacon number of 6: his Bacon number of 2 (via his appearance alongside John Cleese in Monty Python Live (Mostly), who acted alongside Kevin Bacon in The Big Picture) is lower than his Erdős number of 4. Similarly to Stephen Hawking, scientist Carl Sagan has an Erdős–Bacon number of 6
https://en.wikipedia.org/wiki/Wavelet
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are sometimes termed "brief oscillations". A taxonomy of wavelets has been established, based on the number and direction of their pulses. Wavelets are imbued with specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the Middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications. As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including but not limited to audio signals and images. Sets of wavelets are needed to analyze data fully. "Complementary" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a lase
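The Middle C example above can be sketched in a few lines of Python with NumPy: build a short wavelet tuned to ~261.63 Hz (Middle C), convolve it with a signal, and look for where the response peaks. The signal contents and durations here are invented for illustration:

```python
import numpy as np

fs = 8000                              # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# A toy "melody": A4 (440 Hz) in the first half-second, then Middle C.
signal = np.where(t < 0.5,
                  np.sin(2 * np.pi * 440.0 * t),
                  np.sin(2 * np.pi * 261.63 * t))

# A Morlet-like wavelet: a 261.63 Hz oscillation under a short Gaussian
# envelope, so its amplitude begins and ends at zero.
tw = np.arange(-0.05, 0.05, 1 / fs)
wavelet = np.sin(2 * np.pi * 261.63 * tw) * np.exp(-(tw / 0.02) ** 2)

response = np.abs(np.convolve(signal, wavelet, mode="same"))
print("peak response at t =", t[np.argmax(response)], "s")  # after t = 0.5
```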
https://en.wikipedia.org/wiki/List%20of%20mathematical%20societies
This article provides a list of mathematical societies. International African Mathematical Union Association for Women in Mathematics Circolo Matematico di Palermo European Mathematical Society European Women in Mathematics Foundations of Computational Mathematics International Association for Cryptologic Research International Association of Mathematical Physics International Linear Algebra Society International Mathematical Union International Statistical Institute International Society for Analysis, its Applications and Computation International Society for Mathematical Sciences Kurt Gödel Society Mathematical Council of the Americas (MCofA) Mathematical Society of South Eastern Europe (MASSEE) Mathematical Optimization Society Maths Society Ramanujan Mathematical Society Quaternion Society Society for Industrial and Applied Mathematics Southeast Asian Mathematical Society (SEAMS) Spectra (mathematical association) Unión Matemática de América Latina y el Caribe (UMALCA) Young Mathematicians Network Honor societies Kappa Mu Epsilon Mu Alpha Theta Pi Mu Epsilon National and subnational Arranged as follows: Society name in English (Society name in home-language; Abbreviation if used), Country and/or subregion/city if not specified in name. This list is sorted by continent. Africa Algeria Mathematical Society Gabon Mathematical Society South African Mathematical Society Asia Bangladesh Mathematical Society Calcutta Mathematical Society (CalMathSoc), Kolkata, India Chinese Mathematical Society Indian Mathematical Society Iranian Mathematical Society Israel Mathematical Union Jadavpur University Mathematical Society (JMS), Jadavpur, India Kerala Mathematical Association, Kerala State, India Korean Mathematical Society, South Korea Mathematical Society of Japan Mathematical Society of the Philippines Pakistan Mathematical Society Turkish Mathematical Society Europe Albanian Mathematical Association Armenian Mathematic
https://en.wikipedia.org/wiki/List%20of%20factorial%20and%20binomial%20topics
This is a list of factorial and binomial topics in mathematics. See also binomial (disambiguation). Abel's binomial theorem Alternating factorial Antichain Beta function Bhargava factorial Binomial coefficient Pascal's triangle Binomial distribution Binomial proportion confidence interval Binomial-QMF (Daubechies wavelet filters) Binomial series Binomial theorem Binomial transform Binomial type Carlson's theorem Catalan number Fuss–Catalan number Central binomial coefficient Combination Combinatorial number system De Polignac's formula Difference operator Difference polynomials Digamma function Egorychev method Erdős–Ko–Rado theorem Euler–Mascheroni constant Faà di Bruno's formula Factorial Factorial moment Factorial number system Factorial prime Gamma distribution Gamma function Gaussian binomial coefficient Gould's sequence Hyperfactorial Hypergeometric distribution Hypergeometric function identities Hypergeometric series Incomplete beta function Incomplete gamma function Jordan–Pólya number Kempner function Lah number Lanczos approximation Lozanić's triangle Macaulay representation of an integer Mahler's theorem Multinomial distribution Multinomial coefficient, Multinomial formula, Multinomial theorem Multiplicities of entries in Pascal's triangle Multiset Multivariate gamma function Narayana numbers Negative binomial distribution Nörlund–Rice integral Pascal matrix Pascal's pyramid Pascal's simplex Pascal's triangle Permutation List of permutation topics Pochhammer symbol (also falling, lower, rising, upper factorials) Poisson distribution Polygamma function Primorial Proof of Bertrand's postulate Sierpinski triangle Star of David theorem Stirling number Stirling transform Stirling's approximation Subfactorial Table of Newtonian series Taylor series Trinomial expansion Vandermonde's identity Wilson prime Wilson's theorem Wolstenholme prime Factorial and binomial topics
https://en.wikipedia.org/wiki/Food%20psychology
Food psychology is the psychological study of how people choose the food they eat (food choice), along with food and eating behaviors. Food psychology is an applied psychology, using existing psychological methods and findings to understand food choice and eating behaviors. Factors studied by food psychology include food cravings, sensory experiences of food, perceptions of food security and food safety, price, available product information such as nutrition labeling and the purchasing environment (which may be physical or online). Food psychology also encompasses broader sociocultural factors such as cultural perspectives on food, public awareness of "what constitutes a sustainable diet", and food marketing including "food fraud", where ingredients are intentionally substituted or mislabeled for economic gain as opposed to nutritional value. These factors are considered to interact with each other along with an individual's history of food choices to form new food choices and eating behaviors. The development of food choice is considered to fall into three main categories: properties of the food, individual differences and sociocultural influences. Food psychology studies psychological aspects of individual differences, although due to the interaction between factors and the variance in definitions, food psychology is often studied alongside other aspects of food choice including nutrition psychology. There are currently no specific journals for food psychology, with research being published in both nutrition and psychology journals. Eating behaviors which are analysed by food psychology include disordered eating, behavior associated with food neophobia, and the public broadcasting/streaming of eating (mukbang). Food psychology has been studied extensively using theories of cognitive dissonance and fallacious reasoning. COVID-19 Food psychology has been used to examine how eating behaviors have been globally affected by the COVID-19 pandemic. Changed food preferences due to COVID-19
https://en.wikipedia.org/wiki/NE1000
The NE1000 and NE2000 are members of an early line of low cost Ethernet network cards introduced by Novell in 1987. Their popularity had a significant impact on the pervasiveness of networks in computing. They are based on a National Semiconductor prototype design using their 8390 Ethernet chip. History In the late 1980s, Novell was looking to shed its hardware server business and transform its flagship NetWare product into a PC-based server operating system that was agnostic and independent of the physical network implementation and topology (Novell even referred to NetWare as a NOS, or network operating system). To do this, Novell needed networking technology in general — and networking cards in particular — to become a commodity, so that the server operating system and protocols would become the differentiating technology. Most of the key pieces of this strategy were already in place: Ethernet and Token Ring (among others) had been codified by the IEEE 802 standards committee — the draft was not formally adopted until 1990, but was already in widespread use, and cards from one vendor were, on the whole, wire-compatible with cards complying with the same 802 working group. However, networking hardware vendors in general, and industry leaders 3Com and IBM in particular, were charging high prices for their hardware. To combat this, Novell decided to develop its own line of cards. In order to create these at minimal R&D, engineering and production costs, Novell based their board on the DP839EB, a reference design created by National Semiconductor using the 8390 Ethernet chip. Compared to the reference design, Novell used Programmed I/O instead of the slower ISA DMA. Novell's design also did not map the card's internal buffer RAM into the host's address space. The original card, the NE1000, was an 8-bit ISA card, announced as the "E-Net adapter" in February 1987. The "NE" prefix stood for "Novell Ethernet". NE2000 The NE2000, using the 16-bit ISA bus of the PC AT, followed in 19
https://en.wikipedia.org/wiki/Hardware%20acceleration
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both. To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuits (ASICs). Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as ne
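The flexibility/efficiency tradeoff can be felt even without custom silicon. In the Python sketch below, the same dot product is computed by a general interpreted loop and by a vectorized call that dispatches to optimized, often SIMD-accelerated, kernels; this is only an analogy for the hierarchy described above, and the speedup figure will vary by machine:

```python
import time
import numpy as np

n = 1_000_000
a, b = np.random.rand(n), np.random.rand(n)

t0 = time.perf_counter()
acc = 0.0
for i in range(n):            # flexible and general, but slow
    acc += a[i] * b[i]
t1 = time.perf_counter()

fast = float(np.dot(a, b))    # specialized vectorized kernel, fast
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.3f} s, np.dot: {t2 - t1:.5f} s, "
      f"speedup ~{(t1 - t0) / (t2 - t1):.0f}x")
```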
https://en.wikipedia.org/wiki/Photobiology
Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation) in living organisms. The field includes the study of photophysics, photochemistry, photosynthesis, photomorphogenesis, visual processing, circadian rhythms, photomovement, bioluminescence, and ultraviolet radiation effects. The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, which approximately corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen at about 14 eV. When photons come into contact with molecules, these molecules can absorb the energy in photons and become excited. Then they can react with molecules around them and stimulate "photochemical" and "photophysical" changes of molecular structures. Photophysics This area of photobiology focuses on the physical interactions of light and matter. When molecules absorb photons that match their energy requirements, they promote a valence electron from a ground state to an excited state and become much more reactive. This is an extremely fast process, but one that is very important for different processes. Photochemistry This area of photobiology studies the reactivity of a molecule when it absorbs energy that comes from light. It also studies what happens to this energy; it could be given off as heat or fluorescence so that the molecule returns to the ground state. There are 3 basic laws of photochemistry: 1) First Law of Photochemistry: This law explains that in order for photochemistry to happen, light has to be absorbed. 2) Second Law of Photochemistry: This law explains that only one molecule will be activated by each photon that is absorbed. 3) Bunsen–Roscoe Law of Reciprocity: This law explains that the energy in the final products of a photochemical reaction will be directly proportional to the total energy that was initially absorbed by the system. Plant Photo
https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group
The North American Network Operators' Group (NANOG) is an educational and operational forum for the coordination and dissemination of technical information related to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and an influential mailing list for Internet service providers. The main method of communication is the NANOG mailing list (known informally as nanog-l), a free mailing list to which anyone may subscribe or post. Meetings NANOG meetings are held three times each year, and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also 'lightning talks', where speakers can submit brief presentations (no longer than 10 minutes), on a very short term. The meetings are informal, and membership is open. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. Participating researchers present short summaries of their work for operator feedback. In addition to the conferences, NANOG On the Road events offer single-day professional development and networking events touching on current NANOG discussion topics. Organization NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee. History NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name. NANOG was organized by Merit Network, a non-profit Michigan org
https://en.wikipedia.org/wiki/4D%20vector
In computer science, a 4D vector is a 4-component vector data type. Uses include homogeneous coordinates for 3-dimensional space in computer graphics, and red green blue alpha (RGBA) values for bitmap images with a color and alpha channel (as such they are widely used in computer graphics). They may also represent quaternions (useful for rotations) although the algebra they define is different. Computer hardware support Some microprocessors have hardware support for 4D vectors in the form of 4-lane single instruction, multiple data (SIMD) instructions, usually with a 128-bit data path and 32-bit floating point fields. Specific instructions (e.g., 4 element dot product) may facilitate the use of one 128-bit register to represent a 4D vector. For example, in chronological order: Hitachi SH4, PowerPC VMX128 extension, and Intel x86 SSE4. Some 4-element vector engines (e.g., the PS2 vector units) went further with the ability to broadcast components as multiply sources, and cross product support. Earlier generations of graphics processing unit (GPU) shader pipelines used very long instruction word (VLIW) instruction sets tailored for similar operations. Software support SIMD use for 4D vectors can be conveniently wrapped in a vector maths library (commonly implemented in C or C++) commonly used in video game development, along with 4×4 matrix support. These are distinct from more general linear algebra libraries in other domains focussing on matrices of arbitrary size. Such libraries sometimes support 3D vectors padded to 4D or loading 3D data into 4D registers, with arithmetic mapped efficiently to SIMD operations by per-platform intrinsic function implementations. There is a choice between AOS and SOA approaches given the availability of 4 element registers, versus SIMD instructions that are usually tailored toward homogeneous data. Shading languages for GPU programming usually have 4D datatypes (along with 2D, 3D) w
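A minimal NumPy sketch of the uses mentioned above; the specific values and the AoS/SoA layout choice are illustrative assumptions:

```python
import numpy as np

# A homogeneous coordinate for a 3D point (w = 1) and an RGBA color.
point = np.array([1.0, 2.0, 3.0, 1.0], dtype=np.float32)
rgba = np.array([0.2, 0.5, 0.9, 1.0], dtype=np.float32)

# The 4-element dot product that SIMD hardware often special-cases.
print(np.dot(point, rgba))

# AoS vs SoA for many 4D vectors: an (n, 4) array of structures versus
# four parallel lanes x[], y[], z[], w[].
aos = np.random.rand(1000, 4).astype(np.float32)
x, y, z, w = aos.T
lengths = np.sqrt(x**2 + y**2 + z**2 + w**2)   # maps well onto SIMD lanes
```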
https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning. Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted. Aa A represents: the first point of a triangle the digit "10" in hexadecimal and other positional numeral systems with a radix of 11 or greater the unit ampere for electric current in physics the area of a figure the mass number or nucleon number of an element in chemistry the Helmholtz free energy of a closed thermodynamic system of constant pressure and temperature a vector potential, in electromagnetics it can refer to the magnetic vector potential an Abelian group in abstract algebra the Glaisher–Kinkelin constant atomic weight, denoted by Ar work in classical mechanics the pre-exponential factor in the Arrhenius Equation electron affinity represents the algebraic numbers or affine space in algebraic geometry. A blood type A spectral type a represents: the first side of a triangle (opposite point A) the scale factor of the expanding universe in cosmology the acceleration in mechanics equations the first constant in a linear equation a constant in a polynomial the unit are for area (100 m2) the unit prefix atto (10−18) the first term in a sequence or series Reflectance Bb B represents: the digit "11" in hexadecimal and other positional numeral systems with a radix of 12 or greater the second point of a triangle a ball (also denoted by ℬ () or ) a basis of a vector space or of a filter (both also denoted by ℬ ()) in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial the magnetic field, denoted
https://en.wikipedia.org/wiki/Significand
The significand (also mantissa or coefficient, sometimes also argument, or ambiguously fraction or characteristic) is part of a number in scientific notation or in floating-point representation, consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction. Example The number 123.45 can be represented as a decimal floating-point number with the integer 12345 as the significand and a 10−2 power term, also called characteristics, where −2 is the exponent (and 10 is the base). Its value is given by the following arithmetic: 123.45 = 12345 × 10−2. The same value can also be represented in normalized form with 1.2345 as the fractional coefficient, and +2 as the exponent (and 10 as the base): 123.45 = 1.2345 × 10+2. Schmid, however, called this representation with a significand ranging between 1.0 and 10 a modified normalized form. For base 2, this 1.xxxx form is also called a normalized significand. Finally, the value can be represented in the format given by the Language Independent Arithmetic standard and several programming language standards, including Ada, C, Fortran and Modula-2, as 123.45 = 0.12345 × 10+3. Schmid called this representation with a significand ranging between 0.1 and 1.0 the true normalized form. For base 2, this 0.xxxx form is also called a normed significand. Significands and the hidden bit For a normalized number, the most significant digit is always non-zero. When working in binary, this constraint uniquely determines this digit to always be 1; as such, it does not need to be explicitly stored, being called the hidden bit. The significand is characterized by its width in (binary) digits, and depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double-precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit s
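The hidden bit can be made visible by unpacking the raw bits of an IEEE 754 double, as in this Python sketch using only the standard library:

```python
import struct

def double_fields(x: float):
    """Split an IEEE 754 double into sign, unbiased exponent, and the
    significand with the hidden leading 1 restored; only 52 significand
    bits are actually stored."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the bias
    stored = bits & ((1 << 52) - 1)            # the 52 explicit bits
    significand = 1.0 + stored / 2**52         # hidden bit restored
    return sign, exponent, significand

# 123.45 is stored (approximately) as 1.92890625 * 2**6:
print(double_fields(123.45))   # (0, 6, 1.92890625)
```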
https://en.wikipedia.org/wiki/Twelfth%20root%20of%20two
The twelfth root of two, 2^(1/12), is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = 2^(1/1200), the twelve-hundredth root of two). Numerical value The twelfth root of two to 20 significant figures is 1.0594630943592952646. Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843. Its numerical value has been computed to at least twenty billion decimal digits. The equal-tempered chromatic scale A musical interval is a ratio of frequencies and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is 2^(1/12) times that of the one below it. Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces the chromatic scale an octave upward; the final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher. Other tuning scales Other tuning scales use slightly different interval ratios: The just or Pythagorean perfect fifth is 3/2, and the difference between the equal tempered perfect fifth and the just is a grad, the twelfth root of the Pythagorean comma. The equal tempered Bohlen–Pierce scale uses the interval of the thirteenth root of three, 3^(1/13). Stockhausen's Studie II (1954) makes use of the twenty-fifth root of five, 5^(1/25), a compound major third divided into 5×5 parts. The
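The chromatic-scale construction described above is easy to reproduce: starting from A4 = 440 Hz, multiply by 2^(1/12) at each semitone, and twelve steps give exactly a factor of two. A short Python sketch:

```python
# Equal-tempered chromatic scale from A4 = 440 Hz.
ratio = 2 ** (1 / 12)        # ~1.0594631, the twelfth root of two
names = ["A4", "A#4", "B4", "C5", "C#5", "D5", "D#5", "E5",
         "F5", "F#5", "G5", "G#5", "A5"]
freq = 440.0
for name in names:
    print(f"{name}: {freq:.2f} Hz")
    freq *= ratio            # A5 comes out at 880.00 Hz, one octave up
```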
https://en.wikipedia.org/wiki/Limosilactobacillus
Limosilactobacillus is a thermophilic and heterofermentative genus of lactic acid bacteria created in 2020 by splitting from Lactobacillus. The name is derived from the Latin "slimy", referring to the property of most strains in the genus to produce exopolysaccharides from sucrose. The genus currently includes 31 species or subspecies, most of these were isolated from the intestinal tract of humans or animals. Limosilactobacillus reuteri has been used as a model organism to evaluate the host-adaptation of lactobacilli to the human and animal intestine and for the recruitment of intestinal lactobacilli for food fermentations. Limosilactobacilli are heterofermentative and produce lactate, CO2, and acetate or ethanol from glucose; several limosilactobacilli, particularly strains of Lm. reuteri convert glycerol or 1,2-propanediol to 1,3 propanediol or propanol, respectively. Most strains do not grow in presence of oxygen, or in de Man, Rogosa Sharpe (MRS) medium, the standard medium for cultivation of lactobacilli. Addition of maltose, cysteine and fructose to MRS is usually sufficient for cultivation of limosilactobacilli. Species The genus Limosilactobacillus comprises the following species: Limosilactobacillus agrestis Li et al. 2021 Limosilactobacillus albertensis Li et al. 2021 Limosilactobacillus alvi Zheng et al. 2020 Limosilactobacillus antri (Roos et al. 2005) Zheng et al. 2020 Limosilactobacillus balticus Li et al. 2021 Limosilactobacillus caviae (Killer et al. 2017) Zheng et al. 2020 Limosilactobacillus coleohominis (Nikolaitchouk et al. 2001) Zheng et al. 2020 Limosilactobacillus equigenerosi (Endo et al. 2008) Zheng et al. 2020 Limosilactobacillus fastidiosus Li et al. 2021 Limosilactobacillus fermentum (Beijerinck 1901) Zheng et al. 2020 Limosilactobacillus frumenti (Müller et al. 2000) Zheng et al. 2020 Limosilactobacillus gastricus (Roos et al. 2005) Zheng et al. 2020 Limosilactobacillus gorillae (Tsuchida et al. 2014) Zheng et al. 2020
https://en.wikipedia.org/wiki/Sensor%20node
A sensor node, also known as a mote in North America, consists of an individual node from a sensor network that is capable of performing a desired action such as gathering, processing or communicating information with other connected nodes in a network. History Although wireless sensor networks have existed for decades and have been used for diverse applications such as earthquake measurements or warfare, the modern development of small sensor nodes dates back to the 1998 Smartdust project and the NASA Sensor Web. One of the objectives of the Smartdust project was to create autonomous sensing and communication within a cubic millimeter of space. Though this project ended early on, it led to many more research projects and major research centres such as the Berkeley NEST and CENS. The researchers involved in these projects coined the term mote to refer to a sensor node. The equivalent term in the NASA Sensor Webs Project for a physical sensor node is pod, although the sensor node in a Sensor Web can be another Sensor Web itself. Physical sensor nodes have been able to increase their effectiveness and capability in step with Moore's law. The chip footprint now contains more complex and lower-powered microcontrollers. Thus, for the same node footprint, more silicon capability can be packed into it. Nowadays, motes focus on providing the longest wireless range (dozens of km), the lowest energy consumption (a few µA) and the easiest development process for the user. Components The main components of a sensor node usually involve a microcontroller, transceiver, external memory, power source and one or more sensors. Sensors Sensors are used by wireless sensor nodes to capture data from their environment. They are hardware devices that produce a measurable response to a change in a physical condition like temperature or pressure. Sensors measure physical data of the parameter to be monitored and have specific characteristics such as accuracy, sensitivity, etc. The cont
https://en.wikipedia.org/wiki/Compass%20%28drawing%20tool%29
A compass, more accurately known as a pair of compasses, is a technical drawing instrument that can be used for inscribing circles or arcs. As dividers, it can also be used as a tool to mark out distances, in particular, on maps. Compasses can be used for mathematics, drafting, navigation and other purposes. Prior to computerization, compasses and other tools for manual drafting were often packaged as a set with interchangeable parts. By the mid-twentieth century, circle templates supplemented the use of compasses. Today those facilities are more often provided by computer-aided design programs, so the physical tools serve mainly a didactic purpose in teaching geometry, technical drawing, etc. Construction and parts Compasses are usually made of metal or plastic, and consist of two "legs" connected by a hinge which can be adjusted to allow changing of the radius of the circle drawn. Typically one leg has a spike at its end for anchoring, and the other leg holds a drawing tool, such as a pencil, a short length of just pencil lead or sometimes a pen. Handle The handle, a small knurled rod above the hinge, is usually about half an inch long. Users can grip it between their pointer finger and thumb. Legs There are two types of leg in a pair of compasses: the straight or the steady leg and the adjustable one. Each has a separate purpose; the steady leg serves as the basis or support for the needle point, while the adjustable leg can be altered in order to draw different sizes of circles. Hinge The screw through the hinge holds the two legs in position. The hinge can be adjusted, depending on desired stiffness; the tighter the hinge-screw, the more accurate the compass's performance. The better quality compass, made of plated metal, is able to be finely adjusted via a small, serrated wheel usually set between the legs (see the "using a compass" animation shown above) and it has a (dangerously powerful) spring encompassing the hinge. This sort of compass is often kno
https://en.wikipedia.org/wiki/Otis%20Boykin
Otis Frank Boykin (August 29, 1920March 26, 1982) was an American inventor and engineer. His inventions include electrical resistors used in computing, missile guidance, and pacemakers. Early life and education Otis Boykin was born on August 29, 1920, in Dallas, Texas. His father, Walter B. Boykin, was a carpenter, and later became a preacher. His mother, Sarah, was a maid, who died of heart failure when Otis was a year old. This inspired him to help improve the pacemaker. Boykin attended Booker T. Washington High School in Dallas, where he was the valedictorian, graduating in 1938. He attended Fisk University on a scholarship, worked as a laboratory assistant at the university's nearby aerospace laboratory, and left in 1941. Boykin then moved to Chicago, where he found work as a clerk at Electro Manufacturing Company. He was subsequently hired as a laboratory assistant for the Majestic Radio and Television Corporation; at that company, he rose to become foreman of their factory. By 1944, he was working for the P.J. Nilsen Research Labs. In 1946–1947, he studied at Illinois Institute of Technology, but dropped out after two years; some sources say it was because he could not afford his tuition, but he later stated he left for an employment opportunity and did not have time to return to finish his degree. One of his mentors was Dr. Denton Deere, an engineer and inventor with his own laboratory. Another mentor was Dr. Hal F. Fruth, with whom he collaborated on several experiments, including a more effective way to test automatic pilot control units in airplanes. The two men later went into business, opening an electronics research lab in the late 1940s. In the 1950s, Boykin and Fruth worked together at the Monson Manufacturing Corporation; Boykin was the company's chief engineer. In the early 1960s, Boykin was a senior project engineer at the Chicago Telephone Supply Corporation, later known as CTS Labs. It was here that he did much of his pacemaker research. But
https://en.wikipedia.org/wiki/Pyrrolizidine%20alkaloid%20sequestration
Pyrrolizidine alkaloid sequestration by insects is a strategy to facilitate defense and mating. Various species of insects have been known to use molecular compounds from plants for their own defense and even as their pheromones or precursors to their pheromones. A few Lepidoptera have been found to sequester chemicals from plants which they retain throughout their life, and some members of Erebidae are examples of this phenomenon. Starting in the mid-twentieth century, researchers investigated various members of Arctiidae and how these insects sequester pyrrolizidine alkaloids (PAs) during their life stages, and use these chemicals as adults for pheromones or pheromone precursors. PAs are also used by members of the Arctiidae for defense against predators throughout the life of the insect. Overview Pyrrolizidine alkaloids are a group of chemicals produced by plants as secondary metabolites, all of which contain a pyrrolizidine nucleus. This nucleus is made up of two pyrrole rings bonded by one carbon and one nitrogen. There are two forms in which PAs can exist and between which they readily interchange: a pro-toxic free base form, also called a tertiary amine, and a non-toxic N-oxide form. Researchers have collected data that strongly suggest that PAs can be registered by taste receptors of predators, acting as a deterrent from being ingested. Taste receptors are also used by the various moth species that sequester PAs, which often stimulates them to feed. As of 2005, all of the PA-sequestering insects that have been studied have evolved a system to keep concentrations of the PA pro-toxic form low within the insect's tissues. Researchers have found a number of Arctiidae that use PAs for protection and for male pheromones or precursors of the male pheromones, and some studies have found evidence suggesting PAs have behavioral and developmental effects. Estigmene acrea, Cosmosoma myrodora, Utetheisa ornatrix, Creatonotos gangis and Creatonotos transiens are all
https://en.wikipedia.org/wiki/Immunofluorescence
Immunofluorescence is a technique used for light microscopy with a fluorescence microscope and is used primarily on biological samples. This technique uses the specificity of antibodies to their antigen to target fluorescent dyes to specific biomolecule targets within a cell, and therefore allows visualization of the distribution of the target molecule through the sample. The specific region an antibody recognizes on an antigen is called an epitope. There have been efforts in epitope mapping since many antibodies can bind the same epitope and levels of binding between antibodies that recognize the same epitope can vary. Additionally, the binding of the fluorophore to the antibody itself cannot interfere with the immunological specificity of the antibody or the binding capacity of its antigen. Immunofluorescence is a widely used example of immunostaining (using antibodies to stain proteins) and is a specific example of immunohistochemistry (the use of the antibody-antigen relationship in tissues). This technique primarily makes use of fluorophores to visualise the location of the antibodies. Immunofluorescence can be used on tissue sections, cultured cell lines, or individual cells, and may be used to analyze the distribution of proteins, glycans, and small biological and non-biological molecules. This technique can even be used to visualize structures such as intermediate-sized filaments. If the topology of a cell membrane has yet to be determined, epitope insertion into proteins can be used in conjunction with immunofluorescence to determine structures. Immunofluorescence can also be used as a "semi-quantitative" method to gain insight into the levels and localization patterns of DNA methylation since it is a less time-consuming method than true quantitative methods and there is some subjectivity in the analysis of the levels of methylation. Immunofluorescence can be used in combination with other, non-antibody methods of fluorescent staining, for example, use of
https://en.wikipedia.org/wiki/Bob%20Pease
Robert Allen Pease (August 22, 1940 – June 18, 2011) was an electronics engineer known for analog integrated circuit (IC) design, and as the author of technical books and articles about electronic design. He designed several very successful "best-seller" ICs, many of them in continuous production for multiple decades. These include the LM331 voltage-to-frequency converter, and the LM337 adjustable negative voltage regulator (complement to the LM317). Life and career Pease was born on August 22, 1940, in Rockville, Connecticut. He attended Northfield Mount Hermon School in Massachusetts, and subsequently obtained a Bachelor of Science in Electrical Engineering (BSEE) degree from Massachusetts Institute of Technology in 1961. He started work in the early 1960s at George A. Philbrick Researches (GAP-R). GAP-R pioneered the first reasonable-cost, mass-produced operational amplifier (op-amp), the K2-W. At GAP-R, Pease developed many high-performance op-amps, built with discrete solid-state components. In 1976, Pease moved to National Semiconductor Corporation (NSC) as a Design and Applications Engineer, where he began designing analog monolithic ICs, as well as design reference circuits using these devices. He had advanced to Staff Engineer by the time of his departure in 2009. During his tenure at NSC, he began writing a popular continuing monthly column called "Pease Porridge" in Electronic Design about his experiences in the world of electronic design and application. The last project Pease worked on was the THOR-LVX (photo-nuclear) microtron Advanced Explosives contraband Detection System: "A Dual-Purpose Ion-Accelerator for Nuclear-Reaction-Based Explosives-and SNM-Detection in Massive Cargo". Pease was the author of eight books, including Troubleshooting Analog Circuits, and he held 21 patents. Although his name was listed as "Robert A. Pease" in formal documents, he preferred to be called "Bob Pease" or to use his initials "RAP" in his magazine columns. His other
https://en.wikipedia.org/wiki/List%20of%20knot%20theory%20topics
Knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. History Knots, links, braids Knot (mathematics) gives a general introduction to the concept of a knot. Two classes of knots: torus knots and pretzel knots Cinquefoil knot also known as a (5, 2) torus knot. Figure-eight knot (mathematics) the only 4-crossing knot Granny knot (mathematics) and Square knot (mathematics) are a connected sum of two Trefoil knots Perko pair, two entries in a knot table that were later shown to be identical. Stevedore knot (mathematics), a prime knot with crossing number 6 Three-twist knot is the twist knot with three-half twists, also known as the 52 knot. Trefoil knot A knot with crossing number 3 Unknot Knot complement, a compact 3 manifold obtained by removing an open neighborhood of a proper embedding of a tame knot from the 3-sphere. Knots and graphs general introduction to knots with mention of Reidemeister moves Notation used in knot theory: Conway notation Dowker–Thistlethwaite notation (DT notation) Gauss code (see also Gauss diagrams) continued fraction regular form General knot types 2-bridge knot Alternating knot; a knot that can be represented by an alternating diagram (i.e. the crossing alternate over and under as one traverses the knot). Berge knot a class of knots related to Lens space surgeries and defined in terms of their properties with respect to a genus 2 Heegaard surface. Cable knot, see Sate
https://en.wikipedia.org/wiki/Through-silicon%20via
In electronic engineering, a through-silicon via (TSV) or through-chip via is a vertical electrical connection (via) that passes completely through a silicon wafer or die. TSVs are high-performance interconnect techniques used as an alternative to wire-bond and flip chips to create 3D packages and 3D integrated circuits. Compared to alternatives such as package-on-package, the interconnect and device density is substantially higher, and the length of the connections becomes shorter. Classification Dictated by the manufacturing process, there exist three different types of TSVs: via-first TSVs are fabricated before the individual components (transistors, capacitors, resistors, etc.) are patterned (front end of line, FEOL), via-middle TSVs are fabricated after the individual components are patterned but before the metal layers (back-end-of-line, BEOL), and via-last TSVs are fabricated after (or during) the BEOL process. Via-middle TSVs are currently a popular option for advanced 3D ICs as well as for interposer stacks. TSVs through the front end of line (FEOL) have to be carefully accounted for during the EDA and manufacturing phases. That is because TSVs induce thermo-mechanical stress in the FEOL layer, thereby impacting the transistor behaviour. Applications Image sensors CMOS image sensors (CIS) were among the first applications to adopt TSV(s) in volume manufacturing. In initial CIS applications, TSVs were formed on the backside of the image sensor wafer to form interconnects, eliminate wire bonds, and allow for reduced form factor and higher-density interconnects. Chip stacking came about only with the advent of backside illuminated (BSI) CIS, and involved reversing the order of the lens, circuitry, and photodiode from traditional front-side illumination so that the light coming through the lens first hits the photodiode and then the circuitry. This was accomplished by flipping the photodiode wafer, thinning the backside, and then bonding it on top of the re
https://en.wikipedia.org/wiki/Wavelet%20transform
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. Definition A function ψ ∈ L²(ℝ) is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space L²(ℝ) of square integrable functions. The Hilbert basis is constructed as the family of functions ψ_jk(x) = 2^(j/2) ψ(2^j x − k) by means of dyadic translations and dilations of ψ, for integers j, k ∈ ℤ. If, under the standard inner product on L²(ℝ), this family is orthonormal, it is an orthonormal system: ⟨ψ_jk, ψ_lm⟩ = δ_jl δ_km, where δ_jl is the Kronecker delta. Completeness is satisfied if every function f ∈ L²(ℝ) may be expanded in the basis as f(x) = Σ_{j,k} c_jk ψ_jk(x), with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual. The integral wavelet transform is the integral transform defined as [W_ψ f](a, b) = (1/√|a|) ∫ ψ*((x − b)/a) f(x) dx, integrated over all of ℝ, where ψ* denotes the complex conjugate of ψ. The wavelet coefficients c_jk are then given by c_jk = [W_ψ f](2^(−j), k 2^(−j)). Here, a = 2^(−j) is called the binary dilation or dyadic dilation, and b = k 2^(−j) is the binary or dyadic position. Principle The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing, Δt Δω ≥ 1/2, where t represents time and ω angular frequency (ω = 2πf, where f is ordinary frequency). The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis windows is chosen, the larger is the value of Δt. When Δt is large: bad time resolution, good frequency resolution, low frequency, large scaling factor. When Δt is small: good time resolution, bad frequency resolution, high frequency, small scaling factor.
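The dyadic dilation-and-translation machinery is easiest to see in the discrete setting. Below is a Python sketch of one level of the Haar transform, the simplest orthonormal wavelet; it is an illustration of the idea, not the continuous integral transform defined above:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: split a signal of
    even length into low-pass (approximation) and high-pass (detail)
    coefficients."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return s, d

def haar_inverse(s, d):
    """Invert haar_step exactly, demonstrating perfect reconstruction
    from the orthonormal coefficients."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
s, d = haar_step(x)
print(np.allclose(haar_inverse(s, d), x))  # True
```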
https://en.wikipedia.org/wiki/Icophone
The icophone is an instrument of speech synthesis conceived by Émile Leipp in 1964 and used for synthesizing the French language. The first two icophones were made in the laboratory of physical mechanics of Saint-Cyr-l'École. The principle of the icophone is the representation of sound by a spectrograph. The spectrograph analyzes a word, a phrase, or more generally a sound, and the resulting spectrogram shows the distribution of the different frequencies with their relative intensities. The first machines to synthesize words were made by displaying the form of the spectrogram on a transparent tape, which controlled a series of oscillators according to the presence or absence of a black mark on the tape. Leipp succeeded in decomposing the segments of a spoken sound phenomenon, and in synthesizing them from a very simplified display.
https://en.wikipedia.org/wiki/List%20of%20algebraic%20topology%20topics
This is a list of algebraic topology topics. Homology (mathematics) Simplex Simplicial complex Polytope Triangulation Barycentric subdivision Simplicial approximation theorem Abstract simplicial complex Simplicial set Simplicial category Chain (algebraic topology) Betti number Euler characteristic Genus Riemann–Hurwitz formula Singular homology Cellular homology Relative homology Mayer–Vietoris sequence Excision theorem Universal coefficient theorem Cohomology List of cohomology theories Cocycle class Cup product Cohomology ring De Rham cohomology Čech cohomology Alexander–Spanier cohomology Intersection cohomology Lusternik–Schnirelmann category Poincaré duality Fundamental class Applications Jordan curve theorem Brouwer fixed point theorem Invariance of domain Lefschetz fixed-point theorem Hairy ball theorem Degree of a continuous mapping Borsuk–Ulam theorem Ham sandwich theorem Homology sphere Homotopy theory Homotopy Path (topology) Fundamental group Homotopy group Seifert–van Kampen theorem Pointed space Winding number Simply connected Universal cover Monodromy Homotopy lifting property Mapping cylinder Mapping cone (topology) Wedge sum Smash product Adjunction space Cohomotopy Cohomotopy group Brown's representability theorem Eilenberg–MacLane space Fibre bundle Möbius strip Line bundle Canonical line bundle Vector bundle Associated bundle Fibration Hopf bundle Classifying space Cofibration Homotopy groups of spheres Plus construction Whitehead theorem Weak equivalence Hurewicz theorem H-space Further developments Künneth theorem De Rham cohomology Obstruction theory Characteristic class Chern class Chern–Simons form Pontryagin class Pontryagin number Stiefel–Whitney class Poincaré conjecture Cohomology operation Steenrod algebra Bott periodicity theorem K-theory Topological K-theory Adams operation Algebraic K-theory Whitehead torsion Twisted K-theory Cobordism Thom space Suspension functor Stable homotopy theory Spectrum (homotopy theory) Morava K-the
https://en.wikipedia.org/wiki/Reference%20model
A reference model—in systems, enterprise, and software engineering—is an abstract framework or domain-specific ontology consisting of an interlinked set of clearly defined concepts produced by an expert or body of experts to encourage clear communication. A reference model can represent the component parts of any consistent idea, from business functions to system components, as long as it represents a complete set. This frame of reference can then be used to communicate ideas clearly among members of the same community. Reference models are often illustrated as a set of concepts with some indication of the relationships between the concepts. Overview According to OASIS (Organization for the Advancement of Structured Information Standards) a reference model is "an abstract framework for understanding significant relationships among the entities of some environment, and for the development of consistent standards or specifications supporting that environment. A reference model is based on a small number of unifying concepts and may be used as a basis for education and explaining standards to a non-specialist. A reference model is not directly tied to any standards, technologies or other concrete implementation details, but it does seek to provide a common semantics that can be used unambiguously across and between different implementations." There are a number of concepts rolled up into that of a 'reference model.' Each of these concepts is important: Abstract: a reference model is abstract. It provides information about environments of a certain kind. A reference model describes the type or kind of entities that may occur in such an environment, not the particular entities that actually do occur in a specific environment. For example, when describing the architecture of a particular house (which is a specific environment of a certain kind), an actual exterior wall may have dimensions and materials, but the concept of a wall (type of entity) is part of the
https://en.wikipedia.org/wiki/Bare%20machine%20computing
Bare Machine Computing (BMC) is a computer architecture based on bare machines. In the BMC paradigm, applications run without the support of any operating system (OS) or centralized kernel, i.e., no intermediary software is loaded on the bare machine prior to running applications. The applications, which are called bare machine applications or simply BMC applications, do not use any persistent storage or a hard disk, and instead are stored on detachable mass storage such as a USB flash drive. A BMC program consists of a single application or a small set of applications (application suite) that runs as a single executable within one address space. BMC applications have direct access to the necessary hardware resources. They are self-contained, self-managed and self-controlled entities that boot, load and run without using any other software components or external software. BMC applications have inherent security due to their design. There are no OS-related vulnerabilities, and each application only contains the necessary (minimal) functionality. There is no privileged mode in a BMC system since applications only run in user mode. Also, application code is statically compiled; there is no means to dynamically alter BMC program flow during execution. History In the early days of computing, computer applications communicated directly with the hardware and there was no operating system. As applications grew larger, encompassing various domains, OSes were invented. They served as middleware providing hardware abstractions to applications. OSes have grown immensely in size and complexity, resulting in attempts to reduce OS overhead and improve performance, including Microkernel, Exokernel, Tiny-OS, OS-Kit, Palacios and Kitten, IO_Lite, bare-metal Linux, IBM-Libra and other lean kernels. In addition to the above approaches, in embedded systems such as smart phones, a small and dedicated portion of an OS and a given set of applications are closely integrated with the hardw
https://en.wikipedia.org/wiki/National%20Law%20Enforcement%20System
The National Law Enforcement System, better known as the Wanganui Computer, was a database set up in 1976 by the State Services Commission in Wanganui, New Zealand. It held information which could be accessed by the New Zealand Police, the Land Transport Safety Authority and the justice department. The Wanganui computer was a Sperry mainframe computer built to hold records such as criminal convictions and car and gun licences. At the time it was deemed ground-breaking, with the Minister of Police, Allan McCready, describing it as "probably the most significant crime-fighting weapon ever brought to bear against lawlessness in this country". Seen by many as a Big Brother initiative, the database was controversial, attracting numerous protests from libertarians with concerns over privacy. The most notable event was in 1982, when self-described anarchist punk Neil Roberts, aged 22, detonated a home-made gelignite bomb upon his person at the gates of the centre, making him New Zealand's highest-profile suicide bomber. The blast was large enough to be heard around Wanganui, and Roberts was killed instantly, being later identified by his unique chest tattoo bearing the words "This punk won't see 23. No future." The centre survived this and other protests until the 1990s, when the operation was transferred to Auckland, although the new system retained its Wanganui moniker. The original database, having lasted 30 years and grown increasingly outdated, was finally shut down in June 2005, with responsibility handed over to the National Intelligence Application (also known as NIA) in Auckland. The building, known as 'Wairere House', was later occupied by the National Library of New Zealand and contained newspaper archives. See also INCIS
https://en.wikipedia.org/wiki/Tillie%20the%20All-Time%20Teller
Tillie the All-Time Teller was one of the first ATMs, run by the First National Bank of Atlanta and considered to be one of the most successful ATMs in the banking industry. Tillie the All-Time Teller had a picture of a smiling blonde girl on the front of the machine to suggest it was user-friendly, had an apparent personality, and could greet people by name. Many banks hired women dressed as this character to show their customers how to use Tillie the All-Time Teller. History It was introduced by the First National Bank of Atlanta on May 15, 1974. It started out at only eleven locations, which entered commercial service on May 20, 1974. Starting in 1977, other banks purchased rights to use Tillie the All-Time Teller as their ATM system. By March 21, 1981, the machines were available at 70 locations, including on a college campus. On October 15, 2013, Susan Bennett revealed that she had provided the voice for Tillie the All-Time Teller, noting that she "started [her] life as a machine quite young." Appearance Tillie the All-Time Teller machines were red and gold to make them look more attractive. On the bottom left was the slot for an "access card," which featured a cartoon character. Above that was a keypad for entering a "secret code" that the customer chose. On the bottom center was a picture of a cartoon blonde girl with china-blue eyes and a red hat. Above that was the slot where the machine handed out cash and coins. On the top right was a keypad for entering a desired amount of money. How it worked Customers could use Tillie the All-Time Teller by following these steps: inserting an "Alltime Tellercard"; following the instructions presented on its TV screen; entering a "secret code" and a desired amount of money on the "money keyboard" ($200 was the limit), after which the machine would automatically hand out the desired amount; and inserting a transaction envelope into the deposit slot for deposits. Advertising There were a variety of advertisements made by the First National Bank of Atlanta in order to pr
https://en.wikipedia.org/wiki/Content%20delivery%20platform
A content delivery platform (CDP) is a software as a service (SaaS) content service, similar to a content management system (CMS), that utilizes embedded software code to deliver web content. Instead of the installation of software on client servers, a CDP feeds content through embedded code snippets, typically via JavaScript widget, Flash widget or server-side Ajax. Content delivery platforms are not content delivery networks, which are utilized for large web media and do not depend on embedded software code. A CDP is utilized for all types of web content, even text-based content. Alternatively, a content delivery platform can be utilized to import a variety of syndicated content into one central location and then re-purposed for web syndication. The term content delivery platform was coined by Feed.Us software architect John Welborn during a presentation to the Chicago Web Developers Association. In late 2007, two blog comment services launched utilizing CDP-based services. Intense Debate and Disqus both employ JavaScript widgets to display and collect blog comments on websites. See also Web content management system Viddler, YouTube, Ustream embeddable streaming video
https://en.wikipedia.org/wiki/Hardware%20Trojan
A Hardware Trojan (HT) is a malicious modification of the circuitry of an integrated circuit. A hardware Trojan is completely characterized by its physical representation and its behavior. The payload of an HT is the entire activity that the Trojan executes when it is triggered. In general, Trojans try to bypass or disable the security fence of a system: for example, leaking confidential information by radio emission. HTs also could disable, damage or destroy the entire chip or components of it. Hardware Trojans may be introduced as hidden "front-doors" that are inserted while designing a computer chip, by using a pre-made application-specific integrated circuit (ASIC) semiconductor intellectual property core (IP core) that has been purchased from a non-reputable source, or inserted internally by a rogue employee, either acting on their own, or on behalf of rogue special interest groups, or state-sponsored spying and espionage. One paper published by IEEE in 2015 explains how a hardware design containing a Trojan could leak a cryptographic key over an antenna or network connection, provided that the correct "easter egg" trigger is applied to activate the data leak. In high-security governmental IT departments, hardware Trojans are a well-known problem when buying hardware such as KVM switches, keyboards, mice, network cards, or other network equipment. This is especially the case when purchasing such equipment from non-reputable sources that could have placed hardware Trojans to leak keyboard passwords, or provide remote unauthorized entry. Background In a diverse global economy, outsourcing of production tasks is a common way to lower a product's cost. Embedded hardware devices are not always produced by the firms that design and/or sell them, nor in the same country where they will be used. Outsourced manufacturing can raise doubt about the evidence for the integrity of the manufactured product (i.e., one's certainty that the end-product has no desig
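The trigger/payload split described above can be illustrated with a toy software model rather than a real hardware design: a trigger stays dormant until a rare "easter egg" input arrives, after which the payload leaks one key bit per cycle onto an otherwise unused output. All names and values here are invented for illustration:

class ToyTrojan:
    """Toy model of a trigger-activated hardware Trojan (illustrative only)."""
    MAGIC = 0xDEAD  # rare "easter egg" input word that arms the payload

    def __init__(self, secret_key_bits):
        self.key = list(secret_key_bits)
        self.armed = False
        self.i = 0

    def clock(self, data_in):
        # Normal function: pass data through unchanged.
        data_out = data_in
        leak_pin = 0                      # covert, normally unused output line
        if data_in == self.MAGIC:
            self.armed = True             # trigger condition observed
        if self.armed and self.i < len(self.key):
            leak_pin = self.key[self.i]   # payload: leak one key bit per cycle
            self.i += 1
        return data_out, leak_pin

t = ToyTrojan([1, 0, 1, 1])
for word in [0x0001, 0xDEAD, 0x0002, 0x0003, 0x0004, 0x0005]:
    print(t.clock(word))   # before the trigger, leak_pin stays 0; afterwards it carries key bits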
https://en.wikipedia.org/wiki/List%20of%20scientific%20constants%20named%20after%20people
This is a list of physical and mathematical constants named after people. Eponymous constants and their influence on scientific citations have been discussed in the literature. Apéry's constant – Roger Apéry Archimedes' constant (π, pi) – Archimedes Avogadro constant – Amedeo Avogadro Balmer's constant – Johann Jakob Balmer Belphegor's prime – Belphegor (demon) Bohr magneton – Niels Bohr Bohr radius – Niels Bohr Boltzmann constant – Ludwig Boltzmann Brun's constant – Viggo Brun Cabibbo angle – Nicola Cabibbo Chaitin's constant – Gregory Chaitin Champernowne constant – D. G. Champernowne Chandrasekhar limit – Subrahmanyan Chandrasekhar Copeland–Erdős constant – Arthur Herbert Copeland and Paul Erdős Coulomb constant (electric force constant, electrostatic constant, ke) – Charles-Augustin de Coulomb Eddington number – Arthur Stanley Eddington Dunbar's number – Robin Dunbar Embree–Trefethen constant – Mark Embree and Lloyd N. Trefethen Erdős–Borwein constant – Paul Erdős and Peter Borwein Euler–Mascheroni constant (γ) – Leonhard Euler and Lorenzo Mascheroni Euler's number (e) – Leonhard Euler Faraday constant – Michael Faraday Feigenbaum constants – Mitchell Feigenbaum Fermi coupling constant – Enrico Fermi Gauss's constant – Carl Friedrich Gauss Graham's number – Ronald Graham Hartree energy – Douglas Hartree Hubble constant – Edwin Hubble Josephson constant – Brian David Josephson Kaprekar's constant – D. R. Kaprekar Kerr constant – John Kerr Khinchin's constant – Aleksandr Khinchin Landau–Ramanujan constant – Edmund Landau and Srinivasa Ramanujan Legendre's constant (one, 1) – Adrien-Marie Legendre Loschmidt constant – Johann Josef Loschmidt Ludolphsche Zahl (Ludolph's number, π) – Ludolph van Ceulen Mean of Phidias (golden ratio, φ, phi) – Phidias Meissel–Mertens constant – Ernst Meissel and Franz Mertens Moser's number – Leo Moser Newtonian constant of gravitation (gravitational constant, G) – Sir Isaac Newton Planck constant (h) – Max Planck Reduced Planck constant or Dirac constant (h-bar, ħ) – Max Planck, Paul Dirac Ramanujan–Soldner constant – Srinivasa Ramanujan and Johann von Soldner
https://en.wikipedia.org/wiki/Mathematical%20knowledge%20management
Mathematical knowledge management (MKM) is the study of how society can effectively make use of the vast and growing literature on mathematics. It studies approaches such as databases of mathematical knowledge, automated processing of formulae, the use of semantic information, and artificial intelligence. Mathematics is particularly suited to a systematic study of automated knowledge processing due to the high degree of interconnectedness between different areas of mathematics. See also OMDoc QED manifesto Areas of mathematics MathML External links www.nist.gov/mathematical-knowledge-management, NIST's MKM page The MKM Interest Group (archived) 9th International Conference on MKM, Paris, France, 2010 Big Proof Conference, a programme at the Isaac Newton Institute directed at the challenges of bringing proof technology into mainstream mathematical practice. Big Proof Two Mathematics and culture Information science
https://en.wikipedia.org/wiki/Networking%20hardware
Networking hardware, also known as network equipment or computer networking devices, are electronic devices that are required for communication and interaction between devices on a computer network. Specifically, they mediate data transmission in a computer network. Units which are the last receiver or generate data are called hosts, end systems or data terminal equipment. Range Networking devices include a broad range of equipment, which can be classified as: core network components, which interconnect other network components; hybrid components, which can be found in the core or at the border of a network; and hardware or software components, which typically sit at the connection point of different networks. The most common kind of networking hardware today is a copper-based Ethernet adapter, which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices. Other networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery. Taking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints. Specific devices Network hardware can be classified by its location and role in the network. Core Core network components interconnect other network components. Gateway: an interface providing compatibility between networks by converting transmission speeds, protocols, codes, or security measures. Router: a networking device that forwards data packets between computer networks. Routers perform the "traffic directing" fun
https://en.wikipedia.org/wiki/CodeSynthesis%20XSD/e
CodeSynthesis XSD/e is a validating XML parser/serializer and C++ XML data binding generator for mobile and embedded systems. It is developed by Code Synthesis and dual-licensed under the GNU GPL and a proprietary license. Given an XML instance specification (XML Schema), XSD/e can produce three kinds of C++ mappings: Embedded C++/Parser for event-driven XML parsing, Embedded C++/Serializer for event-driven XML serialization, and Embedded C++/Hybrid which provides a light-weight, in-memory object model on top of the other two mappings. The C++/Hybrid mapping generates C++ classes for types defined in XML Schema as well as parsing and serialization code. The C++ classes represent the data stored in XML as a statically-typed, tree-like object model and support fully in-memory as well as partially in-memory/partially event-driven XML processing. The C++/Parser mapping generates validating C++ parser skeletons for data types defined in XML Schema. One can then implement these parser skeletons to build a custom in-memory representation or perform immediate processing as parts of the XML documents become available. Similarly, the Embedded C++/Serializer mapping generates validating C++ serializer skeletons for types defined in XML Schema which can be used to serialize application data to XML. CodeSynthesis XSD/e itself is written in C++ and supports a number of embedded targets including Embedded Linux, VxWorks, QNX, LynxOS, iPhone OS and Windows CE.
https://en.wikipedia.org/wiki/List%20of%20particles
This is a list of known and hypothesized particles. Standard Model elementary particles Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally. Fermions Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei. Fermions have half-integer spin; for all known elementary fermions this is 1/2. All known fermions except neutrinos are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons. Quarks Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except th
https://en.wikipedia.org/wiki/Mathematical%20object
A mathematical object is an abstract concept arising in mathematics. In the usual language of mathematics, an object is anything that has been (or could be) formally defined, and with which one may do deductive reasoning and mathematical proofs. Typically, a mathematical object can be a value that can be assigned to a variable, and therefore can be involved in formulas. Commonly encountered mathematical objects include numbers, sets, functions, expressions, geometric objects, transformations of other mathematical objects, and spaces. Mathematical objects can be very complex; for example, theorems, proofs, and even theories are considered as mathematical objects in proof theory. The ontological status of mathematical objects has been the subject of much investigation and debate by philosophers of mathematics. List of mathematical objects by branch Number theory numbers, operations Combinatorics permutations, derangements, combinations Set theory sets, set partitions functions, and relations Geometry points, lines, line segments, polygons (triangles, squares, pentagons, hexagons, ...), circles, ellipses, parabolas, hyperbolas, polyhedra (tetrahedrons, cubes, octahedrons, dodecahedrons, icosahedrons), spheres, ellipsoids, paraboloids, hyperboloids, cylinders, cones. Graph theory graphs, trees, nodes, edges Topology topological spaces and manifolds. Linear algebra scalars, vectors, matrices, tensors. Abstract algebra groups, rings, modules, fields, vector spaces, group-theoretic lattices, and order-theoretic lattices. Categories are simultaneously homes to mathematical objects and mathematical objects in their own right. In proof theory, proofs and theorems are also mathematical objects. See also Abstract object Mathematical structure
https://en.wikipedia.org/wiki/Back-and-forth%20method
In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers. any two countably infinite atomless Boolean algebras are isomorphic to each other. any two equivalent countable atomic models of a theory are isomorphic. the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, almost surely produces a unique graph, the Rado graph. any two many-one complete recursively enumerable sets are recursively isomorphic. Application to densely ordered sets As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic. Suppose that (A, ≤A) and (B, ≤B) are linearly ordered sets such that: they are both unbounded, in other words neither A nor B has either a maximum or a minimum; they are densely ordered, i.e. between any two members there is another; and they are countably infinite. Fix enumerations (without repetition) of the underlying sets: A = { a1, a2, a3, ... }, B = { b1, b2, b3, ... }. Now we construct a one-to-one correspondence between A and B that is strictly increasing, as in the sketch below. Initially no member of A is paired with any member of B. (1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired wi
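The alternating steps can be executed literally on finite prefixes of two enumerations. A minimal sketch, using Python's standard fractions module; the two enumerated orders (the rationals and the non-integer dyadic rationals, both countable, dense, and unbounded) and the prefix lengths are illustrative choices:

from fractions import Fraction

def rationals(limit):
    # One fixed enumeration of the rationals, simplest values first.
    seen, out = set(), []
    for d in range(1, limit):
        for n in sorted(range(-limit, limit + 1), key=abs):
            q = Fraction(n, d)
            if q not in seen:
                seen.add(q)
                out.append(q)
    return out

def half_odd(limit):
    # An enumeration of the non-integer dyadic rationals (odd n / 2^e):
    # another countable, dense, unbounded linear order without endpoints.
    return [Fraction(n, 2 ** e) for e in range(1, 10)
            for n in sorted(range(-limit, limit + 1), key=abs) if n % 2]

def extend(pairs, x, side, target):
    # Pick the first unused target element lying in the same position,
    # relative to the finite matching built so far, as x does on its side.
    lo = [p[1 - side] for p in pairs if p[side] < x]
    hi = [p[1 - side] for p in pairs if p[side] > x]
    used = {p[1 - side] for p in pairs}
    return next(y for y in target if y not in used
                and all(y > b for b in lo) and all(y < b for b in hi))

A, B = rationals(50), half_odd(512)   # finite prefixes of the two enumerations
pairs = []
for step in range(10):
    if step % 2 == 0:                 # "forth": take the next unmatched element of A
        a = next(q for q in A if all(p[0] != q for p in pairs))
        pairs.append((a, extend(pairs, a, 0, B)))
    else:                             # "back": take the next unmatched element of B
        b = next(q for q in B if all(p[1] != q for p in pairs))
        pairs.append((extend(pairs, b, 1, A), b))
print(sorted(pairs))                  # a strictly increasing partial isomorphism

Because both steps alternate, every element of each order is eventually matched, which is exactly why the limit of these partial maps is a bijection.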
https://en.wikipedia.org/wiki/Correlation%20coefficient
A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables. The variables may be two columns of a given data set of observations, often called a sample, or two components of a multivariate random variable with a known distribution. Several types of correlation coefficient exist, each with their own definition and own range of usability and characteristics. They all assume values in the range from −1 to +1, where ±1 indicates the strongest possible relationship (perfect agreement at +1, perfect disagreement at −1) and 0 indicates no relationship. As tools of analysis, correlation coefficients present certain problems, including the propensity of some types to be distorted by outliers and the possibility of incorrectly being used to infer a causal relationship between the variables (for more, see Correlation does not imply causation). Types There are several different measures for the degree of correlation in data, depending on the kind of data: principally whether the data is a measurement, ordinal, or categorical. Pearson The Pearson product-moment correlation coefficient, also known as r, R, or Pearson's r, is a measure of the strength and direction of the linear relationship between two variables that is defined as the covariance of the variables divided by the product of their standard deviations. This is the best-known and most commonly used type of correlation coefficient. When the term "correlation coefficient" is used without further qualification, it usually refers to the Pearson product-moment correlation coefficient. Intra-class Intraclass correlation (ICC) is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other. Rank Rank correlation is a measure of the relationship between the rankings of two variables, or two rankings of the same variable: Spearman's rank correlation coefficient is
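The Pearson definition above (covariance divided by the product of standard deviations) is direct to compute. A minimal sketch, assuming NumPy; the data are illustrative:

import numpy as np

def pearson_r(x, y):
    # r = cov(x, y) / (std(x) * std(y)), computed from deviations about the means.
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx * dx).sum() * (dy * dy).sum()))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.5 * x - 1.0                       # perfectly linear, so r should be +1
print(pearson_r(x, y))                  # 1.0 (up to rounding)
print(float(np.corrcoef(x, y)[0, 1]))   # NumPy's built-in estimate agrees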
https://en.wikipedia.org/wiki/Motronic
Motronic is the trade name given to a range of digital engine control units developed by Robert Bosch GmbH (commonly known as Bosch) which combined control of fuel injection and ignition in a single unit. By controlling both major systems in a single unit, many aspects of the engine's characteristics (such as power, fuel economy, drivability, and emissions) can be improved. Motronic 1.x Motronic M1.x is powered by various i8051 derivatives made by Siemens, usually SAB80C515 or SAB80C535. Code/data is stored in DIL or PLCC EPROM and ranges from 32k to 128k. 1.0 Often known as "Motronic basic", Motronic ML1.x was one of the first digital engine-management systems developed by Bosch. These early Motronic systems integrated the spark timing element with then-existing Jetronic fuel injection technology. It was originally developed and first used in the BMW 7 Series, before being implemented on several Volvo and Porsche engines throughout the 1980s. The components of the Motronic ML1.x systems for the most part remained unchanged during production, although there are some differences in certain situations. The engine control module (ECM) receives information regarding engine speed, crankshaft angle, coolant temperature and throttle position. An air flow meter also measures the volume of air entering the induction system. If the engine is naturally aspirated, an air temperature sensor is located in the air flow meter to work out the air mass. However, if the engine is turbocharged, an additional charge air temperature sensor is used to monitor the temperature of the inducted air after it has passed through the turbocharger and intercooler, in order to accurately and dynamically calculate the overall air mass. Main system characteristics Fuel delivery, ignition timing, and dwell angle incorporated into the same control unit. Crank position and engine speed is determined by a pair of sensors reading from the flywheel. Separate constant idle speed system monitors and re
https://en.wikipedia.org/wiki/Timeout%20%28computing%29
In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including: A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time. A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal. An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal. Timeouts allow for more efficient usage of limited resources without requiring additional interaction from the agent on whose behalf those resources are being consumed. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting will be aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless, and some other action is necessary. Examples Specific examples include: In the Microsoft Windows and ReactOS command-line interfaces, the timeout command pauses the command processor for the specified number of seconds. In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever, if the client crashes or the network goes down. Open connections consume resources, and may prevent other clients from accessing the same mailbox. In HTTP persistent connections, the web server saves opened connections (which consume CPU time and memory). The web client does not have to send an "end of requests series" signal. Connections are closed
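In code, a timeout is simply a bound on a blocking wait. A minimal sketch using Python's standard socket module; the host, port, and five-second limit are illustrative:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)          # give up after 5 seconds instead of waiting forever
try:
    s.connect(("example.com", 80))   # illustrative host and port
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    data = s.recv(1024)    # this read is also bounded by the 5-second timeout
except socket.timeout:
    print("no response within the timeout period; taking fallback action")
finally:
    s.close()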
https://en.wikipedia.org/wiki/Resource%20%28biology%29
In biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. Resources can be consumed by one organism and, as a result, become unavailable to another organism. For plants, key resources are light, nutrients, water, and a place to grow. For animals, key resources are food, water, and territory. Key resources for plants Terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal: Carbon dioxide Microsite (ecology) Nutrients Pollination Seed dispersal Soil Water Key resources for animals Animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction: Foraging Territory Water Resources and ecological processes Resource availability plays a central role in ecological processes: Carrying capacity Biological competition Liebig's law of the minimum Niche differentiation See also Abiotic component Biotic component Community ecology Ecology Population ecology Plant ecology size-asymmetric competition
https://en.wikipedia.org/wiki/Memory%20management%20controller%20%28Nintendo%29
Multi-memory controllers or memory management controllers (MMC) are different kinds of special chips designed by various video game developers for use in Nintendo Entertainment System (NES) cartridges. These chips extend the capabilities of the original console and make it possible to create NES games with features the original console cannot offer alone. The basic NES hardware supports only 40KB of ROM total, up to 32KB PRG and 8KB CHR, thus only a single tile and sprite table are possible. This limit was rapidly reached within the Famicom's first two years on the market and game developers began requesting a way to expand the console's capabilities. In the emulation community these chips are also known as mappers. List of MMC chips CNROM Manufacturer: Nintendo Games: Gradius, Ghostbusters, Gyruss, Arkanoid CNROM is the earliest banking hardware introduced on the Famicom, appearing in early 1986. It consists of a single 7400 series discrete logic chip. CNROM supports a single fixed PRG bank and up to eight CHR banks for 96KB total ROM. Some third party variations supported additional capabilities. Many CNROM games store the game level data in the CHR ROM and blank the screen while reading it. UNROM Manufacturer: Nintendo Games: Pro Wrestling, Ikari Warriors, Mega Man, Contra, Castlevania Early NES mappers are composed of 7400 series discrete logic chips. UNROM appeared in late 1986. It supports a single fixed 16KB PRG bank, the rest of the PRG being switchable. Instead of a dedicated ROM chip to hold graphics data (called CHR by Nintendo), games using UNROM store graphics data on the program ROM and copy it to a RAM on the cartridge at run time. MMC1 Manufacturer: Nintendo Games: The Legend of Zelda, Mega Man 2, Metroid, Godzilla: Monster of Monsters, Teenage Mutant Ninja Turtles, and more. The MMC1 is Nintendo's first custom MMC integrated circuit to incorporate support for saved games and multi-directional scrolling configurations. The chip comes in
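Bank switching of this kind is easy to model: the CPU always sees the same fixed address window, and the mapper redirects part of that window into a larger ROM. A minimal sketch of a UNROM-style PRG mapping (switchable first bank, fixed last bank); the class and its behavior are a simplified illustration, not a cycle-accurate emulator:

PRG_BANK_SIZE = 16 * 1024

class UnromStyleMapper:
    """Toy model of UNROM-style PRG banking (illustrative only)."""
    def __init__(self, prg_rom):
        assert len(prg_rom) % PRG_BANK_SIZE == 0
        self.prg = prg_rom
        self.n_banks = len(prg_rom) // PRG_BANK_SIZE
        self.bank = 0                     # currently selected switchable bank

    def cpu_read(self, addr):
        # CPU $8000-$BFFF: switchable bank; $C000-$FFFF: fixed last bank.
        if 0x8000 <= addr <= 0xBFFF:
            return self.prg[self.bank * PRG_BANK_SIZE + (addr - 0x8000)]
        if 0xC000 <= addr <= 0xFFFF:
            return self.prg[(self.n_banks - 1) * PRG_BANK_SIZE + (addr - 0xC000)]
        raise ValueError("address outside the PRG ROM window")

    def cpu_write(self, addr, value):
        # Writing to ROM space latches the bank-select register.
        if 0x8000 <= addr <= 0xFFFF:
            self.bank = value % self.n_banks

rom = bytes(b % 256 for b in range(8 * PRG_BANK_SIZE))  # 128 KB of dummy PRG
m = UnromStyleMapper(rom)
m.cpu_write(0x8000, 3)        # select bank 3 into the $8000-$BFFF window
print(m.cpu_read(0x8000))     # reads the first byte of bank 3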
https://en.wikipedia.org/wiki/Biology%20by%20Team
Biology by Team (in German: Biologie im Team) is the first Austrian biology contest for upper secondary schools. Students at upper secondary schools who are especially interested in biology can deepen their knowledge and broaden their competence in experimental biology within the framework of this contest. Each year, a team of teachers chooses modules of key themes on which students work in the form of a voluntary exercise. The evaluation focuses in particular on the practical work, and, since the school year 2004/05, also on teamwork. In April, a two-day closing competition takes place, in which six groups of students from participating schools are given various problems to solve. A jury (persons from the science and corporate communities) evaluates the results and how they are presented. The concept was developed by a team of teachers in co-operation with the AHS (Academic Secondary Schools) Department of the Pedagogical Institute in Carinthia. Since 2008 it has been situated at the Science Department of the University College of Teacher Training Carinthia. The first contest in the school year 2002/03 took place under the motto: Hell is loose in the ground under us. Other themes included Beautiful but dangerous, www-worldwide water 1 and 2, Expedition forest, Relationship boxes, Mole's view, Biological timetravel, Biology at the University, Ecce Homo, Biodiversity, Death in tin cans, Sex sells, Without a trace, Biologists see more, Quo vadis biology?, Biology without limits?, Diversity instead of simplicity, Grid square, Diversity instead of simplicity 0.2, and www-worldwide water 3. The theme for the year 2023/24 is I hear something you don't see. To date, the following schools have participated: BG/BRG Mössingerstraße Klagenfurt Ingeborg-Bachmann-Gymnasium, Klagenfurt BG/BRG St. Martinerstraße Villach BG/BRG Peraustraße Villach International school Carinthia, Velden Österreichisches Gymnasium Prag Europagymnasium Klagenfurt BRG Viktring Klagenfurt BORG Wo
https://en.wikipedia.org/wiki/Locus%20suicide%20recombination
Locus suicide recombination (LSR) constitutes a variant form of class switch recombination that eliminates all immunoglobulin heavy chain constant genes. It thus terminates immunoglobulin and B-cell receptor (BCR) expression in B-lymphocytes and results in B-cell death, since survival of such cells requires BCR expression. This process is initiated by the enzyme activation-induced deaminase upon B-cell activation. LSR is thus one of the pathways that can result in activation-induced cell death in the B-cell lineage.
https://en.wikipedia.org/wiki/List%20of%20common%20coordinate%20transformations
This is a list of some of the most commonly used coordinate transformations. 2-dimensional Let (x, y) be the standard Cartesian coordinates, and (r, θ) the standard polar coordinates. To Cartesian coordinates From polar coordinates: x = r cos θ, y = r sin θ. From log-polar coordinates: x = e^ρ cos θ, y = e^ρ sin θ. By using complex numbers z = x + iy, the transformation can be written as z = e^(ρ + iθ). That is, it is given by the complex exponential function. From bipolar coordinates From 2-center bipolar coordinates From Cesàro equation To polar coordinates From Cartesian coordinates: r = √(x² + y²), θ = arctan(y/x). Note: solving with arctan(y/x) alone returns the resultant angle in the first quadrant (0 ≤ θ < π/2). To find θ, one must refer to the original Cartesian coordinate, determine the quadrant in which (x, y) lies (for example, (3, −3) [Cartesian] lies in QIV), then adjust the arctangent by the appropriate multiple of π for that quadrant. The value for θ must be solved for in this manner because tan θ is undefined at odd multiples of π/2 and is periodic (with period π). This means that the inverse function will only give values in the domain of the function, but restricted to a single period. Hence, the range of the inverse function is only half a full circle. Note that one can also use the two-argument function θ = atan2(y, x), which resolves the quadrant directly. From 2-center bipolar coordinates, where 2c is the distance between the poles. To log-polar coordinates from Cartesian coordinates: ρ = ln √(x² + y²), θ = arctan(y/x). Arc-length and curvature In Cartesian coordinates In polar coordinates 3-dimensional Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (0 ≤ θ ≤ π; see conventions in spherical coordinates). As φ has a range of 360° the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware for an arctangent. If, in the alternative definition, θ is chosen to run from −90° to +90°, in opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccota
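The quadrant bookkeeping described above is exactly what the two-argument arctangent does. A minimal sketch using Python's standard math module:

import math

def to_cartesian(r, theta):
    # x = r cos(theta), y = r sin(theta)
    return r * math.cos(theta), r * math.sin(theta)

def to_polar(x, y):
    # r = sqrt(x^2 + y^2); atan2 resolves the quadrant automatically,
    # returning theta in (-pi, pi] instead of only first-quadrant values.
    return math.hypot(x, y), math.atan2(y, x)

print(to_polar(3, -3))                  # (4.2426..., -0.7853...): quadrant IV, as expected
print(to_cartesian(*to_polar(3, -3)))   # round-trips to (3.0, -3.0) up to rounding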
https://en.wikipedia.org/wiki/Least-squares%20spectral%20analysis
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long, gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA. Developed in 1969 and 1971, LSSA is also known as the Vaníček method and the Gauss–Vaníček method after Petr Vaníček, and as the Lomb method or the Lomb–Scargle periodogram, based on the simplifications first by Nicholas R. Lomb and then by Jeffrey D. Scargle. Historical background The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time. However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what nowadays is called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms — and connected by a procedure known today as the matching pursuit with post-back fitting or the orthogonal matching pursuit. Petr Vaníček, a Canadian geophysicist and geodesist of the University of New Brunswick, also proposed, in 1969, the matching-pursuit approach for equally and unequally spaced data, which he called "successive spectral analysis" and the result a "least-squares periodogram". He generalized this method to account for any systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples, in 1971. Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out i
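The core operation is an ordinary least-squares fit of a cosine and sine pair at each trial frequency to unevenly spaced samples. A minimal sketch, assuming NumPy; the test signal, noise level, and frequency grid are illustrative, and this is the plain least-squares formulation rather than Lomb's optimized recurrence:

import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 20, 120))           # unevenly spaced sample times
y = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                                  # remove the mean (a systematic component)

freqs = np.linspace(0.05, 2.0, 400)
power = np.empty_like(freqs)
for i, f in enumerate(freqs):
    # Design matrix with a cosine and a sine column at this trial frequency.
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = A @ coef
    power[i] = (r @ r) / (y @ y)               # fraction of variance explained

print(freqs[power.argmax()])                   # peaks near the true 0.8 cycles/unit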
https://en.wikipedia.org/wiki/Perceptual%20control%20theory
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels. Principles and differences from other theories Perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory, and in the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology, it sets out from the concept of circular causality. It therefore shares its theoretical foundation with the concept of plant control, but is distinct from it in emphasizing the control of the internal representation of the physical world. Plant control theory focuses on neuro-computational processes of movement generation once a decision to generate the movement has been taken. PCT spotlights the embeddedness of agents in their environment
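The negative feedback loop at the heart of PCT can be simulated in a few lines: the output acts on the environment so that the perceived value is driven toward an internally held reference, even when the environment is disturbed. A minimal sketch; the gain, reference, and disturbance are invented for illustration:

# Toy negative-feedback control loop: perception tracks a reference value.
reference = 10.0     # internally maintained reference (e.g., desired temperature)
gain = 0.5           # output gain of the control function
env = 0.0            # state of the environmental variable being sensed
output = 0.0

for step in range(50):
    disturbance = 3.0 if step >= 25 else 0.0   # external push on the environment
    perception = env                           # sensed value of the variable
    error = reference - perception             # comparator
    output += gain * error                     # integrating output function
    env = output + disturbance                 # environmental feedback path

print(round(env, 3))   # settles back near the reference despite the disturbance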
https://en.wikipedia.org/wiki/Hermite%20constant
In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be. The constant γn for integers n > 0 is defined as follows. For a lattice L in Euclidean space Rn with unit covolume, i.e. vol(Rn/L) = 1, let λ1(L) denote the least length of a nonzero element of L. Then √γn is the maximum of λ1(L) over all such lattices L. The square root in the definition of the Hermite constant is a matter of historical convention. Alternatively, the Hermite constant γn can be defined as the square of the maximal systole of a flat n-dimensional torus of unit volume. Example The Hermite constant is known in dimensions 1–8 and 24. For n = 2, one has γ2 = 2/√3 ≈ 1.1547. This value is attained by the hexagonal lattice of the Eisenstein integers. Estimates It is known that γn ≤ (4/3)^((n−1)/2). A stronger estimate due to Hans Frederick Blichfeldt is γn ≤ (2/π) Γ(2 + n/2)^(2/n), where Γ is the gamma function. See also Loewner's torus inequality
https://en.wikipedia.org/wiki/Unsaturated%20fat
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond. A saturated fat has no carbon-to-carbon double bonds, and thus has the maximum possible number of hydrogens bonded to the carbons; it is "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid), the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation. Composition of common fats In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation using gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography. The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component. Chemistry and nutrition Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective. Examples
https://en.wikipedia.org/wiki/Ultra-processed%20food
Ultra-processed food (UPF) is an industrially formulated edible substance derived from natural food or synthesized from other organic compounds. The resulting products are designed to be highly profitable, convenient, and hyperpalatable, often through food additives such as preservatives, colourings, and flavourings. The state of research into ultra-processed foods and their effects is evolving rapidly as of 2023. Epidemiological data suggest that consumption of ultra-processed foods is associated with higher risks of certain diseases, including obesity, type 2 diabetes, cardiovascular diseases, and certain types of cancer. Researchers also present ultra-processing as a facet of environmental degradation caused by the food industry. Definitions Concerns around food processing have existed since at least the Industrial Revolution. Many critics identified 'processed food' as problematic, and movements such as raw foodism attempted to eschew food processing entirely, but since even basic cookery results in processed food, this concept failed in itself to influence public policy surrounding the epidemiology of obesity. Michael Pollan's influential book The Omnivore's Dilemma referred to highly processed industrial food as 'edible food-like substances'. Carlos Augusto Monteiro cited Pollan as an influence in coining the term 'ultra-processed food' in a 2009 commentary. Monteiro's team developed the Nova classification for grouping unprocessed and processed foods beginning in 2010, whose definition of ultra-processing has become most widely accepted and has gradually become more refined through successive publications. The identification of ultra-processed foods, as well as the category itself, is a subject of debate among nutrition and public health scientists, and other definitions have been proposed. A survey of systems for classifying levels of food processing in 2021 identified four 'defining themes': Extent of change (from natural state); Nature of change (p
https://en.wikipedia.org/wiki/Floor%20and%20ceiling%20functions
In mathematics and computer science, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x). For example, for floor: ⌊2.4⌋ = 2, ⌊−2.4⌋ = −3, and for ceiling: ⌈2.4⌉ = 3, and ⌈−2.4⌉ = −2. Historically, the floor of x has been–and still is–called the integral part or integer part of x, often denoted [x] (as well as a variety of other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For n an integer, ⌊n⌋ = ⌈n⌉ = n. Although ⌊x⌋ + 1 and ⌈x⌉ produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when x = 2.0001, ⌊x⌋ + 1 = ⌈x⌉ = 3. However, if x = 2, then ⌊x⌋ + 1 = 3, while ⌈x⌉ = 2. Notation The integral part or integer part of a number (partie entière in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation [x] in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ⌊x⌋ and ⌈x⌉. (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article. In some sources, boldface or double brackets ⟦x⟧ are used for floor, and reversed brackets ⌉x⌈ or ]x[ for ceiling. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. For all x, 0 ≤ {x} < 1. These characters are provided in Unicode: U+2308 ⌈, U+2309 ⌉, U+230A ⌊, U+230B ⌋. In the LaTeX typesetting system, these symbols can be specified with the \lfloor, \rfloor, \lceil and \rceil commands in math mode, and extended in size using \left\lfloor, \right\rfloor, \left\lceil and \right\rceil as needed. Some authors define [x] as the round-toward-zero function, so [2.4] = 2 and [−2.4] = −2, and call i
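The distinction at exact integers is easy to check directly. A minimal sketch using Python's standard math module:

import math

for x in (2.0001, 2.0, -2.4):
    # floor(x) + 1 equals ceil(x) except when x is an exact integer.
    print(x, math.floor(x), math.ceil(x), math.floor(x) + 1 == math.ceil(x))
# 2.0001 ->  2,  3, True
# 2.0    ->  2,  2, False (floor(x) + 1 = 3, but ceil(x) = 2)
# -2.4   -> -3, -2, True

print(math.trunc(-2.4))    # -2: round-toward-zero "integer part", unlike floor(-2.4) = -3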
https://en.wikipedia.org/wiki/List%20of%20types%20of%20functions
In mathematics, functions can be identified according to the properties they have. These properties describe the functions' behaviour under certain conditions. A parabola is a specific type of function. Relative to set theory These properties concern the domain, the codomain and the image of functions. Injective function: has a distinct value for each distinct input. Also called an injection or, sometimes, one-to-one function. In other words, every element of the function's codomain is the image of at most one element of its domain. Surjective function: has a preimage for every element of the codomain, that is, the codomain equals the image. Also called a surjection or onto function. Bijective function: is both an injection and a surjection, and thus invertible. Identity function: maps any given element to itself. Constant function: has a fixed value regardless of its input. Empty function: whose domain equals the empty set. Set function: whose input is a set. Choice function called also selector or uniformizing function: assigns to each set one of its elements. Relative to an operator (c.q. a group or other structure) These properties concern how the function is affected by arithmetic operations on its argument. The following are special examples of a homomorphism on a binary operation: Additive function: preserves the addition operation: f(x + y) = f(x) + f(y). Multiplicative function: preserves the multiplication operation: f(xy) = f(x)f(y). Relative to negation: Even function: is symmetric with respect to the Y-axis. Formally, for each x: f(x) = f(−x). Odd function: is symmetric with respect to the origin. Formally, for each x: f(−x) = −f(x). Relative to a binary operation and an order: Subadditive function: for which the value of f(x + y) is less than or equal to f(x) + f(y). Superadditive function: for which the value of f(x + y) i
https://en.wikipedia.org/wiki/T.C.%20Mits
T.C. Mits (acronym for "the celebrated man in the street") is a term coined by Lillian Rosanoff Lieber to refer to an everyman. In Lieber's works, T.C. Mits was a character who made scientific topics more approachable to the public audience. The phrase has enjoyed sparse use by authors in fields such as molecular biology, secondary education, and general semantics. The Education of T.C. MITS Dr. Lillian Rosanoff Lieber wrote this treatise on mathematical thinking in twenty chapters. The writing took a form that resembled free-verse poetry, though Lieber included an introduction stating that the form was meant only to facilitate rapid reading, rather than emulate free-verse. Lieber's husband, a fellow professor at Long Island University, Hugh Gray Lieber, provided illustrations for the book. The title of the book was meant to emphasize that mathematics can be understood by anyone, which was further shown when a special "Overseas edition for the Armed Forces" was published in 1942, and approved by the Council on Books in Wartime to be sent to American troops fighting in World War II. See also John Doe The man on the Clapham omnibus
https://en.wikipedia.org/wiki/List%20of%20q-analogs
This is a list of q-analogs in mathematics and related fields. Algebra Iwahori–Hecke algebra Quantum affine algebra Quantum enveloping algebra Quantum group Analysis Jackson integral q-derivative q-difference polynomial Quantum calculus Combinatorics LLT polynomial q-binomial coefficient q-Pochhammer symbol q-Vandermonde identity Orthogonal polynomials q-Bessel polynomials q-Charlier polynomials q-Hahn polynomials q-Jacobi polynomials: Big q-Jacobi polynomials Continuous q-Jacobi polynomials Little q-Jacobi polynomials q-Krawtchouk polynomials q-Laguerre polynomials q-Meixner polynomials q-Meixner–Pollaczek polynomials q-Racah polynomials Probability and statistics Gaussian q-distribution q-exponential distribution q-Weibull distribution Tsallis q-Gaussian Tsallis entropy Special functions Basic hypergeometric series Elliptic gamma function Hahn–Exton q-Bessel function Jackson q-Bessel function q-exponential q-gamma function q-theta function See also Lists of mathematics topics Q-analogs
https://en.wikipedia.org/wiki/Signal%20reconstruction
In signal processing, reconstruction usually means the determination of an original continuous signal from a sequence of equally spaced samples. This article takes a generalized abstract mathematical approach to signal sampling and reconstruction. For a more practical approach based on band-limited signals, see Whittaker–Shannon interpolation formula. General principle Let F be any sampling method, i.e. a linear map from the Hilbert space L² of square-integrable functions to complex space Cⁿ. In our example, the vector space of sampled signals Cⁿ is n-dimensional complex space. Any proposed inverse R of F (reconstruction formula, in the lingo) would have to map Cⁿ to some subset of L². We could choose this subset arbitrarily, but if we're going to want a reconstruction formula R that is also a linear map, then we have to choose an n-dimensional linear subspace of L². This fact that the dimensions have to agree is related to the Nyquist–Shannon sampling theorem. The elementary linear algebra approach works here. Let ek = (0, ..., 0, 1, 0, ..., 0) (all entries zero, except for the kth entry, which is a one) or some other basis of Cⁿ. To define an inverse for F, simply choose, for each k, an fk ∈ L² so that F(fk) = ek. This uniquely defines the (pseudo-)inverse of F. Of course, one can choose some reconstruction formula first, then either compute some sampling algorithm from the reconstruction formula, or analyze the behavior of a given sampling algorithm with respect to the given formula. Ideally, the reconstruction formula is derived by minimizing the expected error variance. This requires that either the signal statistics is known or a prior probability for the signal can be specified. Information field theory is then an appropriate mathematical formalism to derive an optimal reconstruction formula. Popular reconstruction formulae Perhaps the most widely used reconstruction formula is as follows. Let {ek} be a basis of L² in the Hilbert space sense; for instance, one could use the eikonal ek(t) = e^(2πikt), although other choices are
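For the band-limited case mentioned above, the Whittaker–Shannon formula reconstructs the signal as a sum of shifted sinc functions centered at the sample instants. A minimal sketch, assuming NumPy; the sampling rate, tone frequency, and finite sample window are illustrative (a finite window introduces small truncation error):

import numpy as np

T = 0.1                                     # sampling interval (fs = 10 Hz)
n = np.arange(-50, 51)                      # finite window of sample indices
samples = np.sin(2 * np.pi * 2.0 * n * T)   # a 2 Hz tone, well below fs/2

def reconstruct(t):
    # x(t) = sum_n x[n] * sinc((t - nT) / T); np.sinc(u) = sin(pi u) / (pi u)
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t0 = 0.123                                  # an off-grid time instant
print(reconstruct(t0), np.sin(2 * np.pi * 2.0 * t0))   # close agreement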
https://en.wikipedia.org/wiki/List%20of%20quasiparticles
This is a list of quasiparticles.
https://en.wikipedia.org/wiki/Martingale%20difference%20sequence
In probability theory, a martingale difference sequence (MDS) is related to the concept of the martingale. A stochastic series X is an MDS if its expectation with respect to the past is zero. Formally, consider an adapted sequence {Xt, Ft} on a probability space (Ω, F, P). Xt is an MDS if it satisfies the following two conditions: E|Xt| < ∞, and E[Xt | Ft−1] = 0 almost surely, for all t. By construction, this implies that if Yt is a martingale, then Xt = Yt − Yt−1 will be an MDS—hence the name. The MDS is an extremely useful construct in modern probability theory because it implies much milder restrictions on the memory of the sequence than independence, yet most limit theorems that hold for an independent sequence will also hold for an MDS. A special case of an MDS, denoted {Xt, Ft} for t ≥ 0, is known as the innovation sequence of Sn, where Sn and Fn correspond to a random walk and the filtration of the random process. In probability theory the innovation series is used to emphasize the generality of the Doob representation. In signal processing the innovation series is used to introduce the Kalman filter. The main difference between the two uses of the term lies in the applications: the latter application aims to capture the new information that each sample brings into the model.
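The defining property is easy to check empirically: differences of a martingale have conditional mean zero given the past. A minimal sketch, assuming NumPy; the two-step random-walk construction is illustrative:

import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=(100000, 2))   # many independent 2-step paths
Y = steps.cumsum(axis=1)          # Y_t: a simple martingale (a random walk)
X = Y[:, 1] - Y[:, 0]             # X_2 = Y_2 - Y_1, a martingale difference

# E[X_2 | Y_1] should be ~0 for each value of the past; check both branches.
print(X[Y[:, 0] == 1.0].mean())   # close to 0
print(X[Y[:, 0] == -1.0].mean())  # close to 0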
https://en.wikipedia.org/wiki/Memory%20ordering
Memory ordering describes the order of accesses to computer memory by a CPU. The term can refer either to the memory ordering generated by the compiler during compile time, or to the memory ordering generated by a CPU during runtime. In modern microprocessors, memory ordering characterizes the CPU's ability to reorder memory operations – it is a type of out-of-order execution. Memory reordering can be used to fully utilize the bus bandwidth of different types of memory such as caches and memory banks. On most modern uniprocessors memory operations are not executed in the order specified by the program code. In single-threaded programs all operations appear to have been executed in the order specified, with all out-of-order execution hidden from the programmer – however, in multi-threaded environments (or when interfacing with other hardware via memory buses) this can lead to problems. To avoid problems, memory barriers can be used in these cases. Compile-time memory ordering Most programming languages have some notion of a thread of execution which executes statements in a defined order. Traditional compilers translate high-level expressions to a sequence of low-level instructions relative to a program counter at the underlying machine level. Execution effects are visible at two levels: within the program code at a high level, and at the machine level as viewed by other threads or processing elements in concurrent programming, or during debugging when using a hardware debugging aid with access to the machine state (some support for this is often built directly into the CPU or microcontroller as functionally independent circuitry apart from the execution core which continues to operate even when the core itself is halted for static inspection of its execution state). Compile-time memory order concerns itself with the former, and does not concern itself with these other views. General issues of program order Program-order effects of expression evaluation Durin
https://en.wikipedia.org/wiki/Free%20convolution
Free convolution is the free probability analog of the classical notion of convolution of probability measures. Due to the non-commutative nature of free probability theory, one has to talk separately about additive and multiplicative free convolution, which arise from addition and multiplication of free random variables (see below; in the classical case, what would be the analog of free multiplicative convolution can be reduced to additive convolution by passing to logarithms of random variables). These operations have some interpretations in terms of empirical spectral measures of random matrices. The notion of free convolution was introduced by Dan-Virgil Voiculescu. Free additive convolution Let μ and ν be two probability measures on the real line, and assume that X is a random variable in a non-commutative probability space with law μ and Y is a random variable in the same non-commutative probability space with law ν. Assume finally that X and Y are freely independent. Then the free additive convolution μ ⊞ ν is the law of X + Y. Random matrices interpretation: if A and B are some independent n by n Hermitian (resp. real symmetric) random matrices such that at least one of them is invariant, in law, under conjugation by any unitary (resp. orthogonal) matrix and such that the empirical spectral measures of A and B tend respectively to μ and ν as n tends to infinity, then the empirical spectral measure of A + B tends to μ ⊞ ν. In many cases, it is possible to compute the probability measure μ ⊞ ν explicitly by using complex-analytic techniques and the R-transform of the measures μ and ν. Rectangular free additive convolution The rectangular free additive convolution (with ratio c) has also been defined in the non-commutative probability framework by Benaych-Georges and admits the following random matrices interpretation. For c in [0, 1], if A and B are some independent n by p complex (resp. real) random matrices such that at least one of them is invariant, in law, under multiplication on the left and on the r
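The random-matrix interpretation suggests a direct numerical check: the eigenvalue distribution of A + U B U* for a Haar-random unitary U approximates the free additive convolution of the two spectral laws. A minimal sketch, assuming NumPy; the matrix size and the two ±1 spectra are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 500
# Two deterministic spectra: both are symmetric Bernoulli (+1/-1) laws.
A = np.diag(rng.choice([-1.0, 1.0], size=n))
B = np.diag(rng.choice([-1.0, 1.0], size=n))

# Haar-random unitary via QR decomposition of a complex Ginibre matrix.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))   # fix phases for Haar measure

C = A + Q @ B @ Q.conj().T
eig = np.linalg.eigvalsh(C)
# The histogram of eig approximates the free convolution of the two +/-1 laws
# (an arcsine law spread over [-2, 2]), not the classical convolution, which
# would put mass only on {-2, 0, 2}.
print(float(eig.min()), float(eig.max()))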