https://en.wikipedia.org/wiki/Hypsochromic%20shift
In spectroscopy, hypsochromic shift is a change of spectral band position in the absorption, reflectance, transmittance, or emission spectrum of a molecule to a shorter wavelength (higher frequency). Because the blue color in the visible spectrum has a shorter wavelength than most other colors, this effect is also commonly called a blue shift. It should not be confused with a bathochromic shift, which is the opposite process – the molecule's spectra are changed to a longer wavelength (lower frequency). Hypsochromic shifts can occur because of a change in environmental conditions: for example, a change in solvent polarity will result in solvatochromism. A series of structurally related molecules in a substitution series can also show a hypsochromic shift. Hypsochromic shift is a phenomenon seen in molecular spectra, not atomic spectra; it is thus more common to speak of the movement of the peaks in the spectrum rather than lines. The shift can be written λ_new < λ_old, where λ is the wavelength of the spectral peak of interest. For example, β-acylpyrroles show a hypsochromic shift of 30–40 nm in comparison with α-acylpyrroles. See also Bathochromic shift, a change in band position to a longer wavelength (lower frequency).
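The sign convention above is simple enough to check in code. Below is a minimal Python sketch; the helper name and the 420 nm to 385 nm example are mine, chosen to echo the 30–40 nm acylpyrrole figure cited in the text.

def classify_shift(lambda_old_nm: float, lambda_new_nm: float) -> str:
    """Classify a spectral band shift by comparing peak wavelengths."""
    if lambda_new_nm < lambda_old_nm:
        return "hypsochromic (blue) shift"
    if lambda_new_nm > lambda_old_nm:
        return "bathochromic (red) shift"
    return "no shift"

# Illustrative: a peak moving from 420 nm to 385 nm is a 35 nm hypsochromic
# shift, comparable to the 30-40 nm shift cited for beta-acylpyrroles.
print(classify_shift(420.0, 385.0))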
https://en.wikipedia.org/wiki/Hank%20%28unit%20of%20measure%29
In the textile industry, a hank is a coiled or wrapped unit of yarn or twine, as distinct from other materials such as thread or rope, and from other forms such as the ball, cone, bobbin (a cylinder-like structure), or spool. This is often the best form for use with hand looms, compared to the cone form needed for power looms. Hanks come in varying lengths depending on the type of material and the manufacturer. For instance, a hank of linen is often 300 yards, and a hank of cotton or silk is 840 yards. While hanks may differ by manufacturer and by product, a skein is usually considered 1/6th of a hank (either by weight or by length). One source lists specific skein lengths for stranded cotton, tapestry wool, and crewel wool. In yarns for handcrafts such as knitting or crochet, hanks are not a fixed length but are sold in units by weight, most commonly 50 grams. Depending on the thickness of the strand as well as the inherent density of the material, hanks can range widely in yardage per 50 gram unit; for example, 440 yards for a lace weight mohair, to 60 yards for a chunky weight cotton. Special treatments to the materials that add cost, such as mercerisation or labor-intensive hand-painting of colors, can influence a manufacturer's desired length per unit as well. Knitters and crocheters rewind the hanks into balls or centre-pull skeins prior to use, in order to prevent the yarn from becoming tangled. In the meat industry, sheep, lamb, or hog sausage casings are sold by the hank, a unit of measure equal to a fixed length set by the trade. See also Spool
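The unit relationships described above are easy to encode. A minimal Python sketch follows; the 840- and 300-yard hanks are the customary cotton and linen values noted above, and the per-50-gram yardages are the article's own examples, so treat the numbers as illustrative rather than normative.

COTTON_HANK_YD = 840   # customary cotton/silk hank length
LINEN_HANK_YD = 300    # customary linen hank length

def skein_from_hank(hank_yd: float) -> float:
    # A skein is usually taken as 1/6 of a hank (by length or weight).
    return hank_yd / 6.0

def yards_per_gram(yards_per_50g: float) -> float:
    # Handcraft yarn is sold by weight; yardage per 50 g varies with thickness.
    return yards_per_50g / 50.0

print(skein_from_hank(COTTON_HANK_YD))   # 140.0 yd per cotton skein
print(yards_per_gram(440))               # lace-weight mohair: 8.8 yd/g
print(yards_per_gram(60))                # chunky cotton: 1.2 yd/g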
https://en.wikipedia.org/wiki/Data%20General%20Eclipse
The Data General Eclipse was a line of 16-bit minicomputers made by Data General, released in early 1974 and sold until 1988. The Eclipse was based on many of the same concepts as the Data General Nova, but included support for virtual memory and multitasking, making it more suitable to the small office than the lab. It was also packaged differently for this reason, in a floor-standing case the size of a small refrigerator. The Eclipse series was supplanted by the 32-bit Data General Eclipse MV/8000 in 1980. Description The Data General Nova was intended to outperform the PDP-8 while being less expensive, and in a similar fashion, the Eclipse was meant to compete against the larger PDP-11 computers. It kept the simple register architecture of the Nova but added a stack pointer, which the Nova lacked. The stack pointer was added back to the later Nova 3 machines in 1975 and also used on the later 32-bit Data General Eclipse MV/8000. The AOS operating system was quite sophisticated and advanced compared to the PDP-11 offerings, with access control lists (ACLs) for file protection. Production problems with the Eclipse led to a rash of lawsuits in the late 1970s, after new versions of the machine were pre-ordered by many DG customers and then never arrived. After over a year of waiting, some decided to sue the company, while others simply cancelled their orders and went elsewhere. It appears that the Eclipse was originally intended to replace the Nova outright, as evidenced by the fact that the Nova 3 series released at the same time was phased out the next year. However, strong continuing demand resulted in the Nova 4, perhaps as a result of the continuing problems with the Eclipse. Facts The original Cray-1 system used an Eclipse to act as a Maintenance and Control Unit (MCU). It was configured with two Ampex CRTs, an 80 MB Ampex disk drive, a thermal printer, and a 9-track tape drive. Its primary purpose was to download an image of either the Cray Operating System or cu
https://en.wikipedia.org/wiki/New%20York%20State%20Mathematics%20League
The New York State Mathematics League (NYSML) competition was first held in 1973 and has been held annually since, in a different location each year. It was founded by Alfred Kalfus. The American Regions Math League competition is based on the format of the NYSML competition. The current iteration contains four sections: the team round, the power question, the individual round, and the relay, held in that order. All of these rounds are completed without a calculator. Each team can have up to fifteen students, which is the usual number per team. Like ARML, NYSML banned the use of calculators beginning with the 2009 contest. Competition Format There are four sections in the current iteration, all held in one day: A team round, where a team collaborates to solve ten questions in twenty minutes, for a possible 50 points. A power question, where a team has an hour to complete ten questions requiring proofs and explanations, for a possible 50 points. An individual round, where each team member has five groups of two questions to answer, with each group of questions taking ten minutes, totaling fifty minutes for ten questions, for a possible 150 points. A relay round, where teams are broken up into five groups of three if possible. There are three problems, with each member passing their answer to the next member until it reaches the third member, who can rise at 3 minutes for a correct answer to get 5 points, or rise at the time limit of 6 minutes for a correct answer to get 3 points. The maximum is fifty points. This brings the maximum total to 300 points. Past NYSML Competition Sites Past NYSML Winners Past NYSML Individual Winners (a.k.a. the Curt Boddie Award) External links NYSML Homepage https://artofproblemsolving.com/wiki/index.php/New_York_State_Math_League https://web.archive.org/web/20230000000000*/NYSML.com
https://en.wikipedia.org/wiki/Earthquake%20engineering
Earthquake engineering is an interdisciplinary branch of engineering that designs and analyzes structures, such as buildings and bridges, with earthquakes in mind. Its overall goal is to make such structures more resistant to earthquakes. An earthquake (or seismic) engineer aims to construct structures that will not be damaged in minor shaking and will avoid serious damage or collapse in a major earthquake. A properly engineered structure does not necessarily have to be extremely strong or expensive. It has to be properly designed to withstand the seismic effects while sustaining an acceptable level of damage. Definition Earthquake engineering is a scientific field concerned with protecting society, the natural environment, and the man-made environment from earthquakes by limiting the seismic risk to socio-economically acceptable levels. Traditionally, it has been narrowly defined as the study of the behavior of structures and geo-structures subject to seismic loading; it is considered a subset of structural engineering, geotechnical engineering, mechanical engineering, chemical engineering, applied physics, etc. However, the tremendous costs experienced in recent earthquakes have led to an expansion of its scope to encompass disciplines from the wider field of civil engineering, mechanical engineering, nuclear engineering, and from the social sciences, especially sociology, political science, economics, and finance. The main objectives of earthquake engineering are: to foresee the potential consequences of strong earthquakes on urban areas and civil infrastructure; and to design, construct, and maintain structures that perform under earthquake exposure up to expectations and in compliance with building codes. Seismic loading Seismic loading means application of an earthquake-generated excitation on a structure (or geo-structure). It happens at contact surfaces of a structure either with the ground, with adjacent structures, or with gravity waves from a tsunami. The lo
https://en.wikipedia.org/wiki/Codebase
In software development, a codebase (or code base) is a collection of source code used to build a particular software system, application, or software component. Typically, a codebase includes only human-written source code files; thus, a codebase usually does not include source code files generated by tools (generated files) or binary library files (object files), as they can be built from the human-written source code. However, it generally does include configuration and property files, as they are the data necessary for the build. A codebase is typically stored in a source control repository in a version control system. A source code repository is a place where large amounts of source code are kept, either publicly or privately. Source code repositories are used most basically for backups and versioning, and on multi-developer projects to handle various source code versions and to provide aid in resolving conflicts that arise from developers submitting overlapping modifications. Subversion, Git and Mercurial are examples of popular tools used to handle this workflow, which are common in open source projects. For smaller projects, the code may be kept as a non-managed set of files (even the Linux kernel was maintained as a set of files for many years). Distinct and monolithic codebases Multiple projects can have separate, distinct codebases, or can have a single, shared or monolithic codebase. This is particularly the case for related projects, such as those developed within the same company. In more detail, a monolithic codebase typically entails a single repository (all the code in one place), and often a common build system or common libraries. Whether the codebase is shared or split does not depend on the system architecture and actual build results; thus, a monolithic codebase, which is related to the actual development, does not entail a monolithic system, which is related to software architecture or a single monolithic binary. As a result, a monolithic codebase m
https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Ritz%20method
The Rayleigh–Ritz method is a direct numerical method for approximating eigenvalues, which originated in the context of solving physical boundary value problems and is named after Lord Rayleigh and Walther Ritz. It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz–Galerkin method. The Rayleigh–Ritz method or Ritz method terminology is typical in mechanical and structural engineering to approximate the eigenmodes and resonant frequencies of a structure. Naming and attribution The name Rayleigh–Ritz is debated against the plain Ritz method, after Walther Ritz, since the numerical procedure was published by Walther Ritz in 1908–1909. According to A. W. Leissa, Lord Rayleigh wrote a paper congratulating Ritz on his work in 1911, but stating that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, and the fact that the method in the trivial case of a single vector results in the Rayleigh quotient, make the arguable misnomer persist. According to S. Ilanko, citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other hand for numerical calculation of the solutions, by substituting for the variational problems simpler approximating extremum problems in which a finite number of parameters need to be determined; see the article Ritz method for details. Ironically for the debate, the modern justification of the algorithm drops the calculus of variations in f
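The matrix form of the procedure is short enough to show directly. Below is a minimal Python/NumPy sketch: project a symmetric operator onto a small trial subspace, solve the reduced eigenproblem, and read off Ritz values; the test matrix and random subspace are illustrative only (accuracy depends entirely on how well the subspace captures the eigenvector of interest). The single-vector case collapses to the Rayleigh quotient mentioned above.

import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric test operator (illustrative)

V = rng.standard_normal((n, 10))
V, _ = np.linalg.qr(V)            # orthonormal basis of the trial subspace

H = V.T @ A @ V                   # projected (reduced) operator, 10 x 10
ritz_vals, Y = np.linalg.eigh(H)  # Ritz values and reduced eigenvectors
ritz_vec = V @ Y[:, 0]            # Ritz vector lifted back to the full space

# With a single trial vector, the method reduces to the Rayleigh quotient:
v = V[:, 0]
rq = v @ A @ v / (v @ v)

print(ritz_vals[0], np.linalg.eigvalsh(A)[0], rq)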
https://en.wikipedia.org/wiki/IBM%20OS/6
OS/6 (Office System/6 or System 6) is a standalone word processor made by IBM's Office Products Division (OPD), introduced in January 1977. OS/6 was superseded by the IBM Displaywriter in 1980. Overview The intended configuration is a console with a keyboard, a small, approximately 9" CRT character display, and either a daisy wheel printer or the IBM 46/40 ink jet printer, renamed the IBM 6640. Documents are stored on 8-inch floppy diskettes and magnetic stripe cards, the latter exchangeable with IBM's previous generation of Mag Card Selectrics. The display is pre-WYSIWYG, so special symbols embedded in the displayed text mark formatting information the user can edit. Navigation is pre-mouse and uses arrow keys. In an age before PCs, when typing was still done primarily by clerical staff, the OS/6 was intended for what IBM envisioned as centralized word processing centers at large organizations. It includes features like mail merge, very high print quality with many formatting options, and printers that can feed envelopes or sheets from two drawers, usually referred to within IBM as letterhead and second sheet. Data from Office System/6 can be migrated to the IBM 5110 and 5120 with third-party applications. Internally, the OS/6 uses an IBM proprietary 16-bit single-chip microprocessor called the OPD Mini Processor. This processor is a single-chip FET microprocessor designed by Richard Vrba. It had a 16-bit little-endian instruction set built on an 8-bit internal architecture. Sixteen general-purpose registers, implemented as a 32-byte window in memory that operated as a stack, could be used as instruction operands or for indirect references to operands in memory. History Development on OS/6 was done in the "Rio" project at IBM's Austin, Texas facilities. A proposed video display upgrade for the Selectric Mag Card II had been rejected. Instead, it was announced in 1977 that Mag Card II users would be able to add a communications option to link up with System 6. In a 1977 pres
https://en.wikipedia.org/wiki/FrameNet
FrameNet is a group of online lexical databases based upon the theory of meaning known as frame semantics, developed by linguist Charles J. Fillmore. The project's fundamental notion is simple: most words' meanings may be best understood in terms of a semantic frame, which is a description of a certain kind of event, connection, or item and its actors. As an illustration, the act of cooking usually requires the following: a cook, the food being cooked, a container to hold the food while it is being cooked, and a heating instrument. Within FrameNet, this act is represented by a frame named Apply_heat, and its components (Cook, Food, Container, and Heating_instrument) are referred to as frame elements (FEs). The frame also lists a number of words that represent it, known as lexical units (LUs), like fry, bake, boil, and broil. Other frames are simpler. For example, the Placing frame only has an agent or cause, a theme—something that is placed—and the location where it is placed. Some frames are more complex, like Revenge, which contains more FEs (offender, injury, injured party, avenger, and punishment). As in the examples below, FrameNet's role is to define the frames and annotate sentences to demonstrate how the FEs fit syntactically around the word that elicits the frame. Concepts Frames A frame is a schematic representation of a situation involving various participants, props, and other conceptual roles. Examples of frame names are Apply_heat and Revenge. A frame in FrameNet contains a textual description of what it represents (a frame definition), associated frame elements, lexical units, example sentences, and frame-to-frame relations. Frame elements Frame elements (FEs) provide additional information to the semantic structure of a sentence. Each frame has a number of core and non-core FEs, which can be thought of as semantic roles. Core FEs are essential to the meaning of the frame, while non-core FEs are generally descriptive (such as time, place, manner, etc.). For example: The only core FE of the frame is called ; non-core FEs
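The frame/FE/LU structure described above maps naturally onto a small data structure. Below is a minimal Python sketch of that mapping; the field names are mine and do not reflect FrameNet's actual data format, and the frame contents simply restate the Apply_heat example from the text.

from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    definition: str
    core_fes: list[str]                                    # FEs essential to the frame's meaning
    noncore_fes: list[str] = field(default_factory=list)   # descriptive roles: time, place, manner...
    lexical_units: list[str] = field(default_factory=list) # words that evoke the frame

apply_heat = Frame(
    name="Apply_heat",
    definition="A Cook applies heat to Food in a Container with a Heating_instrument.",
    core_fes=["Cook", "Food", "Container", "Heating_instrument"],
    noncore_fes=["Time", "Place", "Manner"],
    lexical_units=["fry", "bake", "boil", "broil"],
)
print(apply_heat.name, apply_heat.lexical_units)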
https://en.wikipedia.org/wiki/List%20of%20vaccine%20topics
This is a list of vaccine-related topics. A vaccine is a biological preparation that improves immunity to a particular disease. A vaccine typically contains an agent that resembles a disease-causing microorganism, and is often made from weakened or killed forms of the microbe or its toxins. The agent stimulates the body's immune system to recognize the agent as foreign, destroy it, and "remember" it, so that the immune system can more easily recognize and destroy any of these microorganisms that it later encounters. Human vaccines Viral diseases Bacterial diseases Vaccines under research Viral diseases Adenovirus vaccine COVID-19 vaccine (for the disease that caused the pandemic beginning in 2019) Coxsackie B virus vaccine Cytomegalovirus vaccine Chikungunya vaccine Eastern equine encephalitis virus vaccine for humans Enterovirus 71 vaccine Epstein–Barr vaccine H5N1 vaccine Hepatitis C vaccine HIV vaccine HTLV-1 T-lymphotropic leukemia vaccine for humans Marburg virus disease vaccine MERS vaccine Nipah virus vaccine Norovirus vaccine Respiratory syncytial virus vaccine SARS vaccine West Nile virus vaccine for humans Zika fever vaccine Bacterial diseases Caries vaccine Gonorrhea vaccine Ehrlichiosis vaccine Helicobacter pylori vaccine Leprosy vaccine Lyme disease vaccine Staphylococcus aureus vaccine Streptococcus pyogenes vaccine Syphilis vaccine Tularemia vaccine Yersinia pestis vaccine Parasitic diseases Chagas disease vaccine Hookworm vaccine Leishmaniasis vaccine Malaria vaccine Onchocerciasis (river blindness) vaccine for humans Schistosomiasis vaccine Trypanosomiasis vaccine Non-infectious diseases Alzheimer's disease amyloid protein vaccine Breast cancer vaccine Ovarian cancer vaccine Prostate cancer vaccine Talimogene laherparepvec (T-VEC), a herpes virus engineered to produce an immune-boosting molecule Other Heroin vaccine Vaccine components Adjuvant List of vaccine ingredients Preservative Thiomersal Vaccine types Vaccin
https://en.wikipedia.org/wiki/Outline%20of%20autism
The following outline is provided as an overview of and topical guide to autism: Autism – neurodevelopmental disorder that affects social interaction and communication, and involves restricted and repetitive behavior. What type of thing is autism? Autism can be described as all of the following: Disability – may be physical, cognitive, mental, sensory, emotional, developmental, or some combination of these. Developmental disability – a term used in the United States and Canada to describe lifelong disabilities attributable to mental or physical impairments, manifested prior to age 18. Disorder – Developmental disorder – occurs at some stage in a child's development, often slowing the development. Neurodevelopmental disorder – or disorder of neural development, is an impairment of the growth and development of the brain or central nervous system. Spectrum disorder Signs of autism Signs of autism are highly variable. Different individuals will have a different mix of traits. Here are some of the more common signs: Avoidance of eye contact – preference to avoid eye contact and feelings of fear or being overwhelmed when looking into someone's eyes Developmental delay – slower acquisition of life skills Emotional dysregulation – mood swings, including outbursts when overwhelmed Executive dysfunction – difficulty staying organized, initiating tasks, and/or controlling impulses Routines – need for routine and fear of unexpected change Sensory processing disorder – over- or under-responsiveness to sensory input Sincerity – tendency to tell the truth Special interests – narrow and passionate areas of interest Stimming – repetitive movements or sounds that stimulate the senses and regulate emotion and sensory processing Conditions and research areas Conditions Autism spectrum disorder – a spectrum of developmental disabilities, present from birth, usually resulting in social difficulties, communication differences, and restricted and repetitive behavior. Al
https://en.wikipedia.org/wiki/Duocylinder
The duocylinder, also called the double cylinder or the bidisc, is a geometric object embedded in 4-dimensional Euclidean space, defined as the Cartesian product of two disks of respective radii r1 and r2: {(x, y, z, w) : x² + y² ≤ r1², z² + w² ≤ r2²}. It is analogous to a cylinder in 3-space, which is the Cartesian product of a disk with a line segment. But unlike the cylinder, both hypersurfaces (of a regular duocylinder) are congruent. Its dual is a duospindle, constructed from two circles, one in the XY plane and the other in the ZW plane. Geometry Bounding 3-manifolds The duocylinder is bounded by two mutually perpendicular 3-manifolds with torus-like surfaces, respectively described by the formulae x² + y² = r1², z² + w² ≤ r2² and x² + y² ≤ r1², z² + w² = r2². The duocylinder is so called because these two bounding 3-manifolds may be thought of as 3-dimensional cylinders 'bent around' in 4-dimensional space such that they form closed loops in the XY and ZW planes. The duocylinder has rotational symmetry in both of these planes. A regular duocylinder consists of two congruent cells, one square flat torus face (the ridge), zero edges, and zero vertices. The ridge The ridge of the duocylinder is the 2-manifold that is the boundary between the two bounding (solid) torus cells. It is in the shape of a Clifford torus, which is the Cartesian product of two circles. Intuitively, it may be constructed as follows: Roll a 2-dimensional rectangle into a cylinder, so that its top and bottom edges meet. Then roll the cylinder in the plane perpendicular to the 3-dimensional hyperplane that the cylinder lies in, so that its two circular ends meet. The resulting shape is topologically equivalent to a Euclidean 2-torus (a doughnut shape). However, unlike the latter, all parts of its surface are identically deformed. On the (2D surface, embedded in 3D) doughnut, the surface around the 'doughnut hole' is deformed with negative curvature (like a saddle) while the surface outside is deformed with positive curvature (like a sphere). The ridge of the duocylinder may be thoug
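The defining inequalities translate directly into code. Below is a minimal Python sketch, written to match the definitions above: a membership test for the solid duocylinder, and a parametrization of the ridge as the Cartesian product of two circles (the Clifford torus); function names and the unit radii are my own choices.

import math

def in_duocylinder(x, y, z, w, r1=1.0, r2=1.0):
    # Inside the solid iff the point lies in both disks simultaneously.
    return x*x + y*y <= r1*r1 and z*z + w*w <= r2*r2

def ridge_point(theta, phi, r1=1.0, r2=1.0):
    # The ridge: one circle in the XY plane times one circle in the ZW plane.
    return (r1*math.cos(theta), r1*math.sin(theta),
            r2*math.cos(phi),  r2*math.sin(phi))

print(in_duocylinder(0.5, 0.5, 0.0, 0.9))   # True: inside both disks
print(ridge_point(0.0, math.pi/2))          # (1.0, 0.0, ~0.0, 1.0) lies on the ridge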
https://en.wikipedia.org/wiki/High-dynamic-range%20rendering
High-dynamic-range rendering (HDRR or HDR rendering), also known as high-dynamic-range lighting, is the rendering of computer graphics scenes by using lighting calculations done in high dynamic range (HDR). This allows preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies and special effects benefit from this as it creates more realistic scenes than with more simplistic lighting models. Graphics processor company Nvidia summarizes the motivation for HDR in three points: bright things can be really bright, dark things can be really dark, and details can be seen in both. History The use of high-dynamic-range imaging (HDRI) in computer graphics was introduced by Greg Ward in 1985 with his open-source Radiance rendering and lighting simulation software, which created the first file format to retain a high-dynamic-range image. HDRI languished for more than a decade, held back by limited computing power, storage, and capture methods; only later was the technology to put HDRI into practical use developed. In 1990, Nakamae et al. presented a lighting model for driving simulators that highlighted the need for high-dynamic-range processing in realistic simulations. In 1995, Greg Spencer presented Physically-based glare effects for digital images at SIGGRAPH, providing a quantitative model for flare and blooming in the human eye. In 1997, Paul Debevec presented Recovering high dynamic range radiance maps from photographs at SIGGRAPH, and the following year presented Rendering synthetic objects into real scenes. These two papers laid the framework for creating HDR light probes of a location, and then using this probe to light a rendered scene. HDRI and HDRL (high-dynamic-range image-based lighting) have, ever since, been used in many situations in 3D scenes in which inserting a 3D object into a real environment requires the light probe data to provide realistic lighting solutions. In gaming appli
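A small numeric example shows why HDR lighting needs a final compression step before display. The sketch below (Python/NumPy) contrasts a naive clamp with the simple global Reinhard curve x / (1 + x) applied to luminance; the Reinhard operator is a common tone-mapping choice brought in here purely as an illustration, not something described in the excerpt above, and the luminance values are invented.

import numpy as np

hdr = np.array([0.01, 0.5, 1.0, 4.0, 100.0])   # unbounded scene luminances

ldr_clipped = np.clip(hdr, 0.0, 1.0)           # naive clamp: bright detail lost
ldr_tonemapped = hdr / (1.0 + hdr)             # compresses highlights smoothly

print(ldr_clipped)       # [0.01 0.5 1. 1. 1.] -- 4.0 and 100.0 collapse together
print(ldr_tonemapped)    # [~0.0099 0.333 0.5 0.8 ~0.990] -- ordering preserved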
https://en.wikipedia.org/wiki/Shuttle%20Inc.
Shuttle Inc. (TAIEX: 2405) is a Taiwan-based manufacturer of motherboards, barebone computers, complete PC systems, and monitors. Over the last 10 years, Shuttle has been one of the world's top 10 motherboard manufacturers, and it gained fame in 2001 with the introduction of the Shuttle SV24, one of the world's first commercially successful small form factor computers. Shuttle XPC small form factor computers tend to be popular among PC enthusiasts and hobbyists, although in 2004 Shuttle started a campaign to become a brand name recognized by mainstream PC consumers. Shuttle XPC desktop systems are based on the same PC platform as the XPC barebones (case, motherboard, and power supply) that Shuttle manufactures. More recently, the differentiation between Shuttle barebones and Shuttle systems has become greater, with the launch of system-exclusive models such as the M-series and X-series. History 1983 – Shuttle is initially incorporated in Taiwan by David and Simon Yu under the name Holco (浩鑫), and commences trading of computer motherboards. 1984 – Holco begins manufacturing motherboards in its Taoyuan County (now Taoyuan City), Taiwan factory. 1988 – Holco establishes its first overseas branch office, in Fremont, California. 1990 – Holco subsidiary Shuttle Computer Handel is established in Elmshorn, Germany, to serve the European market. 1994 – Introduces Shuttle RiscPC 4475, a desktop based on the DEC Alpha 64-bit microprocessor and Microsoft Windows NT for Alpha. 1995 – Shuttle reaches #5 motherboard manufacturer worldwide in terms of volume. 1997 – Holco officially changes its name to Shuttle Inc. 2000 – Goes public on the TAIEX stock market under symbol 2405. 2001 – Introduces Shuttle SV24, a compact all-aluminum computer using desktop components. 2002 – SV24 evolves into the XPC line of small form factor barebones computers, including models for Intel's Pentium 4 and AMD's Athlon. 2003 – 8 different XPCs introduced, including models featuring chipsets from Nvidia, Intel, SiS,
https://en.wikipedia.org/wiki/Mathematical%20Olympiad%20Program
The Mathematical Olympiad Program (abbreviated MOP; formerly called the Mathematical Olympiad Summer Program, abbreviated MOSP) is an intensive summer program held at Carnegie Mellon University. The main purpose of MOP, held since 1974, is to select and train the six members of the U.S. team for the International Mathematical Olympiad (IMO). Selection Process Students qualify for the program by taking the United States of America Mathematical Olympiad (USAMO). The top twelve American scorers from all grades form the "black" group. The approximately eighteen next-highest American scorers among students in 11th grade and under form the "blue" group. In 2004, the program was expanded to include approximately thirty of the highest-scoring American freshmen and sophomores each year, the "red" group; this was later split into two, forming the "green" group, which consists of approximately fifteen of the highest-scoring freshmen and sophomores who have qualified through the USAMO, and the "red" group, which consists of those who have qualified through the USAJMO. The colorful designations of these groups were adapted from karate. Under the new system, the black group includes more or less only the IMO team, which does not necessarily consist of all the USAMO winners. Until 2011, only black-group MOPpers were eligible for selection to the USA IMO team, determined by combining USAMO results with results of a similar competition called the Team Selection Test (TST). From 2011, a new competition called the Team Selection Test Selection Test (TSTST) was established; this competition is open to any of the participants of MOP and, along with results from the USAMO, determines the students who take the TSTs. This ultimately, along with the USAMO and MOP competitions, determines the IMO team. Canadians are allowed to take the USAMO, but are not allowed to participate in MOP unless they are U.S. residents. Occasionally, when Canadians are amongst the USAMO winners, top scoring honora
https://en.wikipedia.org/wiki/Jensen%20Huang
Jen-Hsun "Jensen" Huang (; born February 17, 1963) is an American businessman, electrical engineer, and the co-founder, president and CEO of Nvidia Corporation. Early life Huang was born in Tainan, Taiwan. His family first moved to Thailand when he was five years old, then emigrated to the United States around four to five years later, in 1973. When he was ten years old, he lived in the boys dormitory with his brother at Oneida Baptist Institute while attending Oneida Elementary school in Oneida, Kentucky. His family later settled in Oregon, where he graduated from Aloha High School just outside Portland. Jensen received his undergraduate degree in electrical engineering from Oregon State University in 1984, and his master's degree in electrical engineering from Stanford University in 1992. Career After college he was a director at LSI Logic and a microprocessor designer at Advanced Micro Devices, Inc. (AMD). At 30 years old in 1993, Huang co-founded Nvidia and is the CEO and president. He owns 3.6% of Nvidia's stock, which went public in 1999. He earned as CEO in 2007, ranking him as the 61st highest paid U.S. CEO by Forbes. As of June 19, 2023, Huang's net worth is according to the Bloomberg Billionaires Index. Philanthropy In 2022 Huang donated to his alma mater, Oregon State University, as a portion of a donation towards the creation of a supercomputing institute on campus. Huang gave his other alma mater Stanford University to build the Jen-Hsun Huang School of Engineering Center. The building is the second of four that make up Stanford's Science and Engineering Quad. It was designed by Bora Architects of Portland, Oregon and completed in 2010. Huang gave his alma mater Oneida Baptist Institute to build Huang Hall, a new girls' dormitory and classroom building. It was designed by CMW Architects of Lexington, Kentucky. In 2007, Huang was the recipient of the Silicon Valley Education Foundation's Pioneer Business Leader Award for his work in bot
https://en.wikipedia.org/wiki/Trajectory%20optimization
Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical, or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC). Although the idea of trajectory optimization has been around for hundreds of years (calculus of variations, the brachistochrone problem), it only became practical for real-world problems with the advent of the computer. Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications. History Trajectory optimization first showed up in 1697, with the introduction of the brachistochrone problem: find the shape of a wire such that a bead sliding along it will move between two points in the minimum time. What is notable about this problem is that it optimizes over a curve (the shape of the wire), rather than over a single number. The most famous of the solutions was computed using the calculus of variations. In the 1950s, the digital computer started to make trajectory optimization practical for solving real-world problems. The first optimal control approaches grew out of the calculus of variations, based on the research of Gilbert Ames Bliss and Bryson in America, and Pontryagin in Russia. Pontryagin's maximum principle is of part
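The "open-loop solution" idea is easy to make concrete with a toy problem. Below is a minimal Python sketch of direct transcription: discretize the control of a double integrator (x'' = u), simulate forward, and minimize control effort subject to reaching a target state. The model, horizon, and solver settings are all illustrative choices, not a method prescribed by the text.

import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0            # number of control segments, time horizon
dt = T / N

def rollout(u):
    # Euler-integrate x'' = u starting from rest at the origin.
    x, v = 0.0, 0.0
    for ui in u:
        x += v * dt
        v += ui * dt
    return x, v

def objective(u):
    return dt * np.sum(u**2)          # total control effort

def terminal_constraint(u):
    x, v = rollout(u)
    return [x - 1.0, v - 0.0]         # end at x = 1 with zero velocity

res = minimize(objective, np.zeros(N),
               constraints={"type": "eq", "fun": terminal_constraint},
               method="SLSQP")
print(res.success, rollout(res.x))    # an open-loop control sequence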
https://en.wikipedia.org/wiki/KUSI-TV
KUSI-TV (channel 51) is an independent television station in San Diego, California, United States. It is owned by Nexstar Media Group alongside Fox affiliate KSWB-TV (channel 69). KUSI-TV's studios are located on Viewridge Avenue (near I-15) in the Kearny Mesa section of San Diego, and its transmitter is located southeast of Spring Valley. The station operates translator K03JB-D in Temecula (part of the Los Angeles market). After a 15-year dispute over permit ownership that almost derailed the launch of the station on multiple occasions, KUSI began broadcasting in 1982 as a partnership between United States International University and McKinnon Broadcasting Company. It was the first independent station built in San Diego proper. Financial and accreditation problems at USIU led to the sale of its stake to McKinnon in 1990, with McKinnon exercising veto power to block any sale to another entity. McKinnon then started KUSI's news department, which has since grown to produce newscasts throughout the day. In 2023, McKinnon sold KUSI to Nexstar. History 15 years of fighting The construction permit for a channel 51 television station in San Diego was first issued on June 23, 1965, to Jack O. Gross, who had previously founded KFMB-TV channel 8, as KJOG-TV. The permit was issued after applications by Gross and California Western University of San Diego were filed the year before; Gross proposed a conventional independent station, while the private university planned a station with a "high educational and cultural content". In October 1967, with the station still unbuilt, California Western filed to have the station transferred to it, stating that Gross was refusing to abide by an agreement reached that April to sell the station to CWU for $16,000 in expenses. However, a complication arose when Gross informed the Federal Communications Commission (FCC) that he had reached another deal to sell the station to the Broadmoor Broadcasting Corporation, owned by Michael and Dan
https://en.wikipedia.org/wiki/Qrpff
qrpff is a Perl script created by Keith Winstein and Marc Horowitz of the MIT SIPB. It performs DeCSS (DVD descrambling) in six or seven lines. The name itself is an encoding of "decss" in ROT13. The algorithm was rewritten 77 times to condense it down to six lines. In fact, two versions of qrpff exist: a short version (6 lines) and a fast version (7 lines). Both appear below.

Short:

#!/usr/bin/perl
# 472-byte qrpff, Keith Winstein and Marc Horowitz <sipb-iap-dvd@mit.edu>
# MPEG 2 PS VOB file -> descrambled output on stdout.
# usage: perl -I <k1>:<k2>:<k3>:<k4>:<k5> qrpff
# where k1..k5 are the title key bytes in least to most-significant order
s''$/=\2048;while(<>){G=29;R=142;if((@a=unqT="C*",_)[20]&48){D=89;_=unqb24,qT,@
b=map{ord qB8,unqb8,qT,_^$a[--D]}@INC;s/...$/1$&/;Q=unqV,qb25,_;H=73;O=$b[4]<<9
|256|$b[3];Q=Q>>8^(P=(E=255)&(Q>>12^Q>>4^Q/8^Q))<<17,O=O>>8^(E&(F=(S=O>>14&7^O)
^S*8^S<<6))<<9,_=(map{U=_%16orE^=R^=110&(S=(unqT,"\xb\ntd\xbz\x14d")[_/16%8]);E
^=(72,@z=(64,72,G^=12*(U-2?0:S&17)),H^=_%64?12:0,@z)[_%8]}(16..271))[_]^((D>>=8
)+=P+(~F&E))for@a[128..$#a]}print+qT,@a}';s/[D-HO-U_]/\$$&/g;s/q/pack+/g;eval

Fast:

#!/usr/bin/perl -w
# 531-byte qrpff-fast, Keith Winstein and Marc Horowitz <sipb-iap-dvd@mit.edu>
# MPEG 2 PS VOB file on stdin -> descrambled output on stdout
# arguments: title key bytes in least to most-significant order
$_='while(read+STDIN,$_,2048){$a=29;$b=73;$c=142;$t=255;@t=map{$_%16or$t^=$c^=(
$m=(11,10,116,100,11,122,20,100)[$_/16%8])&110;$t^=(72,@z=(64,72,$a^=12*($_%16
-2?0:$m&17)),$b^=$_%64?12:0,@z)[$_%8]}(16..271);if((@a=unx"C*",$_)[20]&48){$h
=5;$_=unxb24,join"",@b=map{xB8,unxb8,chr($_^$a[--$h+84])}@ARGV;s/...$/1$&/;$
d=unxV,xb25,$_;$e=256|(ord$b[4])<<9|ord$b[3];$d=$d>>8^($f=$t&($d>>12^$d>>4^
$d^$d/8))<<17,$e=$e>>8^($t&($g=($q=$e>>14&7^$e)^$q*8^$q<<6))<<9,$_=$t[$_]^
(($h>>=8)+=$f+(~$g&$t))for@a[128..$#a]}print+x"C*",@a}';s/x/pack+/g;eval

The fast version is actually fast enough to decode a movie in real time. qrpff and related memorabilia was so
https://en.wikipedia.org/wiki/Cyberchondria
Cyberchondria, otherwise known as compucondria, is the unfounded escalation of concerns about common symptomatology based on review of search results and literature online. Articles in popular media position cyberchondria anywhere from temporary neurotic excess to adjunct hypochondria. Cyberchondria is a growing concern among many healthcare practitioners, as patients can now research any and all symptoms of a rare disease, illness, or condition, and manifest a state of medical anxiety. Derivation and use The term "cyberchondria" is a portmanteau neologism derived from the terms cyber- and hypochondria. (The term "hypochondrium" derives from Greek and literally means the region below the "cartilage" or "breast bone.") Researchers at Harris Interactive clarified the etymology of cyberchondria, and state in studies and interviews that the term is not necessarily intended to be pejorative. A review in the British Medical Journal publication Journal of Neurology, Neurosurgery, and Psychiatry from 2003 says cyberchondria was used in 2001 in an article in the United Kingdom newspaper The Independent to describe "the excessive use of internet health sites to fuel health anxiety." The BBC also used cyberchondria in April 2001. The BMJ review also cites the 1997 book by Elaine Showalter, who writes that the internet is a new way to spread "pathogenic ideas" like Gulf War syndrome and myalgic encephalomyelitis. Patients with cyberchondria and patients with general hypochondriasis often are convinced they have disorders "with common or ambiguous symptoms." Studies Online search behaviors and their influences The first systematic study of cyberchondria, reported in November 2008, was performed by Microsoft researchers Ryen White and Eric Horvitz, who conducted a large-scale study that included several phases of analysis. White and Horvitz defined cyberchondria as the "unfounded escalation of concerns about common symptomatology, based on the review of search results and literatur
https://en.wikipedia.org/wiki/Nonvolatile%20BIOS%20memory
Nonvolatile BIOS memory refers to a small memory on PC motherboards that is used to store BIOS settings. It is traditionally called CMOS RAM because it uses a volatile, low-power complementary metal–oxide–semiconductor (CMOS) SRAM (such as the Motorola MC146818 or similar) powered by a small "CMOS" battery when system and standby power is off. It is referred to as non-volatile memory or NVRAM because, after the system loses power, it retains state by virtue of the CMOS battery. The typical NVRAM capacity is 256 bytes. The CMOS RAM and the real-time clock have been integrated as a part of the southbridge chipset, so they may not be a standalone chip on modern motherboards. In turn, the southbridge has been integrated into a single Platform Controller Hub. Today's UEFI motherboards use NVRAM to store configuration data (NVRAM is a part of the UEFI flash ROM), but by many OEMs' design, the UEFI settings are still lost if the CMOS battery fails. CMOS battery The memory battery (also known as the motherboard, CMOS, real-time clock (RTC), or clock battery) is generally a CR2032 lithium coin cell. This cell has an estimated life of three years when the power supply unit (PSU) is unplugged or when the PSU power switch is turned off. This battery type, unlike the lithium-ion battery, is not rechargeable, and trying to charge it may result in an explosion. Motherboards have circuitry preventing batteries from being charged and discharged when a motherboard is powered on. Other common battery cell types can last significantly longer or shorter periods, such as the smaller CR2016, which will generally last about 40% less time than a CR2032. Higher temperatures and longer power-off time will shorten battery cell life. When replacing the battery cell, the system time and CMOS BIOS settings may revert to default values. Unwanted BIOS reset may be avoided by replacing the battery cell with the PSU power switch turned on and plugged into an electric wall socket. On ATX motherboards, the PSU will
https://en.wikipedia.org/wiki/Mach%20tuck
Mach tuck is an aerodynamic effect whereby the nose of an aircraft tends to pitch downward as the airflow around the wing reaches supersonic speeds. This diving tendency is also known as tuck under. The aircraft will first experience this effect at speeds significantly below Mach 1. Causes Mach tuck usually has two causes: a rearward movement of the centre of pressure of the wing, and a decrease in wing downwash velocity at the tailplane, both of which cause a nose-down pitching moment. For a particular aircraft design only one of these may be significant in causing a tendency to dive: delta-winged aircraft with no foreplane or tailplane in the first case and, for example, the Lockheed P-38 in the second case. Alternatively, a particular design may have no significant tendency, for example the Fokker F28 Fellowship. As an aerofoil generating lift moves through the air, the air flowing over the top surface accelerates to a higher local speed than the air flowing over the bottom surface. When the aircraft reaches its critical Mach number the accelerated airflow locally reaches the speed of sound and creates a small shock wave, even though the aircraft itself is still travelling below the speed of sound. The region in front of the shock wave generates high lift. As the aircraft flies faster, the shock wave over the wing gets stronger and moves rearwards, creating high lift further back along the wing. This rearward movement of lift causes the aircraft to tuck or pitch nose-down. The severity of Mach tuck on any given design is affected by the thickness of the aerofoil, the sweep angle of the wing, and the location of the tailplane relative to the main wing. A tailplane positioned further aft can provide a larger stabilizing pitch-up moment. The camber and thickness of the aerofoil affect the critical Mach number, with a more highly curved upper surface causing a lower critical Mach number. On a swept wing the shock wave typically forms first at the
https://en.wikipedia.org/wiki/Domestication%20of%20vertebrates
The domestication of vertebrates is the mutual relationship between vertebrate animals including birds and mammals, and the humans who have influence on their care and reproduction. Charles Darwin recognized a small number of traits that made domesticated species different from their wild ancestors. He was also the first to recognize the difference between conscious selective breeding (i.e. artificial selection) in which humans directly select for desirable traits, and unconscious selection where traits evolve as a by-product of natural selection or from selection on other traits. There is a genetic difference between domestic and wild populations. There is also a genetic difference between the domestication traits that researchers believe to have been essential at the early stages of domestication, and the improvement traits that have appeared since the split between wild and domestic populations. Domestication traits are generally fixed within all domesticates, and were selected during the initial episode of domestication of that animal or plant, whereas improvement traits are present only in a portion of domesticates, though they may be fixed in individual breeds or regional populations. Domestication should not be confused with taming. Taming is the conditioned behavioral modification of a wild-born animal when its natural avoidance of humans is reduced and it accepts the presence of humans, but domestication is the permanent genetic modification of a bred lineage that leads to an inherited predisposition toward humans. Certain animal species, and certain individuals within those species, make better candidates for domestication than others because they exhibit certain behavioral characteristics: (1) the size and organization of their social structure; (2) the availability and the degree of selectivity in their choice of mates; (3) the ease and speed with which the parents bond with their young, and the maturity and mobility of the young at birth; (4) the degr
https://en.wikipedia.org/wiki/Climate%20change%20mitigation
Climate change mitigation is action to limit climate change by reducing emissions of greenhouse gases or removing those gases from the atmosphere. The recent rise in global average temperature is mostly due to emissions from the unabated burning of fossil fuels such as coal, oil, and natural gas. Mitigation can reduce emissions by transitioning to sustainable energy sources, conserving energy, and increasing efficiency. It is possible to remove carbon dioxide (CO2) from the atmosphere by enlarging forests, restoring wetlands, and using other natural and technical processes. Experts call these processes carbon sequestration. Governments and companies have pledged to reduce emissions to prevent dangerous climate change, in line with international negotiations to limit warming. Solar energy and wind power have the greatest potential for mitigation at the lowest cost compared to a range of other options. The availability of sunshine and wind is variable, but it is possible to deal with this through energy storage and improved electrical grids, including long-distance electricity transmission, demand management, and diversification of renewables. It is possible to reduce emissions from infrastructure that directly burns fossil fuels, such as vehicles and heating appliances, by electrifying the infrastructure. If the electricity comes from renewable sources instead of fossil fuels, this will reduce emissions. Using heat pumps and electric vehicles can improve energy efficiency. If industrial processes must create carbon dioxide, carbon capture and storage can reduce net emissions. Greenhouse gas emissions from agriculture include methane as well as nitrous oxide. It is possible to cut emissions from agriculture by reducing food waste, switching to a more plant-based diet, protecting ecosystems, and improving farming processes. Changing energy sources, industrial processes, and farming methods can reduce emissions. So can changes in demand, for instanc
https://en.wikipedia.org/wiki/Crystallinity
Crystallinity refers to the degree of structural order in a solid. In a crystal, the atoms or molecules are arranged in a regular, periodic manner. The degree of crystallinity has a strong influence on hardness, density, transparency, and diffusion. In an ideal gas, the relative positions of the atoms or molecules are completely random. Amorphous materials, such as liquids and glasses, represent an intermediate case, having order over short distances (a few atomic or molecular spacings) but not over longer distances. Many materials, such as glass-ceramics and some polymers, can be prepared in such a way as to produce a mixture of crystalline and amorphous regions. In such cases, crystallinity is usually specified as a percentage of the volume of the material that is crystalline. Even within materials that are completely crystalline, however, the degree of structural perfection can vary. For instance, most metallic alloys are crystalline, but they usually comprise many independent crystalline regions (grains or crystallites) in various orientations separated by grain boundaries; furthermore, they contain other crystallographic defects (notably dislocations) that reduce the degree of structural perfection. The most highly perfect crystals are silicon boules produced for semiconductor electronics; these are large single crystals (so they have no grain boundaries), are nearly free of dislocations, and have precisely controlled concentrations of defect atoms. Crystallinity can be measured using X-ray crystallography, but calorimetric techniques are also commonly used. Rock crystallinity Geologists describe four qualitative levels of crystallinity: holocrystalline rocks are completely crystalline; hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix; hypohyaline rocks are partially glassy; holohyaline rocks (such as obsidian) are completely glassy.
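One common calorimetric estimate, not spelled out in the text above, expresses percent crystallinity as the ratio of a sample's measured melting enthalpy to the enthalpy of a hypothetical 100%-crystalline reference. A minimal Python sketch follows; the numbers are illustrative, not measured data.

def percent_crystallinity(dH_sample_J_per_g: float,
                          dH_100pct_J_per_g: float) -> float:
    # Ratio of measured melting enthalpy to the 100%-crystalline reference.
    return 100.0 * dH_sample_J_per_g / dH_100pct_J_per_g

# e.g., a polymer melting with 105 J/g against a 293 J/g reference value:
print(percent_crystallinity(105.0, 293.0))   # ~35.8% crystalline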
https://en.wikipedia.org/wiki/Audio%20converter
An audio converter is a device or software that converts an audio signal from one format to another. Hardware audio converters include analog-to-digital converters (ADCs), which convert analog audio to uncompressed digital form (e.g., PCM), and their reciprocal partners, digital-to-analog converters (DACs), which convert uncompressed digital audio to analog form. ADCs and DACs are usually components of hardware products. For example, sound cards and capture cards both include ADCs to allow a computer to record audio. Sound cards also feature DACs for audio playback. Some audio conversion functions can be performed by software or by specialized hardware. For example, an audio transcoder converts from one compressed audio format to another (e.g., MP3 to AAC) by means of two audio codecs: One for decoding (uncompressing) the source and one for encoding (compressing) the destination file or stream. See also Audio file format Comparison of audio coding formats List of audio conversion software
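The ADC idea described above (analog in, PCM out) reduces to two steps: sampling at discrete times and quantizing each amplitude to an integer. The Python sketch below simulates this with a synthetic sine tone; real ADCs are hardware, and the sample rate, tone frequency, and 16-bit depth here are illustrative.

import math

SAMPLE_RATE = 8000          # samples per second
FREQ = 440.0                # the "analog" input: a 440 Hz tone

def sample_pcm16(n_samples: int) -> list[int]:
    pcm = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                      # sampling: discrete time
        x = math.sin(2 * math.pi * FREQ * t)     # analog amplitude in [-1, 1]
        pcm.append(int(round(x * 32767)))        # quantization: 16-bit integer
    return pcm

print(sample_pcm16(5))    # first few PCM samples of the tone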
https://en.wikipedia.org/wiki/Retraction%20%28topology%29
In topology, a branch of mathematics, a retraction is a continuous mapping from a topological space into a subspace that preserves the position of all points in that subspace. The subspace is then called a retract of the original space. A deformation retraction is a mapping that captures the idea of continuously shrinking a space into a subspace. An absolute neighborhood retract (ANR) is a particularly well-behaved type of topological space. For example, every topological manifold is an ANR. Every ANR has the homotopy type of a very simple topological space, a CW complex. Definitions Retract Let X be a topological space and A a subspace of X. Then a continuous map r : X → A is a retraction if the restriction of r to A is the identity map on A; that is, r(a) = a for all a in A. Equivalently, denoting by ι : A → X the inclusion, a retraction is a continuous map r such that r ∘ ι = id_A; that is, the composition of r with the inclusion is the identity of A. Note that, by definition, a retraction maps X onto A. A subspace A is called a retract of X if such a retraction exists. For instance, any non-empty space retracts to a point in the obvious way (the constant map yields a retraction). If X is Hausdorff, then A must be a closed subset of X. If r : X → A is a retraction, then the composition ι ∘ r is an idempotent continuous map from X to X. Conversely, given any idempotent continuous map s : X → X, we obtain a retraction onto the image of s by restricting the codomain. Deformation retract and strong deformation retract A continuous map F : X × [0, 1] → X is a deformation retraction of a space X onto a subspace A if, for every x in X and a in A, F(x, 0) = x, F(x, 1) ∈ A, and F(a, 1) = a. In other words, a deformation retraction is a homotopy between a retraction and the identity map on X. The subspace A is called a deformation retract of X. A deformation retraction is a special case of a homotopy equivalence. A retract need not be a deformation retract. For instance, having a single point as a deformation retract of a space X would imply that X is path connected (and in fact that
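A standard worked example, a textbook fact rather than something stated in the excerpt above, makes the definitions concrete; in LaTeX:

% The punctured space retracts onto the unit sphere.
\[
  r : \mathbb{R}^n \setminus \{0\} \to S^{n-1}, \qquad r(x) = \frac{x}{\lVert x \rVert}.
\]
% r is continuous, and for a on the sphere \lVert a \rVert = 1, so r(a) = a:
% the restriction of r to S^{n-1} is the identity, hence r is a retraction.
% Moreover F(x,t) = (1-t)\,x + t\,x/\lVert x \rVert satisfies F(x,0) = x,
% F(x,1) \in S^{n-1}, and F(a,t) = a, so S^{n-1} is in fact a (strong)
% deformation retract of the punctured space.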
https://en.wikipedia.org/wiki/Plextor
Plextor (styled PLEXTOR) is a Taiwanese (formerly Japanese) consumer electronics brand, best known for solid-state drives and optical disc drives. Company The brand name Plextor was used for all products manufactured by the Electronic Equipment Division and Printing Equipment Division of the Japanese company Plextor Inc., which was a 100%-owned subsidiary of Shinano Kenshi Corp., also a Japanese company. The brand was formerly known as TEXEL, under which name it introduced its first CD-ROM optical disc drive in 1989. The brand has been used for flash memory products, Blu-ray players and burners, DVD-ROM burners, CD-ROM burners, DVD and CD media, network hard disks, portable hard disks, digital video recorders, and floppy disk drives. The Plextor brand was licensed in 2010 to Philips & Lite-On Digital Solutions Corporation, a subsidiary of Lite-On Technology Corporation. Therefore, all Plextor products since then, especially SSDs, are of a Taiwanese, not a Japanese, brand. However, the Japanese company Plextor Inc., which originated the brand name, continues, and sells new products under other brands such as PLEXLOGGER and PLEXTALK. Products In an effort to strengthen its position as a storage leader, Plextor introduced its first solid state drives, the M1 SSD series, available in 64 GB and 128 GB capacities. Soon after the M1 release, Plextor released the M2 Series, available in 64, 128, and 256 GB, using the SATA 6 Gbit/s interface, which the company claimed was the first SATA 3 SSD available at the time. In October 2011, Plextor announced the limited edition M2P Series, which boasts a significant increase in speed over the M2 Series. Notably, Plextor introduced an "ironclad" five-year warranty for its SSDs, making it one of the few manufacturers to do so. In addition to the five-year warranty, Plextor introduced its exclusive TrueSpeed Technology with the M2P series. TrueSpeed Technology is designed to maintain high speed in the real-wor
https://en.wikipedia.org/wiki/Faraday%20paradox
The Faraday paradox or Faraday's paradox is any experiment in which Michael Faraday's law of electromagnetic induction appears to predict an incorrect result. The paradoxes fall into two classes: Faraday's law appears to predict that there will be zero electromotive force (EMF) but there is a non-zero EMF. Faraday's law appears to predict that there will be a non-zero EMF but there is zero EMF. Faraday deduced his law of induction in 1831, after inventing the first electromagnetic generator or dynamo, but was never satisfied with his own explanation of the paradox. Faraday's law compared to the Maxwell–Faraday equation Faraday's law (also known as the Faraday–Lenz law) states that the electromotive force (EMF) is given by the total derivative of the magnetic flux with respect to time t: ℰ = −dΦB/dt, where ℰ is the EMF and ΦB is the magnetic flux through a loop of wire. The direction of the electromotive force is given by Lenz's law. An often overlooked fact is that Faraday's law is based on the total derivative, not the partial derivative, of the magnetic flux. This means that an EMF may be generated even if the total flux through the surface is constant. To overcome this issue, special techniques may be used. See below for the section on Use of special techniques with Faraday's law. However, the most common interpretation of Faraday's law is that the EMF induced in a closed circuit equals the negative of the rate of change of the magnetic flux enclosed by the circuit. This version of Faraday's law strictly holds only when the closed circuit is a loop of infinitely thin wire, and is invalid in other circumstances. It ignores the fact that Faraday's law is defined by the total, not partial, derivative of magnetic flux and also the fact that EMF is not necessarily confined to a closed path but may also have radial components as discussed below. A different version, the Maxwell–Faraday equation (discussed below), is valid in all circumstances, and when used in conjunction with the Lorentz force law it is consistent with correct application of Faraday's law. The Maxwell–Faraday equation is a gener
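The total-derivative point above is easy to see numerically: flux can change even when the field B is constant, because the circuit itself changes. The Python sketch below evaluates ℰ = −dΦB/dt by a central difference for a loop whose radius shrinks in a uniform field; all field values and the shrink rate are illustrative.

import math

B = 0.5                                   # uniform field, tesla (illustrative)

def flux(t: float) -> float:
    r = 0.10 - 0.01 * t                   # loop radius shrinking over time, m
    return B * math.pi * r * r            # Phi_B = B * area

def emf(t: float, dt: float = 1e-6) -> float:
    # EMF = -d(Phi_B)/dt, approximated by a central difference.
    return -(flux(t + dt) - flux(t - dt)) / (2 * dt)

print(emf(1.0))   # ~ +2.83e-3 V, even though B itself never changes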
https://en.wikipedia.org/wiki/Metro%20Ethernet
A metropolitan-area Ethernet, Ethernet MAN, or metro Ethernet network is a metropolitan area network (MAN) that is based on Ethernet standards. It is commonly used to connect subscribers to a larger service network or for internet access. Businesses can also use metropolitan-area Ethernet to connect their own offices to each other. An Ethernet interface is typically more economical than a synchronous digital hierarchy (SONET/SDH) or plesiochronous digital hierarchy (PDH) interface of the same bandwidth. Another distinct advantage of an Ethernet-based access network is that it can be easily connected to the customer network, due to the prevalent use of Ethernet in corporate and residential networks. A typical service provider's network is a collection of switches and routers connected through optical fiber. The topology could be a ring, hub-and-spoke (star), or full or partial mesh. The network will also have a hierarchy: core, distribution (aggregation), and access. The core in most cases is an existing IP/MPLS backbone, but may migrate to newer forms of Ethernet transport at 10 Gbit/s, 40 Gbit/s, or 100 Gbit/s speeds, or possibly even 400 Gbit/s or Terabit Ethernet in the future. Ethernet on the MAN can be used as pure Ethernet, Ethernet over SDH, Ethernet over Multiprotocol Label Switching (MPLS), or Ethernet over DWDM. Ethernet-based deployments with no other underlying transport are cheaper but are harder to implement in a resilient and scalable manner, which has limited their use to small-scale or experimental deployments. SDH-based deployments are useful when there is an existing SDH infrastructure already in place; their main shortcoming is the loss of flexibility in bandwidth management due to the rigid hierarchy imposed by the SDH network. MPLS-based deployments are costly but highly reliable and scalable, and are typically used by large service providers. Metropolitan area networks Familiar network domains are likely to exist regardless of
https://en.wikipedia.org/wiki/Barber%27s%20pole
A barber's pole is a type of sign used by barbers to signify the place or shop where they perform their craft. The trade sign is, by a tradition dating back to the Middle Ages, a staff or pole with a helix of colored stripes (often red and white in many countries, but usually red, white and blue in Japan and the United States). The pole may be stationary or may rotate, often with the aid of an electric motor. A "barber's pole" with a helical stripe is a familiar sight, and is used as a secondary metaphor to describe objects in many other contexts. For example, if the shaft or tower of a lighthouse has been painted with a helical stripe as a daymark, the lighthouse could be described as having been painted in "barber's pole" colors. Origin in barbering and surgery During medieval times, barbers performed surgery and tooth extractions on customers. The original pole had a brass wash basin at the top (representing the vessel in which leeches were kept) and bottom (representing the basin that received the blood). The pole itself represents the staff that the patient gripped during the procedure to encourage blood flow, and the twined pole motif is likely related to the caduceus, the staff of the Greek god of commerce Hermes, as evidenced for example by the early physician van Helmont's description of himself as "Francis Mercurius Van Helmont, A Philosopher by that one in whom are all things, A Wandering Hermite." At the Council of Tours in 1163, the clergy was banned from the practice of surgery. From then on, physicians were clearly separated from the surgeons and barbers. Later, the role of the barbers was defined by the Collège de Saint-Côme et Saint-Damien, established by Jean Pitard in Paris circa 1210, as academic surgeons of the long robe and barber surgeons of the short robe. In Renaissance-era Amsterdam, the surgeons used the colored stripes to indicate that they were prepared to bleed their patients (red), set bones or pull teeth (white), or give a shave (blue).
https://en.wikipedia.org/wiki/313%20%28number%29
313 (three hundred [and] thirteen) is the natural number following 312 and preceding 314. Additionally, 313 is: a prime number a twin prime with 311 a centered square number a full reptend prime (and the smallest number which is a full reptend prime in base 10 but not in any of the bases 2 to 9) a Pythagorean prime a regular prime a palindromic prime in both decimal and binary. a truncatable prime a weakly prime in base 5 a happy number an Armstrong number in base 4 (3×4² + 1×4¹ + 3×4⁰ = 3³ + 1³ + 3³ = 55) an index of a prime Lucas number. References Integers
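Several of the listed properties are easy to verify mechanically. A small self-contained check, using only the standard definitions (written for illustration, not part of the original article):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(c) ** 2 for c in str(n))  # sum of squared decimal digits
    return n == 1

n = 313
print(is_prime(n), is_prime(n - 2))  # prime, and twin prime with 311
print(str(n) == str(n)[::-1])        # palindrome in decimal
b = format(n, "b")                   # '100111001'
print(b, b == b[::-1])               # palindrome in binary
print(is_happy(n))                   # happy number
```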
https://en.wikipedia.org/wiki/Aftertaste
Aftertaste is the taste intensity of a food or beverage that is perceived immediately after that food or beverage is removed from the mouth. The aftertastes of different foods and beverages can vary by intensity and over time, but the unifying feature of aftertaste is that it is perceived after a food or beverage is either swallowed or spat out. The neurobiological mechanisms of taste (and aftertaste) signal transduction from the taste receptors in the mouth to the brain have not yet been fully understood. However, the primary taste processing area located in the insula has been observed to be involved in aftertaste perception. Temporal taste perception Characteristics of a food's aftertaste are quality, intensity, and duration. Quality describes the actual taste of a food and intensity conveys the magnitude of that taste. Duration describes how long a food's aftertaste sensation lasts. Foods that have lingering aftertastes typically have long sensation durations. Because taste perception is unique to every person, descriptors for taste quality and intensity have been standardized, particularly for use in scientific studies. For taste quality, foods can be described by the commonly used terms "sweet", "sour", "salty", "bitter", "umami", or "no taste". Description of aftertaste perception relies heavily upon the use of these words to convey the taste that is being sensed after a food has been removed from the mouth. The description of taste intensity is also subject to variability among individuals. Variations of the Borg Category Ratio Scale or other similar metrics are often used to assess the intensities of foods. The scales typically have categories that range from either zero or one through ten (or sometimes beyond ten) that describe the taste intensity of a food. A score of zero or one would correspond to unnoticeable or weak taste intensities, while a higher score would correspond to moderate or strong taste intensities. It is the prolonged moderate or stro
https://en.wikipedia.org/wiki/NScripter
, officially abbreviated as Nscr, also known under its production title Scripter4, is a game engine developed by Naoki Takahashi between 1999 and 2018, built around its own scripting language, which facilitates the creation of both visual novels and sound novels. The SDK is only available for Windows. From version 2.82, NScripter supports both two-byte Japanese characters and any single-byte character; before that, it supported only Japanese characters. This engine was very popular in Japan because of its simplicity and because it was free for amateur game makers. Additionally, there are forks available to extend NScripter's capabilities to display characters from other languages, run a game on other platforms, etc. NScripter NScripter's development ranged from to ; it was first called by its production title Scripter4 because it was the successor to Scripter3, Naoki Takahashi's previous engine. was the date of the release of the final version of NScripter. Characteristics Scripts are executed by the engine through an interpreter. The syntax is very simple, similar to the BASIC language. The functions needed to create visual novels and sound novels, such as displaying text, sprites and CG, playing music and handling choices, are built into the engine as a basic API. As a result, game creation is simplified by the ability to write a script that calls these functions directly. To meet specific needs, a method called 'system customisation' can be used to modify the behaviour of the engine itself in order to add features such as a save system, complex effects not provided in the basic API, or video management. To do this, external DLLs can be used. These functions can be used to create simulation games, etc. On the other hand, before version 2.92, object-oriented elements were not incorporated into the software and NScripter did not handle parallelism at all. The statement was used to try to do structured programming within NSc
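To illustrate the interpreter model described above, here is a toy dispatch loop in Python. The command names and script structure are hypothetical stand-ins, not actual NScripter syntax; the point is only how an engine of this kind walks a script and routes each command to a built-in function:

```python
# A minimal sketch of a script interpreter for a visual-novel-style engine.
script = [
    ("bg", "school.png"),
    ("text", "It was a quiet morning."),
    ("music", "theme.ogg"),
    ("text", "Then the phone rang."),
]

def run(script):
    # Each command maps directly to a built-in API function, as in the
    # engine's basic API for text, sprites, and music described above.
    handlers = {
        "bg":    lambda arg: print(f"[show background {arg}]"),
        "text":  lambda arg: print(arg),
        "music": lambda arg: print(f"[play {arg}]"),
    }
    for command, arg in script:   # the engine executes the script in order
        handlers[command](arg)

run(script)
```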
https://en.wikipedia.org/wiki/Signal%20lamp
A signal lamp (sometimes called an Aldis lamp or a Morse lamp) is a visual signaling device for optical communication by flashes of a lamp, typically using Morse code. The idea of flashing dots and dashes from a lantern was first put into practice by Captain Philip Howard Colomb, of the Royal Navy, in 1867. Colomb's design used limelight for illumination, and his original code was not the same as Morse code. During World War I, German signalers used optical Morse transmitters called Blinkgerät, with a range of up to 8 km (5 miles) at night, using red filters for undetected communications. Modern signal lamps produce a focused pulse of light, either by opening and closing shutters mounted in front of the lamp, or by tilting a concave mirror. They continue to be used to the present day on naval vessels and for aviation light signals in air traffic control towers, as a backup device in case of a complete failure of an aircraft's radio. History Signal lamps were pioneered by the Royal Navy in the late 19th century. They were the second generation of signalling in the Royal Navy, after the flag signals most famously used to spread Nelson's rallying-cry, "England expects that every man will do his duty", before the Battle of Trafalgar. The idea of flashing dots and dashes from a lantern was first put into practice by Captain, later Vice Admiral, Philip Howard Colomb, of the Royal Navy, in 1867. Colomb's design used limelight for illumination. His original code was not identical to Morse code, but the latter was subsequently adopted. Another signalling lamp was the Begbie lamp, a kerosene lamp with a lens to focus the light over a long distance. During the trench warfare of World War I, when wire communications were often cut, German signalers used three types of optical Morse transmitters, called Blinkgerät, the intermediate type for distances of up to 4 km (2.5 miles) in daylight and of up to 8 km (5 miles) at night, using red filters for undetected communications. In 1944 Arthur Cyri
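Since a signal lamp keys Morse code as timed flashes, the encoding can be sketched as follows. The timing ratios follow the usual Morse conventions; the code table is deliberately partial and the unit length is an arbitrary illustrative choice:

```python
# Turn a message into the on/off flash pattern an operator would key out.
MORSE = {"A": ".-", "E": ".", "H": "....", "L": ".-..", "O": "---", "S": "..."}

def flashes(message, unit=0.25):
    """Yield (on, off) durations in seconds: dot = 1 unit, dash = 3 units,
    gap inside a letter = 1 unit, gap between letters = 3 units."""
    for letter in message.upper():
        code = MORSE[letter]
        for i, symbol in enumerate(code):
            on = unit if symbol == "." else 3 * unit
            off = unit if i < len(code) - 1 else 3 * unit
            yield (on, off)

for on, off in flashes("SOS"):
    print(f"lamp on {on:.2f}s, off {off:.2f}s")
```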
https://en.wikipedia.org/wiki/Atresia
Atresia is a condition in which an orifice or passage in the body is (usually abnormally) closed or absent. Examples of atresia include: Aural atresia (anotia), a congenital deformity where the ear canal is underdeveloped. Biliary atresia, a condition in newborns in which the common bile duct between the liver and the small intestine is blocked or absent. Congenital bronchial atresia, a rare congenital abnormality. Choanal atresia, blockage of the back of the nasal passage, usually by abnormal bony or soft tissue. Esophageal atresia, which affects the alimentary tract and causes the esophagus to end before connecting normally to the stomach. Follicular atresia, degeneration and resorption of the ovarian follicles. Imperforate anus, malformation of the opening between the rectum and anus. Intestinal atresia, malformation of the intestine, usually resulting from a vascular accident in utero. Microtia: see above, Aural atresia (anotia). Ovarian follicle atresia, the degeneration and subsequent resorption of one or more immature ovarian follicles. Potter sequence, congenital underdevelopment of the kidneys resulting in little or no kidney function, usually related to a single kidney. Pulmonary atresia, malformation of the pulmonary valve in which the valve orifice fails to develop. Renal agenesis, being born with only one kidney. Tricuspid atresia, a form of congenital heart disease whereby there is a complete absence of the tricuspid valve, and consequently an absence of the right atrioventricular connection. Vaginal atresia, a congenital occlusion of the vagina or subsequent adhesion of the walls of the vagina, resulting in its occlusion. References Medical terminology Anatomy
https://en.wikipedia.org/wiki/Global%20motion%20compensation
Global motion compensation (GMC) is a motion compensation technique used in video compression to reduce the bitrate required to encode video. It is most commonly used in MPEG-4 ASP, such as with the DivX and Xvid codecs. Operation Global motion compensation describes the motion in a scene based on a single affine transform. The reference frame is panned, rotated and zoomed in accordance with GMC warp points to create a prediction of how the following frame will look. Since this operation works on individual pixels (rather than blocks), it is capable of creating predictions that are not possible using block-based approaches. Each macroblock in such a frame can be compensated using global motion (no further motion information is then signalled) or, alternatively, local motion (as if GMC were off). This choice, while costing an additional bit per macroblock, can improve prediction quality and therefore reduce the residual. Because the transforms used in global motion compensation are only added to the encoding stream when used, they do not have a constant bitrate overhead. A predicted frame which uses GMC is called an S-frame (sprite frame), while a predicted frame encoded without GMC is called either a P-frame, if it was predicted purely by previous (past) frames, or a B-frame, if it was predicted jointly with past and future frames (an unpredicted frame encoded as a whole image is referred to as an I-frame). Implementations DivX offers 1-warp-point GMC encoding: this enables easier hardware support in DivX certified and non-certified devices. But as 1-warp-point GMC limits the global transform to panning operations only (since panning can be described using blocks), this implementation rarely improves video quality. Xvid offers 3-warp-point GMC encoding: as a result, it currently has no hardware support. Criticism GMC failed to meet expectations of dramatic improvements in motion compensation, and as a result it was omitted from the H.264/MPEG-4 AVC specification.
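A rough sketch of the prediction step, assuming the affine matrix has already been derived from the warp points (real codecs use signalled warp points and sub-pixel interpolation, both omitted here for clarity):

```python
import numpy as np

# Predict a frame by applying one global affine transform to the reference
# frame: the 2x3 matrix maps each predicted pixel back to the reference
# coordinates it is copied from (nearest-neighbour sampling only).
def gmc_predict(reference, affine):
    h, w = reference.shape
    prediction = np.zeros_like(reference)
    for y in range(h):
        for x in range(w):
            sx = affine[0, 0] * x + affine[0, 1] * y + affine[0, 2]
            sy = affine[1, 0] * x + affine[1, 1] * y + affine[1, 2]
            sxi, syi = int(round(sx)), int(round(sy))
            if 0 <= sxi < w and 0 <= syi < h:
                prediction[y, x] = reference[syi, sxi]
    return prediction

ref = np.arange(64, dtype=np.uint8).reshape(8, 8)
pan = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])   # pure pan: shift content by one pixel
print(gmc_predict(ref, pan))
```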
https://en.wikipedia.org/wiki/Jazelle
Jazelle DBX (direct bytecode execution) is an extension that allows some ARM processors to execute Java bytecode in hardware as a third execution state alongside the existing ARM and Thumb modes. Jazelle functionality was specified in the ARMv5TEJ architecture and the first processor with Jazelle technology was the ARM926EJ-S. Jazelle is denoted by a "J" appended to the CPU name, except for post-v5 cores where it is required (albeit only in trivial form) for architecture conformance. Jazelle RCT (Runtime Compilation Target) is a different technology based on ThumbEE mode; it supports ahead-of-time (AOT) and just-in-time (JIT) compilation with Java and other execution environments. The most prominent use of Jazelle DBX is by manufacturers of mobile phones to increase the execution speed of Java ME games and applications. A Jazelle-aware Java virtual machine (JVM) will attempt to run Java bytecode in hardware, while falling back to software for more complicated, or less-used, bytecode operations. ARM claims that approximately 95% of bytecode in typical program usage ends up being directly processed in the hardware. The published specifications are very incomplete, being only sufficient for writing operating system code that can support a JVM that uses Jazelle. The declared intent is that only the JVM software needs to (or is allowed to) depend on the hardware interface details. This tight binding allows the hardware and JVM to evolve together without affecting other software. In effect, this gives ARM Holdings considerable control over which JVMs are able to exploit Jazelle. It also prevents open source JVMs from using Jazelle. These issues do not apply to the ARMv7 ThumbEE environment, the nominal successor to Jazelle DBX. Implementation The Jazelle extension uses low-level binary translation, implemented as an extra stage between the fetch and decode stages in the processor instruction pipeline. Recognised bytecodes are converted into a string of one or more native ARM instructions.
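The hardware/software split can be modeled conceptually as a dispatch over opcodes. The following Python sketch is purely illustrative: the opcode names are real JVM bytecodes, but the choice of which ones a core would accelerate is invented here:

```python
# Conceptual model of Jazelle-style execution: frequent simple bytecodes are
# handled "in hardware", anything else traps to the software JVM.
HARDWARE = {
    "iconst_1": lambda stack: stack.append(1),
    "iadd":     lambda stack: stack.append(stack.pop() + stack.pop()),
    "dup":      lambda stack: stack.append(stack[-1]),
}

def software_fallback(op, stack):
    print(f"trap to software handler for {op}")

def execute(bytecodes):
    stack = []
    for op in bytecodes:
        if op in HARDWARE:
            HARDWARE[op](stack)           # fast path: direct execution
        else:
            software_fallback(op, stack)  # slow path: call out to the JVM
    return stack

print(execute(["iconst_1", "dup", "iadd", "invokevirtual"]))
```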
https://en.wikipedia.org/wiki/ScienceDirect
ScienceDirect is a website that provides access to a large bibliographic database of scientific and medical publications of the Dutch publisher Elsevier. It hosts over 18 million pieces of content from more than 4,000 academic journals and 30,000 e-books from this publisher. Access to the full text requires a subscription, while the bibliographic metadata is free to read. ScienceDirect is operated by Elsevier. It was launched in March 1997. Usage The journals are grouped into four main sections: Physical Sciences and Engineering Life Sciences Health Sciences Social Sciences and Humanities. Article abstracts are freely available, and access to their full texts (in PDF and, for newer publications, also HTML) generally requires a subscription or pay-per-view purchase unless the content is freely available in open access. Subscriptions to the overall offering hosted on ScienceDirect, rather than to specific titles it carries, are usually acquired through a so-called "big deal". The other members of the "big five" group of large academic publishers have similar offers. ScienceDirect also competes for audience with other large aggregators and hosts of scholarly communication content, such as the academic social network ResearchGate and the open access repository arXiv, as well as with fully open access publishing venues and megajournals like PLOS. ScienceDirect also carries Cell. See also List of academic databases and search engines Scopus References Further reading External links Internet properties established in 1997 Academic journal online publishing platforms Commercial digital libraries Digital libraries Elsevier Full-text scholarly online databases
https://en.wikipedia.org/wiki/Peer%20to%20Peer%20Remote%20Copy
Peer to Peer Remote Copy or PPRC is a protocol to replicate a storage volume to another control unit in a remote site. Synchronous PPRC causes each write to the primary volume to be performed to the secondary as well, and the I/O is only considered complete when the updates to both the primary and the secondary have completed. Asynchronous PPRC will flag tracks on the primary to be duplicated to the secondary when time permits. PPRC is also the name IBM uses for its implementation of the protocol for its storage hardware. Other vendors have their own implementations: for example, the HDS implementation is called TrueCopy, and EMC provides a similar capability on its VPLEX platforms called "MirrorView". PPRC can be used to provide very fast data recovery after a failure of the primary site. In IBM zSeries computers with two direct access storage device (DASD) control units connected through dedicated connections, PPRC is the protocol used to mirror a DASD volume in one control unit (the primary) to a DASD volume in the other control unit (the secondary). In the IBM SAN Volume Controller, PPRC is used to mirror a virtualized storage volume to a remote (or the same) cluster. PPRC is also referred to as Metro Mirror when contrasted with Global Mirror. See also Storage replication Hitachi TrueCopy Extended Remote Copy Global Mirror Copy Services Norton Ghost EMC SRDF (Symmetrix Remote Data Facility) Microsoft Windows Server 2016 Storage Replica IBM mainframe technology Storage software
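The contract of the two modes can be sketched as follows; this is an illustrative model of the protocol's behavior, not IBM's implementation:

```python
# Toy model of synchronous vs. asynchronous remote copy.
class Volume(dict):
    """A volume modeled as a mapping from track number to data."""

pending = set()  # tracks flagged for later copy in asynchronous mode

def write_sync(primary, secondary, track, data):
    primary[track] = data
    secondary[track] = data   # I/O completes only after both updates
    return "complete"

def write_async(primary, track, data):
    primary[track] = data
    pending.add(track)        # duplicated to the secondary "when time permits"
    return "complete"

def drain(primary, secondary):
    for track in sorted(pending):
        secondary[track] = primary[track]
    pending.clear()

primary, secondary = Volume(), Volume()
write_sync(primary, secondary, track=1, data="A")
write_async(primary, track=2, data="B")
print(secondary)              # {1: 'A'} - track 2 not yet copied
drain(primary, secondary)
print(secondary)              # {1: 'A', 2: 'B'}
```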
https://en.wikipedia.org/wiki/Local%20oscillator
In electronics, a local oscillator (LO) is an electronic oscillator used with a mixer to change the frequency of a signal. This frequency conversion process, also called heterodyning, produces the sum and difference frequencies from the frequency of the local oscillator and frequency of the input signal. Processing a signal at a fixed frequency gives a radio receiver improved performance. In many receivers, the function of local oscillator and mixer is combined in one stage called a "converter" - this reduces the space, cost, and power consumption by combining both functions into one active device. Applications Local oscillators are used in the superheterodyne receiver, the most common type of radio receiver circuit. They are also used in many other communications circuits such as modems, cable television set top boxes, frequency division multiplexing systems used in telephone trunklines, microwave relay systems, telemetry systems, atomic clocks, radio telescopes, and military electronic countermeasure (antijamming) systems. In satellite television reception, the microwave frequencies used from the satellite down to the receiving antenna are converted to lower frequencies by a local oscillator and mixer mounted at the antenna. This allows the received signals to be sent over a length of cable that would otherwise have unacceptable signal loss at the original reception frequency. In this application, the local oscillator is of a fixed frequency and the down-converted signal frequency is variable. Performance requirements Application of local oscillators in a receiver design requires care to ensure no spurious signals are radiated. Such signals can cause interference in the operation of other receivers. The performance of a signal processing system depends on the characteristics of the local oscillator. The local oscillator must produce a stable frequency with low harmonics. Stability must take into account temperature, voltage, and mechanical drift as factors
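Mixing can be demonstrated numerically: multiplying an input tone by the LO tone yields components at the sum and difference of the two frequencies, which is how a receiver converts a signal to a fixed intermediate frequency. The sketch below uses arbitrary illustration values (an "RF" tone at 100 and an LO at 110.7, so the difference lands at 10.7, echoing the common FM broadcast IF):

```python
import numpy as np

# Heterodyning demo: sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)], so the product
# of two tones contains the difference and sum frequencies.
fs = 4096.0                   # sample rate, samples per second
t = np.arange(4096) / fs      # one second of samples
rf, lo = 100.0, 110.7         # illustrative input and LO frequencies, Hz
mixed = np.sin(2 * np.pi * rf * t) * np.sin(2 * np.pi * lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)
print(freqs[spectrum > spectrum.max() / 2])  # bins near 10.7 Hz and 210.7 Hz
```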
https://en.wikipedia.org/wiki/Reversible%20inhibition%20of%20sperm%20under%20guidance
Reversible inhibition of sperm under guidance (RISUG), formerly referred to as the synthetic polymer styrene maleic anhydride (SMA), is the development name of a male contraceptive injection developed at IIT Kharagpur in India by the team of Dr. Sujoy K. Guha. RISUG has been patented in India, China, Bangladesh, and the United States. Phase III clinical trials were underway in India, and were slowed by insufficient volunteers. Beginning in 2011, a contraceptive product based on RISUG, Vasalgel, was under development in the US by Parsemus Foundation. In 2023, the patent for Vasalgel was acquired by NEXT Life Sciences, Inc., which plans to bring the technology to market under the name Plan A for Men. Development Sujoy K. Guha developed RISUG after years of developing other inventions. He originally wanted to create an artificial heart that could pump blood using a strong electrical pulse. Using the 13-chamber model of a cockroach heart, he designed a softer pumping mechanism that would theoretically be safe to use in humans. As India's population grew throughout the 1970s, Guha modified his heart pump design to create a water pump that could work off of differences in ionic charges between salt water and fresh water in water treatment facilities. This filtration system did not require electricity and could potentially help large groups of people have access to clean water. India, however, decided that the population problem would be better served by developing more effective contraception. So Guha again modified his design to work safely inside the body, specifically inside the male genitalia. The non-toxic polymer of RISUG also uses differences in the charges of the semen to rupture the sperm as it flows through the vas deferens. Intellectual property rights to RISUG in the United States were acquired between 2010 and 2012 by the Parsemus Foundation, a not-for-profit organization, which has branded it as "Vasalgel". Vasalgel, which has a slightly different formu
https://en.wikipedia.org/wiki/Male%20contraceptive
Male contraceptives, also known as male birth control, are methods of preventing pregnancy by leveraging male physiology. Globally, the most common forms of male contraceptives include condoms, vasectomy, and withdrawal. Men are largely limited to these forms of contraception, and combined, male contraceptives make up less than one-third of total contraceptive use. Novel forms of male contraception are in clinical and nonclinical stages of research and development; however, none has reached regulatory approval for widespread use. Studies of men indicate that around half of survey populations are interested in using a novel contraceptive method, and they display interest in a wide variety of contraceptive methods, including hormonal and non-hormonal pills, gels, and implants. Currently available methods Vasectomy Vasectomy is a surgical procedure for permanent male sterilization, usually performed in a physician's office as an outpatient procedure. During the procedure, the vasa deferentia of a patient are severed, and then tied or sealed to prevent the transport of sperm through the reproductive tract, thereby preventing pregnancy. Vasectomy is an effective procedure, with less than 0.15% of partners becoming pregnant within the first 12 months after the procedure. Vasectomy is also a widely reliable and safe method of contraception, and complications are both rare and minor. However, due to the presence of sperm retained beyond the blocked vasa deferentia, vasectomies are not initially effective and the remaining sperm must be cleared through ejaculation and/or time. Vasectomies can be reversed, though rates of successful reversal are variable, and the procedure is often costly. Condoms A condom is a sheathed barrier device that is rolled onto an erect penis before intercourse and retains ejaculated semen, thereby preventing pregnancy. Condoms are marginally effective when compared to vasectomy or modern methods of contraception for women, and have a typical-u
https://en.wikipedia.org/wiki/American%20Council%20of%20Learned%20Societies
The American Council of Learned Societies (ACLS) is a private, nonprofit federation of 75 scholarly organizations in the humanities and related social sciences, founded in 1919. It is best known for its fellowship competitions, which provide a range of opportunities for scholars in the humanities and related social sciences at all career stages, from graduate students to distinguished professors to independent scholars, working across a range of disciplines and methodologies in the U.S. and abroad. Background The federation was created in 1919 to represent the United States in the Union Académique Internationale (International Union of Academies). The founders of ACLS, representatives of 13 learned societies, believed that a federation of scholarly organizations (dedicated to excellence in research, and most with open membership) was the best combination of U.S. democracy and intellectual aspirations. According to the council's constitution, its mission was advancing humanistic studies and social sciences and maintaining and strengthening national societies dedicated to those studies. Advancing scholarship in the humanities Since its founding, ACLS has provided the humanities and related social sciences with leadership, opportunities for innovation, and national and international representation. The council's many activities have at their core the practice of scholarly self-governance. Central to ACLS throughout its history have been its programs of fellowships and grants aiding research. ACLS made its first grants, totaling $4,500, in 1926; in 2012, ACLS awarded over $15 million in fellowship stipends and other awards to more than 320 scholars in the United States and abroad. All ACLS awards are made through rigorous peer review by specially appointed committees of scholars from throughout the United States and, in some programs, abroad. During the late 1950s, the council encouraged Hans Wehr in his writing of the first English edition of his Dictionary of Modern Written Arabic.
https://en.wikipedia.org/wiki/Cephalic%20vein
In human anatomy, the cephalic vein is a superficial vein in the arm. It originates from the radial end of the dorsal venous network of the hand, and ascends along the radial (lateral) side of the arm before emptying into the axillary vein. At the elbow, it communicates with the basilic vein via the median cubital vein. Anatomy The cephalic vein is situated within the superficial fascia along the anterolateral surface of the biceps. Origin The cephalic vein forms over the anatomical snuffbox at the radial end of the dorsal venous network of the hand. Course and relations From its origin, it ascends up the lateral aspect of the radius. Near the shoulder, the cephalic vein passes between the deltoid and pectoralis major muscles (deltopectoral groove) through the clavipectoral triangle, where it empties into the axillary vein. Anastomoses It communicates with the basilic vein via the median cubital vein at the elbow. Clinical significance The cephalic vein is often visible through the skin, and its location in the deltopectoral groove is fairly consistent, making this site a good candidate for venous access. Permanent pacemaker leads are often placed in the cephalic vein in the deltopectoral groove. The vein may be used for intravenous access, as a large-bore cannula may be easily placed. However, the cannulation of a vein as close to the radial nerve as the cephalic vein can sometimes lead to nerve damage. History Ordinarily, the term cephalic refers to anatomy of the head. When the Persian Muslim physician Ibn Sīnā's Canon was translated into medieval Latin, cephalic was mistakenly chosen to render the Arabic term al-kífal, meaning "outer". Additional images See also Basilic vein Median cubital vein References External links Anatomy Veins of the upper limb Human surface anatomy Cardiovascular system Circulatory system
https://en.wikipedia.org/wiki/Industrial%20Ethernet
Industrial Ethernet (IE) is the use of Ethernet in an industrial environment with protocols that provide determinism and real-time control. Protocols for industrial Ethernet include EtherCAT, EtherNet/IP, PROFINET, POWERLINK, SERCOS III, CC-Link IE, and Modbus TCP. Many industrial Ethernet protocols use a modified media access control (MAC) layer to provide low latency and determinism. Some microprocessors provide industrial Ethernet support. Industrial Ethernet can also refer to the use of standard Ethernet protocols with rugged connectors and extended temperature switches in an industrial environment, for automation or process control. Components used in plant process areas must be designed to work in harsh environments of temperature extremes, humidity, and vibration that exceed the ranges for information technology equipment intended for installation in controlled environments. The use of fiber-optic Ethernet variants reduces the problems of electrical noise and provides electrical isolation. Some industrial networks emphasized deterministic delivery of transmitted data, whereas Ethernet used collision detection, which made the transport time for individual data packets difficult to estimate with increasing network traffic. Typically, industrial uses of Ethernet employ full-duplex standards and other methods so that collisions do not unacceptably influence transmission times. Application environment Industrial use requires consideration of the environment in which the equipment must operate. Factory equipment must tolerate a wider range of temperature, vibration, physical contamination and electrical noise than equipment installed in dedicated information-technology wiring closets. Since critical process control may rely on an Ethernet link, the economic cost of interruptions may be high and high availability is therefore an essential criterion. Industrial Ethernet networks must interoperate with both current and legacy systems, and must provide predictable performance.
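As a concrete example of one listed protocol, a Modbus TCP read request can be assembled from its documented frame layout (MBAP header followed by the PDU). The transaction, unit, and register numbers below are arbitrary examples:

```python
import struct

# Build a Modbus TCP "read holding registers" request (function code 0x03).
def read_holding_registers(transaction_id, unit_id, start_addr, count):
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function + data
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers(transaction_id=1, unit_id=17,
                               start_addr=0, count=2)
print(frame.hex())  # 000100000006110300000002
```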
https://en.wikipedia.org/wiki/Er%3AYAG%20laser
An Er:YAG laser (erbium-doped yttrium aluminium garnet laser, erbium YAG laser) is a solid-state laser whose active laser medium is erbium-doped yttrium aluminium garnet (Er:Y3Al5O12). Er:YAG lasers typically emit light with a wavelength of 2940 nm, which is infrared light. Applications The output of an Er:YAG laser is strongly absorbed by water. As a result, Er:YAG lasers are widely used for medical procedures in which deep penetration of tissues is not desired. Erbium-YAG lasers have been used for laser resurfacing of human skin. Example uses include treating acne scarring, deep rhytides, and melasma. In addition to being absorbed by water, the output of Er:YAG lasers is also absorbed by hydroxyapatite, which makes it a good laser for cutting bone as well as soft tissue. Bone surgery applications have been found in oral surgery, dentistry, implant dentistry, and otolaryngology. Er:YAG lasers are safer for the removal of warts than carbon dioxide lasers, because human papillomavirus (HPV) DNA is not found in the laser plume. Er:YAG lasers can be used in laser-aided cataract surgery, but because their output is so strongly absorbed by water, Nd:YAG lasers are generally preferred. Erbium YAG dental lasers are effective for removing tooth decay atraumatically, often without the need for local anesthetic to numb the tooth. Eliminating the vibration of the dental drill removes the risk of causing microfractures in the tooth. When used initially at low power settings, the laser energy has a sedative effect on the nerve, resulting in the ability to subsequently increase the power without creating the sensation of pain in the tooth. References Further reading External links DOE about Er-YAG lasers 1994 Solid-state lasers Dental lasers Medical equipment Laser medicine Articles containing video clips
https://en.wikipedia.org/wiki/Southeastern%20Anatolia%20Project
The Southeastern Anatolia Project (, GAP) is a multi-sector integrated regional development project based on the concept of sustainable development for the 9 million people (2005) living in the Southeastern Anatolia region of Turkey. According to a governmental source, the aim of the GAP is to eliminate regional development disparities by raising incomes and living standards, and to contribute to the national development targets of social stability and economic growth by enhancing the productive and employment-generating capacity of the rural sector. The total cost of the project is over 100 billion Turkish lira (TL) (2017 adjusted price), of which 30.6 billion TL had been invested by the end of 2010; the realization rate in corrected (real) terms stood at 72.6% at the end of 2010. The project area covers nine provinces (Adıyaman, Batman, Diyarbakır, Gaziantep, Kilis, Siirt, Şanlıurfa, Mardin, and Şırnak) which are located in the basins of the Euphrates and Tigris and in Upper Mesopotamia. Current activities under GAP include sectors like agriculture and irrigation, hydroelectric power production, urban and rural infrastructure, forestry, education and health. Water resources development envisages the construction of 22 dams and 19 power plants (nine plants, corresponding to 74% of the total projected power output, had been completed by 2010) and irrigation schemes on an area extending over 17,000 square kilometres. Seven airports have been built and are currently active. The GAP cargo airport in Şırnak, which is also the biggest in Turkey, has been completed. History The initial idea and decision to utilize the waters of the Euphrates and Tigris rivers came from Atatürk, the founder of the Republic. During the one-party era, the need for electrical energy was a priority issue. The Electricity Studies Administration was founded in 1936 to investigate how rivers in the country could be utilized for energy production. The Administration began its detailed studies
https://en.wikipedia.org/wiki/Virtual%20IP%20address
A virtual IP address (VIP or VIPA) is an IP address that does not correspond to a physical network interface. Uses for VIPs include network address translation (especially one-to-many NAT), fault-tolerance, and mobility. Usage For one-to-many NAT, a VIP address is advertised from the NAT device (often a router), and incoming data packets destined to that VIP address are routed to different actual IP addresses (with address translation). These VIP addresses have several variations and implementation scenarios, including Common Address Redundancy Protocol (CARP) and Proxy ARP. In addition, if there are multiple actual IP addresses, load balancing can be performed as part of NAT. VIP addresses are also used for connection redundancy by providing alternative fail-over options for one machine. For this to work, the host has to run an interior gateway protocol like Open Shortest Path First (OSPF), and appear as a router to the rest of the network. It advertises virtual links connected via itself to all of its actual network interfaces. If one network interface fails, normal OSPF topology reconvergence will cause traffic to be sent via another interface. A VIP address can be used to provide nearly unlimited mobility. For example, if an application has an IP address on a physical subnet, that application can be moved only to a host on that same subnet. A VIP address, by contrast, can be advertised on its own subnet, so the application using it can be moved anywhere on the reachable network without changing addresses. See also Anycast, single IP bound simultaneously to many, potentially geographically disparate, NICs IP network multipathing (IPMP), Solaris virtual IP implementation for fault-tolerance and load balancing VLAN Notes References Cluster computing IP addresses
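The one-to-many NAT use can be sketched as a destination rewrite plus backend rotation; the addresses below are reserved documentation and private ranges, used purely as examples:

```python
from itertools import cycle

# Toy one-to-many NAT: packets addressed to the VIP are handed to real
# backend addresses in round-robin fashion (a simple load-balancing policy).
VIP = "203.0.113.10"
backends = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])

def translate(dst_ip):
    """Rewrite the destination of a packet aimed at the VIP."""
    return next(backends) if dst_ip == VIP else dst_ip

for _ in range(4):
    print(translate(VIP))  # 10.0.0.11, 10.0.0.12, 10.0.0.13, 10.0.0.11
```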
https://en.wikipedia.org/wiki/Bankruptcy%20problem
A bankruptcy problem, also called a claims problem, is a problem of distributing a homogeneous divisible good (such as money) among people with different claims. The focus is on the case where the amount is insufficient to satisfy all the claims. The canonical application is a bankrupt firm that is to be liquidated. The firm owes different amounts of money to different creditors, but the total worth of the company's assets is smaller than its total debt. The problem is how to divide the scarce existing money among the creditors. Another application would be the division of an estate amongst several heirs, particularly when the estate cannot meet all the deceased's commitments. A third application is tax assessment. One can consider the claimants as taxpayers, the claims as the incomes, and the endowment as the total after-tax income. Determining the allocation of total after-tax income is equivalent to determining the allocation of tax payments. Definitions The amount available to divide is denoted by E (= Estate or Endowment). There are n claimants. Each claimant i has a claim denoted by ci. It is assumed that c1 + ... + cn ≥ E, that is, the total claims are (weakly) larger than the estate. A division rule is a function that maps a problem instance (E; c1, ..., cn) to a vector (x1, ..., xn) such that x1 + ... + xn = E and 0 ≤ xi ≤ ci for all i. That is: each claimant receives at most its claim, and the sum of allocations is exactly the estate E. Generalizations There are generalized variants in which the total claims might be smaller than the estate. In these generalized variants, c1 + ... + cn ≥ E is not assumed and xi ≤ ci is not required. Another generalization, inspired by realistic bankruptcy problems, is to add an exogenous priority ordering among the claimants, that may be different even for claimants with identical claims. This problem is called a claims problem with priorities. Another variant is called a claims problem with weights. Rules There are various rules for solving bankruptcy problems in practice. The proportional rule divides the estate proportionally to the claims, giving each claimant i the amount xi = E·ci/(c1 + ... + cn).
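The proportional rule, and for contrast the constrained equal awards rule, are short to implement. A minimal sketch using the definitions above (the bisection tolerance is an arbitrary choice):

```python
# Two standard division rules for claims problems.
def proportional(estate, claims):
    """Each claimant receives a share of the estate proportional to its claim."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def constrained_equal_awards(estate, claims, tol=1e-9):
    """Give everyone min(claim, r), with r chosen by bisection so that the
    awards sum to the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        r = (lo + hi) / 2
        if sum(min(c, r) for c in claims) < estate:
            lo = r
        else:
            hi = r
    return [min(c, hi) for c in claims]

print(proportional(200, [100, 200, 300]))              # [33.3.., 66.6.., 100.0]
print(constrained_equal_awards(200, [100, 200, 300]))  # about [66.67, 66.67, 66.67]
```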
https://en.wikipedia.org/wiki/Two-player%20game
A two-player game is a multiplayer game that is played by precisely two players. This is distinct from a solitaire game, which is played by only one player. Examples The following are some examples of two-player games. This list is not intended to be exhaustive. Board games: Chess Draughts Go Some wargames, such as Hammer of the Scots Card games: Cribbage Whist Rummy 66 Pinochle Magic: The Gathering, a collectible card game in which players duel Sports: Cue sports, a family of games that use cue sticks and billiard balls Many athletic games, such as tennis (singles) Video games: Pong A Way Out See also List of types of games Zero-sum game References Game terminology Game theory game classes
https://en.wikipedia.org/wiki/Mark%20Dean%20%28computer%20scientist%29
Mark E. Dean (born March 2, 1957) is an American inventor and computer engineer. He developed the ISA bus, and he led a design team for making a one-gigahertz computer processor chip. He holds three of the nine PC patents for being the co-creator of the IBM personal computer released in 1981. In 1995, Dean was named the first ever African-American IBM Fellow. Dean was elected as a member into the National Academy of Engineering in 2001 for innovative and pioneering contributions to personal computer development. In 2000, Dean discussed a handheld device that would be able to display media content, like a digital newspaper. In a blog post in August 2011, Dean stated that he uses a tablet computer instead of a PC. Early life Dean was born in Jefferson City, Tennessee. Dean displayed an affinity for technology and invention at a young age. His father, James, worked with electrical equipment for turbines and spillways. James would often bring Mark with him on work trips, introducing him to engineering. When Dean was young, he and his father constructed a tractor from scratch. By middle school, Dean had made up his mind to become a computer engineer. Dean attended Jefferson City High School in Tennessee, where he excelled in both academics and athletics. While in high school during the 1970s, Dean built his own personal computer. Recognition Dean is the first African-American to become an IBM Fellow, which is the highest level of technical excellence at the company. In 1997, he was inducted into the National Inventors Hall of Fame. He was elected to the National Academy of Engineering in 2001. In 1997, Dean was awarded the Black Engineer of the Year Presidents Award. From August 2018 to July 2019, Dean was the interim dean of UT's Tickle College of Engineering. As of April 26, 2019, April 25 is officially Mark Dean Day in Knox County, Tennessee. Career Dean graduated with a bachelor's degree in electrical engineering in 1979. Soon after, he got a job at
https://en.wikipedia.org/wiki/ISO%2019092-2
ISO 19092 Financial Services - Biometrics - Part 2: Message syntax and cryptographic requirements is an ISO standard that describes the techniques, protocols, cryptographic requirements, and syntax for using biometrics as an identification and verification mechanism in a wide variety of security applications in the financial industry. This standard provides support for policy-based matching decisions for remote authentication and allows biometrics to be used securely with the ISO 8583 retail transaction messaging standard. A secure review and audit event journal syntax is provided that allows many of the security controls specified in ISO 19092-1 to be implemented. Cryptography standards 19092-2
https://en.wikipedia.org/wiki/Information%20flow%20%28information%20theory%29
Information flow in an information theoretical context is the transfer of information from a variable x to a variable y in a given process. Not all flows may be desirable; for example, a system should not leak any confidential information (partially or not) to public observers, as this is a violation of privacy on an individual level, or might cause major loss on a corporate level. Introduction Securing the data manipulated by computing systems has been a challenge in the past years. Several methods to limit the information disclosure exist today, such as access control lists, firewalls, and cryptography. However, although these methods do impose limits on the information that is released by a system, they provide no guarantees about information propagation. For example, access control lists of file systems prevent unauthorized file access, but they do not control how the data is used afterwards. Similarly, cryptography provides a means to exchange information privately across a non-secure channel, but no guarantees about the confidentiality of the data are given once it is decrypted. In low-level information flow analysis, each variable is usually assigned a security level. The basic model comprises two distinct levels: low and high, meaning, respectively, publicly observable information and secret information. To ensure confidentiality, flowing information from high to low variables should not be allowed. On the other hand, to ensure integrity, flows to high variables should be restricted. More generally, the security levels can be viewed as a lattice with information flowing only upwards in the lattice. For example, considering two security levels L and H (low and high) with L ⊑ H, flows from L to L, from H to H, and from L to H would be allowed, while flows from H to L would not. Throughout this article, the following notation is used: the variable l (low) shall denote a publicly observable variable; the variable h (high) shall denote a secret variable; where l and h are the only two security levels considered.
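The two classic violations of the "no high-to-low flow" policy are explicit and implicit flows. A minimal illustration using the l/h notation above:

```python
# Explicit vs. implicit flows from a high (secret) variable to a low one.
h = 1   # high: secret variable
l = 0   # low: publicly observable variable

# Explicit flow: disallowed, since a low variable now directly holds high data.
l = h

# Implicit flow: no direct assignment of h to l, yet observing l reveals h,
# because the secret controlled which branch executed.
l = 0
if h == 1:
    l = 1
print(l)  # an observer of l learns the value of h in both cases
```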
https://en.wikipedia.org/wiki/Central%20binomial%20coefficient
In mathematics the nth central binomial coefficient is the particular binomial coefficient C(2n, n) = (2n)!/(n!)². They are called central since they show up exactly in the middle of the even-numbered rows in Pascal's triangle. The first few central binomial coefficients starting at n = 0 are: 1, 2, 6, 20, 70, 252, 924, 3432, 12870, 48620, ...; Combinatorial interpretations and other properties The central binomial coefficient C(2n, n) is the number of arrangements where there are an equal number of two types of objects. For example, when n = 2, the binomial coefficient C(4, 2) is equal to 6, and there are six arrangements of two copies of A and two copies of B: AABB, ABAB, ABBA, BAAB, BABA, BBAA. The same central binomial coefficient is also the number of words of length 2n made up of A and B where there are never more B than A at any point as one reads from left to right. For example, when n = 2, there are six words of length 4 in which each prefix has at least as many copies of A as of B: AAAA, AAAB, AABA, AABB, ABAA, ABAB. The number of factors of 2 in C(2n, n) is equal to the number of 1s in the binary representation of n. As a consequence, 1 is the only odd central binomial coefficient. Generating function The ordinary generating function for the central binomial coefficients is Σ C(2n, n) x^n = 1/√(1 − 4x). This can be proved using the binomial series and the relation C(2n, n) = (−4)^n C(−1/2, n), where C(−1/2, n) is a generalized binomial coefficient. The central binomial coefficients have exponential generating function Σ C(2n, n) x^n/n! = e^(2x) I0(2x), where I0 is a modified Bessel function of the first kind. The generating function of the squares of the central binomial coefficients can be written in terms of the complete elliptic integral of the first kind: Σ C(2n, n)² x^n = (2/π) K(4√x). Asymptotic growth Simple bounds that immediately follow from 4^n = (1 + 1)^(2n) = Σk C(2n, k) are 4^n/(2n + 1) ≤ C(2n, n) ≤ 4^n. The asymptotic behavior can be described more precisely: C(2n, n) ~ 4^n/√(πn). Related sequences The closely related Catalan numbers Cn are given by: Cn = C(2n, n)/(n + 1). A slight generalization of central binomial coefficients is to take them as Γ(2n + 1)/Γ(n + 1)², with appropriate real numbers n, where Γ is the gamma function and is the
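The definition and the asymptotic estimate are easy to check numerically; a short sketch using only the standard library:

```python
from math import comb, pi, sqrt

# Exact central binomial coefficients vs. the asymptotic estimate 4^n/sqrt(pi*n).
for n in range(1, 11):
    exact = comb(2 * n, n)
    approx = 4 ** n / sqrt(pi * n)
    print(n, exact, round(approx, 1))
# e.g. for n = 10: exact 184756, estimate about 187079
```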
https://en.wikipedia.org/wiki/Pascal%27s%20rule
In mathematics, Pascal's rule (or Pascal's formula) is a combinatorial identity about binomial coefficients. It states that for positive natural numbers n and k, C(n − 1, k) + C(n − 1, k − 1) = C(n, k), where C(n, k) is a binomial coefficient; one interpretation of which is the coefficient of the x^k term in the expansion of (1 + x)^n. There is no restriction on the relative sizes of n and k, since, if k > n, the value of the binomial coefficient C(n, k) is zero and the identity remains valid. Pascal's rule can also be viewed as a statement that the formula solves the linear two-dimensional difference equation over the natural numbers. Thus, Pascal's rule is also a statement about a formula for the numbers appearing in Pascal's triangle. Pascal's rule can also be generalized to apply to multinomial coefficients. Combinatorial proof Pascal's rule has an intuitive combinatorial meaning, that is clearly expressed in this counting proof. Proof. Recall that C(n, k) equals the number of subsets with k elements from a set with n elements. Suppose one particular element is uniquely labeled X in a set with n elements. To construct a subset of k elements containing X, include X and choose k − 1 elements from the remaining n − 1 elements in the set. There are C(n − 1, k − 1) such subsets. To construct a subset of k elements not containing X, choose k elements from the remaining n − 1 elements in the set. There are C(n − 1, k) such subsets. Every subset of k elements either contains X or not. The total number of subsets with k elements in a set of n elements is the sum of the number of subsets containing X and the number of subsets that do not contain X, C(n − 1, k − 1) + C(n − 1, k). This equals C(n, k); therefore, C(n, k) = C(n − 1, k − 1) + C(n − 1, k). Algebraic proof Alternatively, the algebraic derivation of the binomial case follows. Generalization Pascal's rule can be generalized to multinomial coefficients. For any integer p such that p ≥ 2, positive integers k1, k2, ..., kp, and n = k1 + k2 + ... + kp ≥ 1: C(n; k1, ..., kp) = C(n − 1; k1 − 1, k2, ..., kp) + C(n − 1; k1, k2 − 1, ..., kp) + ... + C(n − 1; k1, k2, ..., kp − 1), where C(n; k1, ..., kp) = n!/(k1! k2! ... kp!) is the coefficient of the x1^k1 x2^k2 ... xp^kp term in the expansion of (x1 + x2 + ... + xp)^n. The algebraic derivation for this general case is as follows. Let p be an integer such that p ≥ 2 and n = k1 + k2 + ... + kp ≥ 1. Then See also Pascal's triangle
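The rule gives a direct way to build each row of Pascal's triangle from the previous one; a small sketch verifying it against a library binomial:

```python
from math import comb

# Build Pascal's triangle row by row using C(n, k) = C(n-1, k-1) + C(n-1, k).
def pascal_row(prev):
    return [1] + [prev[k - 1] + prev[k] for k in range(1, len(prev))] + [1]

row = [1]
for n in range(1, 7):
    row = pascal_row(row)
print(row)                                  # [1, 6, 15, 20, 15, 6, 1]
assert all(row[k] == comb(6, k) for k in range(7))
```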
https://en.wikipedia.org/wiki/Lazy%20Jones
Lazy Jones is a platform game for the Commodore 64, ZX Spectrum, MSX and Tatung Einstein. It was written by David Whittaker and released by Terminal Software in 1984. The Spectrum version was ported by Simon Cobb. Lazy Jones is a collection of fifteen sub-games. The game takes place inside a hotel with three floors, connected by an elevator. The character is a lazy hotel employee who does not much care for his work, but prefers to sneak into the rooms to play video games instead. Gameplay The main screen in Lazy Jones is the hotel interior. There, the character can use the elevator to travel freely between the three floors, but he must watch out for enemies: the current hotel manager on the top floor, the ghost of the previous manager on the bottom floor, and a haunted cleaning cart on the middle floor. The enemies only walk around and do not pursue the character, but contact with them is fatal. Each floor has six rooms, three on each side of the elevator. Each room can be entered once. Inside most rooms is a video game, which the character immediately begins playing. As well as the video games, there is the hotel bar, a bed, a cleaning closet and a toilet. The bar works like a video game, but the other rooms are useless decorations (intentionally added, because Whittaker had run out of ideas for new games). When all rooms have been visited, the game starts over again, but increasingly faster each time. The sub-games are generally simplified versions of 1970s and 1980s video games, such as Space Invaders, Frogger, Snake, H.E.R.O., Breakout or Chuckie Egg. Their plots and gameplay are very simple, and in most of them the player simply must avoid incoming enemies long enough to score many points. In some, the player must shoot enemies to score points. Each sub-game has a time limit. In some sub-games it is possible to "die", thus ending the sub-game prematurely, while others only end after the time limit expires. But this also depends on the portrayed game vers
https://en.wikipedia.org/wiki/TILLING%20%28molecular%20biology%29
TILLING (Targeting Induced Local Lesions in Genomes) is a method in molecular biology that allows directed identification of mutations in a specific gene. TILLING was introduced in 2000, using the model plant Arabidopsis thaliana, and expanded on into other uses and methodologies by a small group of scientists including Luca Comai. TILLING has since been used as a reverse genetics method in other organisms such as zebrafish, maize, wheat, rice, soybean, tomato and lettuce. Overview The method combines a standard and efficient technique of mutagenesis using a chemical mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA screening-technique that identifies single base mutations (also called point mutations) in a target gene. The TILLING method relies on the formation of DNA heteroduplexes that are formed when multiple alleles are amplified by PCR and are then heated and slowly cooled. A “bubble” forms at the mismatch of the two DNA strands, which is then cleaved by a single stranded nuclease. The products are then separated by size on several different platforms (see below). Mismatches may be due to induced mutation, heterozygosity within an individual, or natural variation between individuals. EcoTILLING is a method that uses TILLING techniques to look for natural mutations in individuals, usually for population genetics analysis. DEcoTILLING is a modification of TILLING and EcoTILLING which uses an inexpensive method to identify fragments. Since the advent of NGS sequencing technologies, TILLING-by-sequencing has been developed based on Illumina sequencing of target genes amplified from multidimensionally pooled templates to identify possible single-nucleotide changes. Single strand cleavage enzymes There are several sources for single strand nucleases. The first widely used enzyme was mung bean nuclease, but this nuclease has been shown to have high non-specific activity, and only works at low pH, which can degrade PCR products and dye-la
https://en.wikipedia.org/wiki/Sentence%20%28mathematical%20logic%29
In mathematical logic, a sentence (or closed formula) of a predicate logic is a Boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that must be true or false. The restriction of having no free variables is needed to make sure that sentences can have concrete, fixed truth values: as the free variables of a (general) formula can range over several values, the truth value of such a formula may vary. Sentences without any logical connectives or quantifiers in them are known as atomic sentences, by analogy to atomic formulas. Sentences are then built up out of atomic formulas by applying connectives and quantifiers. A set of sentences is called a theory; thus, individual sentences may be called theorems. To properly evaluate the truth (or falsehood) of a sentence, one must make reference to an interpretation of the theory. For first-order theories, interpretations are commonly called structures. Given a structure or interpretation, a sentence will have a fixed truth value. A theory is satisfiable when it is possible to present an interpretation in which all of its sentences are true. The study of algorithms to automatically discover interpretations of theories that render all sentences as being true is known as the satisfiability modulo theories problem. Example For the interpretation of formulas, consider these structures: the positive real numbers, the real numbers, and the complex numbers. The following example in first-order logic is a sentence: ∀y ∃x (x × x = y). This sentence means that for every y, there is an x such that x × x = y. This sentence is true for positive real numbers, false for real numbers, and true for complex numbers. However, the formula ∃x (x × x = y) is not a sentence, because of the presence of the free variable y. For real numbers, this formula is true if we substitute (arbitrarily) y = 2, but it is false if y = −2. It is the presence of a free variable, rather than the inconstant truth value, that is important; for example,
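The dependence of truth on the structure can be made concrete by evaluating the same sentence over small finite structures, here the integers modulo p (a different interpretation from those in the article, chosen because it is finitely checkable):

```python
# Evaluate the sentence "for every y there exists x with x*x = y" over the
# structure of integers modulo p; a sentence gets a definite truth value
# once an interpretation (here, a choice of p) is fixed.
def every_element_is_a_square(p):
    domain = range(p)
    return all(any((x * x) % p == y for x in domain) for y in domain)

print(every_element_is_a_square(2))  # True: 0 and 1 are both squares mod 2
print(every_element_is_a_square(5))  # False: 2 and 3 are not squares mod 5
```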
https://en.wikipedia.org/wiki/Inverse%20dynamics
Inverse dynamics is an inverse problem. It commonly refers to either inverse rigid body dynamics or inverse structural dynamics. Inverse rigid-body dynamics is a method for computing forces and/or moments of force (torques) based on the kinematics (motion) of a body and the body's inertial properties (mass and moment of inertia). Typically it uses link-segment models to represent the mechanical behaviour of interconnected segments, such as the limbs of humans or animals or the joint extensions of robots, where, given the kinematics of the various parts, inverse dynamics derives the minimum forces and moments responsible for the individual movements. In practice, inverse dynamics computes these internal moments and forces from measurements of the motion of limbs and external forces such as ground reaction forces, under a special set of assumptions. Applications The fields of robotics and biomechanics constitute the major application areas for inverse dynamics. Within robotics, inverse dynamics algorithms are used to calculate the torques that a robot's motors must deliver to make the robot's end-point move in the way prescribed by its current task. The "inverse dynamics problem" for robotics was solved by Eduardo Bayo in 1987. This solution calculates how each of the numerous electric motors that control a robot arm must move to produce a particular action. Humans can perform very complicated and precise movements, such as controlling the tip of a fishing rod well enough to cast the bait accurately. Before the arm moves, the brain calculates the necessary movement of each muscle involved and tells the muscles what to do as the arm swings. In the case of a robot arm, the "muscles" are the electric motors which must turn by a given amount at a given moment. Each motor must be supplied with just the right amount of electric current, at just the right time. Researchers can predict the motion of a robot arm if they know how the motors will move. This is known as the forward dynamics problem.
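For the simplest possible link-segment model, a single rigid link pivoting about one joint, the inverse dynamics computation reduces to one line. The numbers below are illustrative only, not from any study:

```python
from math import cos, radians

# Inverse dynamics for a single rigid link of mass m and length L rotating
# about one end: given the measured angle (from horizontal) and angular
# acceleration, recover the joint torque that must have produced the motion.
def joint_torque(m, L, theta_deg, alpha):
    I = m * L**2 / 3                                  # inertia about the joint
    g = 9.81                                          # gravity, m/s^2
    gravity_torque = m * g * (L / 2) * cos(radians(theta_deg))
    return I * alpha + gravity_torque                 # required joint torque

print(joint_torque(m=2.0, L=0.4, theta_deg=30.0, alpha=5.0))  # ~3.93 N*m
```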
https://en.wikipedia.org/wiki/Phototube
A phototube or photoelectric cell is a type of gas-filled or vacuum tube that is sensitive to light. Such a tube is more correctly called a 'photoemissive cell' to distinguish it from photovoltaic or photoconductive cells. Phototubes were previously more widely used but are now replaced in many applications by solid state photodetectors. The photomultiplier tube is one of the most sensitive light detectors, and is still widely used in physics research.

Operating principles
Phototubes operate according to the photoelectric effect: incoming photons strike a photocathode, knocking electrons out of its surface, which are attracted to an anode. The resulting current thus depends on the frequency and intensity of the incoming photons. Unlike photomultiplier tubes, no amplification takes place, so the current through the device is typically of the order of a few microamperes.

The light wavelength range over which the device is sensitive depends on the material used for the photoemissive cathode. A caesium-antimony cathode gives a device that is very sensitive in the violet to ultra-violet region, with sensitivity falling off towards red light, to which it is effectively blind. Caesium on oxidised silver gives a cathode that is most sensitive to red and infra-red light, with sensitivity falling off towards blue, where it is low but not zero.

Vacuum devices have an anode current that is nearly constant with anode voltage for a given level of illumination. Gas-filled devices are more sensitive, but their frequency response to modulated illumination falls off at lower frequencies compared to the vacuum devices. The frequency response of vacuum devices is generally limited by the transit time of the electrons from cathode to anode.

Applications
One major application of the phototube was the reading of optical sound tracks for projected films. Phototubes were used in a variety of light-sensing applications until some were superseded by photoresistors and photodiodes.

References
Optical devices Sensors Vacuum tubes
https://en.wikipedia.org/wiki/Degree%20distribution
In the study of graphs and networks, the degree of a node in a network is the number of connections it has to other nodes, and the degree distribution is the probability distribution of these degrees over the whole network.

Definition
The degree of a node in a network (sometimes referred to incorrectly as the connectivity) is the number of connections or edges the node has to other nodes. If a network is directed, meaning that edges point in one direction from one node to another node, then nodes have two different degrees, the in-degree, which is the number of incoming edges, and the out-degree, which is the number of outgoing edges.

The degree distribution P(k) of a network is then defined to be the fraction of nodes in the network with degree k. Thus if there are n nodes in total in a network and nk of them have degree k, we have P(k) = nk/n. The same information is also sometimes presented in the form of a cumulative degree distribution, the fraction of nodes with degree smaller than k, or even the complementary cumulative degree distribution, the fraction of nodes with degree greater than or equal to k; if C denotes the cumulative degree distribution, the complementary cumulative degree distribution is its complement, 1 − C.

Observed degree distributions
The degree distribution is very important in studying both real networks, such as the Internet and social networks, and theoretical networks. The simplest network model, the random graph (the Erdős–Rényi model), in which each pair of the n nodes is independently connected with probability p (or left unconnected with probability 1 − p), has a binomial distribution of degrees k:

P(k) = (n − 1 choose k) p^k (1 − p)^(n−1−k)

(or Poisson in the limit of large n, if the average degree is held fixed). Most networks in the real world, however, have degree distributions very different from this. Most are highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. Some networks, notably the Internet, the World Wide Web, and some social networks were argued to have degree distributions that approximately follow a power law, P(k) ~ k^−γ.
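To make the definition P(k) = nk/n concrete, here is a short sketch (the edge list is a toy example of ours) that tallies the degree distribution of an undirected network:

```python
from collections import Counter

# Toy undirected network given as an edge list (hypothetical data).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
nodes = {u for e in edges for u in e}

# Degree of each node: the number of edges incident to it.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Degree distribution P(k) = n_k / n.
n = len(nodes)
counts = Counter(degree[v] for v in nodes)
P = {k: nk / n for k, nk in sorted(counts.items())}
print(P)   # {1: 0.1667, 2: 0.6667, 3: 0.1667} for this toy network
```

For a directed network, the same tally would be kept separately for in-degree and out-degree.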
https://en.wikipedia.org/wiki/Progressive%20contextualization
Progressive contextualization (PC) is a scientific method pioneered and developed by Andrew P. Vayda and his research team between 1979 and 1984. The method was developed to help understand the causes of forest and land damage and destruction during the New Order regime in Indonesia, as well as to support practical ethnography. Vayda proposed the progressive contextualization method due to his dissatisfaction with the ability of several conventional anthropological methods to describe accurately and quickly cases of illegal logging and land destruction, the networks of actors and investors protecting those actions, and their various consequences detrimental to the environment and social life.

The essence of this method is to track and assess:
what the actor (actor-based) or network of certain actors (actor-based network) does in a certain location and time
the series of consequences (intended or unintended) that result from what the actors and/or networks do, in a time and space that can be different from the original time and space, as long as it is in accordance with the interest of the research and the available time.

Therefore, the PC method does not have to be bound to a certain research place and time pre-determined in the research design. It rejects the assumption of ecological and socio-cultural homogeneity. Instead, it focuses on diversity and it looks at how different individuals and groups operate in and adapt to their total environments through a variety of behaviors, technologies, organizations, structures and beliefs.

Due attention to context in the elucidation of actions and consequences may often mean having to deal with precisely the kind of factors and processes often scanted or denied by holistic approaches: the loose, transient, and contingent interactions, the disarticulating processes, and the movements of people, resources, and ideas across whatever boundaries that ecosystems, societies, and cultures are thought to have — Vayda, 1986

Based on such a premise and through the pr
https://en.wikipedia.org/wiki/Activity%20%28UML%29
An activity in Unified Modeling Language (UML) is a major task that must take place in order to fulfill an operation contract. The Student Guide to Object-Oriented Development defines an activity as a sequence of activities that make up a process. Activities can be represented in activity diagrams.

An activity can represent:
The invocation of an operation.
A step in a business process.
An entire business process.

Activities can be decomposed into subactivities, until at the bottom we find atomic actions. The underlying conception of an activity has changed between UML 1.5 and UML 2.0. In UML 2.0 an activity is no longer based on the state-chart; rather, it is based on a Petri net-like coordination mechanism. There the activity represents user-defined behavior that coordinates actions. Actions in turn are pre-defined (UML offers a series of actions for this).

Unified Modeling Language
https://en.wikipedia.org/wiki/Ultradian%20rhythm
In chronobiology, an ultradian rhythm is a recurrent period or cycle repeated throughout a 24-hour day. In contrast, circadian rhythms complete one cycle daily, while infradian rhythms such as the human menstrual cycle have periods longer than a day. The Oxford English Dictionary's definition of ultradian specifies that it refers to cycles with a period shorter than a day but longer than an hour.

The descriptive term ultradian is used in sleep research in reference to the 90–120 minute cycling of the sleep stages during human sleep. There is a circasemidian rhythm in body temperature and cognitive function, which is technically ultradian. However, this appears to be the first harmonic of the circadian rhythm of each, and not an endogenous rhythm with its own rhythm generator.

Other ultradian rhythms include blood circulation, blinking, pulse, hormonal secretions such as growth hormone, heart rate, thermoregulation, micturition, bowel activity, nostril dilation, appetite, and arousal. Ultradian rhythms of appetite require antiphasic release of neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH), which respectively stimulate and inhibit appetite. Recently, ultradian rhythms of arousal lasting approximately 4 hours were attributed to the dopaminergic system in mammals. When the dopaminergic system is perturbed either by use of drugs or by genetic disruption, these 4-hour rhythms can lengthen significantly into the infradian (> 24 h) range, sometimes even lasting for days (> 110 h) when methamphetamines are provided.

Ultradian mood states in bipolar disorder cycle much faster than rapid cycling; the latter is defined as four or more mood episodes in one year, sometimes occurring within a few weeks. Ultradian mood cycling is characterized by cycles shorter than 24 hours.

See also
Circadian rhythm

References
Chronobiology Neurochemistry Sleep physiology
https://en.wikipedia.org/wiki/Fixed-point%20index
In mathematics, the fixed-point index is a concept in topological fixed-point theory, and in particular Nielsen theory. The fixed-point index can be thought of as a multiplicity measurement for fixed points. The index can be easily defined in the setting of complex analysis: Let f(z) be a holomorphic mapping on the complex plane, and let z0 be a fixed point of f. Then the function f(z) − z is holomorphic, and has an isolated zero at z0. We define the fixed-point index of f at z0, denoted i(f, z0), to be the multiplicity of the zero of the function f(z) − z at the point z0. In real Euclidean space, the fixed-point index is defined as follows: If x0 is an isolated fixed point of f, then let g be the function defined by Then g has an isolated singularity at x0, and maps the boundary of some deleted neighborhood of x0 to the unit sphere. We define i(f, x0) to be the Brouwer degree of the mapping induced by g on some suitably chosen small sphere around x0. The Lefschetz–Hopf theorem The importance of the fixed-point index is largely due to its role in the Lefschetz–Hopf theorem, which states: where Fix(f) is the set of fixed points of f, and Λf is the Lefschetz number of f. Since the quantity on the left-hand side of the above is clearly zero when f has no fixed points, the Lefschetz–Hopf theorem trivially implies the Lefschetz fixed-point theorem. Notes References Robert F. Brown: Fixed Point Theory, in: I. M. James, History of Topology, Amsterdam 1999, , 271–299. Fixed points (mathematics) Topology
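A small worked example of ours, in the complex-analytic setting of the definition above:

```latex
% Worked example: indices of two holomorphic maps at their fixed points.
% For g(z) = z^2:  g(z) - z = z(z - 1) has simple zeros at 0 and 1, so
\[
  i(g, 0) = i(g, 1) = 1 .
\]
% For f(z) = z + z^2:  f(z) - z = z^2 has a double zero at 0, so
\[
  i(f, 0) = 2 ,
\]
% i.e. the degenerate fixed point of f counts with multiplicity 2,
% consistent with it splitting into two simple fixed points of index 1
% under a small perturbation of f.
```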
https://en.wikipedia.org/wiki/Image%20moment
In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful to describe objects after segmentation. Simple properties of the image which are found via image moments include area (or total intensity), its centroid, and information about its orientation.

Raw moments
For a 2D continuous function f(x,y) the moment (sometimes called "raw moment") of order (p + q) is defined as

Mpq = ∫∫ x^p y^q f(x,y) dx dy

for p,q = 0,1,2,... Adapting this to a scalar (greyscale) image with pixel intensities I(x,y), raw image moments Mij are calculated by

Mij = Σx Σy x^i y^j I(x,y)

In some cases, this may be calculated by considering the image as a probability density function, i.e., by dividing the above by

Σx Σy I(x,y)

A uniqueness theorem (Hu [1962]) states that if f(x,y) is piecewise continuous and has nonzero values only in a finite part of the xy plane, moments of all orders exist, and the moment sequence (Mpq) is uniquely determined by f(x,y). Conversely, (Mpq) uniquely determines f(x,y). In practice, the image is summarized with functions of a few lower order moments.

Examples
Simple image properties derived via raw moments include:
Area (for binary images) or sum of grey level (for greytone images): M00
Centroid: {x̄, ȳ} = {M10/M00, M01/M00}

Central moments
Central moments are defined as

μpq = ∫∫ (x − x̄)^p (y − ȳ)^q f(x,y) dx dy

where x̄ = M10/M00 and ȳ = M01/M00 are the components of the centroid. If ƒ(x, y) is a digital image, then the previous equation becomes

μpq = Σx Σy (x − x̄)^p (y − ȳ)^q f(x,y)

The central moments of order up to 3 are: μ00 = M00; μ01 = 0; μ10 = 0; μ11 = M11 − x̄M01; μ20 = M20 − x̄M10; μ02 = M02 − ȳM01; μ21 = M21 − 2x̄M11 − ȳM20 + 2x̄²M01; μ12 = M12 − 2ȳM11 − x̄M02 + 2ȳ²M10; μ30 = M30 − 3x̄M20 + 2x̄²M10; μ03 = M03 − 3ȳM02 + 2ȳ²M01. It can be shown that

μpq = Σm Σn (p choose m)(q choose n)(−x̄)^(p−m)(−ȳ)^(q−n) Mmn

Central moments are translational invariant.

Examples
Information about image orientation can be derived by first using the second order central moments to construct a covariance matrix: μ′20 = μ20/μ00 = M20/M00 − x̄², μ′02 = μ02/μ00 = M02/M00 − ȳ², and μ′11 = μ11/μ00 = M11/M00 − x̄ȳ. The covariance matrix of the image is now

cov[I(x,y)] = [ μ′20  μ′11 ; μ′11  μ′02 ]

The eigenvectors of this matrix correspond to the major and minor axes of the image intensity, so the orientation can thus be extracted from the angle of the eigenvector associated with the largest eigenvalue, given by θ = ½ arctan( 2μ′11 / (μ′20 − μ′02) ).
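The formulas above translate directly into a few lines of array code. The following sketch (the synthetic image and the variable names are ours) computes the raw moments, the centroid, and the orientation angle of a small greyscale image:

```python
import numpy as np

# Synthetic greyscale image: a tilted bright bar on a dark background.
I = np.zeros((8, 8))
for k in range(6):
    I[k + 1, k] = 1.0
    I[k + 1, k + 1] = 1.0

y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]

def raw_moment(p, q):
    # M_pq = sum over x, y of x^p * y^q * I(x, y)
    return np.sum((x ** p) * (y ** q) * I)

M00 = raw_moment(0, 0)
xbar, ybar = raw_moment(1, 0) / M00, raw_moment(0, 1) / M00

def central_moment(p, q):
    # mu_pq = sum over x, y of (x - xbar)^p * (y - ybar)^q * I(x, y)
    return np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * I)

# Normalised second-order central moments and the orientation angle.
mu20 = central_moment(2, 0) / M00
mu02 = central_moment(0, 2) / M00
mu11 = central_moment(1, 1) / M00
theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

print(f"area={M00}, centroid=({xbar:.2f}, {ybar:.2f}), "
      f"orientation={np.degrees(theta):.1f} deg")   # roughly 45 deg here
```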
https://en.wikipedia.org/wiki/Pathovar
A pathovar is a bacterial strain or set of strains with the same or similar characteristics, that is differentiated at infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts. Pathovars are named as a ternary or quaternary addition to the species binomial name; for example, the bacterium that causes citrus canker, Xanthomonas axonopodis, has several pathovars with different host ranges, of which X. axonopodis pv. citri is one; the abbreviation 'pv.' means pathovar.

The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs.

See also
Infraspecific names in botany
Phytopathology
Trinomen, infraspecific names in zoology (subspecies only)

References
Biological classification Bacterial plant pathogens and diseases Microbiology Pathovars
https://en.wikipedia.org/wiki/Differential%20space%E2%80%93time%20code
Differential space–time codes are ways of transmitting data in wireless communications. They are forms of space–time code that do not need to know the channel impairments at the receiver in order to be able to decode the signal. They are usually based on space–time block codes, and transmit one block-code from a set in response to a change in the input signal. The differences among the blocks in the set are designed to allow the receiver to extract the data with good reliability. The first differential space-time block code was disclosed by Vahid Tarokh and Hamid Jafarkhani. References Encodings
https://en.wikipedia.org/wiki/Goertzel%20algorithm
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone multi-frequency signaling (DTMF) tones produced by the push buttons of the keypad of a traditional analog telephone. The algorithm was first described by Gerald Goertzel in 1958.

Like the DFT, the Goertzel algorithm analyses one selectable frequency component from a discrete signal. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. For covering a full spectrum, the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms (except when used on a continuous stream of data, where coefficients can be reused for subsequent calculations, giving a computational complexity equivalent to that of a sliding DFT); but for computing a small number of selected frequency components, it is more numerically efficient. The simple structure of the Goertzel algorithm makes it well suited to small processors and embedded applications. The Goertzel algorithm can also be used "in reverse" as a sinusoid synthesis function, which requires only 1 multiplication and 1 subtraction per generated sample.

The algorithm
The main calculation in the Goertzel algorithm has the form of a digital filter, and for this reason the algorithm is often called a Goertzel filter. The filter operates on an input sequence x[n] in a cascade of two stages with a parameter ω0, giving the frequency to be analysed, normalised to radians per sample. The first stage calculates an intermediate sequence s[n]:

s[n] = x[n] + 2 cos(ω0) s[n−1] − s[n−2]

The second stage applies the following filter to s[n], producing the output sequence y[n]:

y[n] = s[n] − e^(−jω0) s[n−1]

The first filter stage can be observed to be a second-order IIR filter with a direct-form structure. This particular structure has the property that its internal state variables equal the past output values from that stage.
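The two stages above map directly onto code. Below is a small implementation sketch (ours; the function and variable names are illustrative) that returns the power of one frequency bin, in the style of a DTMF detector:

```python
import math

def goertzel_power(samples, freq, sample_rate):
    """Squared magnitude of the DFT bin nearest `freq`, via the Goertzel filter."""
    n = len(samples)
    k = round(freq * n / sample_rate)      # nearest DFT bin index
    w = 2.0 * math.pi * k / n              # normalised frequency, rad/sample
    coeff = 2.0 * math.cos(w)              # the single real-valued coefficient

    # First stage: s[i] = x[i] + coeff*s[i-1] - s[i-2], real arithmetic only.
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s

    # Second stage, evaluated once after the loop: the bin's squared
    # magnitude, computed without ever forming a complex number.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Example: the 770 Hz row tone of DTMF digit "5" stands out against an
# unrelated bin in a synthetic two-tone signal sampled at 8 kHz.
rate, n = 8000, 205
tone = [math.sin(2 * math.pi * 770 * i / rate) +
        math.sin(2 * math.pi * 1336 * i / rate) for i in range(n)]
print(goertzel_power(tone, 770, rate) > goertzel_power(tone, 941, rate))  # True
```

Running the filter once per candidate tone, rather than taking a full FFT, is exactly the small-processor use case the article describes.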
https://en.wikipedia.org/wiki/Principles%20of%20Mathematical%20Logic
Principles of Mathematical Logic is the 1950 American translation of the 1938 second edition of David Hilbert's and Wilhelm Ackermann's classic text Grundzüge der theoretischen Logik, on elementary mathematical logic. The 1928 first edition thereof is considered the first elementary text clearly grounded in the formalism now known as first-order logic (FOL). Hilbert and Ackermann also formalized FOL in a way that subsequently achieved canonical status. FOL is now a core formalism of mathematical logic, and is presupposed by contemporary treatments of Peano arithmetic and nearly all treatments of axiomatic set theory. The 1928 edition included a clear statement of the Entscheidungsproblem (decision problem) for FOL, and also asked whether that logic was complete (i.e., whether all semantic truths of FOL were theorems derivable from the FOL axioms and rules). The former problem was answered in the negative first by Alonzo Church and independently by Alan Turing in 1936. The latter was answered affirmatively by Kurt Gödel in 1929. In its description of set theory, mention is made of Russell's paradox and the Liar paradox (page 145). Contemporary notation for logic owes more to this text than it does to the notation of Principia Mathematica, long popular in the English speaking world. Notes References David Hilbert and Wilhelm Ackermann (1928). Grundzüge der theoretischen Logik (Principles of Mathematical Logic). Springer-Verlag, . This text went into four subsequent German editions, the last in 1972. Translators: Lewis M. Hammond, George G. Leckie & F. Steinhardt (1999) Hendricks, Neuhaus, Petersen, Scheffler and Wansing (eds.) (2004). First-order logic revisited. Logos Verlag, . Proceedings of a workshop, FOL-75, commemorating the 75th anniversary of the publication of Hilbert and Ackermann (1928). 1928 non-fiction books 1938 non-fiction books Logic books Mathematics textbooks History of logic
https://en.wikipedia.org/wiki/Nitrosomonas
Nitrosomonas is a genus of Gram-negative bacteria, belonging to the Betaproteobacteria. It is one of the five genera of ammonia-oxidizing bacteria and, as an obligate chemolithoautotroph, uses ammonia (NH3) as an energy source and carbon dioxide (CO2) as a carbon source in the presence of oxygen. Nitrosomonas are important in the global biogeochemical nitrogen cycle, since they increase the bioavailability of nitrogen to plants, and in denitrification, which is important for the release of nitrous oxide, a powerful greenhouse gas. This microbe is photophobic, and usually generates a biofilm matrix, or forms clumps with other microbes, to avoid light.

Nitrosomonas can be divided into six lineages: the first one includes the species Nitrosomonas europaea, Nitrosomonas eutropha, Nitrosomonas halophila, and Nitrosomonas mobilis. The second lineage comprises the species Nitrosomonas communis, N. sp. I and N. sp. II, while the third lineage includes only Nitrosomonas nitrosa. The fourth lineage includes the species Nitrosomonas ureae and Nitrosomonas oligotropha, and the fifth and sixth lineages include the species Nitrosomonas marina, N. sp. III, Nitrosomonas estuarii and Nitrosomonas cryotolerans.

Morphology
All species included in this genus have ellipsoidal or rod-shaped cells in which extensive intracytoplasmic membranes are present, displaying as flattened vesicles. Most species are motile, with a flagellum located in the polar region of the bacillus. Three basic morphological types of Nitrosomonas have been studied: short-rod Nitrosomonas, rod Nitrosomonas, and Nitrosomonas with pointed ends. Nitrosomonas species cells differ in size and shape: N. europaea shows short rod cells with pointed ends, whose size is 0.8–1.1 × 1.0–1.7 µm; motility has not been observed. N. eutropha presents rod- to pear-shaped cells with one or both ends pointed, with a size of 1.0–1.3 × 1.6–2.3 µm. They show motility. N. halophila cells have a coccoid shape
https://en.wikipedia.org/wiki/Feeding%20frenzy
In ecology, a feeding frenzy occurs when predators are overwhelmed by the amount of prey available. The term is also used as an idiom in the English language.

Examples in nature
For example, a large school of fish can cause nearby sharks, such as the lemon shark, to enter into a feeding frenzy. This can cause the sharks to go wild, biting anything that moves, including each other or anything else within biting range. Another functional explanation for feeding frenzy is competition amongst predators. This term is most often used when referring to sharks or piranhas.

English language uses
It has also been used as a term within journalism. The term is occasionally used to describe a plethora of something. For instance, a 2016 Bloomberg News article is entitled: "March Madness Is a Fantasy Sports Feeding Frenzy." In economics, the term has been used to describe consolidation in the music industry, as large music companies acquired smaller ones.

See also
Bait ball
Adage
Comprehension of idioms
Idiom in English language
Media feeding frenzy
Phrasal verb
Metaphor

References
Eating behaviors Idioms Adages fr:Attaque de requin#La frénésie alimentaire
https://en.wikipedia.org/wiki/Snapshot%20%28computer%20storage%29
In computer systems, a snapshot is the state of a system at a particular point in time. The term was coined as an analogy to that in photography. Rationale A full backup of a large data set may take a long time to complete. On multi-tasking or multi-user systems, there may be writes to that data while it is being backed up. This prevents the backup from being atomic and introduces a version skew that may result in data corruption. For example, if a user moves a file into a directory that has already been backed up, then that file would be completely missing on the backup media, since the backup operation had already taken place before the addition of the file. Version skew may also cause corruption with files which change their size or contents underfoot while being read. One approach to safely backing up live data is to temporarily disable write access to data during the backup, either by stopping the accessing applications or by using the locking API provided by the operating system to enforce exclusive read access. This is tolerable for low-availability systems (on desktop computers and small workgroup servers, on which regular downtime is acceptable). High-availability 24/7 systems, however, cannot bear service stoppages. To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. Most snapshot implementations are efficient and can create snapshots in O(1). In other words, the time and I/O needed to create the snapshot does not increase with the size of the data set; by contrast, the time and I/O required for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity tha
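The pointer-based scheme described in the last paragraph can be sketched in a few lines. This toy redirect-on-write block store (our illustration, not the design of any particular product) takes snapshots by sharing block pointers between the live view and the snapshot, diverging only as blocks are overwritten:

```python
class CowStore:
    """Toy block store with cheap snapshots (illustrative, not a real design)."""

    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}  # live pointer map
        self.snapshots = []

    def snapshot(self):
        # Cheap in data copied: duplicate the pointer map, not the blocks.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, i, data):
        # The live map is redirected to the new block; any snapshot taken
        # earlier still points at the old, unmodified block.
        self.blocks[i] = data

    def read(self, i, snap=None):
        return (snap if snap is not None else self.blocks)[i]

store = CowStore(4)
snap = store.snapshot()          # frozen, read-only view, taken instantly
store.write(2, b"new")
print(store.read(2))             # b'new'   (live data keeps changing)
print(store.read(2, snap))       # b'\x00'  (the snapshot is unaffected)
```

A backup process can then read from `snap` at leisure while applications continue writing to the live view, which is the high-availability pattern the article describes.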
https://en.wikipedia.org/wiki/Explosimeter
An explosimeter is a gas detector which is used to measure the amount of combustible gases present in a sample. When a percentage of the lower explosive limit (LEL) of an atmosphere is exceeded, an alarm signal on the instrument is activated. The device, also called a combustible gas detector, operates on the principle of resistance proportional to heat: a wire is heated, and a sample of the gas is introduced to the hot wire. Combustible gases burn in the presence of the hot wire, thus increasing its resistance and disturbing a Wheatstone bridge, which gives the reading. A flashback arrestor is installed in the device to prevent the explosimeter from igniting the sample external to the device.

Note that the detection readings of an explosimeter are only accurate if the gas being sampled has the same characteristics and response as the calibration gas. Most explosimeters are calibrated to methane or hydrogen.

References

External links
https://web.archive.org/web/20050910075254/http://www.marineengineering.org.uk/testequipment/explosimeter.htm (select explosimeter from the left frame)

Explosimetry Explosion protection Gas technologies Measuring instruments
https://en.wikipedia.org/wiki/Sardine%20run
The KwaZulu-Natal sardine run of southern Africa occurs from May through July when billions of sardines – or more specifically the Southern African pilchard Sardinops sagax – spawn in the cool waters of the Agulhas Bank and move northward along the east coast of South Africa. Their sheer numbers create a feeding frenzy along the coastline. The run, containing millions of individual sardines, occurs when a current of cold water heads north from the Agulhas Bank up to Mozambique where it then leaves the coastline and goes further east into the Indian Ocean. In terms of biomass, researchers estimate the sardine run could rival East Africa's great wildebeest migration. However, little is known of the phenomenon. It is believed that the water temperature has to drop below 21 °C in order for the migration to take place. In 2003, the sardines failed to 'run' for the third time in 23 years. While 2005 saw a good run, 2006 marked another non-run. The shoals are often more than 7 km long, 1.5 km wide and 30 metres deep and are clearly visible from spotter planes or from the surface. Sardines group together when they are threatened. This instinctual behaviour is a defence mechanism, as lone individuals are more likely to be eaten than when in large groups. Causes The sardine run is still poorly understood from an ecological point of view. There have been various hypotheses, sometimes contradictory, that try to explain why and how the run occurs. A recent interpretation of the causes is that the sardine run is most likely a seasonal reproductive migration of a genetically distinct subpopulation of sardine that moves along the coast from the eastern Agulhas Bank to the coast of KwaZulu-Natal in most years if not in every year. Genomic and transcriptomic data indicate that the sardines participating in the run originate from South Africa's cool-temperate Atlantic coast. These are attracted to temporary cold-water upwelling off the south-east coast, and eventually find them
https://en.wikipedia.org/wiki/1%3A5%3A200
In the construction industry, the 1:5:200 rule (or 1:5:200 ratio) is a rule of thumb that states that, over the life of a commercial office building, if the initial cost of construction is taken as 1, then the costs of maintaining and operating the building amount to 5, and the costs of the staff and business operating within it amount to 200.

Rule
The rule originated in a Royal Academy of Engineering paper by Evans et al. Sometimes the ratios are given as 1:10:200. The figures are averages and broad generalizations, since construction costs will vary with land costs, building type, and location, and staffing costs will vary with business sector and local economy. The RAE paper started a number of arguments about the basis for the figures: whether they were credible; whether they should be discounted; and what is included in each category. These arguments overshadow the principal message of the paper, that concentration on first capital cost is not optimising use value: support to the occupier and containment of operating cost.

A study by the Constructing Excellence Be Valuable Task Group, chaired by Richard Saxon, came to the view that there is merit in knowing more about key cost ratios as benchmarks and that we can expect wide variation between building types and even individual examples of the same type.

Hughes et al., of the University of Reading School of Construction Management and Engineering, observed that the "Evans ratio" is merely a passing remark in the paper's introduction (talking of "commercial office buildings" and stating that "similar ratios might well apply in other types of building") forming part of a pitch that the proportion of a company's expenditure on a building that is spent directly on the building itself (rather than upon staffing it) is around 3%, and that no data are given to support the ratio and no defence of it is given in the remainder of the paper. In attempting to determine this ratio afresh, from published data on real buildings, they found it impossible to reproduce the 1:5:200 ratio, in part because the data and methodology employed by Evans et al. were not published and in part because the definitions employed in the original paper could not be applied. The ratios th
https://en.wikipedia.org/wiki/E0%20%28cipher%29
E0 is a stream cipher used in the Bluetooth protocol. It generates a sequence of pseudorandom numbers and combines it with the data using the XOR operator. The key length may vary, but is generally 128 bits.

Description
At each iteration, E0 generates a bit using four shift registers of differing lengths (25, 31, 33, 39 bits) and two internal states, each 2 bits long. At each clock tick, the registers are shifted and the two states are updated with the current state, the previous state and the values in the shift registers. Four bits are then extracted from the shift registers and added together. The algorithm XORs that sum with the value in the 2-bit register. The first bit of the result is output for the encoding.

E0 is divided into three parts:
Payload key generation
Keystream generation
Encoding

The setup of the initial state in Bluetooth uses the same structure as the random bit stream generator. We are thus dealing with two combined E0 algorithms. An initial 132-bit state is produced at the first stage using four inputs (the 128-bit key, the 48-bit Bluetooth address and the 26-bit master counter). The output is then processed by a polynomial operation and the resulting key goes through the second stage, which generates the stream used for encoding. The key has a variable length, but is always a multiple of 2 (between 8 and 128 bits). 128-bit keys are generally used. These are stored into the second stage's shift registers. 200 pseudorandom bits are then produced by 200 clock ticks, and the last 128 bits are inserted into the shift registers. It is the stream generator's initial state.

Cryptanalysis
Several attacks and attempts at cryptanalysis of E0 and the Bluetooth protocol have been made, and a number of vulnerabilities have been found. In 1999, Miia Hermelin and Kaisa Nyberg showed that E0 could be broken in 2^64 operations (instead of 2^128), if 2^64 bits of output are known. This type of attack was subsequently improved by Kishan Chand Gupta and Palash Sarkar.
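E0's four-register design is intricate, but the stream-cipher principle it follows (derive a keystream from linear feedback shift registers and XOR it with the data) can be sketched with a single toy LFSR. The register length and tap positions below are arbitrary choices for illustration, not E0's actual feedback polynomials:

```python
def lfsr_keystream(state, taps, nbits, nbytes):
    """Toy LFSR keystream generator; NOT the real E0 construction."""
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = state & 1                        # emit the low bit
            fb = 0
            for t in taps:                         # feedback = XOR of taps
                fb ^= (state >> t) & 1
            state = (state >> 1) | (fb << (nbits - 1))
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def xor_stream(data, seed):
    # Encryption and decryption are the same operation: data XOR keystream.
    ks = lfsr_keystream(seed, taps=(0, 3, 5), nbits=25, nbytes=len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"bluetooth payload"
ct = xor_stream(msg, seed=0x1ABCDE)
assert xor_stream(ct, seed=0x1ABCDE) == msg   # XOR stream: decrypt == encrypt
```

A single LFSR like this one is trivially breakable; E0's nonlinear combination of four registers and its 2-bit memory exist precisely to resist the algebraic attacks that defeat the toy version.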
https://en.wikipedia.org/wiki/Dual%20format
Dual format is a technique used to allow software for two systems which would normally require different disk formats to be recorded on the same floppy disk. In the late 1980s, the term was used to refer to disks that could be used to boot either an Amiga or Atari ST computer. The layout of the first track of the disk was specially laid out to contain an Amiga and an Atari ST boot sector at the same time by fooling the operating system into thinking that the track resolved into the format it expected. The technique was used for some commercially available games, and also for the disks covermounted on ST/Amiga Format magazine. Other games came on Amiga and PC dual-format disks, or even "tri-format" disks, which contained the Amiga, Atari ST and PC versions of the game. Most dual and tri-format disks were implemented using technology developed by Rob Northen Computing. Later, the term was used for disks containing both Windows and Macintosh versions.

Examples
Action Fighter (Amiga/PC dual-format disk)
Lethal Xcess - Wings of Death II (Amiga/Atari ST dual-format disks)
Monster Business (Amiga/Atari ST dual-format disk)
Populous: The Promised Lands (Amiga/Atari ST dual-format disk)
Rick Dangerous (Amiga/PC dual-format disk)
Rick Dangerous 2 (Amiga/PC dual-format disk)
Stone Age (Amiga/Atari ST dual-format disk)
Street Fighter (Amiga/PC dual-format disk)
StarGlider 2 (Amiga/Atari ST dual-format disk)
3D Pool (Amiga/Atari ST/PC tri-format disk)
Stunt Car Racer (Amiga/PC dual-format disk)
Bionic Commando (Amiga/PC dual-format disk)
Carrier Command (Amiga/PC dual-format disk)
Blasteroids (Amiga/PC dual-format disk)
E-Motion (Amiga/PC dual-format disk)
Indiana Jones and the Last Crusade Action (Amiga/PC dual-format disk)
Out Run (Amiga/PC dual-format disk)
World Class Leader Board (Amiga/PC dual-format disk)
International Soccer Challenge (Amiga/PC dual-format disk)
MicroProse Soccer (Amiga/PC dual-format disk)

See also

References
Amiga Atari ST IBM PC compatibles Macintosh
https://en.wikipedia.org/wiki/Boomerang%20attack
In cryptography, the boomerang attack is a method for the cryptanalysis of block ciphers based on differential cryptanalysis. The attack was published in 1999 by David Wagner, who used it to break the COCONUT98 cipher. The boomerang attack has allowed new avenues of attack for many ciphers previously deemed safe from differential cryptanalysis. Refinements on the boomerang attack have been published: the amplified boomerang attack, and the rectangle attack. Due to the similarity of a Merkle–Damgård construction with a block cipher, this attack may also be applicable to certain hash functions such as MD5.

The attack
The boomerang attack is based on differential cryptanalysis. In differential cryptanalysis, an attacker exploits how differences in the input to a cipher (the plaintext) can affect the resultant difference at the output (the ciphertext). A high probability "differential" (that is, an input difference that will produce a likely output difference) is needed that covers all, or nearly all, of the cipher. The boomerang attack allows differentials to be used which cover only part of the cipher.

The attack attempts to generate a so-called "quartet" structure at a point halfway through the cipher. For this purpose, say that the encryption action, E, of the cipher can be split into two consecutive stages, E0 and E1, so that E(M) = E1(E0(M)), where M is some plaintext message. Suppose we have two differentials for the two stages; say, Δ → Δ* for E0, and ∇ → ∇* for E1^(−1) (the decryption action of E1). The basic attack proceeds as follows:
Choose a random plaintext P and calculate P′ = P ⊕ Δ.
Request the encryptions of P and P′ to obtain C = E(P) and C′ = E(P′).
Calculate D = C ⊕ ∇ and D′ = C′ ⊕ ∇.
Request the decryptions of D and D′ to obtain Q = E^(−1)(D) and Q′ = E^(−1)(D′).
Compare Q and Q′; when the differentials hold, Q ⊕ Q′ = Δ.

Application to specific ciphers
One attack on KASUMI, a block cipher used in 3GPP, is a related-key rectangle attack which breaks the full eight rounds of the cipher faster than exhaustive search (Biham et al., 2005). The attack requires 2^54.6 chosen plaintexts and has a time complexity equivalent to 2^76.1 KASUMI encryptions.
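The quartet mechanics can be demonstrated end-to-end with a toy cipher whose two stages are XOR-linear, so that both differentials hold with probability 1 (a real attack relies on high- but not certain-probability differentials; the keys and difference values below are invented):

```python
K0, K1 = 0x5A, 0xC3          # toy round keys (hypothetical)

def e0(x):  return x ^ K0            # first stage: XOR-linear
def e1(x):  return x ^ K1            # second stage: XOR-linear
def enc(x): return e1(e0(x))         # E = E1(E0(x))
def dec(y): return e0(y ^ K1)        # E^-1 for this toy cipher

DELTA, NABLA = 0x21, 0x44    # input difference for E0, output shift for E1

P  = 0x7E
Pp = P ^ DELTA                       # step 1: paired plaintext
C,  Cp = enc(P), enc(Pp)             # step 2: encrypt both
D,  Dp = C ^ NABLA, Cp ^ NABLA       # step 3: shift both ciphertexts
Q,  Qp = dec(D), dec(Dp)             # step 4: decrypt the shifted pair
print(hex(Q ^ Qp))                   # step 5: prints 0x21, i.e. DELTA,
                                     # so the quartet "boomerangs" back
```

With XOR-linear stages the check always succeeds; against a real cipher, the attacker repeats the procedure over many plaintexts and uses the fraction of returning quartets to confirm key guesses.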
https://en.wikipedia.org/wiki/Prewellordering
In set theory, a prewellordering on a set X is a preorder ≤ on X (a transitive and reflexive relation on X) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation x < y defined by "x ≤ y and y ≰ x" is a well-founded relation.

Prewellordering on a set
A prewellordering on a set X is a homogeneous binary relation ≤ on X that satisfies the following conditions:
Reflexivity: x ≤ x for all x ∈ X.
Transitivity: if x ≤ y and y ≤ z then x ≤ z, for all x, y, z ∈ X.
Total/Strongly connected: x ≤ y or y ≤ x, for all x, y ∈ X.
Wellfoundedness: for every non-empty subset S ⊆ X there exists some m ∈ S such that m ≤ s for all s ∈ S. This condition is equivalent to the induced strict preorder x < y, defined by "x ≤ y and y ≰ x", being a well-founded relation.

A homogeneous binary relation ≤ on X is a prewellordering if and only if there exists a surjection π : X → Y into a well-ordered set (Y, ≲) such that for all x, y ∈ X, x ≤ y if and only if π(x) ≲ π(y).

Examples
Given a set A, the binary relation on the set X of all finite subsets of A defined by S ≤ T if and only if |S| ≤ |T| (where |·| denotes the set's cardinality) is a prewellordering.

Properties
If ≤ is a prewellordering on X, then the relation ~ defined by "x ~ y if and only if x ≤ y and y ≤ x" is an equivalence relation on X, and ≤ induces a wellordering on the quotient X/~. The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering.

A norm on a set X is a map φ from X into the ordinals. Every norm induces a prewellordering; if φ is a norm, the associated prewellordering is given by "x ≤ y if and only if φ(x) ≤ φ(y)". Conversely, every prewellordering is induced by a unique regular norm (a norm φ is regular if, for any x ∈ X and any ordinal α < φ(x), there is y ∈ X such that φ(y) = α).

Prewellordering property
If Γ is a pointclass of subsets of some collection F of Polish spaces, F closed under Cartesian product, and if ≤ is a prewellordering of some subset P of some element X of F, then ≤ is said to be a Γ-prewellordering of P if the relations <* and ≤* are elements of Γ, where for x, y ∈ X: x ≤* y if and only if x ∈ P and (y ∉ P or x ≤ y), and x <* y if and only if x ∈ P and (y ∉ P or (x ≤ y and y ≰ x)). Γ is said to have the prewellordering property if every set in Γ admits a Γ-prewellordering. The prewellordering property is related to the stronger scale property.
https://en.wikipedia.org/wiki/Quantum%20programming
Quantum programming is the process of designing or assembling sequences of instructions, called quantum circuits, using gates, switches, and operators to manipulate a quantum system for a desired outcome or results of a given experiment. Quantum circuit algorithms can be implemented on integrated circuits, conducted with instrumentation, or written in a programming language for use with a quantum computer or a quantum processor. With quantum processor based systems, quantum programming languages help express quantum algorithms using high-level constructs. The field is deeply rooted in the open-source philosophy and as a result most of the quantum software discussed in this article is freely available as open-source software. Quantum computers, such as those based on the KLM protocol, a linear optical quantum computing (LOQC) model, use quantum algorithms (circuits) implemented with electronics, integrated circuits, instrumentation, sensors, and/or by other physical means. Other circuits designed for experimentation related to quantum systems can be instrumentation and sensor based. Quantum instruction sets Quantum instruction sets are used to turn higher level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits. cQASM cQASM, also known as common QASM, is a hardware-agnostic quantum assembly language which guarantees the interoperability between all the quantum compilation and simulation tools. It was introduced by the QCA Lab at TUDelft. Quil Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture. Many quantum algorithms (including quantum teleportation, quantum error correction, simulation, and optimization algorithms) require
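Independently of any particular instruction set, the kind of circuit these languages describe can be simulated directly with linear algebra. The sketch below (plain NumPy, our own illustration rather than code for any toolchain named above) applies a Hadamard gate and a CNOT to prepare a Bell state:

```python
import numpy as np

# Single-qubit Hadamard and two-qubit CNOT (control = qubit 0).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I2) @ state                 # H on qubit 0
state = CNOT @ state                           # entangle the pair

print(np.round(state, 3))     # [0.707 0 0 0.707] = (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)     # measurement probabilities [0.5 0 0 0.5]
```

A quantum programming language ultimately lowers a sequence of such gate applications into the hardware-specific instructions discussed above; the statevector simulation is what the instructions denote mathematically.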
https://en.wikipedia.org/wiki/European%20Association%20for%20Theoretical%20Computer%20Science
The European Association for Theoretical Computer Science (EATCS) is an international organization with a European focus, founded in 1972. Its aim is to facilitate the exchange of ideas and results among theoretical computer scientists as well as to stimulate cooperation between the theoretical and the practical community in computer science. The major activities of the EATCS are: Organization of ICALP, the International Colloquium on Automata, Languages and Programming; Publication of the Bulletin of the EATCS; Publication of a series of monographs and texts on theoretical computer science; Publication of the journal Theoretical Computer Science; Publication of the journal Fundamenta Informaticae. EATCS Award Each year, the EATCS Award is awarded in recognition of a distinguished career in theoretical computer science. The first award was assigned to Richard Karp in 2000; the complete list of the winners is given below: Presburger Award Starting in 2010, the European Association of Theoretical Computer Science (EATCS) confers each year at the conference ICALP the Presburger Award to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929. The complete list of the winners is given below: EATCS Fellows The EATCS Fellows Program has been established by the Association to recognize outstanding EATCS Members for their scientific achievements in the field of Theoretical Computer Science. The Fellow status is conferred by the EATCS Fellows-Selection Committee upon a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS commun
https://en.wikipedia.org/wiki/Connection-oriented%20communication
In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol, where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths.

Connection-oriented communication may be implemented with a circuit switched connection, or a packet-mode virtual circuit connection. In the latter case, it may use either a transport layer virtual circuit protocol such as the TCP protocol, which allows data to be delivered in order although the lower-layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path and traffic flows are identified by some connection identifier, reducing the overhead of making routing decisions on a packet-by-packet basis for the network.

Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request functions in case of missing or corrupted data. Asynchronous Transfer Mode, Frame Relay and MPLS are examples of connection-oriented, unreliable protocols. SMTP is an example of a connection-oriented protocol in which, if a message is not delivered, an error report is sent to the sender, which makes SMTP a reliable protocol. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful.

Circuit switching
Circuit switched communication, for example the public switched telephone network,
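A minimal sketch of the idea using Python's standard socket API: with a connection-oriented transport (TCP), a session must be established by connect/accept before any payload flows, and the bytes then arrive reliably and in order. The port number below is arbitrary:

```python
import socket, threading

# Server socket is created and listening before the client connects,
# so the accept/connect handshake below cannot race.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 50007))       # port number chosen arbitrarily
srv.listen(1)

def server():
    conn, _ = srv.accept()           # the session is established here
    with conn:
        data = conn.recv(1024)       # bytes arrive reliably and in order
        conn.sendall(b"ack:" + data)

t = threading.Thread(target=server)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))  # handshake happens before any payload
    cli.sendall(b"hello")
    print(cli.recv(1024))              # b'ack:hello'

t.join()
srv.close()
```

The connectionless counterpart (SOCK_DGRAM) would skip listen/accept/connect entirely and simply sendto a destination address, with no ordering or delivery guarantee.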
https://en.wikipedia.org/wiki/Ordinal%20arithmetic
In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations.

Addition
The union of two disjoint well-ordered sets S and T can be well-ordered. The order-type of that union is the ordinal that results from adding the order-types of S and T. If two well-ordered sets are not already disjoint, then they can be replaced by order-isomorphic disjoint sets, e.g. replace S by {0} × S and T by {1} × T. This way, the well-ordered set S is written "to the left" of the well-ordered set T, meaning one defines an order on S ∪ T in which every element of S is smaller than every element of T. The sets S and T themselves keep the ordering they already have.

The definition of addition α + β can also be given by transfinite recursion on β:
α + 0 = α
α + S(β) = S(α + β), where S denotes the successor function
α + β = sup{α + δ : δ < β} when β is a limit ordinal.

Ordinal addition on the natural numbers is the same as standard addition. The first transfinite ordinal is ω, the set of all natural numbers, followed by ω + 1, ω + 2, etc. The ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like 0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ... This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors.

Properties
Ordinal addition is, in general, not commutative. For example, 3 + ω = ω ≠ ω + 3, since the order relation for 3 + ω is 0 < 1 < 2 < 0' < 1' < 2' < ..., which can be relabeled to ω.
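With the recursive clauses in hand, the failure of commutativity becomes a two-line calculation (our worked example, in the notation above):

```latex
% Computing 1 + omega and omega + 1 from the recursive definition.
\[
  1 + \omega \;=\; \sup_{n < \omega} (1 + n) \;=\; \sup\{1, 2, 3, \dots\} \;=\; \omega ,
\]
\[
  \omega + 1 \;=\; S(\omega) \;>\; \omega ,
\]
% hence 1 + omega = omega < omega + 1: ordinal addition is not commutative.
```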
https://en.wikipedia.org/wiki/Finite%20morphism
In algebraic geometry, a finite morphism between two affine varieties X, Y is a dense regular map f : X → Y which induces an inclusion k[Y] ↪ k[X] between their coordinate rings, such that k[X] is integral over k[Y]. This definition can be extended to quasi-projective varieties, such that a regular map f : X → Y between quasiprojective varieties is finite if any point y ∈ Y has an affine neighbourhood V such that U = f^(−1)(V) is affine and f : U → V is a finite map (in view of the previous definition, because it is between affine varieties).

Definition by schemes
A morphism f: X → Y of schemes is a finite morphism if Y has an open cover by affine schemes Vi = Spec Bi such that for each i, f^(−1)(Vi) is an open affine subscheme Spec Ai, and the restriction of f to f^(−1)(Vi), which induces a ring homomorphism Bi → Ai, makes Ai a finitely generated module over Bi. One also says that X is finite over Y.

In fact, f is finite if and only if for every open affine subscheme V = Spec B in Y, the inverse image of V in X is affine, of the form Spec A, with A a finitely generated B-module.

For example, for any field k, the morphism A^1 → A^1 given by t ↦ t^n is a finite morphism since k[t] = k[t^n] ⊕ k[t^n]·t ⊕ ⋯ ⊕ k[t^n]·t^(n−1) as k[t^n]-modules. Geometrically, this is obviously finite since this is a ramified n-sheeted cover of the affine line which degenerates at the origin. By contrast, the inclusion of A^1 − 0 into A^1 is not finite. (Indeed, the Laurent polynomial ring k[y, y^(−1)] is not finitely generated as a module over k[y].) This restricts our geometric intuition to surjective families with finite fibers.

Properties of finite morphisms
The composition of two finite morphisms is finite. Any base change of a finite morphism f: X → Y is finite. That is, if g: Z → Y is any morphism of schemes, then the resulting morphism X ×Y Z → Z is finite. This corresponds to the following algebraic statement: if A and C are (commutative) B-algebras, and A is finitely generated as a B-module, then the tensor product A ⊗B C is finitely generated as a C-module. Indeed, the generators can be taken to be the elements ai ⊗ 1, where ai are the given generators of A over B.
https://en.wikipedia.org/wiki/A%20Course%20of%20Modern%20Analysis
A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1902. The first edition was Whittaker's alone, but later editions were co-authored with Watson. History Its first, second, third, and the fourth edition were published in 1902, 1915, 1920, and 1927, respectively. Since then, it has continuously been reprinted and is still in print today. A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021. The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians including Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University. But its reach was much further than just the Cambridge school; André Weil in his obituary of the French mathematician Jean Delsarte noted that Delsarte always had a copy on his desk. In 1941 the book was included among a "selected list" of mathematical analysis books for use in universities in an article for that purpose published by American Mathematical Monthly. Notable features Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises. The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano. Contents Below are the contents of the fourth edition: Part I. The Process of Analysis Part II. The Transcendental Functions Reception Reviews of the first edition George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying t
https://en.wikipedia.org/wiki/Moore%20space%20%28algebraic%20topology%29
In algebraic topology, a branch of mathematics, Moore space is the name given to a particular type of topological space that is the homology analogue of the Eilenberg–MacLane spaces of homotopy theory, in the sense that it has only one nonzero homology (rather than homotopy) group.

Formal definition
Given an abelian group G and an integer n ≥ 1, let X be a CW complex such that Hn(X) ≅ G and the reduced homology H̃i(X) ≅ 0 for i ≠ n, where Hn(X) denotes the n-th singular homology group of X and H̃i(X) is the i-th reduced homology group. Then X is said to be a Moore space of type M(G, n). Also, X is by definition simply-connected if n > 1.

Examples
The sphere S^n is a Moore space of type M(Z, n) for n ≥ 1.
The space obtained from S^n by attaching an (n + 1)-cell by a map of degree q is a Moore space of type M(Z/qZ, n) for n ≥ 1.

See also
Eilenberg–MacLane space, the homotopy analog.
Homology sphere

References
Hatcher, Allen. Algebraic topology, Cambridge University Press (2002), . For further discussion of Moore spaces, see Chapter 2, Example 2.40. A free electronic version of this book is available on the author's homepage.

Algebraic topology
https://en.wikipedia.org/wiki/Product%20term
In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation.

Examples
Examples of product terms include:
A ∧ B
A ∧ (¬B) ∧ (¬C)
¬A

Origin
The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings.

Minterms
For a boolean function of n variables x1, ..., xn, a product term in which each of the n variables appears once (in either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator.

References
Fredrick J. Hill, and Gerald R. Peterson, 1974, Introduction to Switching Theory and Logical Design, Second Edition, John Wiley & Sons, NY,

Boolean algebra
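A short sketch of ours that enumerates the minterms of a three-variable Boolean function, i.e. the product terms, one per satisfying row of the truth table, in which every variable appears exactly once:

```python
from itertools import product

def minterms(f, names):
    """Product terms (minterms) at which f evaluates to True."""
    terms = []
    for values in product([False, True], repeat=len(names)):
        if f(*values):
            # Each satisfying row yields one conjunction of n literals,
            # complemented ("!") where the variable is False in that row.
            literals = [n if v else "!" + n for n, v in zip(names, values)]
            terms.append("*".join(literals))
    return terms

# Example: the majority function of three variables.
maj = lambda a, b, c: (a and b) or (a and c) or (b and c)
print(minterms(maj, ["A", "B", "C"]))
# ['!A*B*C', 'A*!B*C', 'A*B*!C', 'A*B*C']
```

The disjunction of the printed terms is the function's canonical sum-of-minterms form.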
https://en.wikipedia.org/wiki/Mass%20ratio
In aerospace engineering, mass ratio is a measure of the efficiency of a rocket. It describes how much more massive the vehicle is with propellant than without; that is, the ratio of the rocket's wet mass (vehicle plus contents plus propellant) to its dry mass (vehicle plus contents). A more efficient rocket design requires less propellant to achieve a given goal, and would therefore have a lower mass ratio; however, for any given efficiency a higher mass ratio typically permits the vehicle to achieve higher delta-v.

The mass ratio is a useful quantity for back-of-the-envelope rocketry calculations: it is an easy number to derive from either Δv or from rocket and propellant mass, and therefore serves as a handy bridge between the two. It is also useful for getting an impression of the size of a rocket: while two rockets with mass fractions of, say, 92% and 95% may appear similar, the corresponding mass ratios of 12.5 and 20 clearly indicate that the latter system requires much more propellant.

Typical multistage rockets have mass ratios in the range from 8 to 20. The Space Shuttle, for example, has a mass ratio around 16.

Derivation
The definition arises naturally from Tsiolkovsky's rocket equation:

Δv = ve ln(m0/m1)

where
Δv is the desired change in the rocket's velocity
ve is the effective exhaust velocity (see specific impulse)
m0 is the initial mass (rocket plus contents plus propellant)
m1 is the final mass (rocket plus contents)

This equation can be rewritten in the following equivalent form:

m0/m1 = e^(Δv/ve)

The fraction on the left-hand side of this equation is the rocket's mass ratio by definition. This equation indicates that a Δv of x times the exhaust velocity requires a mass ratio of e^x. For instance, for a vehicle to achieve a Δv of 2.5 times its exhaust velocity would require a mass ratio of e^2.5 (approximately 12.2). One could say that a "velocity ratio" of Δv/ve requires a mass ratio of e^(Δv/ve).

Alternative definition
Sutton defines the mass ratio inversely, as the ratio of the final mass m1 to the initial mass m0. In this case, the values are always less than one.
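The back-of-the-envelope use described above is a one-liner; the numbers below are invented for illustration but lie in the typical range quoted:

```python
import math

def mass_ratio(delta_v, v_e):
    """Required wet/dry mass ratio, m0/m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v / v_e)

# Hypothetical stage: 9400 m/s of delta-v with a 3100 m/s effective
# exhaust velocity (both values invented for this example).
r = mass_ratio(9400.0, 3100.0)
print(f"mass ratio = {r:.1f}")                  # ~20.7
print(f"propellant fraction = {1 - 1/r:.1%}")   # ~95.2% of wet mass
```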
https://en.wikipedia.org/wiki/Mason%27s%20mark
A mason's mark is an engraved symbol often found on dressed stone in buildings and other public structures.

In stonemasonry
Regulations issued in Scotland in 1598 by James VI's Master of Works, William Schaw, stated that on admission to the guild, every mason had to enter his name and his mark in a register. There are three types of marks used by stonemasons.

Banker marks were made on stones before they were sent to be used by the walling masons. These marks served to identify the banker mason who had prepared the stones to their paymaster. This system was employed only when the stone was paid for by measure, rather than by time worked. For example, the 1306 contract between Richard of Stow, mason, and the Dean and Chapter of Lincoln Cathedral, specified that the plain walling would be paid for by measure, and indeed banker marks are found on the blocks of walling in this cathedral. Conversely, the masons responsible for walling the eastern parts of Exeter Cathedral were paid by the week, and consequently few banker marks are found on this part of the cathedral. Banker marks make up the majority of masons' marks, and are generally what are meant when the term is used without further specification.

Assembly marks were used to ensure the correct installation of important pieces of stonework. For example, the stones on the window jambs in the chancel of North Luffenham church in Rutland are each marked with a Roman numeral, directing the order in which the stones were to be installed.

Quarry marks were used to identify the source of a stone, or occasionally the quality.

In Freemasonry
Freemasonry, a fraternal order that uses an analogy to stonemasonry for much of its structure, also makes use of marks. A Freemason who takes the degree of Mark Master Mason will be asked to create his own Mark, as a type of unique signature or identifying badge. Some of these can be quite elaborate.

Gallery of mason's marks

See also
Benchmark (surveying) Builder's signature
https://en.wikipedia.org/wiki/Adaptec
Adaptec, Inc., was a computer storage company and remains a brand for computer storage products. The company was an independent firm from 1981 to 2010, at which point it was acquired by PMC-Sierra, which itself was later acquired by Microsemi, which itself was later acquired by Microchip Technology. History Larry Boucher, Wayne Higashi, and Bernard Nieman founded Adaptec in 1981. At first, Adaptec focused on devices with Parallel SCSI interfaces. Popular host bus adapters included the 154x/15xx ISA family, the 2940 PCI family, and the 29160/-320 family. Their cross-platform ASPI was an early API for accessing and integrating non-disk devices like tape drives, scanners and optical disks. With advancements in technology, RAID functions were added while interfaces evolved to PCIe and SAS. Adaptec made a number of acquisitions in the mid-1990s to expand their reach in the SCSI peripheral market. In March 1993, they acquired Trantor Systems Ltd. of Fremont, California, for $10 million. In July 1995, they acquired Future Domain Corporation of Irvine, California, for $25 million. On May 10, 2010, PMC-Sierra, Inc. and Adaptec, Inc. announced they had entered into a definitive agreement of PMC-Sierra acquiring Adaptec's channel storage business on May 8, 2010, which included Adaptec's RAID storage product line, the Adaptec brand, a global value added reseller customer base, board logistics capabilities, and SSD cache performance solutions. The transaction was expected to close in approximately 30 days, subject to customary closing conditions. Following the sale, Adaptec would retain its Aristos ASIC technology business, certain real estate assets, more than 200 patents, and approximately $400 million in cash and marketable securities. On June 8, 2010, PMC-Sierra and Adaptec announced the completion of the acquisition. PMC-Sierra renamed the channel storage business "Adaptec by PMC". PMC-Sierra was in turn acquired by Microsemi in January 2016. The old Adaptec, Inc. cha
https://en.wikipedia.org/wiki/Ballblazer
Ballblazer is a futuristic sports game created by Lucasfilm Games and published in 1985 by Epyx. Along with Rescue on Fractalus!, it was one of the initial pair of releases from Lucasfilm Games. Ballblazer was developed and first published for the Atari 8-bit family; the principal creator and programmer was David Levine. The game was called Ballblaster during development, and some pirated versions bear this name. It was ported to the Apple II, ZX Spectrum, Amstrad CPC, Commodore 64, and MSX. Atari 5200 and Atari 7800 ports were published by Atari Corporation, and a version for the Famicom was released by Pony Canyon.
Gameplay Ballblazer is a simple one-on-one sports-style game bearing similarities to basketball and soccer. Each side is represented by a craft called a "rotofoil", which can be controlled by either a human player or a computer-controlled "droid" with ten levels of difficulty. The game allows for human vs. human, human vs. droid, and droid vs. droid matches. The basic objective is to score points by either firing or carrying a floating ball into the opponent's goal.
The game takes place on a flat, checkerboard playfield, and each player's half of the screen is presented from a first-person perspective. A player can gain possession of the ball by running into it, at which point it is held in a force field in front of the craft. The opponent can attempt to knock the ball away using the fire button, and the player in possession can also fire the ball toward the goal. When a player does not have possession of the ball, his or her rotofoil automatically turns at 90-degree intervals to face the ball, while possessing the ball turns the player toward the opponent's goal. The goalposts move from side to side at each end of the playfield, and the goal becomes narrower as goals are scored. Pushing the ball through the goal scores one point, firing the ball through the posts from close range scores two points, and successfu
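The stated scoring rules lend themselves to a tiny sketch. Only the two rules fully stated above are encoded, since the excerpt is cut off mid-sentence; the function and parameter names are illustrative, not from the game's actual code.

```python
# Illustrative sketch of Ballblazer's stated scoring rules:
# pushing the ball through the goal scores 1 point; firing it
# through the posts from close range scores 2. (The excerpt
# truncates before any further rule, so nothing else is encoded.)

def goal_points(method: str, close_range: bool) -> int:
    if method == "push":
        return 1
    if method == "fire" and close_range:
        return 2
    raise ValueError("rule not covered by the excerpt")

print(goal_points("push", close_range=False))  # 1
print(goal_points("fire", close_range=True))   # 2
```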
https://en.wikipedia.org/wiki/Domain-specific%20modeling
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves the systematic use of a domain-specific language to represent the various facets of a system. Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
Overview Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific language models. Freedom from the manual creation and maintenance of source code means that domain-specific modeling can significantly improve developer productivity. The reliability of automatic generation compared to manual coding also reduces the number of defects in the resulting programs, thus improving quality.
Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s and the UML tools of the 1990s. In both of those, the code generators and modeling languages were built by tool vendors. While it is possible for a tool vendor to create a domain-specific language and generators, it is more usual for the domain-specific language to be created within one organization: one or a few expert developers create the modeling language and generators, and the rest of the developers use them. Having the modeling language and generator built by the organization that will use them allows a tight fit with its exact domain and quick adaptation to changes in the domain.
Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financ
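To make the code-generation idea concrete, here is a minimal sketch assuming a hypothetical phone-menu model. The model format, names, and generator are illustrative only and do not correspond to any particular DSM tool.

```python
# Minimal sketch of domain-specific modeling with code generation.
# The "model" is a hypothetical phone-menu description; a real DSM
# tool would use a richer (often graphical) modeling language.

menu_model = {
    "name": "MainMenu",
    "items": [
        {"label": "Contacts", "action": "open_contacts"},
        {"label": "Settings", "action": "open_settings"},
    ],
}

def generate_menu_code(model: dict) -> str:
    """Generate executable source code directly from the model."""
    lines = [f"def show_{model['name'].lower()}():"]
    for i, item in enumerate(model["items"], start=1):
        # Each model element becomes one line of generated code.
        lines.append(f"    print('{i}. {item['label']}')  # dispatches to {item['action']}()")
    return "\n".join(lines)

print(generate_menu_code(menu_model))
```

The point of the sketch is the division of labor described above: an expert developer maintains `generate_menu_code` once, while other developers only edit models like `menu_model`.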
https://en.wikipedia.org/wiki/Star%20Force
Star Force, also released in arcades outside of Japan as Mega Force, is a vertical-scrolling shooter game released in 1984 by Tehkan.
Gameplay In the game, the player pilots a starship called the Final Star while shooting various enemies and destroying enemy structures for points. Unlike later vertical-scrolling shooters, such as Toaplan's Twin Cobra, the Final Star has only two levels of weapon power and no secondary weapons such as missiles or bombs. Each stage in the game is named after a letter of the Greek alphabet. In certain versions of the game, there is an additional level called "Infinity" (represented by the infinity symbol) which occurs after Omega, after which the game repeats indefinitely. In the NES version, after defeating the Omega target, the player sees a black screen with Tecmo's logo, announcing the future release of the sequel Super Star Force. After that, the Infinity target becomes available and the game repeats the same level and boss without increasing the difficulty.
Reception In Japan, Game Machine listed Star Force in its December 1, 1984, issue as the fourteenth most successful table arcade unit at the time.
Legacy Sequels: Super Star Force: Jikūreki no Himitsu, released in 1986 for the Japanese Nintendo Famicom; Final Star Force, released for arcades in 1992.
Ports and related releases Star Force was ported and published in 1985 by Hudson Soft for both the MSX home computer and the Family Computer (Famicom) in Japan. Sales of the game were promoted through the first nationwide video game competition to be called "a caravan", although it was not the first event of its kind organized by Hudson (they had previously promoted Lode Runner with a similar event). The North American and European versions for the Nintendo Entertainment System (NES) were published two years later, in 1987, with significant revisions, and with Tecmo credited rather than Hudson on the title screen and box art. Although the NES version is immediately rec
https://en.wikipedia.org/wiki/Precision%20Time%20Protocol
The Precision Time Protocol (PTP) is a protocol used to synchronize clocks throughout a computer network. On a local area network, it achieves clock accuracy in the sub-microsecond range, making it suitable for measurement and control systems. PTP is employed to synchronize financial transactions, mobile phone tower transmissions, sub-sea acoustic arrays, and networks that require precise timing but lack access to satellite navigation signals.
The first version of PTP, IEEE 1588-2002, was published in 2002. IEEE 1588-2008, also known as PTP Version 2, is not backward compatible with the 2002 version. IEEE 1588-2019 was published in November 2019 and includes backward-compatible improvements to the 2008 publication. IEEE 1588-2008 includes a profile concept defining PTP operating parameters and options. Several profiles have been defined for applications including telecommunications, electric power distribution, and audiovisual systems. IEEE 802.1AS is an adaptation of PTP for use with Audio Video Bridging and Time-Sensitive Networking.
History According to John Eidson, who led the IEEE 1588-2002 standardization effort, "IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. IEEE 1588 is designed for local systems requiring accuracies beyond those attainable using NTP. It is also designed for applications that cannot bear the cost of a GPS receiver at each node, or for which GPS signals are inaccessible."
PTP was originally defined in the IEEE 1588-2002 standard, officially entitled Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems, and published in 2002. In 2008, IEEE 1588-2008 was released as a revised standard; also known as PTP version 2 (PTPv2), it improves accuracy, precision and robustness but is not backward compatible with the original 2002 version. IEEE 1588-2019 was published in November 2019, is informally known as PTPv2.1 and includes backwards-compatible improvements t
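The excerpt does not spell out the synchronization mechanism itself, but IEEE 1588's widely documented delay request-response exchange computes a slave clock's offset from four timestamps. A minimal sketch, assuming a symmetric network path; the function and variable names are illustrative, not from the standard's text.

```python
# Sketch of the IEEE 1588 delay request-response calculation.
# t1: master sends Sync        t2: slave receives Sync
# t3: slave sends Delay_Req    t4: master receives Delay_Req
# Assumes the path delay is the same in both directions.

def ptp_offset_and_delay(t1, t2, t3, t4):
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Example: slave clock runs 1.5 µs ahead, one-way delay 10 µs (times in seconds).
offset, delay = ptp_offset_and_delay(0.0, 11.5e-6, 50.0e-6, 58.5e-6)
print(offset, delay)  # ~1.5e-6, ~1.0e-5
```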
https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols.
Probability theory Random variables are usually written in upper case Roman letters: X, Y, etc. Particular realizations of a random variable are written in corresponding lower case letters. For example, x could be a sample corresponding to the random variable X. A cumulative probability is formally written P(X ≤ x) to differentiate the random variable from its realization.
The probability is sometimes written ℙ to distinguish it from other functions and from the measure P, so as to avoid having to define "P is a probability"; ℙ(X ∈ A) is short for P({ω ∈ Ω : X(ω) ∈ A}), where Ω is the event space and X is a random variable. The notation Pr(A) is used alternatively.
P(A ∩ B) or P(A, B) indicates the probability that events A and B both occur. The joint probability distribution of random variables X and Y is denoted P(X, Y), while the joint probability mass function or probability density function is denoted f(x, y) and the joint cumulative distribution function F(x, y).
P(A ∪ B) indicates the probability of either event A or event B occurring ("or" in this case means one or the other or both).
σ-algebras are usually written with uppercase calligraphic letters (e.g. ℱ for the set of sets on which we define the probability P).
Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. f(x) or f_X(x). Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. F(x) or F_X(x). Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative, F̄(x) = 1 − F(x), or denoted S(x). In particular, the pdf of the standard normal distribution is denoted by φ(z), and its cdf by Φ(z).
Some common operators: E[X], the expected value of X; var[X], the variance of X; cov[X, Y], the covariance of
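As a concrete illustration of the φ/Φ convention for the standard normal, a small check using SciPy (illustrative only; any library exposing the normal pdf and cdf would do):

```python
# Illustrating pdf/cdf notation for the standard normal:
# phi(z) denotes the density, Phi(z) the cumulative distribution.
from scipy.stats import norm

z = 0.0
phi_z = norm.pdf(z)  # φ(0) = 1/sqrt(2π) ≈ 0.3989
Phi_z = norm.cdf(z)  # Φ(0) = 0.5, since the density is symmetric about 0
print(phi_z, Phi_z)
```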
https://en.wikipedia.org/wiki/Procept
In mathematics education, a procept is an amalgam of three components: a "process" which produces a mathematical "object", and a "symbol" which is used to represent either process or object. It derives from the work of Eddie Gray and David O. Tall. The notion was first published in a paper in the Journal for Research in Mathematics Education in 1994, and is part of the process-object literature. This body of literature suggests that mathematical objects are formed by encapsulating processes; that is to say, the mathematical object 3 is formed by an encapsulation of the process of counting: 1, 2, 3, ...
Gray and Tall's notion of procept improved upon the existing literature by noting that mathematical notation is often ambiguous as to whether it refers to process or object. Examples of such notations are:
3 + 2: refers to the process of adding as well as the outcome of the process.
∑_{n=1}^{∞} a_n: refers to the process of summing an infinite sequence, and to the outcome of the process.
f(x) = 3x + 2: refers to the process of mapping x to 3x + 2 as well as the outcome of that process, the function f.
References Gray, E. & Tall, D. (1994), "Duality, Ambiguity, and Flexibility: A 'Proceptual' View of Simple Arithmetic", Journal for Research in Mathematics Education 25(2), pp. 116–140. Available online as PDF.
External links Procepts
Mathematics education
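As an illustrative programming analogy (not part of Gray and Tall's formulation), the symbol/process/object triple can be made concrete in code:

```python
# Illustrative analogy for a procept: one symbol, "3 + 2", names both
# a process (carrying out the addition) and an object (the number 5).
# This is a programming analogy only, not Gray and Tall's definition.

symbol = "3 + 2"
process = lambda: 3 + 2  # the process of adding
obj = process()          # the encapsulated object, 5
print(symbol, "->", obj)
```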
https://en.wikipedia.org/wiki/Glossary%20of%20probability%20and%20statistics
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.
See also Notation in probability and statistics Probability axioms Glossary of experimental design List of statistical topics List of probability topics Glossary of areas of mathematics Glossary of calculus
References External links Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton) Glossary Statistics-related lists Probability and statistics Wikipedia glossaries using description lists
https://en.wikipedia.org/wiki/Ky%20Fan%20inequality
In mathematics, there are two different results that share the common name of the Ky Fan inequality. One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval. The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan. They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy's proof of this inequality by forward-backward induction, a method which can also be used to prove the Ky Fan inequality. This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements; some of them are given in the references below.
The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium.
Statement of the classical version If x_1, ..., x_n are real numbers with 0 ≤ x_i ≤ 1/2 for i = 1, ..., n, then
\frac{\left( \prod_{i=1}^n x_i \right)^{1/n}}{\left( \prod_{i=1}^n (1 - x_i) \right)^{1/n}} \le \frac{\frac{1}{n} \sum_{i=1}^n x_i}{\frac{1}{n} \sum_{i=1}^n (1 - x_i)}
with equality if and only if x_1 = x_2 = · · · = x_n.
Remark Let A_n and G_n denote the arithmetic and geometric mean, respectively, of x_1, ..., x_n, and let A_n′ and G_n′ denote the arithmetic and geometric mean, respectively, of 1 − x_1, ..., 1 − x_n. Then the Ky Fan inequality can be written as
\frac{G_n}{G_n'} \le \frac{A_n}{A_n'},
which shows the similarity to the inequality of arithmetic and geometric means given by G_n ≤ A_n.
Generalization with weights If x_i ∈ [0, ½] and γ_i ∈ [0, 1] for i = 1, ..., n are real numbers satisfying γ_1 + ... + γ_n = 1, then
\frac{\prod_{i=1}^n x_i^{\gamma_i}}{\prod_{i=1}^n (1 - x_i)^{\gamma_i}} \le \frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1 - x_i)}
with the convention 0^0 := 0. Equality holds if and only if either γ_i x_i = 0 for all i = 1, ..., n, or all x_i > 0 and there exists x ∈ (0, ½] such that x = x_i for all i = 1, ..., n with γ_i > 0. The classical version corresponds to γ_i = 1/n for all i = 1, ..., n.
Proof of the generalization Idea: Apply Jensen's inequality to the strictly concave function f(x) = ln x − ln(1 − x), x ∈ (0, ½].
Detailed proof: (a) If at least one x_i is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved. Equality holds if
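A quick numerical spot-check of the classical inequality as reconstructed above: a sketch that draws random points from (0, ½] and compares the two mean ratios, with a small tolerance to absorb floating-point error.

```python
# Numerical spot-check of the classical Ky Fan inequality:
# geometric-mean ratio <= arithmetic-mean ratio for x_i in (0, 1/2].
import math
import random

def ky_fan_holds(xs):
    n = len(xs)
    g = math.prod(xs) ** (1 / n)                       # G_n
    g_prime = math.prod(1 - x for x in xs) ** (1 / n)  # G_n'
    a = sum(xs) / n                                    # A_n
    a_prime = sum(1 - x for x in xs) / n               # A_n'
    return g / g_prime <= a / a_prime + 1e-12          # tolerance for rounding

random.seed(0)
trials = [[random.uniform(1e-6, 0.5) for _ in range(5)] for _ in range(1000)]
print(all(ky_fan_holds(xs) for xs in trials))  # expected: True
```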