https://en.wikipedia.org/wiki/Polar%20distance%20%28astronomy%29
|
In the celestial equatorial coordinate system Σ(α, δ) in astronomy, polar distance (PD) is an angular distance of a celestial object on its meridian measured from the celestial pole, similar to the way declination (dec, δ) is measured from the celestial equator.
Definition
Polar distance in celestial navigation is the angle between the celestial pole and the position of the body on its declination circle.
Referring to diagram:
P – pole, WQE – equator, Z – zenith of the observer
Y – lower meridian passage of the body
X – upper meridian passage of the body
Here the body lies on the declination circle XY. The arc PY or PX is the polar distance of the body.
NP = ZQ = latitude of the observer
NY and NX are the true altitudes of the body at those instants.
Polar distance (PD) = 90° ± δ
Polar distances are expressed in degrees and cannot exceed 180° in magnitude. An object on the celestial equator has a PD of 90°.
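The relation PD = 90° ± δ can be captured in a small helper. This is an illustrative sketch, not from the article: the polar distance from the north celestial pole is 90° − δ, and from the south celestial pole 90° + δ (declination δ is negative south of the equator).

```python
def polar_distance(declination_deg: float, pole: str = "north") -> float:
    """Polar distance in degrees, measured from the chosen celestial pole."""
    if not -90.0 <= declination_deg <= 90.0:
        raise ValueError("declination must lie in [-90°, +90°]")
    if pole == "north":
        return 90.0 - declination_deg
    if pole == "south":
        return 90.0 + declination_deg
    raise ValueError("pole must be 'north' or 'south'")

# An object on the celestial equator (δ = 0°) has PD = 90° from either pole.
print(polar_distance(0.0))             # 90.0
print(polar_distance(90.0))            # 0.0  (object at the north celestial pole)
print(polar_distance(-10.0, "south"))  # 80.0
```

Note that for any δ in [−90°, +90°] the result stays within 0°–180°, matching the bound stated above.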
Polar distance is affected by the precession of the equinoxes.
If the polar distance of the Sun is equal to the observer's latitude, the shadow path of a gnomon's tip on a sundial will be a parabola; at higher latitudes it will be an ellipse and lower, a hyperbola.
References
Astronomical coordinate systems
Angle
|
https://en.wikipedia.org/wiki/Covermount
|
Covermount (sometimes written cover mount) is the name given to storage media (containing software and or audiovisual media) or other products (ranging from toys to flip-flops) packaged as part of a magazine or newspaper. The name comes from the method of packaging; the media or product is placed in a transparent plastic sleeve and mounted on the cover of the magazine with adhesive tape or glue.
History
Audio recordings were distributed in the UK via covermounts in the 1960s by the fortnightly satirical magazine Private Eye, though the term "covermount" was not in use at that time. The Private Eye recordings were pressed onto 7" floppy vinyl (known as "flexi-discs" and "flimsies") and mounted on the front of the magazine. The weekly pop music paper NME issued audio recordings of rock music on similar 7" flexi-discs as covermounts in the 1970s.
The covermount practice continued with computer magazines in the early era of home computers. In the United Kingdom, computer hobbyist magazines began distributing tapes and later floppy disks with their publications. These disks included demo and shareware versions of games, applications, computer drivers, operating systems, computer wallpapers and other (usually free) content. One of the first games to be distributed as a covermount was the 1984 The Thompson Twins Adventure.
Most magazines backed by large publishers, such as Linux Format, included a covermount CD or DVD with a Linux distribution and other open-source applications. The distribution of discs containing source programs was also common in programming magazines: while the printed version explained the code, the disk had the code ready to be compiled, sparing the reader from typing the whole listing into the computer by hand.
In November 2015, The MagPi magazine brought the concept full circle and attached a free Raspberry Pi Zero on the cover, the first full computer to be included as a covermount on a magazine.
In other places, such as Finl
|
https://en.wikipedia.org/wiki/Elevation
|
The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum).
The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface.
Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance.
Aviation
In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with related terms such as altitude or height.
Maps and GIS
GIS, or geographic information system, is a computer system that allows for capturing, storing, visualizing, and manipulating data with associated attributes. GIS offers a better understanding of patterns and relationships in the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography.
A topographical map is the main type of map used to depict elevation, often through use of contour lines.
In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS.
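A DEM raster can be sketched as a plain 2-D array of elevations. The grid below and the 30 m cell size are illustrative assumptions; real DEMs (e.g. from USGS 3DEP) would be read from GeoTIFF files with a library such as rasterio.

```python
import numpy as np

# A toy digital elevation model (DEM): a raster grid of elevations in metres.
dem = np.array([
    [120.0, 125.0, 131.0],
    [118.0, 124.0, 130.0],
    [115.0, 121.0, 127.0],
])

# Basic terrain statistics derived directly from the raster.
print("min elevation:", dem.min())       # 115.0
print("max elevation:", dem.max())       # 131.0
print("relief:", dem.max() - dem.min())  # 16.0

# Approximate slope per cell from the elevation gradient (30 m cell size assumed).
dy, dx = np.gradient(dem, 30.0)
slope_deg = np.degrees(np.arctan(np.hypot(dx, dy)))
```

Contour lines on a topographic map correspond to level sets of exactly this kind of grid.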
USGS (United States Geological Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high quality LiDAR data over the c
|
https://en.wikipedia.org/wiki/Ecosystem%20ecology
|
Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals.
Ecosystem ecology examines physical and biological structures and examines how these ecosystem characteristics interact with each other. Ultimately, this helps us understand how to maintain high quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions.
Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water.
An example such as a forest and its adjacent stream demonstrates several important aspects of ecosystems:
Ecosystem boundaries are often nebulous and may fluctuate in time
Organisms within ecosystems are dependent on ecosystem level biological and physical processes
Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity
These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear; even though decisions in
|
https://en.wikipedia.org/wiki/List%20of%20Tetris%20variants
|
This is a list of variants of the game Tetris. It includes officially licensed Tetris sequels, as well as unofficial clones.
Official games
Unofficial games
See also
List of puzzle video games
Notes
References
Tetris
|
https://en.wikipedia.org/wiki/CBC%20Tower%20%28Mont-Carmel%29
|
The CBC Tower, also known as the WesTower Transmission Tower, is a guyed mast for FM and TV transmission (rebuilt after its demolition in 2001) located atop Mont-Carmel near Shawinigan, Quebec, Canada. The tower was originally built in 1972 and served for several decades as Quebec's primary CBC transmission point; it also served several radio and television stations for the Trois-Rivières market.
2001 Incident
On April 22, 2001, a lone pilot, Gilbert Paquette, flew his Cessna 150 into the tower and was killed. The fuselage of the plane remained wedged in the upper part of the tower, with the pilot's body inside. The crash also knocked the tower several metres off balance. It was decided that, due to the structural damage and the need to recover the pilot's body, the mast would have to be demolished. Several days later a controlled implosion brought the tower down without damaging the several buildings nearby. At the time, it was the tallest structure ever to have been demolished with explosives.
In July 2003, a new mast broadcasting with over double the effective radiated power of the original (9,300 watts, up from 4,386 watts) was built in exactly the same place, on the same base. The tower structure is the same height as the original; only the antenna at the top differs.
See also
List of masts
References
External links
Implosion World: "Freak Accident Leads to Record-Setting Blast"
Radio masts and towers
Buildings and structures in Shawinigan
Transmitter sites in Canada
Towers in Quebec
Canadian Broadcasting Corporation
Buildings and structures demolished by controlled implosion
Buildings and structures demolished in 2001
Towers completed in 1972
1972 establishments in Quebec
2001 disestablishments in Quebec
Demolished buildings and structures in Canada
|
https://en.wikipedia.org/wiki/Data%20cube
|
In computer programming contexts, a data cube (or datacube) is a multi-dimensional ("n-D") array of values. Typically, the term data cube is applied in contexts where these arrays are massively larger than the hosting computer's main memory; examples include multi-terabyte/petabyte data warehouses and time series of image data.
The data cube is used to represent data (sometimes called facts) along some dimensions of interest.
For example, in online analytical processing (OLAP) such dimensions could be the subsidiaries a company has, the products the company offers, and time; in this setup, a fact would be a sales event where a particular product has been sold in a particular subsidiary at a particular time. In satellite image timeseries dimensions would be latitude and longitude coordinates and time; a fact (sometimes called measure) would be a pixel at a given space and time as taken by the satellite (following some processing that is not of concern here).
Even though it is called a cube (and the examples provided above happen to be 3-dimensional for brevity), a data cube generally is a multi-dimensional concept which can be 1-dimensional, 2-dimensional, 3-dimensional, or higher-dimensional.
In any case, every dimension divides the data into groups of cells, and each cell in the cube represents a single measure of interest. Sometimes a cube holds only a few values, with the rest being empty (i.e. undefined); sometimes most or all cube coordinates hold a cell value. In the first case the data are called sparse, in the second dense, although there is no hard boundary between the two.
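The OLAP example above can be sketched as a small dense array. The dimension sizes and the recorded facts below are invented for illustration; a real cube would be far too large for main memory, which is the point of dedicated data cube systems.

```python
import numpy as np

# A tiny 3-D data cube: sales facts along (subsidiary, product, month).
subsidiaries, products, months = 2, 3, 4
cube = np.zeros((subsidiaries, products, months))

# Record a few facts; most cells stay empty, so this cube is sparse.
cube[0, 1, 2] = 500.0  # subsidiary 0 sold 500 units of product 1 in month 2
cube[1, 0, 0] = 120.0
cube[1, 2, 3] = 75.0

# Typical OLAP-style operations on the cube:
total_by_product = cube.sum(axis=(0, 2))  # "roll up" over subsidiary and month
slice_month_2 = cube[:, :, 2]             # "slice": fix the time dimension

# Sparsity: the fraction of cells that actually hold a value.
density = np.count_nonzero(cube) / cube.size
print(density)  # 0.125
```

With only 3 of 24 cells populated, this cube would be called sparse; a satellite image cube, where nearly every (latitude, longitude, time) cell holds a pixel, would be dense.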
History
Multi-dimensional arrays have long been familiar in programming languages. Fortran offers arbitrarily-indexed 1-D arrays and arrays of arrays, which allows the construction of higher-dimensional arrays, up to 15 dimensions. APL supports n-D arrays with a rich set of operations. All these have in common that arrays must fit into the main memory and are available
|
https://en.wikipedia.org/wiki/Orgueil%20%28meteorite%29
|
Orgueil is a scientifically important carbonaceous chondrite meteorite that fell in southwestern France in 1864.
History
The Orgueil meteorite fell on May 14, 1864, a few minutes after 20:00 local time, near Orgueil in southern France. About 20 stones fell over an area of 5-10 square kilometres. A specimen of the meteorite was analyzed that same year by François Stanislaus Clöez, professor of chemistry at the Musée d'Histoire Naturelle, who focused on the organic matter found in this meteorite. He wrote that it contained carbon, hydrogen, and oxygen, and its composition was very similar to peat from the Somme valley or to the lignite of Ringkohl near Kassel. An intense scientific discussion ensued, continuing into the 1870s, as to whether the organic matter might have a biological origin.
Curation and Distribution
Orgueil specimens are curated by institutions around the world. Given the meteorite's large mass, samples are in circulation for nondestructive (and, with sufficient justification, destructive) study and testing.
Source: Grady, M. M. Catalogue of Meteorites, 5th Edition, Cambridge University Press
Composition and classification
Orgueil is one of five known meteorites belonging to the CI chondrite group (see meteorite classification), and is the largest. This group has a composition that is essentially identical to that of the Sun, excluding gaseous elements like hydrogen and helium. Notably, though, the Orgueil meteorite is highly enriched in (volatile) mercury, which is undetectable in the solar photosphere; this is a major driver of the "mercury paradox": mercury abundances in meteorites do not follow the behaviour expected from mercury's volatility and isotopic ratios in the solar nebula.
Because of its extraordinarily primitive composition and relatively large mass, Orgueil is one of the most-studied meteorites. One notable discovery in Orgueil was a high concentration of isotopically anomalous xenon called "xenon-HL". The carrier of this gas is extremely fine-grained d
|
https://en.wikipedia.org/wiki/Matrix%20calculus
|
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
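As an illustration of the split described above, the derivative of a scalar f with respect to a vector x ∈ ℝⁿ is written differently in the two layout conventions:

```latex
% Numerator layout: the derivative of a scalar by a vector is a row vector
\frac{\partial f}{\partial \mathbf{x}}
  = \begin{bmatrix}
      \dfrac{\partial f}{\partial x_1} & \cdots & \dfrac{\partial f}{\partial x_n}
    \end{bmatrix}
\qquad\text{vs.}\qquad
% Denominator layout: the same derivative written as a column vector
\frac{\partial f}{\partial \mathbf{x}}
  = \begin{bmatrix}
      \dfrac{\partial f}{\partial x_1} \\ \vdots \\ \dfrac{\partial f}{\partial x_n}
    \end{bmatrix}
```

The two results are transposes of each other, which is exactly why mixing results from authors using different conventions can silently produce wrong expressions.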
Scope
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the d
|
https://en.wikipedia.org/wiki/Global%20Assembly%20Cache
|
The Global Assembly Cache (GAC) is a machine-wide CLI assembly cache for the Common Language Infrastructure (CLI) in Microsoft's .NET Framework. The approach of having a specially controlled central repository addresses the flaws in the shared library concept and helps to avoid pitfalls of other solutions that led to drawbacks like DLL hell.
Requirements
Assemblies residing in the GAC must adhere to a specific versioning scheme which allows for side-by-side execution of different code versions. Specifically, such assemblies must be strongly named.
Usage
There are two ways to interact with the GAC: the Global Assembly Cache Tool (gacutil.exe) and the Assembly Cache Viewer (shfusion.dll).
Global Assembly Cache Tool
gacutil.exe is an older command-line utility that shipped with .NET 1.1 and is still available with the .NET SDK.
One can check the availability of a shared assembly in GAC by using the command:
gacutil.exe /l <assemblyName>
One can register a shared assembly in the GAC by using the command:
gacutil.exe /i <assemblyName>
Or by copying an assembly file into the following location:
%windir%\assembly\
Note that for .NET 4.0 the GAC location is now:
%windir%\Microsoft.NET\assembly\
Other options for this utility are briefly described when it is invoked with the /? flag:
gacutil.exe /?
Assembly Cache Viewer
The newer interface, the Assembly Cache Viewer, is integrated into Windows Explorer. Browsing %windir%\assembly\ (for example, C:\WINDOWS\assembly) or %WINDIR%\Microsoft.NET\assembly, displays the assemblies contained in the cache along with their versions, culture, public key token, and processor architecture. Assemblies are installed by dragging and dropping and uninstalled by selecting and pressing the delete key or using the context menu.
With the launch of the .NET Framework 4, the Assembly Cache Viewer shell extension is obsolete.
Example of use
A computer has two CLI assemblies both named AssemblyA, but one is version 1.0 and the other is ve
|
https://en.wikipedia.org/wiki/Option%20ROM
|
An Option ROM for the PC platform (i.e. the IBM PC and derived successor computer systems) is a piece of firmware that resides in ROM on an expansion card (or stored along with the main system BIOS), which gets executed to initialize the device and (optionally) add support for the device to the BIOS. In its usual use, it is essentially a driver that interfaces between the BIOS API and hardware. Technically, an option ROM is firmware that is executed by the BIOS after POST (the testing and initialization of basic system hardware) and before the BIOS boot process, gaining complete control of the system and being generally unrestricted in what it can do. The BIOS relies on each option ROM to return control to the BIOS so that it can either call the next option ROM or commence the boot process. For this reason, it is possible (but not usual) for an option ROM to keep control and preempt the BIOS boot process. The BIOS (at least as originally designed by IBM) generally scans for and initializes (by executing) option ROMs in ascending address order at 2 KB address intervals within two different address ranges above address C0000h in the conventional (20-bit) memory address space; later systems may also scan additional address ranges in the 24-bit or 32-bit extended address space.
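The scan described above can be sketched in a few lines. A valid legacy option ROM begins with the signature bytes 55h AAh, followed by a length byte giving the ROM size in 512-byte units, and all bytes of the ROM must sum to zero modulo 256; the `memory` argument below is a hypothetical bytes image of the scanned region, and the function is an illustration of the procedure, not BIOS code.

```python
ROM_SIGNATURE = b"\x55\xAA"

def find_option_roms(memory: bytes, base: int = 0xC0000) -> list[int]:
    """Return addresses of valid option ROM headers, scanning at 2 KB steps."""
    found = []
    offset = 0
    while offset + 3 <= len(memory):
        if memory[offset:offset + 2] == ROM_SIGNATURE:
            size = memory[offset + 2] * 512  # length byte, in 512-byte units
            rom = memory[offset:offset + size]
            # The ROM is accepted only if its bytes sum to zero mod 256.
            if len(rom) == size and sum(rom) % 256 == 0:
                found.append(base + offset)
        offset += 2048  # the BIOS scans at 2 KB address intervals
    return found
```

A real BIOS would then far-call the entry point at offset 3 of each accepted ROM, relying on it to return control as described above.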
Option ROMs are necessary to enable non-Plug and Play peripheral devices to boot and to extend the BIOS to provide support for any non-Plug and Play peripheral device in the same way that standard and motherboard-integrated peripherals are supported. Option ROMs are also used to extend the BIOS or to add other firmware services to the BIOS. In principle, an option ROM could provide any sort of firmware extension, such as a library of video graphics subroutines, or a set of PCM audio processing services, and cause it to be installed into the system RAM and optionally the CPU interrupt system before boot time.
A common option ROM is the video BIOS which gets loaded very early on in the boot p
|
https://en.wikipedia.org/wiki/Reich%20Labour%20Service
|
The Reich Labour Service (Reichsarbeitsdienst; RAD) was a major paramilitary organization established in Nazi Germany as an agency to help mitigate the effects of unemployment on the German economy, militarise the workforce and indoctrinate it with Nazi ideology. It was the official state labour service, divided into separate sections for men and women.
From June 1935 onward, men aged between 18 and 25 were required to serve six months before their military service. During World War II, compulsory service also included young women, and the RAD developed into an auxiliary formation which provided support for the Wehrmacht armed forces.
Foundation
In the course of the Great Depression, the German government of the Weimar Republic under Chancellor Heinrich Brüning established the Freiwilliger Arbeitsdienst ('Voluntary Labour Service', FAD) by emergency decree on 5 June 1931, two years before the Nazi Party (NSDAP) ascended to national power. The state-sponsored employment organisation provided services to civic and land improvement projects; from 16 July 1932 it was headed by Friedrich Syrup in the official rank of a Reichskommissar. As the name indicates, participation was voluntary for as long as the Weimar Republic existed.
The concept was adopted by Adolf Hitler, who upon the Nazi seizure of power in 1933 appointed Konstantin Hierl state secretary in the Reich Ministry of Labour, responsible for FAD matters. Hierl was already a high-ranking member of the NSDAP and head of the party's labour organisation, the Nationalsozialistischer Arbeitsdienst or NSAD. Hierl developed the concept of a state labour service organisation similar to the Reichswehr army, with a view to implementing compulsory service. Though meant as an evasion of the regulations set by the 1919 Treaty of Versailles, participation initially remained voluntary after protests from the Geneva World Disarmament Conference.
Hierl's rivalry with Labour Minister Franz Seldte led to the affiliation of his office as a FAD Reichskom
|
https://en.wikipedia.org/wiki/Cooler
|
A cooler, portable ice chest, ice box, cool box, chilly bin (in New Zealand), or esky (Australia) is an insulated box used to keep food or drink cool.
Ice cubes are most commonly placed in it to help the contents inside stay cool. Ice packs are sometimes used, as they either contain the melting water inside, or have a gel sealed inside that stays cold longer than plain ice (absorbing heat as it changes phase).
Coolers are often taken on picnics, and on vacation or holiday. Where summers are hot, they may also be used just for getting cold groceries home from the store, such as keeping ice cream from melting in a hot automobile. Even without adding ice, this can be helpful, particularly if the trip home will be lengthy. Some coolers have built-in cupholders in the lid.
They are usually made with interior and exterior shells of plastic, with a hard foam in between. They come in sizes from small personal ones to large family ones with wheels. Disposable ones are made solely from polystyrene foam (such as a disposable coffee cup) about 2 cm or one inch thick. Most reusable ones have molded-in handles; a few have shoulder straps. The cooler has developed from just a means of keeping beverages cold into a mode of transportation with the ride-on cooler. A thermal bag, cooler bag or cool bag is very similar in concept, but typically smaller and not rigid.
History
The original inventor of the cooler is unknown, with versions becoming available in various parts of the world throughout the 1950s.
The portable ice chest was patented in the USA by Richard C. Laramy of Joliet, Illinois. On February 24, 1951, Laramy filed an application with the United States Patent Office for a portable ice chest (Serial No. 212,573). The patent (#2,663,157) was issued December 22, 1953.
In 1952, the portable Esky Auto Box was released in Australia by the Sydney refrigeration company Malley’s. Made from steel and finished in baked enamel and chrome, with cork sheeting for insulati
|
https://en.wikipedia.org/wiki/Egorov%27s%20theorem
|
In measure theory, an area of mathematics, Egorov's theorem establishes a condition for the uniform convergence of a pointwise convergent sequence of measurable functions. It is also named Severini–Egoroff theorem or Severini–Egorov theorem, after Carlo Severini, an Italian mathematician, and Dmitri Egorov, a Russian physicist and geometer, who published independent proofs respectively in 1910 and 1911.
Egorov's theorem can be used along with compactly supported continuous functions to prove Lusin's theorem for integrable functions.
Historical note
The first proof of the theorem was given by Carlo Severini in 1910: he used the result as a tool in his research on series of orthogonal functions. His work remained apparently unnoticed outside Italy, probably because it was written in Italian, appeared in a scientific journal with limited circulation, and was considered only as a means to obtain other theorems. A year later Dmitri Egorov published his independently proved results, and the theorem became widely known under his name; however, it is not uncommon to find references to it as the Severini–Egoroff theorem. The theorem was later proved independently in the now-standard abstract measure space setting by several mathematicians; an earlier generalization is due to Nikolai Luzin, who succeeded in slightly relaxing the requirement of finiteness of measure of the domain of convergence of the pointwise converging functions. Further generalizations were given much later by Pavel Korovkin and by Gabriel Mokobodzki.
Formal statement and proof
Statement
Let (fn) be a sequence of M-valued measurable functions, where M is a separable metric space, on some measure space (X,Σ,μ), and suppose there is a measurable subset A ⊆ X, with finite μ-measure, such that (fn) converges μ-almost everywhere on A to a limit function f. The following result holds: for every ε > 0, there exists a measurable subs
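In its standard form (as found in most measure-theory texts), the conclusion of the theorem reads:

```latex
\forall \varepsilon > 0 \;\; \exists\, B \in \Sigma,\ B \subseteq A :
\quad \mu(B) < \varepsilon
\quad \text{and} \quad
f_n \to f \ \text{uniformly on } A \setminus B .
```

That is, almost-everywhere convergence on a set of finite measure can be upgraded to uniform convergence off an exceptional set of arbitrarily small measure.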
|
https://en.wikipedia.org/wiki/Variscale
|
A variscale is a variable-length mechanical scale (ruler) designed to directly measure latitude and longitude on USGS maps.
References
External links
Instructions for using the Variscale
Measuring instruments
Cartography
|
https://en.wikipedia.org/wiki/Nicholas%20Kurti
|
Nicholas Kurti (14 May 1908 – 24 November 1998) was a Hungarian-born British physicist who lived in Oxford, UK, for most of his life.
Career
Born in Budapest, Kurti went to high school at the Minta Gymnasium, but due to anti-Jewish laws he had to leave the country, gaining his master's degree at the Sorbonne in Paris. He obtained his doctorate in low-temperature physics in Berlin, working with Professor Franz Simon. Kurti and Simon continued to work together during 1931–1933 at the Technische Hochschule in Breslau. However, when Adolf Hitler rose to power, both Simon and Kurti left Germany, joining the Clarendon Laboratory in the University of Oxford, England.
During World War II, Kurti worked on the Manhattan project, returning to Oxford in 1945. In 1955 he won the Fernand Holweck Medal and Prize. In 1956, Simon and Kurti built a laboratory experiment that reached a temperature of one microkelvin. This work attracted worldwide attention, and Kurti was elected a Fellow of the Royal Society. He later became the society's Vice-President from 1965 to 1967.
Kurti became a Fellow of Brasenose College, Oxford, in 1947 and became Professor of Physics at Oxford in 1967, a post he held until his retirement in 1975. He was also Visiting Professor at City College in New York City, the University of California, Berkeley, and Amherst College in Massachusetts.
Nicholas Kurti was elected as a Fellow of the Royal Society (FRS) in 1956, becoming vice-president in 1965, and was appointed as a Commander of the British Empire (CBE) in 1973.
Personal life
Kurti's hobby was cooking, and he was an enthusiastic advocate of applying scientific knowledge to culinary problems, a field known today as gastrophysics. In 1969 he gave a talk at the Royal Institution titled "The physicist in the kitchen", in which he amazed the audience by using the recently invented microwave oven to make a "reverse Baked Alaska" — a Frozen Florida — hot liquor enclosed by a shell of frozen meringue. Ov
|
https://en.wikipedia.org/wiki/PICAXE
|
PICAXE is a microcontroller system based on a range of Microchip PIC microcontrollers. PICAXE devices are Microchip PIC devices with pre-programmed firmware that enables bootloading of code directly from a PC, simplifying hobbyist embedded development (not unlike the Arduino and Parallax BASIC Stamp systems). PICAXE devices have been produced by Revolution Education (Rev-Ed) since 1999.
Hardware
There are currently six PICAXE variants, with pin counts of 8, 14, 18, 20, 28, and 40; they are available in DIL and SMD packages.
PICAXE microcontrollers are pre-programmed with an interpreter similar to the BASIC Stamp but using internal EEPROM instead, thus reducing cost. This also allows downloads to be made with a simple serial connection which eliminates the need for a PIC programmer. PICAXE is programmed using an RS-232 serial cable or a USB cable which connects a computer to the download circuit, which normally uses a 3.5 mm jack and two resistors.
Programming language
PICAXE microcontrollers are programmed using BASIC.
The PICAXE interpreter features bit-banged communications:
Serial (asynchronous serial)
SPI (synchronous serial)
Infrared (using a 38 kHz carrier, seven data bits and five ID bits)
One-wire
The "readtemp" command reads the temperature from a DS18B20 temperature sensor and converts it into Celsius.
All current PICAXEs have commands for using hardware features of the underlying PIC microcontrollers:
Hardware asynchronous serial
Hardware synchronous serial
Hardware PWM
DAC
ADC
SR Latch
Timers (two on X2/X1 parts which have settable intervals, only one on M2 parts with a fixed interval, older parts have none)
Comparators
Internal temperature measurement
Program space
All current PICAXE chips have at least 2048 bytes of on board program memory available for user programs:
08M2 - 2048 bytes
14M2 - 2048
18M2+ - 2048
20M2 - 2048
20X2 - 4096
28X1 - 4096
40X1 - 4096
28X2 - 4096 per slot with four slots for a total of 16 KiB
40X2 - 4096 per sl
|
https://en.wikipedia.org/wiki/Lunar%20Receiving%20Laboratory
|
The Lunar Receiving Laboratory (LRL) was a facility at NASA's Lyndon B. Johnson Space Center (Building 37) that was constructed to quarantine astronauts and material brought back from the Moon during the Apollo program to reduce the risk of back-contamination. After recovery at sea, crews from Apollo 11, Apollo 12, and Apollo 14 walked from their helicopter to the Mobile Quarantine Facility on the deck of an aircraft carrier and were brought to the LRL for quarantine. Samples of rock and regolith that the astronauts collected and brought back were flown directly to the LRL and initially analyzed in glovebox vacuum chambers.
The quarantine requirement was dropped for Apollo 15 and later missions. The LRL was used for study, distribution, and safe storage of the lunar samples. Between 1969 and 1972, six Apollo space flight missions brought back 382 kilograms (842 pounds) of lunar rocks, core samples, pebbles, sand, and dust from the lunar surface—in all, 2,200 samples from six exploration sites. Other lunar samples were returned to Earth by three automated Soviet spacecraft, Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976, which returned samples totaling 300 grams (about 3/4 pound).
In 1976, some of the samples were moved to Brooks Air Force Base in San Antonio, Texas, for second-site storage. In 1979, a Lunar Sample Laboratory Facility was built to serve as the chief repository for the Apollo samples: permanent storage in a physically secure and non-contaminating environment. The facility includes vaults for the samples and records, and laboratories for sample preparation and study. The Lunar Receiving Laboratory building was later occupied by NASA's Life Sciences division, contained biomedical and environment labs, and was used for experiments involving human adaptation to microgravity.
In September 2019, NASA announced that the Lunar Receiving Laboratory had not been used for two years and would be demolished.
See also
Moon rock
Lunar Sample Laborator
|
https://en.wikipedia.org/wiki/Mensch%20Computer
|
The Mensch Computer is a personal computer system produced by the Western Design Center (WDC). It is based on the WDC 65C265 microcontroller, which implements the instruction sets of two microprocessors: the 16-bit W65C816/65816, and the 8-bit 6502. The computer is named after Bill Mensch, designer of the 6502 and its successor microprocessors.
The system is designed for hobbyists and people who enjoy computer programming, especially at the assembly language level, and includes a basic set of peripherals which can be expanded by the owner. Much software originally written for other computer systems which use the 65816 or 6502 instruction sets (such as the Nintendo Entertainment System, Super Nintendo, or Apple IIGS, among others) can be run on the Mensch Computer (either directly as binary object code or through reassembling the software source code), to the extent that such software does not rely on hardware configurations which differ from the Mensch Computer.
The Mensch Computer includes a read-only memory (ROM) machine code monitor (a type of firmware), and many software routines are available to programmers by calling subroutines in the ROM. Typically, the system runs Mensch Works, a software suite also named after Bill Mensch.
References
External links
Microcomputers
|
https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s%20inequality
|
In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants.
Grönwall's inequality is an important tool to obtain various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard–Lindelöf theorem.
It is named for Thomas Hakon Grönwall (1877–1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States.
The inequality was first proven by Grönwall in 1919 (the integral form below with α and β being constants).
Richard Bellman proved a slightly more general integral form in 1943.
A nonlinear generalization of the Grönwall–Bellman inequality is known as Bihari–LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).
Differential form
Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let β and u be real-valued continuous functions defined on I. If u is differentiable in the interior I° of I (the interval I without the end points a and possibly b) and satisfies the differential inequality
u′(t) ≤ β(t) u(t),   t ∈ I°,
then u is bounded by the solution of the corresponding differential equation y′(t) = β(t) y(t):
u(t) ≤ u(a) exp(∫_a^t β(s) ds)
for all t ∈ I.
Remark: There are no assumptions on the signs of the functions β and u.
Proof
Define the function
v(t) = exp(∫_a^t β(s) ds),   t ∈ I.
Note that v satisfies
v′(t) = β(t) v(t),   t ∈ I°,
with v(a) = 1 and v(t) > 0 for all t ∈ I. By the quotient rule,
d/dt (u(t)/v(t)) = (u′(t) v(t) − v′(t) u(t)) / v(t)² ≤ (β(t) u(t) v(t) − β(t) v(t) u(t)) / v(t)² = 0.
Thus the derivative of the function u(t)/v(t) is non-positive, and the function is bounded above by its value at the initial point a of the interval I:
u(t)/v(t) ≤ u(a)/v(a) = u(a)   for all t ∈ I,
which is Grönwall's inequality u(t) ≤ u(a) v(t).
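As a quick sanity check (illustrative Python, not part of the original article), one can integrate a function satisfying a strict version of the differential inequality and confirm it stays below the Grönwall bound. Here β(t) = 1 and u solves u′ = u − 1 with u(0) = 2, so u′ ≤ β(t)u because u ≥ 1 along the trajectory:

```python
import math

def gronwall_check(a=0.0, b=3.0, n=3000):
    """Euler-integrate u' = u - 1 (which satisfies u' <= beta*u with beta = 1,
    since u >= 1 here) and compare u(t) with the Gronwall bound u(a)*exp(t - a)."""
    h = (b - a) / n
    t, u = a, 2.0                 # initial value u(a) = 2
    u0 = u
    ok = True
    for _ in range(n):
        u += h * (u - 1.0)        # Euler step for u' = u - 1
        t += h
        bound = u0 * math.exp(t - a)
        ok = ok and u <= bound + 1e-9
    return ok, u, u0 * math.exp(b - a)

ok, u_end, bound_end = gronwall_check()
print(ok)                          # the inequality held along the whole trajectory
```

The exact solution here is u(t) = 1 + e^t, comfortably below the bound 2e^t, so the check passes with a wide margin.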
Integral form for continuous
|
https://en.wikipedia.org/wiki/List%20of%20ECMAScript%20engines
|
An ECMAScript engine is a program that executes source code written in a version of the ECMAScript language standard, for example, JavaScript.
Just-in-time compilation engines
These are new generation ECMAScript engines for web browsers, all implementing just-in-time compilation (JIT) or variations of that idea. The performance benefits of just-in-time compilation make it much more suitable for web applications written in JavaScript.
Carakan: A JavaScript engine developed by Opera Software ASA, included in the 10.50 release of the Opera web browser, until switching to V8 with Opera 15 (released in 2013).
Chakra (JScript9): A JScript engine used in Internet Explorer. It was first previewed at MIX 10 as part of the Internet Explorer 9 Platform Preview.
Chakra: A JavaScript engine previously used in older versions of Microsoft Edge, before being replaced by V8.
SpiderMonkey: A JavaScript engine in Mozilla Gecko applications, including Firefox. The engine currently includes the IonMonkey compiler and OdinMonkey optimization module, has previously included the TraceMonkey compiler (first JavaScript JIT) and JägerMonkey.
JavaScriptCore: A JavaScript interpreter and JIT originally derived from KJS. It is used in the WebKit project and applications such as Safari. Also known as Nitro, SquirrelFish, and SquirrelFish Extreme.
JScript .NET: A .NET Framework JScript engine used in ASP.NET based on Common Language Runtime and COM Interop. Support was dropped with .NET Core and CoreCLR so its future looks questionable for ASP.NET Core.
Tamarin: An ActionScript and ECMAScript engine used in Adobe Flash.
V8: A JavaScript engine used in Google Chrome and other Chromium-based browsers, Node.js, Deno, and V8.NET.
GNU Guile features an ECMAScript interpreter as of version 1.9.
Nashorn: A JavaScript engine used in Oracle Java Development Kit (JDK) since version 8.
iv, ECMAScript Lexer / Parser / Interpreter / VM / method JIT written in C++.
CL-JavaScript: Can compile Java
|
https://en.wikipedia.org/wiki/Citronellal
|
Citronellal or rhodinal (IUPAC name: 3,7-dimethyloct-6-enal; C10H18O) is a monoterpenoid aldehyde, the main component in the mixture of terpenoid chemical compounds that give citronella oil its distinctive lemon scent.
Citronellal is a main isolate in distilled oils from the plants Cymbopogon (excepting C. citratus, culinary lemongrass), lemon-scented gum, and lemon-scented teatree. The (S)-(−)-enantiomer of citronellal makes up to 80% of the oil from kaffir lime leaves and is the compound responsible for its characteristic aroma.
Citronellal has insect repellent properties, and research shows high repellent effectiveness against mosquitoes. Other research shows that citronellal also has strong antifungal qualities.
Compendial status
British Pharmacopoeia
See also
Citral
Citronellol
Citronella oil
Hydroxycitronellal
Perfume allergy
References
Flavors
Alkenals
Monoterpenes
|
https://en.wikipedia.org/wiki/Citral
|
Citral is an acyclic monoterpene aldehyde. Being a monoterpene, it is made of two isoprene units. Citral is a collective term which covers two geometric isomers that have their own separate names; the E-isomer is named geranial (trans-citral; α-citral) or citral A. The Z-isomer is named neral (cis-citral; β-citral) or citral B. These stereoisomers occur as a mixture, not necessarily racemic; e.g. in essential oil of Australian ginger, the neral to geranial ratio is 0.61.
Occurrence
Citral is present in the volatile oils of several plants, including lemon myrtle (90–98%), Litsea citrata (90%), Litsea cubeba (70–85%), lemongrass (65–85%), lemon tea-tree (70–80%), Ocimum gratissimum (66.5%), Lindera citriodora (about 65%), Calypranthes parriculata (about 62%), petitgrain (36%), lemon verbena (30–35%), lemon ironbark (26%), lemon balm (11%), lime (6–9%), lemon (2–5%), and orange. It also occurs in the lipid fraction (essential oil) of Australian ginger (51–71%). Of the many sources of citral, the Australian myrtaceous tree, lemon myrtle, Backhousia citriodora F. Muell. (of the family Myrtaceae), is considered superior.
Uses
Citral has a strong lemon (citrus) scent and is used as an aroma compound in perfumery. It is used to fortify lemon oil. (Nerol, another perfumery compound, has a less intense but sweeter lemon note.) The aldehydes citronellal and citral are considered key components responsible for the lemon note with citral preferred.
It also has pheromonal effects in acari and insects.
Citral is used in the synthesis of vitamin A, lycopene, ionone and methylionone, and to mask the smell of smoke.
The herb Cymbopogon citratus has shown promising insecticidal and antifungal activity against storage pests.
Food additive
Citral is commonly used as a food additive ingredient.
It has been tested (2016) in vitro against the food-borne pathogen Cronobacter sakazakii.
Medical exploration
In a report (1997), citral is mentioned as cytotoxic to P(388) mouse leukaemia ce
|
https://en.wikipedia.org/wiki/Ionone
|
The ionones, from Greek ἴον ion "violet", are a series of closely related chemical substances that are part of a group of compounds known as rose ketones, which also includes damascones and damascenones. Ionones are aroma compounds found in a variety of essential oils, including rose oil. β-Ionone is a significant contributor to the aroma of roses, despite its relatively low concentration, and is an important fragrance chemical used in perfumery. The ionones are derived from the degradation of carotenoids.
The combination of α-ionone and β-ionone is characteristic of the scent of violets and used with other components in perfumery and flavouring to recreate their scent.
The carotenes α-carotene, β-carotene, γ-carotene, and the xanthophyll β-cryptoxanthin, can all be metabolized to β-ionone, and thus have vitamin A activity because they can be converted by plant-eating animals to retinol and retinal. Carotenoids that do not contain the β-ionone moiety cannot be converted to retinol, and thus have no vitamin A activity.
Biosynthesis
Carotenoids are the precursors of important fragrance compounds in several flowers. For example, a 2010 study of ionones in Osmanthus fragrans Lour. var. aurantiacus determined its essential oil contained the highest diversity of carotenoid-derived volatiles among the flowering plants investigated. A cDNA encoding a carotenoid cleavage enzyme, OfCCD1, was identified from transcripts isolated from flowers of O. fragrans Lour. The recombinant enzymes cleaved carotenes to produce α-ionone and β-ionone in in vitro assays.
The same study also discovered that carotenoid content, volatile emissions, and OfCCD1 transcript levels are subject to photorhythmic changes, and principally increased during daylight hours. At the times when OfCCD1 transcript levels reached their maxima, the carotenoid content remained low or slightly decreased. The emission of ionones was also higher during the day; however, emissions decreased at a lower rate tha
|
https://en.wikipedia.org/wiki/Git
|
Git () is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers who are collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows (thousands of parallel branches running on different computers).
Git was originally authored by Linus Torvalds in 2005 for development of the Linux kernel, with other kernel developers contributing to its initial development. Since 2005, Junio Hamano has been the core maintainer. As with most other distributed version control systems, and unlike most client–server systems, every Git directory on every computer is a full-fledged repository with complete history and full version-tracking abilities, independent of network access or a central server. Git is free and open-source software shared under the GPL-2.0-only license.
Since its creation, Git has become the most popular distributed version control system, with nearly 95% of developers reporting it as their primary version control system as of 2022. There are many popular offerings of Git repository services, including GitHub, SourceForge, Bitbucket and GitLab.
History
Git development was started by Torvalds in April 2005 when the proprietary source-control management (SCM) system used for Linux kernel development since 2002, BitKeeper, revoked its free license for Linux development. The copyright holder of BitKeeper, Larry McVoy, claimed that Andrew Tridgell had created SourcePuller by reverse engineering the BitKeeper protocols. The same incident also spurred the creation of another version-control system, Mercurial.
Torvalds wanted a distributed system that he could use like BitKeeper, but none of the available free systems met his needs. He cited an example of a source-control management system needing 30 seconds to apply a patch and update all associated metadata, and noted that this would not s
|
https://en.wikipedia.org/wiki/%28B%2C%20N%29%20pair
|
In mathematics, a (B, N) pair is a structure on groups of Lie type that allows one to give uniform proofs of many results, instead of giving a large number of case-by-case proofs. Roughly speaking, it shows that all such groups are similar to the general linear group over a field. They were introduced by the mathematician Jacques Tits, and are also sometimes known as Tits systems.
Definition
A (B, N) pair is a pair of subgroups B and N of a group G such that the following axioms hold:
G is generated by B and N.
The intersection, T, of B and N is a normal subgroup of N.
The group W = N/T is generated by a set S of elements of order 2 such that
If s is an element of S and w is an element of W then sBw is contained in the union of BswB and BwB.
No element of S normalizes B.
The set S is uniquely determined by B and N and the pair (W,S) is a Coxeter system.
Terminology
BN pairs are closely related to reductive groups and the terminology in both subjects overlaps. The size of S is called the rank. We call
B the (standard) Borel subgroup,
T the (standard) Cartan subgroup, and
W the Weyl group.
A subgroup of G is called
parabolic if it contains a conjugate of B,
standard parabolic if, in fact, it contains B itself, and
a Borel (or minimal parabolic) if it is a conjugate of B.
Examples
Abstract examples of BN pairs arise from certain group actions.
Suppose that G is any doubly transitive permutation group on a set E with more than 2 elements. We let B be the subgroup of G fixing a point x, and we let N be the subgroup fixing or exchanging 2 points x and y. The subgroup T is then the set of elements fixing both x and y, and W has order 2 and its nontrivial element is represented by anything exchanging x and y.
Conversely, if G has a (B, N) pair of rank 1, then the action of G on the cosets of B is doubly transitive. So BN pairs of rank 1 are more or less the same as doubly transitive actions on sets with more than 2 elements.
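The rank-1 correspondence can be checked concretely for the doubly transitive action of the symmetric group S3 on three points (an illustrative computation; the choices of G, B and N below are made for the example, not taken from the text): with B the stabilizer of a point and N the setwise stabilizer of a pair, B and N generate G, T = B ∩ N is trivial, and W = N/T has order 2.

```python
from itertools import permutations

G = set(permutations(range(3)))                  # the symmetric group S3
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

B = {g for g in G if g[0] == 0}                  # stabilizer of the point 0
N = {g for g in G if {g[0], g[1]} == {0, 1}}     # setwise stabilizer of {0, 1}

def generated(gens):
    """Close a set of permutations under composition."""
    S = set(gens)
    frontier = True
    while frontier:
        frontier = {compose(p, q) for p in S for q in S} - S
        S |= frontier
    return S

T = B & N                                        # here just the identity
print(generated(B | N) == G, len(N) // len(T))   # True 2
```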
More concrete examples of BN pairs c
|
https://en.wikipedia.org/wiki/Model%20of%20computation
|
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Models
Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.
Sequential models
Sequential models include:
Finite state machines
Post machines (Post–Turing machines and tag machines).
Pushdown automata
Register machines
Random-access machines
Turing machines
Decision tree model
Functional models
Functional models include:
Abstract rewriting systems
Combinatory logic
General recursive functions
Lambda calculus
Concurrent models
Concurrent models include:
Actor model
Cellular automaton
Interaction nets
Kahn process networks
Logic gates and digital circuits
Petri nets
Synchronous Data Flow
Some of these models have both deterministic and nondeterministic variants. Nondeterministic models are not useful for practical computation; they are used in the study of computational complexity of algorithms.
Models differ in their expressive power; for example, each function that can be computed by a Finite state machine can also be computed by a Turing machine, but not vice versa.
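To make the expressiveness gap concrete, here is a minimal finite state machine (illustrative Python, names chosen for the example): it decides whether a binary string contains an even number of 1s using only a fixed transition table and no auxiliary memory, exactly the resource bound that separates it from a Turing machine.

```python
def accepts_even_ones(s):
    """Finite state machine with two states: 'even' (accepting) and 'odd'.
    The transition table is the entire program; no auxiliary memory is used."""
    transitions = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for ch in s:
        state = transitions[(state, ch)]
    return state == "even"

print(accepts_even_ones("1011"))  # three 1s -> False
print(accepts_even_ones("1001"))  # two 1s  -> True
```

A Turing machine can simulate this table directly, but no finite state machine can recognize, for example, balanced parentheses, since that requires unbounded counting.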
Uses
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs fro
|
https://en.wikipedia.org/wiki/Cache-oblivious%20algorithm
|
In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache-oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes. Cache-oblivious algorithms are contrasted with explicit loop tiling, which explicitly breaks a problem into blocks that are optimally sized for a given cache.
Optimal cache-oblivious algorithms are known for matrix multiplication, matrix transposition, sorting, and several other problems. Some more general algorithms, such as Cooley–Tukey FFT, are optimally cache-oblivious under certain choices of parameters. As these algorithms are only optimal in an asymptotic sense (ignoring constant factors), further machine-specific tuning may be required to obtain nearly optimal performance in an absolute sense. The goal of cache-oblivious algorithms is to reduce the amount of such tuning that is required.
Typically, a cache-oblivious algorithm works by a recursive divide-and-conquer algorithm, where the problem is divided into smaller and smaller subproblems. Eventually, one reaches a subproblem size that fits into the cache, regardless of the cache size. For example, an optimal cache-oblivious matrix multiplication is obtained by recursively dividing each matrix into four sub-matrices to be multiplied, multiplying the submatrices in a depth-first fashion. In tuning for a specific machine, one may use a hybrid algorithm which uses loop tiling tuned for the specific cache sizes at the bottom level but otherwise uses the cache-oblivious algorithm.
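The recursive pattern can be sketched in a few lines (illustrative Python, not tuned for any particular machine): an out-of-place matrix transpose that recursively splits the larger dimension until a small base case is reached, so the base-case blocks fit in cache regardless of the cache size.

```python
def transpose(a, out, r0, r1, c0, c1):
    """Cache-obliviously copy the submatrix a[r0:r1][c0:c1] into out, transposed.
    Splitting the larger dimension keeps the base-case blocks roughly square."""
    if (r1 - r0) * (c1 - c0) <= 16:           # base case: block assumed cache-resident
        for i in range(r0, r1):
            for j in range(c0, c1):
                out[j][i] = a[i][j]
    elif r1 - r0 >= c1 - c0:                  # split the taller dimension
        mid = (r0 + r1) // 2
        transpose(a, out, r0, mid, c0, c1)
        transpose(a, out, mid, r1, c0, c1)
    else:                                     # split the wider dimension
        mid = (c0 + c1) // 2
        transpose(a, out, r0, r1, c0, mid)
        transpose(a, out, r0, r1, mid, c1)

a = [[3 * i + j for j in range(3)] for i in range(4)]   # a 4x3 matrix
out = [[0] * 4 for _ in range(3)]
transpose(a, out, 0, 4, 0, 3)
print(out == [list(row) for row in zip(*a)])             # True
```

The base-case threshold here is arbitrary; the point is that no cache parameter appears anywhere in the recursion.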
History
The idea (and nam
|
https://en.wikipedia.org/wiki/Finite%20model%20theory
|
Finite model theory is a subarea of model theory. Model theory is the branch of logic which deals with the relation between a formal language (syntax) and its interpretations (semantics). Finite model theory is a restriction of model theory to interpretations on finite structures, which have a finite universe.
Since many central theorems of model theory do not hold when restricted to finite structures, finite model theory is quite different from model theory in its methods of proof. Central results of classical model theory that fail for finite structures under finite model theory include the compactness theorem, Gödel's completeness theorem, and the method of ultraproducts for first-order logic (FO).
While model theory has many applications to mathematical algebra, finite model theory became an "unusually effective" instrument in computer science. In other words: "In the history of mathematical logic most interest has concentrated on infinite structures. [...] Yet, the objects computers have and hold are always finite. To study computation we need a theory of finite structures." Thus the main application areas of finite model theory are: descriptive complexity theory, database theory and formal language theory.
Axiomatisability
A common motivating question in finite model theory is whether a given class of structures can be described in a given language. For instance, one might ask whether the class of cyclic graphs can be distinguished among graphs by an FO sentence, which can also be phrased as asking whether cyclicity is FO-expressible.
A single finite structure can always be axiomatized in first-order logic, where axiomatized in a language L means described uniquely up to isomorphism by a single L-sentence. Similarly, any finite collection of finite structures can always be axiomatized in first-order logic. Some, but not all, infinite collections of finite structures can also be axiomatized by a single first-order sentence.
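For example (an illustration, not part of the original text), the graph consisting of exactly two vertices joined by a single undirected edge is axiomatized up to isomorphism by one first-order sentence in the language of graphs:

```latex
\exists x \, \exists y \, \bigl( x \neq y \wedge E(x,y) \wedge E(y,x)
  \wedge \neg E(x,x) \wedge \neg E(y,y)
  \wedge \forall z \, (z = x \vee z = y) \bigr)
```

The sentence fixes the size of the universe and the truth value of every atomic formula, which is exactly what axiomatizing a single finite structure requires.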
Characterisation of a single
|
https://en.wikipedia.org/wiki/Gutmann%20method
|
The Gutmann method is an algorithm for securely erasing the contents of computer hard disk drives, such as files. Devised by Peter Gutmann and Colin Plumb and presented in the paper Secure Deletion of Data from Magnetic and Solid-State Memory in July 1996, it involved writing a series of 35 patterns over the region to be erased.
The selection of patterns assumes that the user does not know the encoding mechanism used by the drive, so it includes patterns designed specifically for three types of drives. A user who knows which type of encoding the drive uses can choose only those patterns intended for their drive. A drive with a different encoding mechanism would need different patterns.
Most of the patterns in the Gutmann method were designed for older MFM/RLL encoded disks. Gutmann himself has noted that more modern drives no longer use these older encoding techniques, making parts of the method irrelevant. He said "In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques".
Since about 2001, some ATA IDE and SATA hard drive manufacturer designs include support for the ATA Secure Erase standard, obviating the need to apply the Gutmann method when erasing an entire drive. The Gutmann method does not apply to USB sticks: a 2011 study reports that 71.7% of data remained available. On solid state drives it resulted in 0.8–4.3% recovery.
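The overwriting idea can be sketched as follows (hypothetical Python, not the actual 35-pass pattern schedule; caching, wear levelling and file-system behaviour mean a real secure-erase tool must do considerably more than this):

```python
import os
import secrets

def overwrite_file(path, passes=(b"\x55", b"\xaa", None)):
    """Overwrite a file in place with a series of passes: fixed byte patterns
    (here 0x55 and 0xAA) followed by a random pass (None). Real tools also
    flush device caches and finally delete the file."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            data = secrets.token_bytes(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # ask the OS to push the pass toward the device

# usage sketch
with open("scratch.bin", "wb") as f:
    f.write(b"sensitive data")
overwrite_file("scratch.bin")
os.remove("scratch.bin")
```

Note that `fsync` only hands the data to the drive; whether each pass reaches the physical medium depends on the drive's own caching and remapping, which is the limitation discussed in the article.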
Background
The delete function in most operating systems simply marks the space occupied by the file as reusable (removes the pointer to the file) without immediately removing any of its contents. At this point the file can be fairly easily recovered by numerous recovery applications. However, once the space is overwritten with other data, there is no known way to use software to recover it. It cannot be done with software alone since the storag
|
https://en.wikipedia.org/wiki/Continuous%20integration
|
In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day. Nowadays it is typically implemented in such a way that it triggers an automated build with testing. Grady Booch first proposed the term CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
Rationale
When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added as well as new libraries, and other resources that create dependencies, and potential conflicts.
The longer development continues on a branch without merging back to the mainline, the greater the risk of multiple integration conflicts and failures when the developer branch is eventually merged back. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.
Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell", where the time it takes to integrate exceeds the time it took to make their original changes.
Workflows
Run tests locally
CI should be used in combination with automated unit tests written through the practices of test-driven development. All unit tests in the developer's local environment should be run and passed before committing to the mainline. This helps prevent one developer's work-in-progress from breaking another develope
|
https://en.wikipedia.org/wiki/Reed%20bed
|
A reedbed or reed bed is a natural habitat found in floodplains, waterlogged depressions and estuaries. Reedbeds are part of a succession from young reeds colonising open water or wet ground through a gradation of increasingly dry ground. As reedbeds age, they build up a considerable litter layer that eventually rises above the water level and that ultimately provides opportunities in the form of new areas for larger terrestrial plants such as shrubs and trees to colonise.
Artificial reedbeds are used to remove pollutants from greywater, and are also called constructed wetlands.
Types
Reedbeds vary in the species that they can support, depending upon water levels within the wetland system, climate, seasonal variations, and the nutrient status and salinity of the water. Reed swamps have 20 cm or more of surface water during the summer and often have high invertebrate and bird species use. Reed fens have water levels at or below the surface during the summer and are often more botanically complex. Reeds and similar plants do not generally grow in very acidic water; so, in these situations, reedbeds are replaced by bogs and vegetation such as poor fen.
Although common reeds are characteristic of reedbeds, not all vegetation dominated by this species is characteristic of reedbeds. It also commonly occurs in unmanaged, damp grassland and as an understorey in certain types of damp woodland.
Wildlife
Most European reedbeds mainly comprise common reed (Phragmites australis) but also include many other tall monocotyledons adapted to growing in wet conditions – other grasses such as reed sweet-grass (Glyceria maxima), reed canary-grass (Phalaris arundinacea) and small-reed (Calamagrostis species), large sedges (species of Carex, Scirpus, Schoenoplectus, Cladium and related genera), yellow flag iris (Iris pseudacorus), reed-mace ("bulrush" – Typha species), water-plantains (Alisma species), and flowering rush (Butomus umbellatus). Many dicotyledons also occur, such a
|
https://en.wikipedia.org/wiki/Windows%20Server
|
Windows Server (formerly Windows NT Server) is a group of operating systems (OS) for servers that Microsoft has been developing since 1993. The first OS that was released for this platform is Windows NT 3.1 Advanced Server. With the release of Windows Server 2003, the brand name was changed to Windows Server. The latest release of Windows Server is Windows Server 2022, which was released in 2021.
Microsoft's history of developing operating systems for servers goes back to Windows NT 3.1 Advanced Server. Windows 2000 Server is the first OS to include Active Directory, DNS Server, DHCP Server, and Group Policy.
Members
Main releases
Main releases include:
Windows NT 3.1 Advanced Server (July 1993)
Windows NT Server 3.5 (September 1994)
Windows NT Server 3.51 (May 1995)
Windows NT 4.0 Server (July 1996)
Windows 2000 Server (December 1999)
Windows Server 2003 (April 2003)
Windows Server 2003 R2 (December 2005)
Windows Server 2008 (February 2008)
Windows Server 2008 R2 (October 2009)
Windows Server 2012 (September 2012)
Windows Server 2012 R2 (October 2013)
Windows Server 2016 (October 2016)
Windows Server 2019 (October 2018)
Windows Server 2022 (August 2021)
Traditionally, Microsoft supports Windows Server for 10 years, with five years of mainstream support and an additional five years of extended support. These releases also offer a complete desktop experience. Starting with Windows Server 2008 R2, Server Core and Nano Server configurations were made available to reduce the OS footprint. Between 2015 and 2021, Microsoft referred to these releases as "long-term support" releases to set them apart from semi-annual releases (see below).
For sixteen years, Microsoft released a major version of Windows Server every four years, with one minor version released two years after a major release. The minor versions had an "R2" suffix in their names. In October 2018, Microsoft broke this tradition with the release of Windows Server 2019, which should have been "Windows Server
|
https://en.wikipedia.org/wiki/Structurae
|
Structurae is an online database containing pictures and information about structural and civil engineering works, and their associated engineers, architects, and builders.
Overview
Structurae was founded in 1998 by Nicolas Janberg, who had studied civil engineering at Princeton University. In March 2012, Structurae was acquired by , a subsidiary of John Wiley & Sons, Inc., with Janberg joining the company as Structurae's editor-in-chief. At that time, the web site received more than one million pageviews per month, and was available in English, French and German. In 2015, Janberg bought the site back to operate it as a freelancer again.
Buildings in the Structurae database
References
External links
Architecture websites
German websites
Architecture databases
Online databases
Databases in Germany
|
https://en.wikipedia.org/wiki/Douglas%20Houghton%20Campbell
|
Douglas Houghton Campbell (December 19, 1859 – February 24, 1953) was an American botanist and university professor. He was one of the 15 founding professors at Stanford University. His death was described as "the end of an era of a group of great plant morphologists."
Campbell was born and raised in Detroit, Michigan. His father, James V. Campbell, was a member of the Supreme Court of the state of Michigan and a law professor at the University of Michigan. Douglas Campbell graduated from Detroit High School in 1878, going on to study at the University of Michigan. He studied botany, learning new microscopy techniques, and becoming interested in cryptogams (spore-bearing plants) such as ferns. He received his master's degree in 1882, and taught botany at Detroit High School while he completed his PhD research. He received his PhD in 1886, then travelled to Germany to learn more microscopy techniques. He developed a technique to embed plant material in paraffin to make fine cross-sections; he was one of the first if not the first to study plant specimens using this technique, which had been newly developed by zoologists. He was also a pioneer in the study of microscopic specimens using vital stains.
When Campbell returned to the United States he took up a professorship at Indiana University (1888 to 1891), writing the textbook Elements of Structural and Systematic Botany. In 1891 he became the founding head of the botany department at Stanford University and remained at Stanford for the remainder of his career, retiring in 1925. He studied mosses and liverworts, producing The Structure and Development of Mosses and Ferns in 1895. This book, together with its subsequent editions in 1905 and 1918, became the authoritative work on the subject and "firmly established Campbell's reputation as one of the leading botanists of the United States." His Lectures on the Evolution of Plants was published in 1899, and became widely used as a botany textbook. University Textbook of Botany was
|
https://en.wikipedia.org/wiki/Reset%20vector
|
In computing, the reset vector is the default location a central processing unit will go to find the first instruction it will execute after a reset. The reset vector is a pointer or address, where the CPU should always begin as soon as it is able to execute instructions. The address is in a section of non-volatile memory initialized to contain instructions to start the operation of the CPU, as the first step in the process of booting the system containing the CPU.
Examples
Below is a list of typically used addresses by different microprocessors:
x86 family (Intel)
The reset vector for the Intel 8086 processor is at physical address FFFF0h (16 bytes below 1 MB). The value of the CS register at reset is FFFFh and the value of the IP register at reset is 0000h to form the segmented address FFFFh:0000h, which maps to physical address FFFF0h.
The reset vector for the Intel 80286 processor is at physical address FFFFF0h (16 bytes below 16 MB). The value of the CS register at reset is F000h with the descriptor base set to FF0000h and the value of the IP register at reset is FFF0h to form the segmented address FF000h:FFF0h, which maps to physical address FFFFF0h in real mode. This was changed to allow sufficient space to switch to protected mode without modifying the CS register.
The reset vector for the Intel 80386 and later x86 processors is physical address FFFFFFF0h (16 bytes below 4 GB). The value of the selector portion of the CS register at reset is F000h, the value of the base portion of the CS register is FFFF0000h, and the value of the IP register at reset is FFF0h to form the segmented address FFFF0000h:FFF0h, which maps to the physical address FFFFFFF0h in real mode.
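The reset addresses quoted above follow directly from how segmented addresses are formed: on the 8086 the physical address is (segment << 4) + offset, while on the 80286 and later the hidden descriptor base set at reset is added to the instruction pointer. A small illustrative sketch:

```python
def phys_8086(segment: int, offset: int) -> int:
    """8086 real mode: physical address = (segment << 4) + offset, 20 bits wide."""
    return ((segment << 4) + offset) & 0xFFFFF

def phys_from_base(base: int, offset: int) -> int:
    """80286/80386 at reset: the hidden CS descriptor base is used directly."""
    return base + offset

assert phys_8086(0xFFFF, 0x0000) == 0xFFFF0           # 8086 reset vector
assert phys_from_base(0xFF0000, 0xFFF0) == 0xFFFFF0   # 80286 reset vector
assert phys_from_base(0xFFFF0000, 0xFFF0) == 0xFFFFFFF0  # 80386+ reset vector
```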
Others
The reset vector for ARM processors is address 0x0 or 0xFFFF0000. During normal execution RAM is re-mapped to this location to improve performance, compared to the original ROM-based vector table.
The reset vector for MIPS32 processors is at virtual address 0xBFC00000, which is l
|
https://en.wikipedia.org/wiki/Jetronic
|
Jetronic is a trade name of a manifold injection technology for automotive petrol engines, developed and marketed by Robert Bosch GmbH from the 1960s onwards. Bosch licensed the concept to many automobile manufacturers. There are several variations of the technology offering technological development and refinement.
D-Jetronic (1967–1979)
Analogue fuel injection; the 'D' is from the German Druck, meaning pressure. Inlet manifold vacuum is measured using a pressure sensor located in, or connected to, the intake manifold, in order to calculate the duration of fuel injection pulses. Originally, this system was called Jetronic, but the name D-Jetronic was later created as a retronym to distinguish it from subsequent Jetronic iterations.
D-Jetronic was essentially a further refinement of the Electrojector fuel delivery system developed by the Bendix Corporation in the late 1950s. Rather than working to eradicate the various reliability issues of the Electrojector system, Bendix instead licensed the design to Bosch. With the role of the Bendix system largely forgotten, D-Jetronic became known as the first widely successful precursor of modern electronic common-rail systems; it had constant-pressure fuel delivery to the injectors and pulsed injections, albeit grouped (two groups of injectors pulsed together) rather than sequential (individual injector pulses) as on later systems.
As in the Electrojector system, D-Jetronic used analogue circuitry, with no microprocessor or digital logic; the ECU used about 25 transistors to perform all of the processing. Two shortcomings that had led to the ultimate failure of the Electrojector system were remedied: the paper-wrapped capacitors unsuited to heat-cycling, and the use of amplitude-modulated signals (vulnerable to TV and ham-radio interference) to control the injectors. The still-present lack of processing power and the unavailability of solid-state sensors meant that the vacuum sensor was a rather expensive precision instrument, rather like a barometer, with brass bello
|
https://en.wikipedia.org/wiki/Elementary%20mathematics
|
Elementary mathematics, also known as primary or secondary school mathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, including number sense, algebra, geometry, measurement, and data analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and everyday life. The study of elementary mathematics is a crucial part of a student's education and lays the foundation for future academic and career success.
Strands of elementary mathematics
Number sense and numeration
Number Sense is an understanding of numbers and operations. In the 'Number Sense and Numeration' strand students develop an understanding of numbers by being taught various ways of representing numbers, as well as the relationships among numbers.
Properties of the natural numbers such as divisibility and the distribution of prime numbers, are studied in basic number theory, another part of elementary mathematics.
Elementary Focus
Abacus
LCM and GCD
Fractions and Decimals
Place Value & Face Value
Addition and subtraction
Multiplication and Division
Counting
Counting Money
Algebra
Representing and ordering numbers
Estimating
Approximating
Problem Solving
To build a strong foundation in mathematics and succeed in the other strands, students need a fundamental understanding of number sense and numeration.
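As one illustration of the "LCM and GCD" topic listed above, Euclid's algorithm computes the greatest common divisor, from which the least common multiple follows:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

def lcm(a: int, b: int) -> int:
    # For positive integers, lcm(a, b) * gcd(a, b) == a * b
    return a * b // gcd(a, b)

assert gcd(12, 18) == 6
assert lcm(12, 18) == 36
```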
Spatial sense
'Measurement skills and concepts' or 'Spatial Sense' are directly related to the world in which students live. Many of the concepts that students are taught in this strand are also used in other subjects such as science, social studies, and physical education. In the measurement strand, students learn about the measurable attributes of objects, in addition to the basic metric system.
Elementary Focus
Standard and non-standard units of measurement
te
|
https://en.wikipedia.org/wiki/Talk%20box
|
A talk box (also spelled talkbox and talk-box) is an effects unit that allows musicians to modify the sound of a musical instrument by shaping the frequency content of the sound and to apply speech sounds (in the same way as singing) onto the sounds of the instrument. Typically, a talk box directs sound from the instrument into the musician's mouth by means of a plastic tube adjacent to a vocal microphone. The musician controls the modification of the instrument's sound by changing the shape of the mouth, "vocalizing" the instrument's output into a microphone.
Overview
A talk box is usually an effects pedal that sits on the floor and contains a speaker attached with an airtight connection to a plastic tube; however, it can come in other forms, including homemade, usually crude, versions, and higher quality custom-made versions. The speaker is generally in the form of a compression driver, the sound-generating part of a horn loudspeaker with the horn replaced by the tube connection.
The box has connectors for the connection to the speaker output of an instrument amplifier and a connection to a normal instrument speaker. A foot-operated switch on the box directs the sound either to the talk box speaker or to the normal speaker. The switch is usually a push-on/push-off type. The other end of the tube is taped to the side of a microphone, extending enough to direct the reproduced sound in or near the performer's mouth.
When activated, the sound from the amplifier is reproduced by the speaker in the talk box and directed through the tube into the performer's mouth. The shape of the mouth filters the sound, with the modified sound being picked up by the microphone. The shape of the mouth changes the harmonic content of the sound in the same way it affects the harmonic content generated by the vocal folds when speaking.
The performer can vary the shape of the mouth and position of the tongue, changing the sound of the instrument being reproduced by the talk box speake
|
https://en.wikipedia.org/wiki/George%20Albert%20Smith%20%28filmmaker%29
|
George Albert Smith (4 January 1864 – 17 May 1959) was an English stage hypnotist, psychic, magic lantern lecturer, Fellow of the Royal Astronomical Society, inventor and a key member of the loose association of early film pioneers dubbed the Brighton School by French film historian Georges Sadoul. He is best known for his controversial work with Edmund Gurney at the Society for Psychical Research, his short films from 1897 to 1903, which pioneered film editing and close-ups, and his development of the first successful colour film process, Kinemacolor.
Biography
Birth and early life
Smith was born in Cripplegate, London in 1864. His father Charles Smith was a ticket-writer and artist. He moved with his family to Brighton, where his mother ran a boarding house on Grand Parade, following the death of his father.
It was in Brighton in the early 1880s that Smith first came to public attention touring the city's performance halls as a stage hypnotist. In 1882 he teamed up with Douglas Blackburn in an act at the Brighton Aquarium involving muscle reading, in which the blindfolded performer identifies objects selected by the audience, and second sight, in which the blindfolded performer finds objects hidden by his assistant somewhere in the theatre.
The Society for Psychical Research (SPR) accepted Smith's claims that the act was genuine and after becoming a member of the society he was appointed private secretary to the Honorary Secretary Edmund Gurney from 1883 to 1888. In 1887, Gurney carried out a number of "hypnotic experiments" in Brighton, with Smith as his "hypnotiser", which in their day made Gurney an impressive figure to the British public.
Gurney's research has since been heavily studied and critiqued, notably by Trevor H. Hall in his study The Strange Case of Edmund Gurney. Hall concluded that Smith (using his stage abilities) faked the results that Gurney trusted in his research papers, and that this may have led to Gurney's mysterious death from a narcotic overdose in June 18
|
https://en.wikipedia.org/wiki/Project%20Monterey
|
Project Monterey was an attempt to build a single Unix operating system that ran across a variety of 32-bit and 64-bit platforms, as well as supporting multi-processing. Announced in October 1998, several Unix vendors were involved; IBM provided POWER and PowerPC support from AIX, Santa Cruz Operation (SCO) provided IA-32 support, and Sequent added multi-processing (MP) support from their DYNIX/ptx system. Intel Corporation provided expertise and ISV development funding for porting to their upcoming IA-64 (Itanium Architecture) CPU platform, which was yet to be released at that time. The focus of the project was to create an enterprise-class UNIX for IA-64, which at the time was expected to eventually dominate the UNIX server market.
By March 2001, however, "the explosion in popularity of Linux ... prompted IBM to quietly ditch" this; all involved attempted to find a niche in the rapidly developing Linux market and moved their focus away from Monterey. Sequent was acquired by IBM in 1999. In 2000, SCO's UNIX business was purchased by Caldera Systems, a Linux distributor, who later renamed themselves the SCO Group. In the same year, IBM eventually declared Monterey dead. Intel, IBM, Caldera Systems, and others had also been running a parallel effort to port Linux to IA-64, Project Trillian, which delivered workable code in February 2000. In late 2000, IBM announced a major effort to support Linux.
In May 2001, the project announced the availability of a beta test version AIX-5L for IA-64, basically meeting its original primary goal. However, Intel had missed its delivery date for its first Itanium processor by two years, and the Monterey software had no market.
With the exception of the IA-64 port and Dynix MP improvements, much of the Monterey effort was an attempt to standardize existing versions of Unix into a single compatible system. Such efforts had been undertaken in the past (e.g., 3DA) and had generally failed, as the companies involved were too relia
|
https://en.wikipedia.org/wiki/Charles%20P.%20Thacker
|
Charles Patrick "Chuck" Thacker (February 26, 1943 – June 12, 2017) was an American pioneering computer designer. He designed the Xerox Alto, the first computer to use a mouse-driven graphical user interface (GUI).
Biography
Thacker was born in Pasadena, California, on February 26, 1943. His father was Ralph Scott Thacker, born 1906, an electrical engineer (Caltech class of 1928) in the aeronautical industry. His mother was the former (Mattie) Fern Cheek, born 1922 in Oklahoma, a cashier and secretary, who soon raised their two sons on her own.
He received his B.S. in physics from the University of California, Berkeley in 1967. He then joined the university's "Project Genie" in 1968, which developed the pioneering Berkeley Timesharing System on the SDS 940. Butler Lampson, Thacker, and others then left to form the Berkeley Computer Corporation, where Thacker designed the processor and memory system. While BCC was not commercially successful, this group became the core technologists in the Computer Systems Laboratory at Xerox Palo Alto Research Center (PARC).
Thacker worked in the 1970s and 1980s at the PARC, where he served as project leader of the Xerox Alto personal computer system, was co-inventor of the Ethernet LAN, and contributed to many other projects, including the first laser printer.
In 1983, Thacker was a founder of the Systems Research Center (SRC) of Digital Equipment Corporation (DEC), and in 1997, he joined Microsoft Research to help establish Microsoft Research Cambridge in Cambridge, England.
After returning to the United States, Thacker designed the hardware for Microsoft's Tablet PC, based on his experience with the "interim Dynabook" at PARC, and later the Lectrice, a pen-based hand-held computer at DEC SRC.
From 2006 to 2010, Thacker was a research contributor to the Berkeley Research Accelerator for Multiple Processors (RAMP) based upon the Berkeley Emulation Engine FPGA platform, which sought to explore new processor designs throug
|
https://en.wikipedia.org/wiki/List%20of%20HTML%20editors
|
The following is a list of HTML editors.
Source code editors
Source code editors evolved from basic text editors, but include additional tools specifically geared toward handling code.
ActiveState Komodo
Aptana
Arachnophilia
Atom
BBEdit
BlueFish
Coda
Codelobster
CoffeeCup HTML Editor
CudaText
Dreamweaver
Eclipse with the Web Tools Platform
Emacs
EmEditor
Geany
HTML-Kit
HomeSite
Kate
Microsoft Visual Studio
Microsoft Visual Studio Code
Notepad++
NetBeans IDE
PHPEdit
PhpStorm IDE
PSPad
RJ TextEd
SciTE
Smultron
Sublime Text
TED Notepad
TextMate
TextPad
TextWrangler
UltraEdit
Vim
WebStorm
WYSIWYG editors
HTML editors that support What You See Is What You Get (WYSIWYG) paradigm provide a user interface similar to a word processor for creating HTML documents, as an alternative to manual coding. Achieving true WYSIWYG however is not always possible.
Adobe Dreamweaver
BlueGriffon
Bootstrap Studio
CKEditor
EZGenerator
FirstPage
Freeway
Google Web Designer
HTML-NOTEPAD
Jimdo
KompoZer
Maqetta
Microsoft Expression Web
Microsoft SharePoint Designer
Microsoft Visual Web Developer Express
Microsoft Publisher
Mobirise
NetObjects Fusion
Opera Dragonfly
Quanta Plus
RocketCake
SeaMonkey Composer
Silex website builder
TinyMCE
TOWeb
UltraEdit
Webflow
Wix.com
Word processors
While word processors are not ostensibly HTML editors, the following word processors are capable of editing and saving HTML documents. Results will vary when opening some web pages.
AbiWord
Apache OpenOffice
Apple Pages
AppleWorks
Collabora Online
Kingsoft Office
LibreOffice Writer
Microsoft Word
WordPerfect
WYSIWYM editors
WYSIWYM (what you see is what you mean) is an alternative paradigm to WYSIWYG, in which the focus is on the semantic structure of the document rather than on the presentation. These editors produce more logically structured markup than is typical of WYSIWYG editors, while retaining the advantage in ease of use over hand-coding using a tex
|
https://en.wikipedia.org/wiki/Ready-mix%20concrete
|
Ready-mix concrete (RMC) is concrete that is manufactured in a batch plant, according to each specific job requirement, then delivered to the job site "ready to use".
There are two types of delivery vehicle. The first is the barrel truck, or in-transit mixer, which delivers concrete in a plastic state to the site. The second is the volumetric concrete mixer, which delivers the ready mix in a dry state and then mixes the concrete on site. However, other sources divide the material into three types: Transit Mix, Central Mix or Shrink Mix concrete.
Ready-mix concrete refers to concrete that is specifically manufactured for customers' construction projects, and supplied to the customer on site as a single product. It is a mixture of Portland or other cements, water and aggregates: sand, gravel, or crushed stone. All aggregates should be of a washed type material with limited amounts of fines or dirt and clay. An admixture is often added to improve workability of the concrete and/or increase setting time of concrete (using retarders) to factor in the time required for the transit mixer to reach the site. The global market size is disputed depending on the source. It was estimated at 650 billion dollars in 2019. However it was estimated at just under 500 billion dollars in 2018.
History
There is some dispute as to when the first ready-mix delivery was made and when the first factory was built. Some sources suggest as early as 1913 in Baltimore. By 1929 there were over 100 plants operating in the United States. The industry did not expand significantly until the 1960s, and has continued to grow since then.
Design
Batch plants combine a precise amount of gravel, sand, water and cement by weight (as per a mix design formulation for the grade of concrete recommended by the structural engineer or architect), allowing specialty concrete mixtures to be developed and implemented on construction sites.
Ready-mix concrete is often used instead of other materials due to the cos
|
https://en.wikipedia.org/wiki/System%20Packet%20Interface
|
The System Packet Interface (SPI) family of Interoperability Agreements from the Optical Internetworking Forum specify chip-to-chip, channelized, packet interfaces commonly used in synchronous optical networking and Ethernet applications. A typical application of such a packet level interface is between a framer (for optical network) or a MAC (for IP network) and a network processor. Another application of this interface might be between a packet processor ASIC and a traffic manager device.
Context
There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over 1 or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in the Packet over SONET and Optical Transport Network, which are the principal protocols used to carry the internet between cities.
Specifications
The agreements are:
SPI-3 – Packet Interface for Physical and Link Layers for OC-48 (2.488 Gbit/s)
SPI-4.1 – System Physical Interface Level 4 (SPI-4) Phase 1: A System Interface for Interconnection Between Physical and Link Layer, or Peer-to-Peer Entities Operating at an OC-192 Rate (10 Gbit/s).
SPI-4.2 – System Packet Interface Level 4 (SPI-4) Phase 2: OC-192 System Interface for Physical and Link Layer Devices.
SPI-5 – Packet Interface for Physical and Link Layers for OC-768 (40 Gbit/s)
SPI-S – Scalable System Packet Interface - useful for interfaces starting with OC-48 and scaling into the Terabit range
History of the specifications
These agreements grew out of the donation to the OIF by PMC-S
|
https://en.wikipedia.org/wiki/Miraculin
|
Miraculin is a taste modifier, a glycoprotein extracted from the fruit of Synsepalum dulcificum. The berry, also known as the miracle fruit, was documented by explorer Chevalier des Marchais, who searched for many different fruits during a 1725 excursion to its native West Africa.
Miraculin itself does not taste sweet. When taste buds are exposed to miraculin, the protein binds to the sweetness receptors. This causes normally sour-tasting acidic foods, such as citrus, to be perceived as sweet. The effect can last for one or two hours.
History
The sweetening properties of Synsepalum dulcificum berries were first noted by des Marchais during expeditions to West Africa in the 18th century. The term miraculin derived from experiments to isolate and purify the active glycoprotein that gave the berries their sweetening effects, results that were published simultaneously by Japanese and Dutch scientists working independently in the 1960s (the Dutch team called the glycoprotein mieraculin). The word miraculin was in common use by the mid-1970s.
Glycoprotein structure
Miraculin was first sequenced in 1989 and was found to be a 24.6 kilodalton glycoprotein consisting of 191 amino acids and 13.9% by weight of various sugars.
The sugars consist of a total of 3.4 kDa, composed of a molar ratio of glucosamine (31%), mannose (30%), fucose (22%), xylose (10%), and galactose (7%).
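A quick consistency check on the figures above (illustrative arithmetic only): 3.4 kDa of sugars on a 24.6 kDa glycoprotein is about 13.8% by weight, agreeing with the reported 13.9%.

```python
total_kda = 24.6   # mass of the whole glycoprotein
sugar_kda = 3.4    # mass of the glycan portion
fraction = sugar_kda / total_kda
# ~0.138, consistent with the reported 13.9% sugar by weight
assert abs(fraction - 0.139) < 0.002
```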
The native state of miraculin is a tetramer consisting of two dimers, each held together by a disulfide bridge. Both tetramer miraculin and native dimer miraculin in its crude state have the taste-modifying activity of turning sour tastes into sweet tastes. Miraculin belongs to the Kunitz STI protease inhibitor family.
Sweetness properties
Miraculin, unlike curculin (another taste-modifying agent), is not sweet by itself, but it can change the perception of sourness to sweetness, even for a long period after consumption. The duration and intensity of the sweetness-modifying effect depends on vari
|
https://en.wikipedia.org/wiki/Lactisole
|
Lactisole is the sodium salt and commonly supplied form of 2-(4-methoxyphenoxy)propionic acid, a natural carboxylic acid found in roasted coffee beans. Like gymnemic acid, it has the property of masking sweet flavors and is used for this purpose in the food industry.
Chemistry
Chemically, lactisole is a double ether of hydroquinone. Since it contains an asymmetric carbon atom the molecule is chiral, with the S enantiomer predominating in natural sources and being primarily responsible for the sweetness-masking effect. Commercial lactisole is a racemic mixture of the R and S forms.
Natural occurrences
The parent acid of lactisole was discovered in 1989 in roasted Colombian arabica coffee beans in a concentration of 0.5 to 1.2 ppm.
Anti-sweet properties
At concentrations of 100–150 parts per million in food, lactisole largely suppresses the ability to perceive sweet tastes, both from sugar and from artificial sweeteners such as aspartame. A 12% sucrose solution was perceived like a 4% sucrose solution when lactisole was added. However, it is significantly less efficient than gymnemic acid with acesulfame potassium, sucrose, glucose and sodium saccharin. Research has also found that it has no effect on the perception of bitterness, sourness or saltiness. According to a recent study, lactisole acts on the TAS1R3 subunit of the human sweet taste receptor heteromer, but not on its rodent counterpart.
As a food additive
The principal use of lactisole is in jellies, jams, and similar preserved fruit products containing large amounts of sugar. In these products, by suppressing sugar's sweetness, it allows fruit flavors to come through. In the United States, lactisole is designated as generally recognized as safe (GRAS) by the Flavor and Extract Manufacturers Association (Fema number: 3773) and approved for use in food as flavoring agent up to 150 ppm. Currently, lactisole is manufactured and sold by Domino Sugar and its usage levels are between 50 a
|
https://en.wikipedia.org/wiki/Poltergeist%20%28computer%20programming%29
|
In computer programming, a poltergeist (or gypsy wagon) is a short-lived, typically stateless object used to perform initialization or to invoke methods in another, more permanent class. It is considered an anti-pattern. The original definition is by Michael Akroyd 1996 - Object World West Conference:
"As a gypsy wagon or a poltergeist appears and disappears mysteriously, so does this short lived object. As a consequence the code is more difficult to maintain and there is unnecessary resource waste. The typical cause for this anti-pattern is poor object design."
A poltergeist can often be identified by its name; they are often called "manager_", "controller_", "supervisor", "start_process", etc.
Sometimes, poltergeist classes are created because the programmer anticipated the need for a more complex architecture. For example, a poltergeist arises if the same method acts as both the client and invoker in a command pattern, and the programmer anticipates separating the two phases. However, this more complex architecture may actually never materialize.
Poltergeists should not be confused with long-lived, state-bearing objects of a pattern such as model–view–controller, or tier-separating patterns such as business-delegate.
To remove a poltergeist, delete the class and insert its functionality in the invoked class, possibly by inheritance or as a mixin.
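A minimal sketch of this refactoring, using hypothetical class names: the short-lived controller below only forwards a single call, so it can be deleted and its callers pointed at the target class directly.

```python
class DataLoader:
    """The permanent class that does the real work."""
    def load(self, path: str) -> str:
        return f"data from {path}"

class LoaderController:
    """Poltergeist: stateless, short-lived, exists only to invoke DataLoader."""
    def run(self, path: str) -> str:
        return DataLoader().load(path)

# Before the refactoring: clients go through the poltergeist.
result_before = LoaderController().run("a.csv")

# After the refactoring: the poltergeist is deleted and clients
# invoke DataLoader directly.
result_after = DataLoader().load("a.csv")

assert result_before == result_after == "data from a.csv"
```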
See also
Anti-pattern
Factory (object-oriented programming)
YAGNI principle
References
External links
Development AntiPatterns
Anti-patterns
|
https://en.wikipedia.org/wiki/KUNS-TV
|
KUNS-TV (channel 51) is a television station licensed to Bellevue, Washington, United States, serving the Seattle area as an affiliate of the Spanish-language network Univision. It is owned by Sinclair Broadcast Group alongside dual ABC/CW affiliate KOMO-TV (channel 4). Both stations share studios within KOMO Plaza (formerly Fisher Plaza) in the Lower Queen Anne section of Seattle, while KUNS-TV's transmitter is located in the city's Queen Anne neighborhood.
History
On February 10, 1988, the Federal Communications Commission (FCC) issued a construction permit for television station KBEH. However, channel 51 did not begin its broadcasting operation until August 8, 1999, transmitting programs from the ValueVision network, which became ShopNBC in 2001 after NBC (now part of Comcast) acquired a 37% ownership stake in that network. In December 2000, the station changed its call letters to KWOG. Previously locally owned and operated and at one point being minority owned, the station was sold to Fisher Communications on September 29, 2006.
On October 31, 2006, the station changed its call letters once more, this time to the current KUNS-TV. On January 1, 2007, the station switched almost overnight from home shopping to Spanish-language programming as a Univision affiliate, providing viewers with programs such as Sabado Gigante, Despierta América and El Gordo y La Flaca, in addition to an assortment of telenovelas and many other programs. The station also started its own local newscast, Noticias Noroeste with Jaime Méndez and Roxy de la Torre. The newscast originates from a studio at KOMO Plaza (formerly Fisher Plaza) in Seattle.
On August 21, 2012, Fisher Communications signed an affiliation agreement with MundoFox, a Spanish-language competitor to Univision that is owned as a joint venture between Fox International Channels and Colombian broadcaster RCN TV, for KUNS and Portland sister station KUNP to be carried on both
|
https://en.wikipedia.org/wiki/SPI-4.2
|
SPI-4.2 is a version of the System Packet Interface published by the Optical Internetworking Forum. It was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems.
SPI-4 is an interface for packet and cell transfer between a physical layer (PHY) device and a link layer device, for aggregate bandwidths of OC-192 Asynchronous Transfer Mode and Packet over SONET/SDH (POS), as well as 10 Gigabit Ethernet applications.
SPI-4 has two types of transfers—Data when the RCTL signal is deasserted; Control when the RCTL signal is asserted. The transmit and receive data paths include, respectively, (TDCLK, TDAT[15:0],TCTL) and (RDCLK, RDAT[15:0], RCTL). The transmit and receive FIFO status channels include (TSCLK, TSTAT[1:0]) and (RSCLK, RSTAT[1:0]) respectively.
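The data/control distinction can be pictured with a toy model (not the actual SPI-4.2 control-word format): each 16-bit word on the data bus is tagged by the state of the control line, so a receiver demultiplexes the stream by that flag.

```python
# Toy model of an SPI-4-style stream: each 16-bit word is paired with the
# control flag (RCTL). Flag set -> control word; flag clear -> payload data.
# The word values and meanings here are illustrative, not from the spec.
stream = [
    (True,  0x1000),  # control word (e.g. marking the start of a transfer)
    (False, 0xDEAD),  # payload data
    (False, 0xBEEF),  # payload data
    (True,  0x2000),  # control word (e.g. marking the end of a transfer)
]

data_words = [w for ctl, w in stream if not ctl]
control_words = [w for ctl, w in stream if ctl]

assert data_words == [0xDEAD, 0xBEEF]
assert control_words == [0x1000, 0x2000]
```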
A typical application of SPI-4.2 is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace.
The interface consists of (per direction):
sixteen LVDS pairs for the data path
one LVDS pair for control
one LVDS pair for clock at half of the data rate
two FIFO status lines running at 1/8 of the data rate
one status clock
The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.
PMC-Sierra made the original OIF contribution for SPI-4.2. That contribution was based on the PL-4 specification that was developed by PMC-Sierra in conjunction with the SATURN Development Group.
The physical layer of SPI-4.2 is very similar to the HyperTransport 1.x interface, although the logical layers are very different.
External links
OIF Interoperability Agreements
Network protocols
|
https://en.wikipedia.org/wiki/Electron%20multiplier
|
An electron multiplier is a vacuum-tube structure that multiplies incident charges. In a process called secondary emission, a single electron striking a secondary-emissive material can induce the emission of roughly 1 to 3 electrons. If an electric potential is applied between this metal plate and yet another, the emitted electrons accelerate to the next metal plate and induce secondary emission of still more electrons. This can be repeated a number of times, resulting in a large shower of electrons, all collected by a metal anode and all triggered by just one.
History
In 1930, Russian physicist Leonid Aleksandrovitch Kubetsky proposed a device which combined photocathodes with dynodes, or secondary-electron emitters, in a single tube, accelerating the secondary electrons from stage to stage by increasing the electric potential through the device. An electron multiplier can use any number of dynodes; if each has a secondary-emission coefficient σ, the overall gain is σ^n, where n is the number of emitters.
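The gain formula σ^n can be made concrete with illustrative values: a per-dynode coefficient of σ = 3 over n = 12 stages yields roughly half a million electrons per incident electron.

```python
sigma = 3     # secondary electrons emitted per incident electron (illustrative)
n = 12        # number of dynode stages (illustrative)
gain = sigma ** n
# One incident electron produces ~5 x 10^5 electrons at the anode
assert gain == 531_441
```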
Discrete dynode
Secondary electron emission begins when one electron hits a dynode inside a vacuum chamber, ejecting electrons that cascade onto further dynodes, repeating the process. The dynodes are arranged so that each successive stage is at a potential about 100 volts higher than the last, giving electrons roughly 100 eV of additional energy per stage. Advantages of this design include a response time in the picoseconds, high sensitivity, and an electron gain of about 10^8.
Continuous dynode
A continuous-dynode system uses a horn-shaped funnel of glass coated with a thin film of semiconducting material. The coating's resistance along the funnel sets up a continuous voltage gradient that sustains secondary emission: the wider end is held at a negative high voltage, rising to near ground potential at the narrow end. The first device of this kind was called a channel electron multiplier (CEM). CEMs required 2–4 kilovolts in order to achieve a gain of 10^6.
Microchannel plate
Anot
|
https://en.wikipedia.org/wiki/Faraday%20cup
|
A Faraday cup is a metal (conductive) cup designed to catch charged particles in vacuum. The resulting current can be measured and used to determine the number of ions or electrons hitting the cup. The Faraday cup was named after Michael Faraday who first theorized ions around 1830.
Examples of devices which use Faraday cups include space probes (Voyager 1, & 2, Parker Solar Probe, etc.) and mass spectrometers.
Principle of operation
When a beam or packet of ions hits the metallic body of the cup, the apparatus gains a small net charge while the ions are neutralized as the charge is transferred to the metal walls. The metal part can then be discharged to measure a small current proportional to the number of impinging ions. The Faraday cup is essentially part of a circuit in which ions are the charge carriers in vacuum, and the cup is the interface to the solid metal, where electrons act as the charge carriers (as in most circuits). By measuring the electric current (the number of electrons flowing through the circuit per second) in the metal part of the circuit, the number of charges being carried by the ions in the vacuum part of the circuit can be determined. For a continuous beam of ions (each with a single charge), the total number of ions hitting the cup per unit time is
N/t = I/e, where N is the number of ions observed in a time t (in seconds), I is the measured current (in amperes) and e is the elementary charge (about 1.60 × 10^−19 C). Thus, a measured current of one nanoamp (10^−9 A) corresponds to about 6 billion ions striking the Faraday cup each second.
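A minimal sketch of this conversion; the `charge_state` parameter is an added generalization for multiply charged ions, not something stated in the article.

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs (exact SI value)

def ions_per_second(current_amperes: float, charge_state: int = 1) -> float:
    """Ion arrival rate from the measured cup current: N/t = I/(z*e),
    where z is the charge state (1 for singly charged ions)."""
    return current_amperes / (charge_state * ELEMENTARY_CHARGE)

# One nanoamp corresponds to roughly 6 billion singly charged ions per second.
print(f"{ions_per_second(1e-9):.3e} ions/s")
```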
Similarly, a Faraday cup can act as a collector for electrons in a vacuum (e.g. from an electron beam). In this case, electrons simply hit the metal plate/cup and a current is produced. Faraday cups are not as sensitive as electron multiplier detectors, but are highly regarded for accuracy because of the direct relation between the measured current and number of ions.
In plasma diagnostics
The Faraday cup utilizes a phys
|
https://en.wikipedia.org/wiki/Front%20of%20house
|
In the performing arts, front of house (FOH) is the part of a performance venue that is open to the public. In theatres and live music venues, it consists of the auditorium and foyers, as opposed to the stage and backstage areas. In a theatre, the front of house manager is responsible for welcoming guests, refreshments, and making sure the auditorium is set out properly. By contrast, back of house (BOH) refers to any operations that are not visible to the audience, such as props management, costume design, stage set fabrication, lighting control, and other support functions.
Both terms are also used in the restaurant, hospitality, and retailing industries. "Back of house" refers to any work operations that do not have direct customer contact. Examples include cooking, dishwashing, cleaning, shipping and receiving, maintenance and repairs, accounting, and other indirect support tasks which are not usually visible to customers.
Live venues
Sound operators, excluding the monitor engineers, are normally positioned in a small sectioned-off area front-of-house, surrounded by the audience or at the edge of the audience area. From this position they have unobstructed listening and a clear view of the performance, enabling the operation of the main speaker system, show control consoles and other equipment. In this case "front of house" can refer to both the general audience/public area or to the specific small section from where the show is mixed.
The front of house speakers are the main speakers that cover the audience, and the front of house desk is the desk that generates the front of house audio mix. In smaller venues the front of house desk may also produce foldback (monitor) mixes for the monitor speakers onstage, whereas in larger venues there will normally be a second mixing desk for monitor control positioned just off the side of the main stage. The audio engineer that designs the front of house sound system and puts it into place for the show/event is the syste
|
https://en.wikipedia.org/wiki/Lustre%20%28file%20system%29
|
Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau word derived from Linux and cluster. Lustre file system software is available under the GNU General Public License (version 2 only) and provides high performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site systems. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100 fastest supercomputers in the world,
including the world's No. 1 ranked TOP500 supercomputer in November 2022, Frontier, as well as previous top supercomputers such as Fugaku, Titan and Sequoia.
Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on hundreds of servers, and tens of terabytes per second (TB/s) of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as meteorology, simulation, artificial intelligence and machine learning, oil and gas, life science, rich media, and finance. The I/O performance of Lustre has widespread impact on these applications and has attracted broad attention.
History
The Lustre file system architecture was started as a research project in 1999 by Peter J. Braam, who was a staff of Carnegie Mellon University (CMU) at the time. Braam went on to found his own company Cluster File Systems in 2001, starting from work on the InterMezzo file system in the Coda project at CMU.
Lustre was developed under the Accelerated Strategic Computing Initiative Path Forward project funded by the United States Department of Energy, which included Hewlett-Packard and Intel.
In September 2007, Sun Microsystems acquired the assets of Cluster File Systems Inc., including its "intellectual property".
Sun included Lustre with its high-p
|
https://en.wikipedia.org/wiki/Limonene
|
Limonene is a colorless liquid aliphatic hydrocarbon classified as a cyclic monoterpene, and is the major component in the volatile oil of citrus fruit peels. The -isomer, occurring more commonly in nature as the fragrance of oranges, is a flavoring agent in food manufacturing. It is also used in chemical synthesis as a precursor to carvone and as a renewables-based solvent in cleaning products. The less common -isomer has a piny, turpentine-like odor, and is found in the edible parts of such plants as caraway, dill, and bergamot orange.
Limonene takes its name from Italian limone ("lemon"). Limonene is a chiral molecule, and biological sources produce one enantiomer: the principal industrial source, citrus fruit, contains -limonene ((+)-limonene), which is the (R)-enantiomer. Racemic limonene is known as dipentene. -Limonene is obtained commercially from citrus fruits through two primary methods: centrifugal separation or steam distillation.
Chemical reactions
Limonene is a relatively stable monoterpene and can be distilled without decomposition, although at elevated temperatures it cracks to form isoprene. It oxidizes easily in moist air to produce carveol, carvone, and limonene oxide. With sulfur, it undergoes dehydrogenation to p-cymene.
Limonene occurs commonly as the (R)-enantiomer, but racemizes to dipentene at 300 °C. When warmed with mineral acid, limonene isomerizes to the conjugated diene α-terpinene (which can also easily be converted to p-cymene). Evidence for this isomerization includes the formation of Diels–Alder adducts between α-terpinene adducts and maleic anhydride.
It is possible to effect reaction at one of the double bonds selectively. Anhydrous hydrogen chloride reacts preferentially at the disubstituted alkene, whereas epoxidation with mCPBA occurs at the trisubstituted alkene.
In another synthetic method Markovnikov addition of trifluoroacetic acid followed by hydrolysis of the acetate gives terpineol.
The most widely practice
|
https://en.wikipedia.org/wiki/List%20of%20office%20suites
|
In computing, an office suite is a collection of productivity software usually containing at least a word processor, spreadsheet and a presentation program. There are many different brands and types of office suites.
Office suites
Free and open source suites
AndrOpen Office - available for Android
Apache OpenOffice - available for Linux, macOS and Windows
Calligra Suite - available for FreeBSD, Linux, macOS and Windows
Collabora Online - available for Android, ChromeOS, iOS, iPadOS, Linux, macOS, online and Windows
LibreOffice - available for Linux, macOS and Windows, and unofficial: Android, ChromeOS, FreeBSD, Haiku, iOS, iPadOS, OpenBSD, NetBSD and Solaris
NeoOffice - available for macOS
Freeware and proprietary suites
Ability Office - available for Windows
Google Workspace - available for Android, ChromeOS, iOS, iPadOS, Linux, macOS, online and Windows
Hancom Office - available for Windows
iWork - available for iOS, iPadOS, macOS and online
Ichitaro - a Japanese-language suite available for Windows
Microsoft 365 - available for Android, iOS, iPadOS, macOS, online and Windows
MobiSystems OfficeSuite - available for Android, iOS and Windows
ONLYOFFICE - available for Android, iOS, Linux, macOS, online and Windows
Polaris Office - available for iOS, macOS and Windows
SoftMaker Office - available for Android, iOS, iPadOS, Linux, macOS and Windows
Tiki Wiki CMS Groupware - online content management
WordPerfect Office - available for Windows
WPS Office - available for Android, iOS, macOS and Windows
Discontinued office suites
Aster*x
AUIS - an office suite developed by Carnegie Mellon University and named after Andrew Carnegie
Breadbox Office - DOS software
EasyOffice
AppleWorks
Corel WordPerfect for DOS
Hancom Office Suite (formerly ThinkFree Office)
IBM Lotus SmartSuite
IBM Lotus Symphony
IBM Works – an office suite for the IBM OS/2 operating
|
https://en.wikipedia.org/wiki/Plantronics
|
Plantronics, Inc. is an American electronics company — branded Poly to reflect its dual Plantronics and Polycom heritage — producing audio communications equipment for business and consumers. Its products support unified communications, mobile use, gaming and music. Plantronics is headquartered in Santa Cruz, California, and most of its products are produced in China and Mexico.
On March 18, 2019, Plantronics announced that it would change its name to Poly following its acquisition of Polycom, although it continues to trade on the New York Stock Exchange as Plantronics, Inc. (POLY; listed as PLT until May 24, 2021).
On March 28, 2022, HP Inc. announced its intent to acquire Poly for $1.7 billion in cash as it looks to bolster its hybrid work offerings, such as headsets and videoconferencing hardware. Including debt, the deal was valued at $3.3 billion; it closed in August 2022.
History
In the early 1960s, airline headsets were so large and cumbersome that many pilots had switched back to the use of handheld microphones for communications. The speed and complexity of jet airliners created a need for small, lightweight headsets in the cockpit. In 1961, United Airlines solicited new designs from anyone who was interested. Courtney Graham, a United Airlines pilot, was one of the many who thought the heavy headsets should be replaced by something lighter. He collaborated with his pilot friend Keith Larkin to create a small, functional design robust enough to pass airline standards. (Larkin had been working for a small company called Plane-Aids, a Japanese import company which offered spectacles and sunglasses that contained transistor radios in their temple pieces.) The final design, incorporating two small hearing-aid-style transducers attached to a headband, was submitted for United Airlines' approval. UAL's approval of the innovative design led Graham and Larkin to incorporate as Pacific Plantronics (now called Plantronics, Inc.) on May 1
|
https://en.wikipedia.org/wiki/Acetoin
|
Acetoin, also known as 3-hydroxybutanone or acetyl methyl carbinol, is an organic compound with the formula CH3CH(OH)C(O)CH3. It is a colorless liquid with a pleasant, buttery odor. It is chiral. The form produced by bacteria is (R)-acetoin.
Production in bacteria
Acetoin is a neutral, four-carbon molecule used as an external energy store by a number of fermentative bacteria. It is produced by the decarboxylation of alpha-acetolactate, a common precursor in the biosynthesis of branched-chain amino acids. Owing to its neutral nature, production and excretion of acetoin during exponential growth prevents over-acidification of the cytoplasm and the surrounding medium that would result from accumulation of acidic metabolic products, such as acetic acid and citric acid. Once superior carbon sources are exhausted, and the culture enters stationary phase, acetoin can be used to maintain the culture density. The conversion of acetoin into acetyl-CoA is catalysed by the acetoin dehydrogenase complex, following a mechanism largely analogous to the pyruvate dehydrogenase complex; however, as acetoin is not a 2-oxoacid, it does not undergo decarboxylation by the E1 enzyme; instead, a molecule of acetaldehyde is released.
In some bacteria, acetoin can also be reduced to 2,3-butanediol by acetoin reductase/2,3-butanediol dehydrogenase.
The Voges-Proskauer test is a commonly used microbiological test for acetoin production.
Uses
Food ingredients
Acetoin, along with diacetyl, is one of the compounds that gives butter its characteristic flavor. Because of this, manufacturers of partially hydrogenated oils typically add artificial butter flavor – acetoin and diacetyl – (along with beta carotene for the yellow color) to the final product.
Acetoin can be found in apples, yogurt, asparagus, blackcurrants, blackberries, wheat, broccoli, brussels sprouts, cantaloupes, and maple syrup.
Acetoin is used as a food flavoring (in baked goods) and as a fragrance.
Electronic cigarettes
|
https://en.wikipedia.org/wiki/List%20of%20collaborative%20software
|
This list is divided into proprietary or free software, and open source software, with several comparison tables of different product and vendor characteristics. It also includes a section of project collaboration software, which is a standard feature in collaboration platforms.
Collaborative software
Comparison of notable software
General Information
Comparison of unified communications features
Comparison of collaborative software features
Comparison of targets
Open source software
The following are open source applications for collaboration:
Standard client–server software
Access Grid, for audio and video-based collaboration
Axigen
Citadel/UX, with support for native groupware clients (Kontact, Novell Evolution, Microsoft Outlook) and web interface
Cyn.in
EGroupware, with support for native groupware clients (Kontact, Novell Evolution, Microsoft Outlook) and web interface
Group-Office groupware and CRM
Kolab, various native PIM clients
Kopano
OpenGroupware.org
phpGroupWare
Scalix
SOGo, integrated email, calendaring with Apple iCal, Mozilla Thunderbird and native Outlook compatibility
Teambox, Basecamp-style project management software with focus on GTD task management and conversations. (Only V3 and prior are open-source.)
Zarafa
Zentyal, with support for native groupware clients (Kontact, Novell Evolution) natively for Microsoft Outlook and web interface
Zimbra
Zulip
Groupware: Web-based software
Axigen
Bricolage, content management system
BigBlueButton, Web meetings
Collabora Online, Enterprise-ready edition of LibreOffice enabling real-time collaborative editing of documents, spreadsheets, presentations and graphics
DotNetNuke, also called DNN: module-based, evolved from ASP 1.0 demo applications
EGroupware, a free open source groupware software intended for businesses from small to enterprises
EtherPad, collaborative drafting with chat
Feng Office Community Edition
Fu
|
https://en.wikipedia.org/wiki/Spearmint%20%28flavour%29
|
Spearmint is a flavour that is either naturally or artificially created to taste like the oil of the herbaceous Mentha spicata (spearmint) plant.
Uses
The most common uses of spearmint flavour are in chewing gum and toothpaste. However, it is also used in a number of other products, mainly confectionery. It is also popular as a seasonal (usually around St. Patrick's Day) milkshake flavouring in Canada and the U.S.
Trademark in the UK
The words "WRIGLEY'S SPEARMINT" are trademarked in the UK. In 1959, skiffle artist Lonnie Donegan renamed his cover version of the 1924 Rose, Breuer, and Bloom song "Does the Spearmint Lose its Flavor on the Bedpost Overnight?" as the BBC, not wanting to risk breaching trademark laws, refused to play it. Donegan renamed the song "Does Your Chewing Gum Lose Its Flavor (On the Bedpost Overnight)", which then went on to become a top-10 hit in the UK and US.
See also
Carvone
Flavors
Chewing gum
|
https://en.wikipedia.org/wiki/Heirloom%20plant
|
An heirloom plant, heirloom variety, heritage fruit (Australia and New Zealand), or heirloom vegetable (especially in Ireland and the UK) is an old cultivar of a plant used for food that is grown and maintained by gardeners and farmers, particularly in isolated communities of the Western world. These were commonly grown during earlier periods in human history, but are not used in modern large-scale agriculture.
In some parts of the world, it is illegal to sell seeds of cultivars that are not listed as approved for sale. The Henry Doubleday Research Association, now known as Garden Organic, responded to this legislation by setting up the Heritage Seed Library to preserve seeds of as many of the older cultivars as possible. However, seed banks alone have not been able to provide sufficient insurance against catastrophic loss. In some jurisdictions, like Colombia, laws have been proposed that would make seed saving itself illegal.
Many heirloom vegetables have kept their traits through open pollination, while fruit varieties such as apples have been propagated over the centuries through grafts and cuttings. The trend of growing heirloom plants in gardens has been returning in popularity in North America and Europe.
Origin
Before the industrialization of agriculture, a much wider variety of plant foods were grown for human consumption, largely due to farmers and gardeners saving seeds and cuttings for future planting. From the 16th century through the early 20th centuries, the diversity was huge. Old nursery catalogues were filled with plums, peaches, pears and apples of numerous varieties and seed catalogs offered legions of vegetable varieties. Valuable and carefully selected seeds were sold and traded using these catalogs along with useful advice on cultivation. Since World War II, agriculture in the industrialized world has mostly consisted of food crops which are grown in large, monocultural plots. In order to maximize consistency, few varieties of each type of
|
https://en.wikipedia.org/wiki/Applied%20behavior%20analysis
|
Applied behavior analysis (ABA), also called behavioral engineering, is a psychological intervention that applies approaches based upon the principles of respondent and operant conditioning to change behavior of social significance. It is the applied form of behavior analysis; the other two forms are radical behaviorism (or the philosophy of the science) and the experimental analysis of behavior (or basic experimental laboratory research).
The name applied behavior analysis has replaced behavior modification because the latter approach suggested attempting to change behavior without clarifying the relevant behavior-environment interactions. In contrast, ABA changes behavior by first assessing the functional relationship between a targeted behavior and the environment. Further, the approach often seeks to develop socially acceptable alternatives for aberrant behaviors.
Although service delivery providers overwhelmingly specialize in utilizing structured and naturalistic early behavioral interventions for individuals with autism, ABA has also been utilized in a range of other areas.
ABA is controversial, especially among members of the autism rights movement, for a number of reasons. Some ABA interventions emphasize normalization instead of acceptance, and there is a history of, in some forms of ABA and its predecessors, the use of aversives, such as electric shocks. ABA is also controversial due to concerns about its evidence base. In the last few years, there have been reforms in some types of ABA interventions to address these criticisms and concerns, especially regarding masking.
Definition
ABA is an applied science devoted to developing procedures which will produce observable changes in behavior. It is to be distinguished from the experimental analysis of behavior, which focuses on basic experimental laboratory research, but it uses principles developed by such research, in particular operant conditioning and classical conditioning. Behavior analysis adopts
|
https://en.wikipedia.org/wiki/PC/104
|
PC/104 (or PC104) is a family of embedded computer standards which define both form factors and computer buses by the PC/104 Consortium. Its name derives from the 104 pins on the interboard connector (ISA) in the original PC/104 specification and has been retained in subsequent revisions, despite changes to connectors. PC/104 is intended for specialized environments where a small, rugged computer system is required. The standard is modular, and allows consumers to stack together boards from a variety of COTS manufacturers to produce a customized embedded system.
The original PC/104 form factor is somewhat smaller than a desktop PC motherboard. Unlike other popular computer form factors such as ATX, which rely on a motherboard or backplane, PC/104 boards are stacked on top of each other like building blocks. The PC/104 specification defines four mounting holes at the corners of each module, which allow the boards to be fastened to each other using standoffs. The stackable bus connectors and use of standoffs provide a more rugged mounting than the slot boards found in desktop PCs. The compact board size further contributes to the ruggedness of the form factor by reducing the possibility of PCB flexing under shock and vibration.
A typical PC/104 system (commonly referred to as a "stack") will include a CPU board, power supply board, and one or more peripheral boards, such as a data acquisition module, GPS receiver, or Wireless LAN controller. A wide array of peripheral boards are available from various vendors. Users may design a stack that incorporates boards from multiple vendors. The overall height, weight, and power consumption of the stack can vary depending on the number of boards that are used.
PC/104 is sometimes referred to as a "stackable PC", as most of the architecture derives from the desktop PC. The majority of PC/104 CPU boards are x86-compatible and include standard PC interfaces such as serial ports, USB, Ethernet, and VGA. An x86 PC/104 s
|
https://en.wikipedia.org/wiki/Single-letter%20second-level%20domain
|
Single-letter second-level domains are domains in which the second-level domain of the domain name consists of only one letter, such as . In 1993, the Internet Assigned Numbers Authority (IANA) explicitly reserved all single-letter and single-digit second-level domains under the top-level domains com, net, and org, and grandfathered those that had already been assigned. In December 2005, ICANN considered auctioning these domain names.
Active single-letter domains
On December 1, 1993, the Internet Assigned Numbers Authority (IANA) explicitly reserved the remaining single-letter and single-digit domain names. The few domains that were already assigned were grandfathered in and continued to exist.
The six single-letter domains in existence at that time under .com, .net and .org were the following:
The .org TLD was subsequently reopened for single-letter domain registrations. These and selected other gTLD and ccTLD single-letter domain names currently in use, typically as shortcuts, are listed below.
Many other single-letter second-level domains have been registered under country code top-level domains. The list of country code top-level domains which have been identified to allow single-letter domains are:
.ac
.af
.ag
.ai
.am
.bo
.by
.bz
.cm
.cn
.co
.cr
.cx
.cz
.de
.dk
.fm
.fr
.gd
.gg
.gl
.gp
.gs
.gt
.gy
.hn
.hr
.ht
.ie (one- and two-letter .ie domain names became available on 12 October 2015)
.im
.io
.is
.je
.kg
.ki
.kw
.la
.lb
.lc
.ly
.md
.mg
.mk
.mp
.ms
.mw
.mx
.mu
.nf
.np
.nz
.pe
.ph
.pk
.pl
.pn
.pr
.pw
.ro
.sh
.st
.tc
.tl
.tt
.to
.tv
.ua
.vc
.vg
.vn
.vu
.ws
Non-ASCII single-character domains
Single-character non-ASCII second-level domains also exist. Known as internationalized domain names (IDNs), these domains are actually registered as their Punycode translations (which are more than a single character) for DNS purposes.
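Python's built-in idna codec (IDNA 2003) can demonstrate the Punycode round trip for a one-character label; "ü" here is an illustrative character, not one of the registered domains.

```python
# A single non-ASCII character label and its Punycode ("xn--") form,
# which is what is actually registered in DNS.
label = "ü"
encoded = label.encode("idna")    # ASCII-compatible encoding
decoded = encoded.decode("idna")  # back to the Unicode label

print(encoded)  # b'xn--tda' -- more than one character on the wire
print(decoded)  # ü
```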
|
https://en.wikipedia.org/wiki/Input%20kludge
|
In computer programming, an input kludge is a type of failure in software (an anti-pattern) where simple user input is not handled correctly. For example, if a computer program accepts free text input from the user, an ad hoc algorithm will mishandle many combinations of legal and illegal input strings. Input kludges are usually difficult for a programmer to detect in a unit test, but very easy for the end user to find. There is evidence that end users can easily crash software that fails to handle user input correctly; indeed, the buffer overflow security hole is an example of the problems caused.
To remedy input kludges, one may use input validation algorithms to handle user input. A monkey test can be used to detect an input kludge problem. A common first test to discover this problem is to roll one's hand across the computer keyboard or to 'mash' the keyboard to produce a large junk input, but such an action often lacks reproducibility. Greater systematicity and reproducibility may be obtained by using fuzz testing software.
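A minimal sketch of the remedy and the test: a validating input handler (`parse_age` is a hypothetical example, not from the article) exercised by a reproducible monkey/fuzz loop.

```python
import random
import string

def parse_age(text):
    """Validating input handler: returns an int for plausible ages,
    None for any other input, and never raises on junk."""
    text = text.strip()
    if not text.isdigit():
        return None
    value = int(text)
    return value if 0 <= value <= 150 else None

# A crude monkey test: feed random junk and check the handler never
# crashes. Seeding the RNG makes the run reproducible, addressing the
# keyboard-mashing reproducibility problem noted above.
random.seed(42)
for _ in range(10_000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 30)))
    result = parse_age(junk)
    assert result is None or isinstance(result, int)
```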
See also
Garbage in, garbage out
Guard (computer science)
Kludge
Anti-patterns
Software bugs
|
https://en.wikipedia.org/wiki/Sweetness
|
Sweetness is a basic taste most commonly perceived when eating foods rich in sugars. Sweet tastes are generally regarded as pleasurable. In addition to sugars like sucrose, many other chemical compounds are sweet, including aldehydes, ketones, and sugar alcohols. Some are sweet at very low concentrations, allowing their use as non-caloric sugar substitutes. Such non-sugar sweeteners include saccharin and aspartame. Other compounds, such as miraculin, may alter perception of sweetness itself.
The perceived intensity of sugars and high-potency sweeteners, such as aspartame and neohesperidin dihydrochalcone, is heritable, with genetic effects accounting for approximately 30% of the variation.
The chemosensory basis for detecting sweetness, which varies between both individuals and species, has only begun to be understood since the late 20th century. One theoretical model of sweetness is the multipoint attachment theory, which involves multiple binding sites between a sweetness receptor and a sweet substance.
Studies indicate that responsiveness to sugars and sweetness has very ancient evolutionary beginnings, being manifest as chemotaxis even in motile bacteria such as E. coli. Newborn human infants also demonstrate preferences for high sugar concentrations and prefer solutions that are sweeter than lactose, the sugar found in breast milk. Sweetness appears to have the highest taste recognition threshold, being detectable at around 1 part in 200 of sucrose in solution. By comparison, bitterness appears to have the lowest detection threshold, at about 1 part in 2 million for quinine in solution. In the natural settings that human primate ancestors evolved in, sweetness intensity should indicate energy density, while bitterness tends to indicate toxicity. The high sweetness detection threshold and low bitterness detection threshold would have predisposed our primate ancestors to seek out sweet-tasting (and energy-dense) foods and avoid bitter-tasting foods. Even amongst
|
https://en.wikipedia.org/wiki/Stirling%20transform
|
In combinatorial mathematics, the Stirling transform of a sequence { a_n : n = 1, 2, 3, ... } of numbers is the sequence { b_n : n = 1, 2, 3, ... } given by

b_n = ∑_{k=1}^n S(n,k) a_k,

where S(n,k) (written with a capital S) is the Stirling number of the second kind, the number of partitions of a set of size n into k parts.
The inverse transform is

a_n = ∑_{k=1}^n s(n,k) b_k,

where s(n,k) (with a lower-case s) is a signed Stirling number of the first kind.
Bernstein and Sloane (cited below) state "If an is the number of objects in some class with points labeled 1, 2, ..., n (with all labels distinct, i.e. ordinary labeled structures), then bn is the number of objects with points labeled 1, 2, ..., n (with repetitions allowed)."
If

f(x) = ∑_{n=1}^∞ (a_n/n!) x^n

is a formal power series, and

g(x) = ∑_{n=1}^∞ (b_n/n!) x^n

with a_n and b_n as above, then

g(x) = f(e^x − 1).

Likewise, the inverse transform leads to the generating function identity

f(x) = g(log(1 + x)).
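The transform pair can be checked computationally. The sketch below uses the standard recurrences for both kinds of Stirling numbers, with sequence indexing starting at n = 1 as above; taking a_n = 1 for all n yields the Bell numbers.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

@lru_cache(maxsize=None)
def s1(n, k):
    """Signed Stirling numbers of the first kind, s(n, k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def stirling_transform(a):
    """b_n = sum_k S(n, k) a_k, with a[0] holding a_1."""
    return [sum(S2(n, k) * a[k - 1] for k in range(1, n + 1))
            for n in range(1, len(a) + 1)]

def inverse_stirling_transform(b):
    """a_n = sum_k s(n, k) b_k, with b[0] holding b_1."""
    return [sum(s1(n, k) * b[k - 1] for k in range(1, n + 1))
            for n in range(1, len(b) + 1)]

print(stirling_transform([1, 1, 1, 1, 1]))         # [1, 2, 5, 15, 52], the Bell numbers
print(inverse_stirling_transform([1, 2, 5, 15, 52]))  # [1, 1, 1, 1, 1]
```

Applying the inverse after the forward transform recovers the original sequence, confirming that the two Stirling-number matrices are mutually inverse.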
See also
Binomial transform
Generating function transformation
List of factorial and binomial topics
References
.
Khristo N. Boyadzhiev, Notes on the Binomial Transform, Theory and Table, with Appendix on the Stirling Transform (2018), World Scientific.
Factorial and binomial topics
Transforms
|
https://en.wikipedia.org/wiki/Time%20derivative
|
A time derivative is a derivative of a function with respect to time, usually interpreted as the rate of change of the value of the function. The variable denoting time is usually written as .
Notation
A variety of notations are used to denote the time derivative. In addition to the normal (Leibniz's) notation, dx/dt, a very common shorthand notation, used especially in physics, is the 'over-dot' ẋ (this is called Newton's notation).
Higher time derivatives are also used: the second derivative with respect to time is written as d²x/dt², with the corresponding shorthand ẍ.
As a generalization, the time derivative of a vector, say v(t) = (v₁(t), ..., vₙ(t)), is defined as the vector whose components are the derivatives of the components of the original vector. That is, dv/dt = (dv₁/dt, ..., dvₙ/dt).
Use in physics
Time derivatives are a key concept in physics. For example, for a changing position x, its time derivative ẋ is its velocity, and its second derivative with respect to time, ẍ, is its acceleration. Even higher derivatives are sometimes also used: the third derivative of position with respect to time is known as the jerk. See motion graphs and derivatives.
A large number of fundamental equations in physics involve first or second time derivatives of quantities. Many other fundamental quantities in science are time derivatives of one another:
force is the time derivative of momentum
power is the time derivative of energy
electric current is the time derivative of electric charge
and so on.
A common occurrence in physics is the time derivative of a vector, such as velocity or displacement. In dealing with such a derivative, both magnitude and orientation may depend upon time.
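A component-wise numerical sketch of such a vector time derivative, using a circular path with arbitrary illustrative values for the radius and angular rate; the central-difference helper is an approximation, not an exact derivative.

```python
import math

R, OMEGA = 2.0, 3.0  # illustrative radius and angular rate

def position(t):
    """Displacement vector on a circular path: (R cos(ωt), R sin(ωt))."""
    return (R * math.cos(OMEGA * t), R * math.sin(OMEGA * t))

def time_derivative(f, t, h=1e-6):
    """Central-difference time derivative of a 2-D vector function,
    taken component by component."""
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

t = 0.7
pos = position(t)
vel = time_derivative(position, t)                                # velocity
acc = time_derivative(lambda u: time_derivative(position, u), t)  # acceleration

# For uniform circular motion, velocity is perpendicular to position
# (zero dot product), with |v| = R*omega and |a| = R*omega**2.
dot = pos[0] * vel[0] + pos[1] * vel[1]
print(f"dot = {dot:.1e}, speed = {math.hypot(*vel):.4f}")  # dot ≈ 0, speed ≈ 6.0
```

Both the magnitude and the orientation of the vector change with time, yet the dot product stays at zero: the derivative rotates the vector rather than stretching it.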
Example: circular motion
For example, consider a particle moving in a circular path. Its position is given by the displacement vector r = (x, y), related to the angle, θ, and radial distance, r, as defined in the figure:

x = r cos θ, y = r sin θ.

For this example, we assume that θ = ωt, with ω constant. Hence, the displacement (position) at any time t is given by

r(t) = (r cos ωt, r sin ωt).
This form shows the motion described
|
https://en.wikipedia.org/wiki/Bioerosion
|
Bioerosion describes the breakdown of hard ocean substrates – and less often terrestrial substrates – by living organisms. Marine bioerosion can be caused by mollusks, polychaete worms, phoronids, sponges, crustaceans, echinoids, and fish; it can occur on coastlines, on coral reefs, and on ships; its mechanisms include biotic boring, drilling, rasping, and scraping. On dry land, bioerosion is typically performed by pioneer plants or plant-like organisms such as lichen, and mostly chemical (e.g. by acidic secretions on limestone) or mechanical (e.g. by roots growing into cracks) in nature.
Bioerosion of coral reefs generates the fine and white coral sand characteristic of tropical islands. The coral is converted to sand by internal bioeroders such as algae, fungi, bacteria (microborers) and sponges (Clionaidae), bivalves (including Lithophaga), sipunculans, polychaetes, acrothoracican barnacles and phoronids, generating extremely fine sediment with diameters of 10 to 100 micrometres. External bioeroders include sea urchins (such as Diadema) and chitons. These forces in concert produce a great deal of erosion. Sea urchin erosion of calcium carbonate has been reported in some reefs at annual rates exceeding 20 kg/m².
Fish also erode coral while eating algae. Parrotfish cause a great deal of bioerosion using well-developed jaw muscles, tooth armature, and a pharyngeal mill to grind ingested material into sand-sized particles. Bioerosion of coral reef aragonite by parrotfish can range from 1017.7±186.3 kg/yr (0.41±0.07 m³/yr) for Chlorurus gibbus to 23.6±3.4 kg/yr (9.7×10⁻³±1.3×10⁻³ m³/yr) for Chlorurus sordidus (Bellwood, 1995).
Bioerosion is also well known in the fossil record on shells and hardgrounds (Bromley, 1970), with traces of this activity stretching back well into the Precambrian (Taylor & Wilson, 2003). Macrobioerosion, which produces borings visible to the naked eye, shows two distinct evolutionary radiations. One was in the Middle Ordovician (the Or
|
https://en.wikipedia.org/wiki/Bonini%27s%20paradox
|
Bonini's paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).
Statements
In modern discourse, the paradox was articulated by John M. Dutton and William H. Starbuck: "As a model of a complex system becomes more complete, it becomes less understandable. Alternatively, as a model grows more realistic, it also becomes just as difficult to understand as the real-world processes it represents."
This paradox may be used by researchers to explain why complete models of the human brain and thinking processes have not been created and will undoubtedly remain difficult for years to come.
This same paradox was observed earlier from a quote by philosopher-poet Paul Valéry (1871–1945): "Ce qui est simple est toujours faux. Ce qui ne l’est pas est inutilisable". ("If it's simple, it's always false. If it's not, it's unusable.")
The same topic has also been discussed by Richard Levins in his classic essay "The Strategy of Model Building in Population Biology": complex models have "too many parameters to measure, leading to analytically insoluble equations that would exceed the capacity of our computers, but the results would have no meaning for us even if they could be solved".
Related issues
Bonini's paradox can be seen as a case of the map–territory relation: simpler maps are less accurate though more useful representations of the territory. An extreme form is given in the fictional stories Sylvie and Bruno Concluded and "On Exactitude in Science", which imagine a map of a scale of 1:1 (the same size as the territory), which is precise but unusable, illustrating one extreme of Bonini's paradox.
Isaac Asimov's fictional science of "Psychohistory" in his Foundation series also faces this dilemma; Asimov even had one of his psychohistorians discuss the paradox.
See also
References
Eponymous paradoxes
Sys
|
https://en.wikipedia.org/wiki/Emission%20intensity
|
An emission intensity (also carbon intensity or C.I.) is the emission rate of a given pollutant relative to the intensity of a specific activity or industrial production process; for example, grams of carbon dioxide released per megajoule of energy produced, or the ratio of greenhouse gas emissions produced to gross domestic product (GDP). Emission intensities are used to derive estimates of air pollutant or greenhouse gas emissions based on the amount of fuel combusted, the number of animals in animal husbandry, industrial production levels, distances traveled, or similar activity data. Emission intensities may also be used to compare the environmental impact of different fuels or activities. In some cases the related terms emission factor and carbon intensity are used interchangeably. The jargon varies between fields and industrial sectors; normally the term "carbon" excludes other pollutants, such as particulate emissions. One commonly used figure is carbon intensity per kilowatt-hour (CIPK), which is used to compare emissions from different sources of electrical power.
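As a concrete illustration of CIPK, the sketch below computes the share-weighted carbon intensity of electricity for a hypothetical generation mix (all shares and per-source intensities are invented example numbers, not data from this article):

```python
# Hypothetical generation mix: (share of generation, g CO2 per kWh).
# Both columns are illustrative numbers chosen for the example.
mix = {
    "coal":  (0.30, 1000.0),
    "gas":   (0.40, 490.0),
    "wind":  (0.20, 12.0),
    "hydro": (0.10, 24.0),
}

# CIPK of the overall mix is the share-weighted average intensity.
cipk = sum(share * intensity for share, intensity in mix.values())
print(f"CIPK = {cipk:.1f} g CO2/kWh")
```

Swapping in real generation statistics for a grid gives the figure commonly quoted when comparing electricity sources.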
Methodologies
Different methodologies can be used to assess the carbon intensity of a process. Among the most used methodologies there are:
The whole life-cycle assessment (LCA): this includes not only the carbon emissions due to a specific process, but also those due to the production and end-of-life of materials, plants and machineries used for the considered process. This is a rather complex method, requiring a large set of variables.
The well-to-wheels (WTW), commonly used in the Energy and Transport sectors: this is a simplified LCA considering the emissions of the process itself, the emissions due to the extraction and refining of the material (or fuel) used in the process (also "Upstream emissions"), but excluding the emissions due to the production and end-of-life of plants and machineries. This methodology is used, in the US, by the GREET model and in Europe in the J
|
https://en.wikipedia.org/wiki/Ralink
|
Ralink Technology, Corp. is a Wi-Fi chipset manufacturer mainly known for their IEEE 802.11 (Wireless LAN) chipsets. Ralink was founded in 2001 in Cupertino, California, then moved its headquarters to Hsinchu, Taiwan. On 5 May 2011, Ralink was acquired by MediaTek.
Some of Ralink's 802.11n RT2800 chipsets have been accepted into the Wi-Fi Alliance 802.11n draft 2.0 core technology testbed. They have also been selected in the Wi-Fi Protected Setup (WPS) and Wireless Multimedia Extensions Power Save (WMM-PS) testbeds. Ralink was a participant in the Wi-Fi Alliance and the IEEE 802.11 standards committees.
Ralink chipsets are used in various consumer-grade routers made by Gigabyte Technology, Linksys, D-Link, Asus and Belkin, as well as Wi-Fi adaptors for USB, PCI, ExpressCard, PC Card, and PCI Express interfaces. An example of an adapter is the Nintendo Wi-Fi USB Connector which uses the Ralink RT2570 chipset to allow a Nintendo DS or Wii to be internetworked via a home computer.
Operating systems support
Ralink provides some documentation without requiring a non-disclosure agreement. This includes datasheets for their PCI and PCIe chipsets, but does not currently include documentation for the systems on a chip used in wireless routers.
Linux
Drivers for MediaTek Ralink wireless network interface controllers were mainlined into the Linux kernel version 2.6.24. (See Comparison of open-source wireless drivers.) Ralink provides GNU General Public License-licensed (GPL) drivers for the Linux kernel. While Linux drivers for the older RT2500 chipsets are no longer updated by Ralink, these are now being maintained by Serialmonkey's rt2x00 project. Current Ralink chipsets require a firmware to be loaded. Ralink allows the use and redistribution of firmware, but does not allow its modification.
In February 2011 Greg Kroah-Hartman praised Ralink for their change in attitude towards the Linux kernel developer community:
See also
List of companies of Taiwan
References
External link
|
https://en.wikipedia.org/wiki/Mills%27%20constant
|
In number theory, Mills' constant is defined as the smallest positive real number A such that the floor function of the double exponential function
 ⌊A^(3^n)⌋
is a prime number for all positive natural numbers n. This constant is named after William Harold Mills who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unproven, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... .
Mills primes
The primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins
 2, 11, 1361, 2521008887, … .
If ai denotes the i th prime in this sequence, then ai can be calculated as the smallest prime number larger than ai−1³. In order to ensure that rounding A^(3^n), for n = 1, 2, 3, …, produces this sequence of primes, it must be the case that ai < (ai−1 + 1)³. The Hoheisel–Ingham results guarantee that there exists a prime between any two sufficiently large cube numbers, which is sufficient to prove this inequality if we start from a sufficiently large first prime a1. The Riemann hypothesis implies that there exists a prime between any two consecutive cubes, allowing the sufficiently large condition to be removed, and allowing the sequence of Mills primes to begin at a1 = 2.
For all a greater than a certain explicit (but extremely large) bound, there is at least one prime between a³ and (a + 1)³. This upper bound is much too large to be practical, as it is infeasible to check every number below that figure. However, the value of Mills' constant can be verified by calculating the first prime in the sequence that is greater than that figure.
As of April 2017, the 11th number in the sequence is the largest one that has been proved prime; it has 20,562 digits.
The largest known Mills probable prime (under the Riemann hypothesis) is 555,154 digits long.
Numerical calculation
By calculating the sequence of Mills primes, one can approximate Mills' constant as
 A ≈ a(n)^(1/3^n),
where a(n) is the nth Mills prime.
Caldwell and Cheng used this method to compute 6850 base 10 digits of Mills
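The iteration described above (the next Mills prime is the smallest prime exceeding the cube of the previous one) can be sketched directly. This toy version uses trial division, so it only reaches the fourth Mills prime, but that already recovers several digits of A; serious computations use far faster primality tests:

```python
def is_prime(n):
    """Trial division -- adequate only for the first few Mills primes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def mills_primes(count):
    """a1 = 2; each subsequent term is the smallest prime > (previous)**3."""
    primes = [2]
    while len(primes) < count:
        a = primes[-1] ** 3 + 1
        while not is_prime(a):
            a += 1
        primes.append(a)
    return primes

seq = mills_primes(4)        # [2, 11, 1361, 2521008887]
A = seq[-1] ** (1 / 3 ** 4)  # A ~ a(n)**(1/3**n), here n = 4
print(seq, A)                # A ~ 1.30637...
```

With only four terms the approximation already agrees with the quoted value 1.3063778838... to many decimal places, since the error shrinks roughly like 1/(3^n · a(n)).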
|
https://en.wikipedia.org/wiki/Matrix%20method
|
The matrix method is a structural analysis method used as a fundamental principle in many applications in civil engineering.
The method is carried out using either a stiffness matrix or a flexibility matrix.
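As a minimal sketch of the stiffness-matrix form of the method (a textbook toy problem, not taken from this article): two axial springs in series, node 0 fixed, a unit load at the free end; assemble the global stiffness matrix K and solve K u = f for the nodal displacements.

```python
# Two springs in series: node 0 (fixed) -- k1 -- node 1 -- k2 -- node 2.
k1, k2 = 100.0, 200.0  # spring stiffnesses (illustrative values)

# Global stiffness matrix assembled from element matrices [[k, -k], [-k, k]].
K = [
    [k1,      -k1,      0.0],
    [-k1,  k1 + k2,    -k2],
    [0.0,     -k2,      k2],
]

# Apply the boundary condition u0 = 0 by striking out row/column 0,
# leaving a 2x2 system for nodes 1 and 2, with a unit force at node 2.
K_red = [[K[1][1], K[1][2]], [K[2][1], K[2][2]]]
f = [0.0, 1.0]

# Solve the reduced 2x2 system by Cramer's rule.
det = K_red[0][0] * K_red[1][1] - K_red[0][1] * K_red[1][0]
u1 = (f[0] * K_red[1][1] - K_red[0][1] * f[1]) / det
u2 = (K_red[0][0] * f[1] - f[0] * K_red[1][0]) / det
print(u1, u2)  # series springs: u1 = F/k1, u2 = F/k1 + F/k2
```

The result matches the hand calculation for springs in series (u1 = 1/100, u2 = 1/100 + 1/200), which is the usual sanity check for a stiffness-matrix assembly.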
See also
Direct stiffness method
Flexibility method
Structural analysis
|
https://en.wikipedia.org/wiki/ZTE
|
ZTE Corporation is a Chinese partially state-owned technology company that specializes in telecommunication. Founded in 1985, ZTE is listed on both the Hong Kong and Shenzhen Stock Exchanges.
ZTE's core business is wireless, exchange, optical transmission, data telecommunications gear, telecommunications software, and mobile phones. ZTE primarily sells products under its own name, but it is also an OEM.
The company has faced criticism in the United States, India, and Sweden over ties to the Chinese government that could enable mass surveillance. In 2017, ZTE was fined for illegally exporting U.S. technology to Iran and North Korea in violation of economic sanctions. In April 2018, after the company failed to properly reprimand the employees involved, the U.S. Department of Commerce banned U.S. companies from exporting components, including semiconductors, to ZTE for seven years. The ban was lifted in July 2018 after ZTE replaced its senior management, and agreed to pay additional fines and establish an internal compliance team for 10 years. In June 2020, the Federal Communications Commission (FCC) designated ZTE a national security threat. In 2023, the European Commission banned ZTE from providing telecommunication services.
History
ZTE, initially founded as Zhongxing Semiconductor Co., Ltd in Shenzhen, Guangdong province, in 1985, was incorporated by a group of investors associated with China's Ministry of Aerospace Industry. In March 1993, Zhongxing Semiconductor changed its name to Zhongxing New Telecommunications Equipment Co., Ltd with capital of RMB 3 million, and created a new business model as a "state-owned and private-operating" economic entity. ZTE made an initial public offering (IPO) on the Shenzhen stock exchange in 1997 and another on the Hong Kong stock exchange in December 2004.
While the company initially profited from domestic sales, it vowed to use proceeds of its 2004 Hong Kong IPO to further expand R&D, overseas sales to developed nations, and overseas produc
|
https://en.wikipedia.org/wiki/Semiconductor%20intellectual%20property%20core
|
In electronic design, a semiconductor intellectual property core (SIP core), IP core, or IP block is a reusable unit of logic, cell, or integrated circuit layout design that is the intellectual property of one party. IP cores can be licensed to another party or owned and used by a single party. The term comes from the licensing of the patent or source code copyright that exists in the design. Designers of system on chip (SoC), application-specific integrated circuits (ASIC) and systems of field-programmable gate array (FPGA) logic can use IP cores as building blocks.
History
The licensing and use of IP cores in chip design came into common practice in the 1990s. There were many licensors and also many foundries competing on the market. In 2013, the most widely licensed IP cores were from Arm Holdings (43.2% market share), Synopsys Inc. (13.9% market share), Imagination Technologies (9% market share) and Cadence Design Systems (5.1% market share).
Types of IP cores
The use of an IP core in chip design is comparable to the use of a library for computer programming or a discrete integrated circuit component for printed circuit board design. Each is a reusable component of design logic with a defined interface and behavior that has been verified by its creator and is integrated into a larger design.
Soft cores
IP cores are commonly offered as synthesizable RTL in a hardware description language such as Verilog or VHDL. These are analogous to low-level languages such as C in the field of computer programming. IP cores delivered to chip designers as RTL permit chip designers to modify designs at the functional level, though many IP vendors offer no warranty or support for modified designs.
IP cores are also sometimes offered as generic gate-level netlists. The netlist is a boolean-algebra representation of the IP's logical function implemented as generic gates or process-specific standard cells. An IP core implemented as generic gates can be compiled for any proce
|
https://en.wikipedia.org/wiki/Helen%20%28unit%29
|
A helen is a humorous unit of measurement based on the concept that Helen of Troy had a "face that launched a thousand ships". The helen is thus used to measure quantities of beauty in terms of the theoretical action that could be accomplished by the wielder of such beauty.
Origin
The classic reference to Helen's beauty is Marlowe's lines from the 1592 play The Tragical History of Doctor Faustus, "Was this the face that launch'd a thousand ships / And burnt the topless towers of Ilium?" In the tradition of humorous pseudounits, then, 1 millihelen is the amount of beauty needed to launch a single ship.
In his 1992 collection of jokes and limericks, Isaac Asimov claimed to have invented the term in the 1940s as a graduate student. In a 1958 letter to the New Scientist, R.C. Winton proposes the millihelen as the amount of beauty required to launch one ship. In response, P. Lockwood noted that the unit had been independently proposed by Edgar J. Westbury and extended by the pair to negative values, where −1 millihelen was the amount of ugliness required to sink a battleship.
The earliest known print citation is found in Punch magazine dated 23 June 1954 and attributed to an unnamed "professor of natural philosophy".
Derived units
The Catalogue of Ships from Book II of The Iliad, which describes in detail the commanders who came to fight for Helen and the ships they brought with them, details a total of 1,186 ships which came to fight the Trojan War. As such, Helen herself has a beauty rating of 1.186 helen, capable of launching more than one thousand ships.
The "system" has been expanded by some writers, such as conceiving of negative values as measures of the ugliness required to beach a thousand ships. Writing a humorous article about the concept, David Goines considered a range of metric prefixes to the unit, ranging from the attohelen (ah) which could merely "light up a Lucky while strolling past a shipyard", to the terahelen (Th) able to "launch the equival
|
https://en.wikipedia.org/wiki/Eb/N0
|
{{DISPLAYTITLE:Eb/N0}}
In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
As the description implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb is in units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or joules.
These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of "bandwidth" to apply to the signal.
But when the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type.
Eb/N0 must be used with care on interference-limited channels since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), the interference is sufficiently noise-like that it can be represented as I0 and added to the thermal noise N0 to produce the overall ratio Eb/(N0 + I0).
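The definition reduces to two lines of arithmetic: the energy per bit is signal power divided by the user bit rate, and the ratio to the noise density is usually quoted in decibels. A small sketch with invented link numbers (the power, bit rate, and noise density below are illustrative, not from the article):

```python
import math

def ebn0_db(signal_power_w, bit_rate_bps, noise_density_w_per_hz):
    """Eb/N0 in dB: Eb = S / Rb (joules per bit), divided by N0."""
    eb = signal_power_w / bit_rate_bps  # energy per user data bit, joules
    return 10 * math.log10(eb / noise_density_w_per_hz)

# Illustrative link: 1 mW received power, 1 Mbit/s, N0 = 1e-13 W/Hz.
val = ebn0_db(1e-3, 1e6, 1e-13)
print(val)  # 10*log10(1e-9 / 1e-13) = 40 dB
```

Note that the bit rate here is the user data rate, so halving the rate (at fixed power) raises Eb/N0 by 3 dB.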
Relation to carrier-to-noise ratio
Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection
|
https://en.wikipedia.org/wiki/Canada/USA%20Mathcamp
|
Canada/USA Mathcamp is a five-week academic summer program for middle and high school students in mathematics.
Mathcamp was founded in 1993 by Dr. George Thomas, who believed that students interested in mathematics frequently lacked the resources and camaraderie to pursue their interest. Mira Bernstein became the director when Thomas left in 2002 to found MathPath, a program for younger students.
Mathcamp is held each year at a college campus in the United States or Canada. Past locations have included the University of Toronto, the University of Washington, Colorado College, Reed College, University of Puget Sound, Colby College, the University of British Columbia, Mount Holyoke College, and the Colorado School of Mines. Mathcamp enrolls about 120 students yearly, 45–55 returning and 65–75 new.
The application process for new students includes an entrance exam (the "Qualifying Quiz"), personal essay, and two letters of recommendation, but no grade reports. The process is intended to ensure that the students who are most passionate about math come to camp. Admission is selective: in 2016, the acceptance rate was 15%.
Mathcamp courses cover various branches of recreational and college-level mathematics. Classes at Mathcamp come in four difficulty levels. The easier classes often include basic proof techniques, number theory, graph theory, and combinatorial game theory, while the more difficult classes cover advanced topics in abstract algebra, topology, theoretical computer science, category theory, and mathematical analysis. There are generally four class periods each day and five classes offered during each period intended for varying student interests and backgrounds. Graduate student mentors teach most of the classes, while undergraduate junior counselors, all of them Mathcamp alumni, do most of the behind-the-scenes work. Mathcamp has had a number of renowned guest speakers, including John Conway, Avi Wigderson, and Serge Lang.
Culture
In 2004, some campe
|
https://en.wikipedia.org/wiki/List%20of%20version-control%20software
|
This is a list of notable software for version control.
Local data model
In the local-only approach, all developers must use the same file system.
Open source
Revision Control System (RCS) – stores the latest version and backward deltas for fastest access to the trunk tip compared to SCCS and an improved user interface, at the cost of slow branch tip access and missing support for included/excluded deltas.
Source Code Control System (SCCS) – part of UNIX; based on interleaved deltas, can construct versions as arbitrary sets of revisions. Extracting an arbitrary version takes essentially the same time and is thus more useful in environments that rely heavily on branching and merging with multiple "current" and identical versions.
Proprietary
The Librarian – Around since 1969, source control for IBM mainframe computers; from Applied Data Research, later acquired by Computer Associates
Panvalet – Around since the 1970s, source and object control for IBM mainframe computers.
Client–server model
In the client–server model, developers use one shared repository.
Open source
Concurrent Versions System (CVS) – originally built on RCS, licensed under the GPL.
CVSNT – cross-platform port of CVS that allows case insensitive file names among other changes
OpenCVS – unreleased CVS clone under a BSD license, emphasising security and source code correctness
Subversion (SVN) – versioning control system inspired by CVS
Vesta – build system with a versioning file system and support for distributed repositories
Proprietary
AccuRev – source configuration management tool with integrated issue tracking based on "Streams" that efficiently manages parallel and global development; replication server is also available. Now owned by Micro Focus.
Autodesk Vault – Version control tool specifically designed for Autodesk applications managing the complex relationships between design files such as AutoCAD and Autodesk Inventor.
CADES – Designer productivity and version contr
|
https://en.wikipedia.org/wiki/Histiocyte
|
A histiocyte is a vertebrate cell that is part of the mononuclear phagocyte system (also known as the reticuloendothelial system or lymphoreticular system). The mononuclear phagocytic system is part of the organism's immune system. The histiocyte is a tissue macrophage or a dendritic cell (histio, diminutive of histo, meaning tissue, and cyte, meaning cell). Part of their job is to clear out neutrophils once they have reached the end of their lifespan.
Development
Histiocytes are derived from the bone marrow by multiplication from a stem cell. The derived cells migrate from the bone marrow to the blood as monocytes. They circulate through the body and enter various organs, where they undergo differentiation into histiocytes, which are part of the mononuclear phagocytic system (MPS).
However, the term histiocyte has been used for multiple purposes in the past, and some cells called "histiocytes" do not appear to derive from monocytic-macrophage lines. The term histiocyte can also simply refer to a cell of monocyte origin outside the blood system, such as in a tissue (as in rheumatoid arthritis, where palisading histiocytes surround the fibrinoid necrosis of rheumatoid nodules).
Some sources consider Langerhans cell derivatives to be histiocytes. The Langerhans cell histiocytosis embeds this interpretation into its name.
Structure
Histiocytes have common histological and immunophenotypical characteristics (demonstrated by immunostains). Their cytoplasm is eosinophilic and contains variable amounts of lysosomes. They bear membrane receptors for opsonins, such as IgG and the fragment C3b of complement. They express LCAs (leucocyte common antigens) CD45, CD14, CD33, and CD4 (also expressed by T helper cells).
Macrophages and dendritic cells
These histiocytes are part of the immune system by way of two distinct functions: phagocytosis and antigen presentation. Phagocytosis is the main process of macrophages and antigen presentation the main property of dendritic cel
|
https://en.wikipedia.org/wiki/Active%20shutter%203D%20system
|
An active shutter 3D system (a.k.a. alternate frame sequencing, alternate image, AI, alternating field, field sequential or eclipse method) is a technique of displaying stereoscopic 3D images. It works by only presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image.
Modern active shutter 3D systems generally use liquid crystal shutter glasses (also called "LC shutter glasses" or "active shutter glasses"). Each eye's glass contains a liquid crystal layer which has the property of becoming opaque when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows the glasses to alternately block one eye, and then the other, in synchronization with the refresh rate of the screen. The timing synchronization to the video equipment may be achieved via a wired signal, or wirelessly by either an infrared or radio frequency (e.g. Bluetooth, DLP link) transmitter. Historic systems also used spinning discs, for example the Teleview system.
Active shutter 3D systems are used to present 3D films in some theaters, and they can be used to present 3D images on CRT, plasma, LCD, projectors and other types of video displays.
Advantages and disadvantages
Although virtually all ordinary unmodified video and computer systems can be used to display 3D by adding a plug-in interface and active shutter glasses, disturbing levels of flicker or ghosting may be apparent with systems or displays not designed for such use. The rate of alternation required to eliminate noticeable flicker depends on image brightness and other factors, but is typically well over 30 image pair cycles per second, the maximum possible with a 60 Hz display. A 120 Hz display, allowing 60 images per second per eye, is widely accepted as flicker-fr
|
https://en.wikipedia.org/wiki/Heart%20rate%20variability
|
Heart rate variability (HRV) is the physiological phenomenon of variation in the time interval between heartbeats. It is measured by the variation in the beat-to-beat interval.
Other terms used include "cycle length variability", "R–R variability" (where R is a point corresponding to the peak of the QRS complex of the ECG wave; and RR is the interval between successive Rs), and "heart period variability".
Methods used to detect beats include ECG, blood pressure, ballistocardiograms, and the pulse wave signal derived from a photoplethysmograph (PPG). ECG is considered the gold standard for HRV measurement because it provides a direct reflection of cardiac electric activity.
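Time-domain HRV statistics are computed directly from the sequence of beat-to-beat (RR) intervals. The sketch below implements two widely used ones, SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences), on invented interval data; these particular statistics are standard in the HRV literature but are not named in the text above.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of all RR intervals (population form), in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 831, 799, 820]  # illustrative RR intervals in ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

RMSSD emphasizes short-term, beat-to-beat variation, while SDNN reflects overall variability over the whole recording.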
Clinical significance
Reduced HRV has been shown to be a predictor of mortality after myocardial infarction although others have shown that the information in HRV relevant to acute myocardial infarction survival is fully contained in the mean heart rate.
A range of other outcomes and conditions may also be associated with modified (usually lower) HRV, including congestive heart failure, diabetic neuropathy, post–cardiac-transplant depression, susceptibility to SIDS and poor survival in premature babies, as well as fatigue severity in chronic fatigue syndrome.
Psychological and social aspects
There is interest in HRV in the field of psychophysiology. For example, HRV is related to emotional arousal. High-frequency (HF) activity has been found to decrease under conditions of acute time pressure and emotional strain and elevated anxiety state, presumably related to focused attention and motor inhibition. HRV has been shown to be reduced in individuals reporting to worry more. In individuals with post-traumatic stress disorder (PTSD), HRV and its HF component (see below) is reduced whilst the low-frequency (LF) component is elevated. Furthermore, PTSD patients demonstrated no LF or HF reactivity to recalling a traumatic event.
The neurovisceral integration is a model of HRV that views the cent
|
https://en.wikipedia.org/wiki/Floris%20Takens
|
Floris Takens (12 November 1940 – 20 June 2010) was a Dutch mathematician known for contributions to the theory of chaotic dynamical systems.
Together with David Ruelle, he predicted that fluid turbulence could develop through a strange attractor, a term they coined, as opposed to the then-prevailing theory of accretion of modes. The prediction was later confirmed by experiment. Takens also established the result now known as the Takens's theorem, which shows how to reconstruct a dynamical system from an observed time-series. He was the first to show how chaotic attractors could be learned by neural networks.
Takens was born in Zaandam in the Netherlands. He attended schools in The Hague and in Zaandam before serving in the Dutch army for one year (1960–1961). At the University of Amsterdam he concluded his undergraduate and graduate studies. He was granted a doctorate in mathematics in 1969 under the supervision of Nicolaas Kuiper for a thesis entitled The minimal number of critical points of a function on a compact manifold and the Lusternik–Schnirelmann category.
After his graduate work, Takens spent a year at the Institut des Hautes Études Scientifiques, in Bures-sur-Yvette, near Paris, where he worked with David Ruelle, René Thom, and Jacob Palis. His friendship with Palis took him many times to the Instituto de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro, Brazil. Their collaboration produced several joint publications.
Takens was a professor at the University of Groningen, in Groningen, the Netherlands from 1972 until he retired from teaching in 1999.
Takens was a member of:
The Royal Netherlands Academy of Arts and Sciences (since 1991)
The Brazilian Academy of Sciences (since 1981), and
The editorial board for the Springer-Verlag's Lecture Notes in Mathematics.
Selected publications
See also
Bogdanov–Takens bifurcation
Notes
References
Floris Takens - Academia Brasileira de Ciências. Accessed on 26 January 2010.
External links
19
|
https://en.wikipedia.org/wiki/ITnet
|
ITnet (Institute of Technology Network) was a PoS-based multi-Mbit/s network created for the Institutes of Technology in Ireland. ITnet used 45 Mbit/s links to each of the institutions and an international link of 310 Mbit/s via HEAnet.
The system was proposed in 1991 between Regional Technical College, Cork and Regional Technical College, Carlow, as they were then called. In 1993 an agreement was reached to create RTCnet. The system was running by 1994 with its hub in University College Dublin and service management at Carlow. All Regional Technical Colleges were connected to the system.
In 1998 RTCnet became ITnet to reflect the change in status of the Regional Technical College system to Institutes of Technology. The hub of the system and service management moved to Institute of Technology, Tallaght where an international link was provided via HEAnet.
ITnet was incorporated into HEAnet during 2007 and 2008 and has now ceased to exist as an independent network for the IoTs. All internet services are now provided directly to the Institutes by HEAnet.
See also
Communications in Ireland
List of higher education institutions in the Republic of Ireland
External links
Official website - ITnet
Academic computer network organizations
Education in the Republic of Ireland
Internet in Ireland
|
https://en.wikipedia.org/wiki/Methylotroph
|
Methylotrophs are a diverse group of microorganisms that can use reduced one-carbon compounds, such as methanol or methane, as the carbon source for their growth, as well as multi-carbon compounds that contain no carbon-carbon bonds, such as dimethyl ether and dimethylamine. This group of microorganisms also includes those capable of assimilating reduced one-carbon compounds by way of carbon dioxide using the ribulose bisphosphate pathway. These organisms should not be confused with methanogens, which instead produce methane as a by-product from various one-carbon compounds such as carbon dioxide.
Some methylotrophs can degrade the greenhouse gas methane, and in this case they are called methanotrophs. The abundance, purity, and low price of methanol compared to commonly used sugars make methylotrophs competent organisms for production of amino acids, vitamins, recombinant proteins, single-cell proteins, co-enzymes and cytochromes.
Metabolism
The key intermediate in methylotrophic metabolism is formaldehyde, which can be diverted to either assimilatory or dissimilatory pathways. Methylotrophs produce formaldehyde through oxidation of methanol and/or methane. Methane oxidation requires the enzyme methane monooxygenase (MMO). Methylotrophs with this enzyme are given the name methanotrophs. The oxidation of methane (or methanol) can be assimilatory or dissimilatory in nature (see figure). If dissimilatory, the formaldehyde intermediate is oxidized completely into CO2 to produce reductant and energy. If assimilatory, the formaldehyde intermediate is used to synthesize a 3-carbon (C3) compound for the production of biomass. Many methylotrophs use multi-carbon compounds for anabolism, thus limiting their use of formaldehyde to dissimilatory processes; however, methanotrophs are generally limited to C1 metabolism only.
Catabolism
Methylotrophs use the electron transport chain to conserve energy produced from the oxidation of C1 compounds. An additional activation ste
|
https://en.wikipedia.org/wiki/Traitor%20tracing
|
Traitor tracing schemes help trace the source of leaks when secret or proprietary data is sold to many customers.
In a traitor tracing scheme, each customer is given a different personal decryption key.
(Traitor tracing schemes are often combined with conditional access systems so that, once the traitor tracing algorithm identifies a personal decryption key associated with the leak, the content distributor can revoke that personal decryption key, allowing honest customers to continue to watch pay television while the traitor and all the unauthorized users using the traitor's personal decryption key are cut off.)
Traitor tracing schemes are used in pay television to discourage pirate decryption – to discourage legitimate subscribers from giving away decryption keys.
Traitor tracing schemes are ineffective if the traitor rebroadcasts the entire (decrypted) original content.
There are other kinds of schemes that discourage pirate rebroadcast – i.e., discourage legitimate subscribers from giving away decrypted original content. These other schemes use tamper-resistant digital watermarking to generate different versions of the original content. Traitor tracing key assignment schemes can be translated into such digital watermarking schemes.
Traitor tracing is a copyright infringement detection system which works by tracing the source of leaked files rather than by direct copy protection. The method is that the distributor adds a unique salt to each copy given out. When a copy of it is leaked to the public, the distributor can check the value on it and trace it back to the "leak".
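The salt-based scheme just described can be sketched as follows. The class and method names here are hypothetical, and appending the salt to the content stands in for the robust embedding a real system would use:

```python
import secrets

class Distributor:
    """Issues uniquely marked copies and traces a leak back to its recipient."""

    def __init__(self, content):
        self.content = content
        self.assignments = {}                 # salt -> customer

    def issue_copy(self, customer):
        salt = secrets.token_hex(8).encode()  # unique per-copy mark
        self.assignments[salt] = customer
        return self.content + b"|" + salt     # naive embedding, for illustration only

    def trace(self, leaked_copy):
        salt = leaked_copy.rsplit(b"|", 1)[-1]
        return self.assignments.get(salt)     # None if the mark is unknown

d = Distributor(b"episode-42 master file")
copy_a = d.issue_copy("alice")
copy_b = d.issue_copy("bob")
leaker = d.trace(copy_b)                      # identifies "bob"
```

In practice the per-copy mark must survive re-encoding and deliberate removal attempts, which is what distinguishes real watermarking from this illustrative append.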
Primary methods
Activation controls
The main concept is that each licensee (the user) is given a unique key which unlocks the software or allows the media to be decrypted.
If the key is made public, the content owner then knows exactly who did it from their database of assigned codes.
A major attack on this strategy is the key generator (keygen). By reverse engineering the software, the c
|
https://en.wikipedia.org/wiki/Loss%20of%20heterozygosity
|
Loss of heterozygosity (LOH) is a type of genetic abnormality in diploid organisms in which one copy of an entire gene and its surrounding chromosomal region are lost. Since diploid cells have two copies of their genes, one from each parent, a single copy of the lost gene still remains when this happens, but any heterozygosity (slight differences between the versions of the gene inherited from each parent) is no longer present.
In cancer
The loss of heterozygosity is a common occurrence in cancer development. An originally heterozygous state is required, and the loss then indicates the absence of a functional tumor suppressor gene copy in the region of interest. However, many people remain healthy with such a loss, because there still is one functional gene left on the other chromosome of the chromosome pair. The remaining copy of the tumor suppressor gene can be inactivated by a point mutation or via other mechanisms, resulting in a loss of heterozygosity event and leaving no tumor suppressor gene to protect the body. Loss of heterozygosity does not imply a homozygous state (which would require the presence of two identical alleles in the cell).
Knudson two-hit hypothesis of tumorigenesis
First Hit: The first hit is classically thought of as a point mutation, but generally arises due to epigenetic events which inactivate one copy of a tumor suppressor gene (TSG), such as Rb1. In hereditary cancer syndromes, individuals are born with the first hit. The individual does not develop cancer at this point because the remaining TSG allele on the other locus is still functioning normally.
Second Hit: While the second hit is commonly assumed to be a deletion that results in loss of the remaining functioning TSG allele, the original published mechanism of RB1 LOH was mitotic recombination/gene conversion/copy-neutral LOH, not deletion. There is a critical difference between deletion and CN-LOH, as the latter mechanism cannot be detected by comparative genomic hybridization (CGH)-based
|
https://en.wikipedia.org/wiki/Smart%20antenna
|
Smart antennas (also known as adaptive array antennas, digital antenna arrays, multiple antennas and, recently, MIMO) are antenna arrays with smart signal processing algorithms used to identify spatial signal signatures such as the direction of arrival (DOA) of the signal, and use them to calculate beamforming vectors which are used to track and locate the antenna beam on the mobile/target. Smart antennas should not be confused with reconfigurable antennas, which have similar capabilities but are single element antennas and not antenna arrays.
Smart antenna techniques are used notably in acoustic signal processing, track and scan radar, radio astronomy and radio telescopes, and, most notably, in cellular systems such as W-CDMA, UMTS, LTE, and 5G NR.
Smart antennas have many functions: DOA estimation, beamforming, interference nulling, and constant modulus preservation.
Direction of arrival (DOA) estimation
The smart antenna system estimates the direction of arrival of the signal, using techniques such as MUSIC (MUltiple SIgnal Classification), estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, Matrix Pencil method or one of their derivatives. They involve finding a spatial spectrum of the antenna/sensor array, and calculating the DOA from the peaks of this spectrum. These calculations are computationally intensive.
The Matrix Pencil method is particularly efficient for real-time systems and in the presence of correlated sources.
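The subspace idea behind MUSIC can be sketched numerically. The example below assumes a uniform linear array with half-wavelength spacing and a fixed angle grid; these choices, and all variable names, are assumptions for the illustration, not part of any standard API:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=360):
    """MUSIC pseudo-spectrum for a half-wavelength uniform linear array.

    X : (n_antennas, n_snapshots) complex snapshot matrix.
    Returns the angle grid (degrees) and the pseudo-spectrum; DOA estimates
    are the locations of the spectrum's peaks.
    """
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, : n_ant - n_sources]       # noise subspace
    angles = np.linspace(-90.0, 90.0, n_grid)
    spectrum = np.empty(n_grid)
    for i, theta in enumerate(angles):
        # steering vector: phase progression across the array for angle theta
        a = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(np.radians(theta)))
        spectrum[i] = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
    return angles, spectrum

# Simulate two uncorrelated sources at -20 and +30 degrees on an 8-element array.
rng = np.random.default_rng(0)
n_ant, n_snap, true_doas = 8, 200, (-20.0, 30.0)
A = np.stack([np.exp(1j * np.pi * np.arange(n_ant) * np.sin(np.radians(t)))
              for t in true_doas], axis=1)
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((n_ant, n_snap))
           + 1j * rng.standard_normal((n_ant, n_snap)))
angles, spectrum = music_doa(A @ S + N, n_sources=2)

# The two largest local maxima of the pseudo-spectrum estimate the DOAs.
peaks = sorted((i for i in range(1, len(spectrum) - 1)
                if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]),
               key=lambda i: spectrum[i], reverse=True)[:2]
estimates = sorted(angles[i] for i in peaks)
```

The grid search over the pseudo-spectrum is what makes these methods computationally intensive, as noted above.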
Beamforming
Beamforming is the method used to create the radiation pattern of the antenna array by adding constructively the phases of the signals in the direction of the targets/mobiles desired, and nulling the pattern of the targets/mobiles that are undesired/interfering targets.
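The constructive-phasing idea can be sketched with conventional (delay-and-sum) weights. A half-wavelength uniform linear array and unit-gain normalization are assumptions for this example:

```python
import numpy as np

def steering_vector(n_ant, theta_deg):
    # phase progression across a half-wavelength uniform linear array
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(np.radians(theta_deg)))

def delay_and_sum_weights(n_ant, theta_deg):
    # point the main lobe at theta_deg; normalize for unit gain there
    return steering_vector(n_ant, theta_deg) / n_ant

def array_gain(w, theta_deg):
    # magnitude response of the weighted array toward theta_deg
    return abs(np.vdot(w, steering_vector(len(w), theta_deg)))

w = delay_and_sum_weights(8, 25.0)   # steer an 8-element array to +25 degrees
gain_on = array_gain(w, 25.0)        # unity in the steered direction
gain_off = array_gain(w, -40.0)      # attenuated away from it
```

Adaptive beamformers replace these fixed weights with ones updated from the data, which is the FIR-filter formulation described next.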
This can be done with a simple Finite Impulse Response (FIR) tapped delay line filter. The weights of the FIR filter may also be changed adaptively, and used to provide optimal beamforming, in the sense that it reduces the Minimum Mean Square Er
|
https://en.wikipedia.org/wiki/Dingo%20Fence
|
The Dingo Fence or Dog Fence is a pest-exclusion fence in Australia to keep dingoes out of the relatively fertile south-east part of the continent (where they have largely been exterminated) and protect the sheep flocks of southern Queensland. It is one of the longest structures in the world. It stretches from Jimbour on the Darling Downs near Dalby through thousands of kilometres of arid land ending west of Eyre peninsula on cliffs of the Nullarbor Plain above the Great Australian Bight near Nundroo. It has been partly successful, though dingoes can still be found in parts of the southern states. Although the fence has helped reduce losses of sheep to predators, this has been countered by holes in fences found in the 1990s through which dingo offspring have passed and by increased pasture competition from rabbits and kangaroos.
History
The earliest pest exclusion fences in Australia were created to protect small plots of cropland from predation by marsupials. In the 1860s and 1870s, introduced rabbit populations began to spread rapidly across southern Australia. By 1884, a rabbit-proof fence was built. Having been unsuccessful at keeping rabbits out, and more successful at keeping out pigs, kangaroos, emus and brumbies, and as more sheep farms were established, interest in a dingo-proof barrier increased enough that government funds were being used to heighten and expand the fence. In 1930, an estimated 32,000 km of dog netting in Queensland alone was being used on top of rabbit fences. Prior to 1948, the idea of a Dingo Barrier Fence Scheme had not come to fruition as a statewide project for which annual maintenance and repair were kept. Since this time, there have been pushes to move away from a method of barrier-exclusion to complete extinction of the dingo and wild-dog cross-breeds. Poisoning the species with compound 1080 (sodium monofluoroacetate) baits has been seen as a much cheaper alternative than fence maintenance. A compromise in the
|
https://en.wikipedia.org/wiki/Chemotroph
|
A chemotroph is an organism that obtains energy by the oxidation of electron donors in their environments. These molecules can be organic (chemoorganotrophs) or inorganic (chemolithotrophs). The chemotroph designation is in contrast to phototrophs, which use photons. Chemotrophs can be either autotrophic or heterotrophic. Chemotrophs can be found in areas where electron donors are present in high concentration, for instance around hydrothermal vents.
Chemoautotroph
Chemoautotrophs, in addition to deriving energy from chemical reactions, synthesize all necessary organic compounds from carbon dioxide. Chemoautotrophs can use inorganic energy sources such as hydrogen sulfide, elemental sulfur, ferrous iron, molecular hydrogen, and ammonia or organic sources to produce energy. Most chemoautotrophs are extremophiles, bacteria or archaea that live in hostile environments (such as deep sea vents) and are the primary producers in such ecosystems. Chemoautotrophs generally fall into several groups: methanogens, sulfur oxidizers and reducers, nitrifiers, anammox bacteria, and thermoacidophiles. An example of one of these prokaryotes would be Sulfolobus. Chemolithotrophic growth can be dramatically fast, such as Hydrogenovibrio crunogenus with a doubling time around one hour.
The term "chemosynthesis", coined in 1897 by Wilhelm Pfeffer, originally was defined as the energy production by oxidation of inorganic substances in association with autotrophy—what would be named today as chemolithoautotrophy. Later, the term would include also the chemoorganoautotrophy, that is, it can be seen as a synonym of chemoautotrophy.
Chemoheterotroph
Chemoheterotrophs (or chemotrophic heterotrophs) are unable to fix carbon to form their own organic compounds. Chemoheterotrophs can be chemolithoheterotrophs, utilizing inorganic electron sources such as sulfur, or, much more commonly, chemoorganoheterotrophs, utilizing organic electron sources such as carbohydrates, lipids, and proteins.
|
https://en.wikipedia.org/wiki/Extreme%20Engineering
|
Extreme Engineering is a documentary television series that aired on the Discovery Channel and the Science Channel. The program featured futuristic and ongoing engineering projects. After season 3 ended, it aired under the Build It Bigger name. The series' last season aired in July 2011. Danny Forster first hosted the series in season 4 and has been the host since season 6.
Origins of the show
Engineering the Impossible was a 2-hour special, created and written by Alan Lindgren and produced by Powderhouse Productions for the Discovery Channel. It focused on three incredible, yet physically possible, engineering projects: the Gibraltar Bridge, the 170-story Millennium Tower and the over Freedom Ship. This program won the Beijing International Science Film Festival Silver Award, and earned Discovery's second-highest weeknight rating for 2002. After the success of this program, Discovery commissioned Powderhouse to produce the first season of the 10-part series, Extreme Engineering, whose episodes were written by Alan Lindgren, Ed Fields and several other Powderhouse writer-producers. Like Engineering the Impossible, the first season of Extreme Engineering focused on extreme projects of the future. Season 2 (and all seasons since) featured projects already in construction around the world.
Episodes
Series overview
Pilot
Season one episode 4, Icarus' Dream. Master's of Engineering. Amazon Prime Television.
Season 1: 2003
Season 2: 2004
Season 2 was the first season produced in HDTV for HD Theater.
Season 3: 2005–06
Season 4: 2006
Powderhouse Productions produced six episodes for season 4 with host Danny Forster. After season 3 ended, it aired under the Build It Bigger name on HD Theater, the Science Channel, and the Discovery Channel.
Season 5: 2006
Season 6: 2007
Season 7: 2009
Season 8: 2010
Season 9: 2011
See also
Mega Builders
Megastructures, a similar show on the National Geographic Channel
Impossible Engineering
References
Extern
|
https://en.wikipedia.org/wiki/Linkage%20principle
|
The linkage principle is a finding of auction theory. It states that auction houses have an incentive to pre-commit to revealing all available information about each lot, positive or negative. The linkage principle is seen in the art market with the tradition of auctioneers hiring art experts to examine each lot and pre-commit to provide a truthful estimate of its value.
The discovery of the linkage principle was most useful in determining optimal strategy for countries in the process of auctioning off drilling rights (as well as other natural resources, such as logging rights in Canada). An independent assessment of the land in question is now a standard feature of most auctions, even if the seller country may believe that the assessment is likely to lower the value of the land rather than confirm or raise a pre-existing valuation.
Failure to reveal information leads to the winning bidder incurring the discovery costs himself and lowering his maximum bid due to the expenses incurred in acquiring information. If he is not able to get an independent assessment, then his bids will take into account the possibility of downside risk. Both scenarios can be shown to lower the expected revenue of the seller. The expected sale price is raised by lowering these discovery costs of the winning bidder, and instead providing information to all bidders for free.
Use in FCC auction
Speaking of FCC spectrum auctions, Evan Kwerel said, "In the end, the FCC chose an ascending bid mechanism, largely because we believed that providing bidders with more information would likely increase efficiency and, as shown by Paul Milgrom and Robert J. Weber, mitigate the winner's curse." (Kwerel, 2004, p. xvii)
The result alluded to by Kwerel is known as the linkage principle and was developed by Milgrom and Weber (1982). Milgrom (2004) recasts the linkage principle as the 'publicity effect.' It provided a theoretical foundation for the intuition driving the major design choice by the FCC betwe
|
https://en.wikipedia.org/wiki/Two-level%20game%20theory
|
Two-level game theory is a political model, derived from game theory, that illustrates the domestic-international interactions between states. It was originally introduced in 1988 by Robert D. Putnam in his publication "Diplomacy and Domestic Politics: The Logic of Two-Level Games".
Putnam had been involved in research around the G7 summits between 1976 and 1979. However, at the fourth summit, held in Bonn in 1978, he observed a qualitative shift in how the negotiations worked. He noted that attending countries agreed to adopt policies in contrast to what they might have adopted in the absence of their international counterparts. However, the agreement was only viable due to strong domestic influence, within each participating government, in favour of implementing the agreement internationally. This culminated in international policy co-ordination as a result of the entanglement of international and domestic agendas.
The Model
The model views international negotiations between states as consisting of simultaneous negotiations at two levels.
Level 1: The international level (between governments), and
Level 2: The intranational level (domestic).
At the international level, the national government (i.e., chief negotiator) seeks an agreement, with an opposing country, relating to topics of concern. At the domestic level, societal actors pressure the chief negotiator for favourable policies. The chief negotiator absorbs the concern of societal actors and builds coalitions with them. Simultaneously, the chief negotiator then seeks to maximise the domestic concerns, yet minimise the impact of any contrary views from the opposing country.
Win-Sets
At the international level, countries will approach negotiations with a defined set of objectives. It is expected that chief negotiators of both states arrive at a range of outcomes where their objectives overlap. However, before committing to this, the chief negotiator must seek approval from domestic actors. This ratificat
|
https://en.wikipedia.org/wiki/Combo%20television%20unit
|
A combo television unit, or a TV/VCR combo, sometimes known as a televideo, is a television with a VCR, DVD player, or sometimes both, built into a single unit. These converged devices have the advantages (compared to a separate TV and VCR) of saving space and increasing portability. Such units entered the market during the mid-to-late 1980s when VCRs had become ubiquitous household devices. By this time, the VHS format had become standard; thus the vast majority of TV/VCR combos are VHS-based.
Most combo units have composite inputs on the front and/or back to connect a home video game console, a second VCR, a DVD player, or a camcorder. Some units may also include a headphone jack or S-Video inputs.
Nearly all TV/VCR combination sets have monaural (mono) sound with stereo soundtrack compatibility, though a large number of TV/VCR combos pair a stereo TV tuner with a mono VCR (some may even include a mono sound input alongside a composite video input). Some models from Panasonic also included an FM tuner.
When DVDs were released, brands such as Toshiba introduced a TV/DVD/VCR combo. However, many of these units turned out to have unreliable DVD players and never had a large place in the TV market.
Modern televisions tend to be mostly composed of solid state components, while VCRs require physical movement both for the inner workings of the VCR and for the actual viewing of a VHS tape. VCRs also tend to require occasional servicing, including cleaning the heads, capstan, and pinch rollers. For this reason, it is not uncommon for the included VCR to cease functioning or to become unreliable years before a similar fate befalls the television component, due to the lack of easy maintenance.
As late as 2006, flat-panel TVs with integrated DVD players appeared on the market, and integrated TV/DVD sets started overtaking the TV/VCR market. This is due to both the low price and overwhelming availability of DVDs and more compact form f
|
https://en.wikipedia.org/wiki/Borel%E2%80%93Carath%C3%A9odory%20theorem
|
In mathematics, the Borel–Carathéodory theorem in complex analysis shows that an analytic function may be bounded by its real part. It is an application of the maximum modulus principle. It is named for Émile Borel and Constantin Carathéodory.
Statement of the theorem
Let a function f be analytic on a closed disc of radius R centered at the origin. Suppose that r < R. Then, we have the following inequality:

\|f\|_r \le \frac{2r}{R-r} \sup_{|z| \le R} \operatorname{Re} f(z) + \frac{R+r}{R-r} |f(0)|.

Here, the norm on the left-hand side denotes the maximum value of f in the closed disc:

\|f\|_r = \max_{|z| \le r} |f(z)| = \max_{|z| = r} |f(z)|

(where the last equality is due to the maximum modulus principle).
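As a numerical sanity check, the Borel–Carathéodory bound \|f\|_r \le \frac{2r}{R-r} \sup_{|z| \le R} \operatorname{Re} f(z) + \frac{R+r}{R-r}|f(0)| can be tested for an entire function such as f(z) = e^z. This is only an illustrative sketch: the boundary-grid maxima approximate the true suprema.

```python
import numpy as np

def max_on_circle(f, radius, n=4000):
    # approximate max |f| on the circle |z| = radius by sampling
    z = radius * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return np.abs(f(z)).max()

def sup_re_on_disc(f, radius, n=4000):
    # Re f is harmonic, so its supremum over the closed disc is attained
    # on the boundary circle |z| = radius
    z = radius * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return f(z).real.max()

f = np.exp                                # entire, so any R is admissible
R, r = 2.0, 1.0
lhs = max_on_circle(f, r)                 # equals max over the closed disc |z| <= r
rhs = (2 * r / (R - r)) * sup_re_on_disc(f, R) + ((R + r) / (R - r)) * abs(f(0))
```

For this choice lhs ≈ e while rhs ≈ 2e² + 3, so the inequality holds with room to spare.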
Proof
Define A by

A = \sup_{|z| \le R} \operatorname{Re} f(z).

If f is constant c, the inequality follows from the fact that \operatorname{Re} c \ge -|c|, so we may assume f is nonconstant. First let f(0) = 0. Since Re f is harmonic, Re f(0) is equal to the average of its values around any circle centered at 0. That is,

\operatorname{Re} f(0) = \frac{1}{2\pi} \int_0^{2\pi} \operatorname{Re} f(R e^{i\theta}) \, d\theta.

Since f is regular and nonconstant, we have that Re f is also nonconstant. Since Re f(0) = 0, we must have Re f(z) > 0 for some z on the circle |z| = R, so we may take A > 0. Now f maps into the half-plane P to the left of the x = A line. Roughly, our goal is to map this half-plane to a disk, apply Schwarz's lemma there, and make out the stated inequality.

The map w \mapsto w - A sends P to the standard left half-plane, and w \mapsto R(w + A)/(w - A) sends the left half-plane to the disc of radius R centered at the origin. The composite, which maps 0 to 0, is the desired map:

w \mapsto \frac{Rw}{w - 2A}.

From Schwarz's lemma applied to the composite of this map and f, we have

\left| \frac{R f(z)}{f(z) - 2A} \right| \le |z|.

Take |z| ≤ r. The above becomes

R |f(z)| \le r |f(z) - 2A| \le r |f(z)| + 2rA,

so

|f(z)| \le \frac{2rA}{R - r},

as claimed. In the general case, we may apply the above to f(z) − f(0):

|f(z)| - |f(0)| \le |f(z) - f(0)| \le \frac{2r}{R-r} \sup_{|w| \le R} \operatorname{Re}\bigl(f(w) - f(0)\bigr) \le \frac{2r}{R-r} \bigl(A + |f(0)|\bigr),

which, when rearranged, gives the claim.
Alternative result and proof
We start with the following result:
Applications
Borel–Carathéodory is often used to bound the logarithm of derivatives, such as in the proof of Hadamard factorization theorem.
The following example is a strengthening of Liouville's theorem.
References
Sources
Lang, Serge (1999). Complex Analysis (4th ed.). New York: Springer-Verlag.
Titchmarsh, E. C. (1938). The theory of functions. Oxford Univer
|
https://en.wikipedia.org/wiki/Tandem%20repeat%20locus
|
Variable number of tandem repeat locus (VNTR locus) is any DNA sequence that exists in multiple copies strung together in a variety of tandem lengths. The number of repeat copies present at a locus can be visualized by means of a Multi-locus or Multiple Loci VNTR Analysis (MLVA). In short, oligonucleotide primers are developed for each specific tandem repeat locus, followed by PCR and agarose gel electrophoresis. When the length of the repeat unit and the sizes of the flanking regions are known, the number of repeats can be calculated. Analysis of multiple loci will result in a genotype.
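The copy-number calculation above is simple arithmetic on the PCR product size. The function and the example figures below are hypothetical, chosen only to illustrate the formula:

```python
def vntr_copy_number(amplicon_bp, flanking_bp, repeat_bp):
    """Number of tandem-repeat copies implied by a PCR product size.

    amplicon_bp : total length of the PCR product in base pairs
    flanking_bp : combined length of the flanking regions spanned by the primers
    repeat_bp   : length of a single repeat unit
    """
    return (amplicon_bp - flanking_bp) // repeat_bp

# e.g. a 290 bp amplicon with 50 bp of flanking sequence and a 24 bp repeat
# unit implies (290 - 50) / 24 = 10 repeat copies
copies = vntr_copy_number(290, 50, 24)
```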
References
Genetics
|
https://en.wikipedia.org/wiki/Sasa%20%28video%20game%29
|
Sasa is an arcade video game released for the MSX1 in 1984, and later for the Family Computer as in 1985.
This video game involved obtaining capsules with an 'E' on them, sometimes suspended by balloons. The main character could only use bullets to propel himself, and when the bullet count reaches 0, the game ends. A player can also lose bullets by colliding with an enemy, the other player, or the other player's bullets.
References
External links
1984 video games
ASCII Corporation games
Japan-exclusive video games
MSX games
Nintendo Entertainment System games
Scrolling shooters
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Richard%20Friederich%20Arens
|
Richard Friederich Arens (24 April 1919 – 3 May 2000) was an American mathematician. He was born in Iserlohn, Germany. He emigrated to the United States in 1925.
Arens received his Ph.D. in 1945 from Harvard University. He was a visiting scholar at the Institute for Advanced Study several times (1945–46, 1946–47, and 1953–54). He was an Invited Speaker at the ICM in 1950 in Cambridge, Massachusetts.
Arens worked in functional analysis, and was a professor at UCLA for more than 40 years. He served on the editorial board of the Pacific Journal of Mathematics for 14 years (1965–1979). There are three topological spaces named for Arens in the book Counterexamples in Topology, including Arens–Fort space.
Arens died in Los Angeles, California.
See also
Arens square
Mackey–Arens theorem
References
External links
Richard Friederich Arens at the Mathematics Genealogy Project
Obituary (PDF) from the Pacific Journal of Mathematics
20th-century American mathematicians
Mathematical analysts
Harvard University alumni
Institute for Advanced Study visiting scholars
Functional analysts
1919 births
2000 deaths
Emigrants from the Weimar Republic to the United States
20th-century German mathematicians
People from Iserlohn
Putnam Fellows
|
https://en.wikipedia.org/wiki/Inner%20class
|
In object-oriented programming (OOP), an inner class or nested class is a class declared entirely within the body of another class or interface. It is distinguished from a subclass.
Overview
An instance of a normal or top-level class can exist on its own. By contrast, an instance of an inner class cannot be instantiated without being bound to a top-level class.
Let us take the abstract notion of a Car with four Wheels. Our Wheels have a specific feature that relies on being part of our Car. This notion does not represent the Wheels as Wheels in a more general form that could be part of any vehicle. Instead, it represents them as specific to a Car. We can model this notion using inner classes as follows:
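Java is the canonical setting for inner classes, but the Car/Wheel composition can be sketched in Python, whose nested classes lack Java's implicit binding to an outer instance; the explicit car reference below stands in for that binding:

```python
class Car:
    """Top-level class composed of four Wheel instances."""

    class Wheel:
        # Nested class, semantically tied to Car. Unlike a Java inner class,
        # a Python nested class is not bound to an outer instance
        # automatically, so the link is passed explicitly here.
        def __init__(self, car):
            self.car = car                    # the Car this Wheel belongs to

    def __init__(self):
        self.wheels = [Car.Wheel(self) for _ in range(4)]

car = Car()
```

Note that the nested class is referred to as Car.Wheel from outside, mirroring the qualified naming discussed below.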
We have the top-level class Car. Instances of class Car are composed of four instances of the class Wheel. This particular implementation of Wheel is specific to a car, so the code does not model the general notion of a wheel that would be better represented as a top-level class. Therefore, it is semantically connected to the class Car, and the code of Wheel is in some way coupled to its outer class, being a composition unit of a car. The wheel of a particular car is unique to that car, but in a generalized model the wheel would be an aggregation unit of the car.
Inner classes provide a mechanism to accurately model this connection. We can refer to our Wheel class as Car.Wheel, Car being the top-level class and Wheel being the inner class.
Inner classes therefore allow for the object orientation of certain parts of the program that would otherwise not be encapsulated into a class.
Larger segments of code within a class might be better modeled or refactored as a separate top-level class, rather than an inner class. This would make the code more general in its application and therefore more reusable, but potentially might be premature generalization. This approach may prove more effective if the code has many inner classes with shared functionality.
Types of nested classes in Ja
|