https://en.wikipedia.org/wiki/Thromboangiitis%20obliterans
|
Thromboangiitis obliterans, also known as Buerger disease or Winiwarter-Buerger disease, is a recurring progressive inflammation and thrombosis (clotting) of small and medium arteries and veins of the hands and feet. It is strongly associated with use of tobacco products, primarily from smoking, but is also associated with smokeless tobacco.
Signs and symptoms
There is a recurrent acute and chronic inflammation and thrombosis of arteries and veins of the hands and feet. The main symptom is pain in the affected areas, at rest and while walking (claudication). The impaired circulation increases sensitivity to cold. Peripheral pulses are diminished or absent. There are color changes in the extremities. The color may range from cyanotic blue to reddish blue. Skin becomes thin and shiny. Hair growth is reduced. Ulcerations and gangrene in the extremities are common complications, often resulting in the need for amputation of the involved extremity.
Pathophysiology
There are characteristic pathologic findings of acute inflammation and thrombosis (clotting) of arteries and veins of the hands and feet (the lower limbs being more common). The mechanisms underlying Buerger's disease are still largely unknown, but smoking and tobacco consumption are major factors associated with it. It has been suggested that the tobacco may trigger an immune response in susceptible persons or it may unmask a clotting defect, either of which could incite an inflammatory reaction of the vessel wall. This eventually leads to vasculitis and ischemic changes in distal parts of limbs.
A possible role for Rickettsia in this disease has been proposed.
Diagnosis
A concrete diagnosis of thromboangiitis obliterans is often difficult as it relies heavily on exclusion of other conditions. The commonly followed diagnostic criteria are outlined below although the criteria tend to differ slightly from author to author. Olin (2000) proposes the following criteria:
Typically between 20 and
|
https://en.wikipedia.org/wiki/Hierarchical%20INTegration
|
Hierarchical INTegration, or HINT for short, is a computer benchmark that ranks a computer system as a whole (i.e. the entire computer instead of individual components). It measures the full range of performance, mostly based on the amount of work a computer can perform over time. A system with a very fast processor would likely be rated poorly if the buses were very poor compared to those of another system that had both an average processor and average buses. For example, in the past, Macintosh computers with relatively slow processor speeds (800 MHz) used to perform better than x86 based systems with processors running at nearly 2 GHz.
HINT is known for being almost immune to artificial optimization and can be run on machines ranging from a calculator to a supercomputer. It was developed at the U.S. Department of Energy's Ames Laboratory and is licensed under the terms of the GNU General Public License.
HINT is intended to be "scalable" to run on any size computer, from small serial systems to highly parallel supercomputers.
The person using the HINT benchmark can use any floating-point or integer type.
HINT benchmark results have been published comparing a variety of parallel and uniprocessor systems.
A related tool ANALYTIC HINT can be used as a design tool to estimate the benefits of using more memory, a faster processor, or improved communications (bus speed) within the system.
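As a rough illustration of the idea, the following C program mimics the quality-versus-time measurement HINT is usually described as performing: hierarchically tightening upper and lower bounds on the integral of (1 − x)/(1 + x) over [0, 1] and reporting "quality improvements per second" (QUIPS). This is a toy sketch only, not the actual benchmark source.

```c
/* Toy HINT-style measurement: refine interval bounds on
 * integral_0^1 (1 - x)/(1 + x) dx and report quality / elapsed time
 * ("QUIPS").  Illustrative only; not the real benchmark code. */
#include <stdio.h>
#include <time.h>

static double f(double x) { return (1.0 - x) / (1.0 + x); }

int main(void)
{
    clock_t start = clock();

    for (long n = 2; n <= (1L << 20); n *= 2) {   /* double the subdivisions */
        double h = 1.0 / n, lower = 0.0, upper = 0.0;
        for (long i = 0; i < n; i++) {
            /* f is decreasing on [0,1]: the left edge of each strip gives
             * an upper bound, the right edge a lower bound. */
            upper += f(i * h) * h;
            lower += f((i + 1) * h) * h;
        }
        double quality = 1.0 / (upper - lower);   /* tighter bounds = higher quality */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (secs > 0.0)
            printf("n = %8ld  quality = %.3e  QUIPS = %.3e\n",
                   n, quality, quality / secs);
    }
    return 0;
}
```

The real benchmark stores and refines the hierarchical decomposition in memory, so the achieved rate depends on memory and buses as well as the processor, which is the whole-system behaviour described above.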
See also
John Gustafson (scientist)
References
External links
official site
article discussing HINT benchmark
benchmark sources download
Benchmarks (computing)
|
https://en.wikipedia.org/wiki/Game%20Boy%20Sound%20System
|
The Game Boy Sound System (GBS) is a file format containing Nintendo Game Boy sound driver data designed for the Game Boy sound hardware.
Ripping GBS files is an arduous task, often involving debuggers and analysis of compiled assembly code, as there was no uniform sound driver shared across Game Boy games. As a result, GBS files carry the ripped driver code and data, and GBS players emulate just enough of the original hardware to play back the music driver and its data.
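For context, a GBS file starts with a small fixed-size header telling the player where to load the ripped driver and which routines to call. A sketch of the commonly documented 0x70-byte layout is shown below in C; the struct and field names are descriptive choices made for this sketch, not official identifiers.

```c
/* Sketch of the commonly documented GBS header (0x70 bytes).
 * Multi-byte addresses are little-endian, as on the Game Boy.
 * Field names here are descriptive, not part of the specification. */
#include <stdint.h>

#pragma pack(push, 1)              /* widely supported, though not ISO C */
typedef struct {
    char     id[3];                /* "GBS" identifier                    */
    uint8_t  version;              /* format version (1)                  */
    uint8_t  song_count;           /* number of sub-songs                 */
    uint8_t  first_song;           /* 1-based index of the default song   */
    uint16_t load_address;         /* where driver code and data load     */
    uint16_t init_address;         /* called once per song, song no. in A */
    uint16_t play_address;         /* called on each timer/VBlank tick    */
    uint16_t stack_pointer;        /* initial stack pointer for the rip   */
    uint8_t  timer_modulo;         /* TMA value for timer-driven playback */
    uint8_t  timer_control;        /* TAC value; 0 selects VBlank timing  */
    char     title[32];
    char     author[32];
    char     copyright[32];
} gbs_header_t;                    /* followed by the ripped code/data    */
#pragma pack(pop)
```

A player maps the data that follows this header at load_address, calls init_address with the selected song number, then calls play_address at the rate implied by the timer fields, emulating only the sound hardware those routines touch.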
Players
Kobarin Media Player - Includes a front end, supports GBS and many chiptune formats natively, has a Winamp plugin converter.
Audacious - *nix player that accepts GBS.
Chipamp - Winamp plug-in bundle compiled by OverClocked ReMix allowing playback of over 40 chiptune and tracker formats
gbsplay - Open source player (Linux, *nix) (and XMMS plugin in older versions)
NEZPlug++ - Winamp plug-in that currently supports the most up-to-date implementation of the GBS format.
Audio Overload - A media player capable of playing a variety of audio formats from vintage consoles and computers.
Noise Entertainment System - a NSF/e (NES Sound File), GBS, VGM and SPC player for the iPhone and iPod touch.
VLC Media Player
References
Digital audio
Video game music file formats
|
https://en.wikipedia.org/wiki/TURBOchannel
|
TURBOchannel is an open computer bus developed by DEC during the late 1980s and early 1990s. Although it is open for any vendor to implement in their own systems, it was mostly used in Digital's own systems such as the MIPS-based DECstation and DECsystem systems, in the VAXstation 4000, and in the Alpha-based DEC 3000 AXP. Digital abandoned the use of TURBOchannel in favor of the EISA and PCI buses in late 1994, with the introduction of their AlphaStation and AlphaServer systems.
History
TURBOchannel was developed in the late 1980s by Digital and was continuously revised through the early 1990s by the TURBOchannel Industry Group, an industry group set up by Digital to develop and promote the bus. TURBOchannel was an open bus from the beginning: the specification was publicly available for third-party implementation, at an initial purchase cost covering the reproduction of the material, as were the mechanical specifications, for implementation in both systems and options. TURBOchannel was selected by the failed ACE (Advanced Computing Environment) initiative for use as the industry standard bus in ARC (Advanced RISC Computing) compliant machines. Digital initially expected TURBOchannel to gain widespread industry acceptance due to its status as an ARC standard, although ultimately Digital was the only major user of TURBOchannel, in its own DEC 3000 AXP, DECstation 5000 Series, DECsystem and VAXstation 4000 systems. While no third parties implemented TURBOchannel in systems, they did implement numerous TURBOchannel option modules for Digital's systems.
Although the main developer and promoter of TURBOchannel was the TURBOchannel Industry Group, Digital's TRI/ADD Program, an initiative to provide technical and marketing support to third parties implementing peripherals based on open interfaces such as FutureBus+, SCSI, VME and TURBOchannel for Digital's systems, was also involved in promoting TURBOchannel implementation and sales. The TRI/ADD Program was discontinued
|
https://en.wikipedia.org/wiki/Topology%20optimization
|
Topology optimization (TO) is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints with the goal of maximizing the performance of the system. Topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space, instead of dealing with predefined configurations.
The conventional topology optimization formulation uses a finite element method (FEM) to evaluate the design performance. The design is optimized using either gradient-based mathematical programming techniques, such as the optimality criteria algorithm and the method of moving asymptotes, or non-gradient-based algorithms such as genetic algorithms.
Topology optimization has a wide range of applications in aerospace, mechanical, bio-chemical and civil engineering. Currently, engineers mostly use topology optimization at the concept level of a design process. Due to the free forms that naturally occur, the result is often difficult to manufacture. For that reason the result emerging from topology optimization is often fine-tuned for manufacturability. Adding constraints to the formulation in order to increase the manufacturability is an active field of research. In some cases results from topology optimization can be directly manufactured using additive manufacturing; topology optimization is thus a key part of design for additive manufacturing.
Problem statement
A topology optimization problem can be written in the general form of an optimization problem as:
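One common density-based statement of this general form is sketched below; the notation is illustrative, chosen for this sketch rather than taken from a specific reference, with $\rho$ the material distribution, $u$ the state field and $\Omega$ the design domain:

\[
\begin{aligned}
\min_{\rho}\quad & F\bigl(u(\rho), \rho\bigr) = \int_{\Omega} f\bigl(u(\rho), \rho\bigr)\,\mathrm{d}V \\
\text{subject to}\quad & G_0(\rho) = \int_{\Omega} \rho\,\mathrm{d}V - V_0 \le 0, \\
& G_j\bigl(u(\rho), \rho\bigr) \le 0, \qquad j = 1, \dots, m, \\
& \rho(\mathbf{x}) \in \{0, 1\}\ \ \text{or}\ \ 0 \le \rho(\mathbf{x}) \le 1 \quad \forall\, \mathbf{x} \in \Omega,
\end{aligned}
\]

together with a state equation, typically the FEM equilibrium equations $K(\rho)\,u = F_{\text{ext}}$, linking $u$ to $\rho$.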
The problem statement includes the following:
An objective function. This function represents the quantity that is being minimized for best performance. The most common objective function is compliance, where minimizing compliance leads to maximizing the stiffness of a structure.
The material distribution as a problem variable. This is described by the density of the m
|
https://en.wikipedia.org/wiki/Epson
|
Seiko Epson Corporation, commonly known as Epson, is a Japanese multinational electronics company and one of the world's largest manufacturers of printers and information- and imaging-related equipment. Headquartered in Suwa, Nagano, Japan, the company has numerous subsidiaries worldwide and manufactures inkjet, dot matrix, thermal and laser printers for consumer, business and industrial use, scanners, laptop and desktop computers, video projectors, watches, point of sale systems, robots and industrial automation equipment, semiconductor devices, crystal oscillators, sensing systems and other associated electronic components.
The company developed as one of the manufacturing and research and development companies (formerly known as Seikosha) of the former Seiko Group, a name traditionally known for manufacturing Seiko timepieces. Seiko Epson was one of the major companies in the Seiko Group, but is neither a subsidiary nor an affiliate of Seiko Group Corporation.
History
Origins
The roots of Seiko Epson Corporation go back to a company called Daiwa Kogyo, Ltd. which was founded in May 1942 by Hisao Yamazaki, a local clock shop owner and former employee of K. Hattori, in Suwa, Nagano. Daiwa Kogyo was supported by an investment from the Hattori family (founder of the Seiko Group) and began as a manufacturer of watch parts for Daini Seikosha (currently Seiko Instruments). The company started operation in a renovated miso storehouse with 22 employees.
In 1943, Daini Seikosha established a factory in Suwa for manufacturing Seiko watches with Daiwa Kogyo. In 1959, the Suwa Factory was split up and merged into Daiwa Kogyo to form Suwa Seikosha Co., Ltd: the forerunner of the Seiko Epson Corporation. The company has developed many timepiece technologies, such as the world's first portable quartz timer (Seiko QC-951) in 1963, the world's first quartz watch (Seiko Quartz Astron 35SQ) in 1969, the first automatic power-generating quartz watch (Seiko Auto-Quartz) in 1988, and the S
|
https://en.wikipedia.org/wiki/Super%20Harvard%20Architecture%20Single-Chip%20Computer
|
The Super Harvard Architecture Single-Chip Computer (SHARC) is a high performance floating-point and fixed-point DSP from Analog Devices. SHARC is used in a variety of signal processing applications ranging from audio processing, to single-CPU guided artillery shells to 1000-CPU over-the-horizon radar processing computers. The original design dates to about January 1994.
SHARC processors typically provide several serial links to nearby SHARC processors, allowing clusters of them to be used as a low-cost alternative to SMP.
Architecture
The SHARC is a Harvard architecture word-addressed VLIW processor; it knows nothing of 8-bit or 16-bit values since each address is used to point to a whole 32-bit word, not just an octet. It is thus neither little-endian nor big-endian, though a compiler may use either convention if it implements 64-bit data and/or some way to pack multiple 8-bit or 16-bit values into a single 32-bit word. In C, char is 32 bits wide, since the standard requires char to be the smallest addressable unit.
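This word-addressed model is visible directly from C. The sketch below is not vendor code, and the SHARC output described in its comment is an assumption based on the description above; it simply prints the relevant quantities.

```c
/* Probe the addressing model from C.  On a word-addressed SHARC
 * toolchain one would expect CHAR_BIT == 32 and
 * sizeof(char) == sizeof(int) == 1 (each object occupies one 32-bit
 * word); on an ordinary byte-addressed host this prints 8 and 4. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT     = %d\n", CHAR_BIT);
    printf("sizeof(char) = %zu\n", sizeof(char)); /* always 1 by definition */
    printf("sizeof(int)  = %zu\n", sizeof(int));
    return 0;
}
```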
The word size is 48-bit for instructions, 32-bit for integers and normal floating-point, and 40-bit for extended floating-point. Code and data are normally fetched from on-chip memory, which the user must split into regions of different word sizes as desired. Small data types may be stored in wider memory, simply wasting the extra space. A system that does not use 40-bit extended floating-point might divide the on-chip memory into two sections, a 48-bit one for code and a 32-bit one for everything else. Most memory-related CPU instructions can not access all the bits of 48-bit memory, but a special 48-bit register is provided for this purpose. The special 48-bit register may be accessed as a pair of smaller registers, allowing movement to and from the normal registers.
Off-chip memory can be used with the SHARC. This memory can only be configured for one single size. If the off-chip memory is configured as 32-bit words to avoid waste, then only the on-chi
|
https://en.wikipedia.org/wiki/Logic%20synthesis
|
In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in the electronic design automation circuit design flow; the other steps include place and route and verification and validation.
History
The roots of logic synthesis can be traced to the treatment of logic by George Boole (1815 to 1864), in what is now termed Boolean algebra. In 1938, Claude Shannon showed that the two-valued Boolean algebra can describe the operation of switching circuits. In the early days, logic design involved manipulating the truth table representations as Karnaugh maps. The Karnaugh map-based minimization of logic is guided by a set of rules on how entries in the maps can be combined. A human designer can typically only work with Karnaugh maps containing up to four to six variables.
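As a minimal illustration (an example chosen here, not drawn from the article), combining two adjacent Karnaugh-map entries that differ in a single variable eliminates that variable:

\[
f = \bar{a}\,b\,c + a\,b\,c = (\bar{a} + a)\,b\,c = b\,c ,
\]

which is exactly the kind of merging step that the map rules, and later the Quine–McCluskey and Espresso algorithms, perform systematically.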
The first step toward automation of logic minimization was the introduction of the Quine–McCluskey algorithm that could be implemented on a computer. This exact minimization technique presented the notion of prime implicants and minimum cost covers that would become the cornerstone of two-level minimization. Nowadays, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. Another area of early research was in state minimization and encoding of finite-state machines (FSMs), a task that was the bane of designers. The applications for logic synthesis lay primarily in digital computer design. Hence, IBM and Bell Labs played a pivotal role in the early
|
https://en.wikipedia.org/wiki/Magnetic%20helicity
|
In plasma physics, magnetic helicity is a measure of the linkage, twist, and writhe of a magnetic field. In ideal magnetohydrodynamics, magnetic helicity is conserved. When a magnetic field contains magnetic helicity, it tends to form large-scale structures from small-scale ones. This process can be referred to as an inverse transfer in Fourier space.
This second property makes magnetic helicity special: three-dimensional turbulent flows tend to "destroy" structure, in the sense that large-scale vortices break up into smaller and smaller ones (a process called "direct energy cascade", described by Lewis Fry Richardson and Andrey Nikolaevich Kolmogorov). At the smallest scales, the vortices are dissipated in heat through viscous effects. Through a sort of "inverse cascade of magnetic helicity", the opposite happens: small helical structures (with a non-zero magnetic helicity) lead to the formation of large-scale magnetic fields. This is for example visible in the heliospheric current sheet, a large magnetic structure in the Solar System.
Magnetic helicity is of great relevance in several astrophysical systems, where the resistivity is typically very low so that magnetic helicity is conserved to a very good approximation. For example: magnetic helicity dynamics are important in solar flares and coronal mass ejections. Magnetic helicity is present in the solar wind. Its conservation is significant in dynamo processes. It also plays a role in fusion research, for example in reversed field pinch experiments.
Mathematical definition
Generally, the helicity of a smooth vector field $\mathbf{u}$ confined to a volume $V$ is the standard measure of the extent to which the field lines wrap and coil around one another. It is defined as the volume integral over $V$ of the scalar product of $\mathbf{u}$ and its curl, $\nabla \times \mathbf{u}$:
\[ H = \int_V \mathbf{u} \cdot \left(\nabla \times \mathbf{u}\right) \mathrm{d}V. \]
Magnetic helicity
Magnetic helicity is the helicity of a magnetic vector potential $\mathbf{A}$, where $\mathbf{B} = \nabla \times \mathbf{A}$ is the associated magnetic field confined to a volume $V$. Magnetic helicity can then be expressed as
\[ H_{\mathrm{m}} = \int_V \mathbf{A} \cdot \mathbf{B} \,\mathrm{d}V. \]
|
https://en.wikipedia.org/wiki/Immunoisolate
|
In general, immunoisolation is the process of protecting implanted material such as biopolymers, cells, or drug release carriers from an immune reaction. The most prominent means of accomplishing this is through the use of cell encapsulation.
References
Immunology
|
https://en.wikipedia.org/wiki/Motility
|
Motility is the ability of an organism to move independently, using metabolic energy.
Definitions
Motility, the ability of an organism to move independently, using metabolic energy, can be contrasted with sessility, the state of organisms that do not possess a means of self-locomotion and are normally immobile.
Motility differs from mobility, the ability of an object to be moved.
The term vagility encompasses both motility and mobility; sessile organisms including plants and fungi often have vagile parts such as fruits, seeds, or spores which may be dispersed by other agents such as wind, water, or other organisms.
Motility is genetically determined, but may be affected by environmental factors such as toxins. The nervous system and musculoskeletal system provide the majority of mammalian motility.
In addition to animal locomotion, most animals are motile, though some are vagile, described as having passive locomotion. Many bacteria and other microorganisms, and multicellular organisms are motile; some mechanisms of fluid flow in multicellular organs and tissue are also considered instances of motility, as with gastrointestinal motility. Motile marine animals are commonly called free-swimming, and motile non-parasitic organisms are called free-living.
Motility includes an organism's ability to move food through its digestive tract. There are two types of intestinal motility – peristalsis and segmentation. This motility is brought about by the contraction of smooth muscles in the gastrointestinal tract which mix the luminal contents with various secretions (segmentation) and move contents through the digestive tract from the mouth to the anus (peristalsis).
Cellular level
At the cellular level, different modes of movement exist:
amoeboid movement, a crawling-like movement, which also makes swimming possible
filopodia, enabling movement of the axonal growth cone
flagellar motility, a swimming-like motion (observed for example in spermatozoa, propelled by
|
https://en.wikipedia.org/wiki/GSC%20bus
|
GSC is a bus used in many of the HP 9000 workstations and servers. The acronym has various explanations, including Gecko System Connect (Gecko being the codename of the 712 workstation), Gonzo System Connect and General System Connect.
GSC was a general 32-bit I/O bus, similar to NuBus or Sun's SBus, although it was also used as a processor bus with the PA-7100LC and PA-7300LC processors. Several variations were produced over time, the later ones running at 40 MHz:
GSC-1X: The original GSC bus implemented on PCX-L and used in the Gecko (712), Mirage (715) and Electra computers. Peak bandwidth 142 MB/s with DMA, 106 MB/s with PIO writes.
GSC+, a.k.a. "Extended GSC" or "EGSC": Enhancements added for KittyHawk/SkyHawk (U2 chip) that allow for pending transactions. GSC+ enhancements are orthogonal to the GSC-1.5X and GSC-2X enhancements.
GSC-1.5X: GSC-1X with an additional variable-length write transaction.
GSC-2X: GSC-1.5X with a protocol enhancement to allow data to be sent at double the GCLK rate, with a peak bandwidth of 256 MB/s.
HSC: High Speed Connect.
Four types of GSC cards were produced: the GIO cards fit into the larger of the two IO card sockets in the 712 workstation. Several were produced, including a second RS-232 serial port, a serial/10BaseT combo, a second graphics card and a Token Ring card.
The 715/Mirage, 725, 735, 755, B-, C-, and J-class workstations, and the D- and R-class servers, used the so-called "EISA form factor". Many different types of card were produced, including Gigabit Ethernet, single and dual 100Mbit Ethernet, Ultra-2 SCSI, ATM and graphics.
The K- and T-class servers both used the "3x5" form factor, although the different brackets prevent the cards being interchangeable. Fibre channel and Gigabit Ethernet cards both exist.
See also
List of device bandwidths
References
PA-RISC LINUX - Glossary
Computer buses
Hewlett-Packard products
|
https://en.wikipedia.org/wiki/HIL%20bus
|
The HP-HIL (Hewlett-Packard Human Interface Link) is the name of a computer bus used by Hewlett-Packard to connect keyboards, mice, trackballs, digitizers, tablets, barcode readers, rotary knobs, touchscreens, and other human interface peripherals to their HP 9000 workstations. The bus was in use until the mid-1990s, when HP substituted PS/2 technology for HIL. The PS/2 peripherals were themselves replaced with USB-connected models.
The HIL bus is a daisy-chain of up to 7 devices, running at a raw clock speed of 8 MHz. Each HIL device typically has an output connector, and an input connector to which the next device in the chain plugs; the exception is the mouse which has only the output connector.
HIL buses can be found on HP PA-RISC and m68k based machines, some early HP Vectra computers, as well as in some HP/Agilent Logic Analyzers. HP-UX, OpenBSD, Linux and NetBSD include drivers for the HIL bus and HIL devices.
The HP-HIL bus uses specific 4-pin, 6-pin, or 8-pin SDL connectors, somewhat similar to the 8P8C 8-pin modular connector commonly (though incorrectly) called the RJ-45. The bus can reportedly also use a 9-pin D-subminiature DE-9 connector.
A HIL to PS/2 converter is available, namely the HP A4220-62001.
Specification
HP-HIL Technical Reference Manual, HP P/N 45918A
External links
HP HIL Linux driver suite
Vectra RS/20 with HIL
Serial buses
Hewlett-Packard products
|
https://en.wikipedia.org/wiki/Cabibbo%E2%80%93Kobayashi%E2%80%93Maskawa%20matrix
|
In the Standard Model of particle physics, the Cabibbo–Kobayashi–Maskawa matrix, CKM matrix, quark mixing matrix, or KM matrix is a unitary matrix which contains information on the strength of the flavour-changing weak interaction. Technically, it specifies the mismatch of quantum states of quarks when they propagate freely and when they take part in the weak interactions. It is important in the understanding of CP violation. This matrix was introduced for three generations of quarks by Makoto Kobayashi and Toshihide Maskawa, adding one generation to the matrix previously introduced by Nicola Cabibbo. This matrix is also an extension of the GIM mechanism, which only includes two of the three current families of quarks.
The matrix
Predecessor – the Cabibbo matrix
In 1963, Nicola Cabibbo introduced the Cabibbo angle ($\theta_\mathrm{c}$) to preserve the universality of the weak interaction.
Cabibbo was inspired by previous work by Murray Gell-Mann and Maurice Lévy,
on the effectively rotated nonstrange and strange vector and axial weak currents, which he references.
In light of current concepts (quarks had not yet been proposed), the Cabibbo angle is related to the relative probability that down and strange quarks decay into up quarks ($|V_{ud}|^2$ and $|V_{us}|^2$, respectively). In particle physics jargon, the object that couples to the up quark via charged-current weak interaction is a superposition of down-type quarks, here denoted by $d'$.
Mathematically this is:
\[ d' = V_{ud}\,d + V_{us}\,s, \]
or using the Cabibbo angle:
\[ d' = \cos\theta_\mathrm{c}\,d + \sin\theta_\mathrm{c}\,s. \]
Using the currently accepted values for $|V_{ud}|$ and $|V_{us}|$ (see below), the Cabibbo angle can be calculated using
\[ \tan\theta_\mathrm{c} = \frac{|V_{us}|}{|V_{ud}|}. \]
When the charm quark was discovered in 1974, it was noticed that the down and strange quark could decay into either the up or charm quark, leading to two sets of equations:
\[ d' = V_{ud}\,d + V_{us}\,s, \qquad s' = V_{cd}\,d + V_{cs}\,s, \]
or using the Cabibbo angle:
\[ d' = \cos\theta_\mathrm{c}\,d + \sin\theta_\mathrm{c}\,s, \qquad s' = -\sin\theta_\mathrm{c}\,d + \cos\theta_\mathrm{c}\,s. \]
This can also be written in matrix notation as:
\[ \begin{pmatrix} d' \\ s' \end{pmatrix} = \begin{pmatrix} V_{ud} & V_{us} \\ V_{cd} & V_{cs} \end{pmatrix} \begin{pmatrix} d \\ s \end{pmatrix}, \]
or using the Cabibbo angle
\[ \begin{pmatrix} d' \\ s' \end{pmatrix} = \begin{pmatrix} \cos\theta_\mathrm{c} & \sin\theta_\mathrm{c} \\ -\sin\theta_\mathrm{c} & \cos\theta_\mathrm{c} \end{pmatrix} \begin{pmatrix} d \\ s \end{pmatrix}, \]
where the various $|V_{ij}|^2$ represent the probability that the quark of flavor $j$ decays into a quark of flavor $i$. This 2×2 r
|
https://en.wikipedia.org/wiki/BEEP
|
The Blocks Extensible Exchange Protocol (BEEP) is a framework for creating network application protocols. BEEP includes building blocks like framing, pipelining, multiplexing, reporting and authentication for connection and message-oriented peer-to-peer (P2P) protocols with support of asynchronous full-duplex communication.
Message syntax and semantics are defined with BEEP profiles associated with one or more BEEP channels, where each channel is a full-duplex pipe. A framing mechanism enables simultaneous and independent communication between peers.
BEEP is defined independently of the underlying transport mechanism. The mapping of BEEP onto a particular transport service is defined in a separate series of documents.
Overview
Profiles, channels and a framing mechanism are used in BEEP to exchange different kinds of messages. Only content type and encoding are defaulted by the specification leaving the full flexibility of using a binary or textual format open to the protocol designer. Profiles define the functionality of the protocol and the message syntax and semantics. Channels are full-duplex pipes connected to a particular profile. Messages sent through different channels are independent from each other (asynchronous). Multiple channels can use the same profile through one connection.
BEEP also includes TLS for encryption and SASL for authentication.
History
In 1998 Marshall T. Rose, who also worked on the POP3, SMTP, and SNMP protocols, designed the BXXP protocol and subsequently handed it over to the Internet Engineering Task Force (IETF) workgroup in summer 2000. In 2001 the IETF published BEEP and BEEP on TCP with some enhancements to BXXP. The three most notable are:
Using application/octet-stream as the default "Content-Type"
Support multi-reply for messages
Changing the name from BXXP to BEEP
BEEP session
To start a BEEP session, an initiating peer connects to the listening peer. Each peer sends a reply containing a greeting elemen
|
https://en.wikipedia.org/wiki/Gravimetry
|
Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest.
Units of measurement
Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is 1 metre per second squared (abbreviated as m/s²). Other units include the cgs gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s². The value of gn is defined as approximately equal to the acceleration due to gravity at the Earth's surface (although the actual acceleration of gravity, g, varies by location).
Gravimeters
An instrument used to measure gravity is known as a gravimeter. For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton), but is more commonly made in units of gals or cm/s2.
Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to define gravity anomalies.
Besides precision, stability is also an important property of a gravimeter, as it allows the monitor
|
https://en.wikipedia.org/wiki/Singulation
|
Singulation is a method by which an RFID reader identifies a tag with a specific serial number from a number of tags in its field. This is necessary because if multiple tags respond simultaneously to a query, they will jam each other. In a typical commercial application, such as scanning a bag of groceries, potentially hundreds of tags might be within range of the reader.
When all the tags cooperate with the tag reader and follow the same anti-collision protocol, also called singulation protocol,
then the tag reader can read data from each and every tag without interference from the other tags.
Collision avoidance
Generally, a collision occurs when two entities require the same resource; for example, two ships with crossing courses in a narrows. In wireless technology, a collision occurs when two transmitters transmit at the same time with the same modulation scheme on the same frequency. In RFID technology, various strategies have been developed to overcome this situation.
Tree walking
There are different methods of singulation, but the most common is tree walking, which involves asking all tags with a serial number that starts with either a 1 or 0 to respond. If more than one responds, the reader might ask for all tags with a serial number that starts with 01 to respond, and then 010. It keeps doing this until it finds the tag it is looking for. Note that if the reader has some idea of what tags it wishes to interrogate, it can considerably optimise the search order. For example, with some tag designs, if a reader already suspects certain tags to be present, it can instruct those tags to remain silent, and tree walking can then proceed without interference from them.
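The walk can be pictured as a depth-first search over ID prefixes. The C sketch below simulates it for a handful of hypothetical 8-bit tag IDs; the IDs, bit width and function names are invented for illustration and do not follow any particular RFID standard.

```c
/* Toy simulation of binary tree-walking singulation. */
#include <stdint.h>
#include <stdio.h>

#define ID_BITS 8

static const uint8_t tags[] = { 0x2C, 0x2F, 0xA1 };   /* tags in the field */
static const int num_tags = sizeof tags / sizeof tags[0];

/* How many tags would answer a query for this prefix? */
static int responders(uint8_t prefix, int bits)
{
    int n = 0;
    for (int i = 0; i < num_tags; i++)
        if ((uint8_t)(tags[i] >> (ID_BITS - bits)) == prefix)
            n++;
    return n;
}

static int find_match(uint8_t prefix, int bits)
{
    for (int i = 0; i < num_tags; i++)
        if ((uint8_t)(tags[i] >> (ID_BITS - bits)) == prefix)
            return i;
    return -1;
}

/* Ask for the prefix; on a collision, extend it by one bit and recurse. */
static void walk(uint8_t prefix, int bits)
{
    int n = responders(prefix, bits);
    if (n == 0)
        return;                                   /* silence: empty branch   */
    if (n == 1) {                                 /* one reply: read the tag */
        printf("singulated tag 0x%02X\n", tags[find_match(prefix, bits)]);
        return;
    }
    walk((uint8_t)(prefix << 1), bits + 1);       /* tags starting prefix+"0" */
    walk((uint8_t)((prefix << 1) | 1), bits + 1); /* tags starting prefix+"1" */
}

int main(void)
{
    walk(0, 1);   /* all tags whose ID starts with 0 */
    walk(1, 1);   /* all tags whose ID starts with 1 */
    return 0;
}
```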
This simple protocol leaks considerable information because anyone able to eavesdrop on the tag reader alone can determine all but the last bit of a tag's serial number. Thus a tag can be (largely) identified so long as the reader's signal is receivable, which is usually possible at much
|
https://en.wikipedia.org/wiki/The%20Rocklopedia%20Fakebandica
|
The Rocklopedia Fakebandica, by T. Mike Childs, is an illustrated encyclopedia of fictional musical groups and musicians, as seen in movies and television. It was officially released November 6, 2004. The book catalogs such better-known fake bands as Spinal Tap, The Blues Brothers, The Rutles, and The Chipmunks, along with dozens of less well known ones. The book takes a light-hearted, humorous approach, often pointing out the discrepancies between the experiences of real bands and musicians and the unlikely adventures fictional ones have.
The book grew out of a website started by the author in 2000. The website includes fictional bands from other sources, such as books and TV commercials, as well as many bands not found in the book.
External links
Official site
2004 non-fiction books
Online encyclopedias
Popular culture books
Encyclopedias of music
|
https://en.wikipedia.org/wiki/Wafer%20testing
|
Wafer testing is a step performed during semiconductor device fabrication after the back end of line (BEOL) process is finished. During this step, performed before a wafer is sent to die preparation, all individual integrated circuits that are present on the wafer are tested for functional defects by applying special test patterns to them. Wafer testing is performed by a piece of test equipment called a wafer prober. The process of wafer testing can be referred to in several ways: Wafer Final Test (WFT), Electronic Die Sort (EDS) and Circuit Probe (CP) are probably the most common.
Wafer prober
A wafer prober is a machine used to verify integrated circuits against their designed functionality; it is either manual or automatic test equipment. For electrical testing, a set of microscopic contacts or probes called a probe card is held in place whilst the wafer, vacuum-mounted on a wafer chuck, is moved into electrical contact. When a die (or array of dice) has been electrically tested, the prober moves the wafer to the next die (or array) and the next test can start. The wafer prober is usually responsible for loading and unloading the wafers from their carrier (or cassette) and is equipped with automatic pattern recognition optics capable of aligning the wafer with sufficient accuracy to ensure accurate registration between the contact pads on the wafer and the tips of the probes.
For today's multi-die packages such as stacked chip-scale package (SCSP) or system in package (SiP) – the development of non-contact (RF) probes for identification of known tested die (KTD) and known good die (KGD) are critical to increasing overall system yield.
The wafer prober also exercises any test circuitry on the wafer scribe lines.
Some companies get most of their information about device performance from these scribe line test structures.
When all test patterns pass for a specific die, its position is remembered for later use during IC packaging. Sometimes a die has internal spare resources ava
|
https://en.wikipedia.org/wiki/FreeJ
|
FreeJ is a modular software vision mixer for Linux systems. It is capable of real-time video manipulation, for amateur and professional uses. It can be used as an instrument in the fields of dance theater, veejaying and television. FreeJ supports the input of multiple layers of video footage, which can be filtered through special-effect-chains, and then mixed for output.
History
Denis Rojo (aka Jaromil) is the original author, and as of 2013 is the current maintainer. Since 0.7 was released, Silvano Galliani (aka kysucix) joined the core development team, implementing several new enhancements.
Features
FreeJ can be operated in real-time from a command line console (S-Lang), and also remotely operated over the network via an SSH connection. The software provides an interface for behavior-scripting (currently accessible through JavaScript). Also, it can be used to render media to multiple screens, remote setups, encoders, and live Internet stream servers.
FreeJ can overlay, mask, transform and filter multiple layers of footage on the screen. It supports an unlimited number of layers that can be mixed, regardless of the source. It can read input from varied sources: movie files, webcams, TV cards, images, renders and Adobe Flash animations.
FreeJ can produce a stream to an Icecast server, with the video being mixed and the audio grabbed from the soundcard. The resulting video is accessible to any computer able to decode media encoded with the Theora codec.
The console interface of FreeJ is accessible via SSH and can be run as a background process. The remote interface offers simultaneous access from multiple remote locations.
References
External links
Free video software
Television technology
|
https://en.wikipedia.org/wiki/Blackfin
|
The Blackfin is a family of 16-/32-bit microprocessors developed, manufactured and marketed by Analog Devices. The processors have built-in, fixed-point digital signal processor (DSP) functionality supplied by 16-bit multiply–accumulates (MACs), accompanied on-chip by a microcontroller. It was designed for a unified low-power processor architecture that can run operating systems while simultaneously handling complex numeric tasks such as real-time H.264 video encoding.
Architecture details
Blackfin processors use a 32-bit RISC microcontroller programming model on a SIMD architecture that was co-developed by Intel and Analog Devices as the Micro Signal Architecture (MSA).
The architecture was announced in December 2000 and first demonstrated at the Embedded Systems Conference in June 2001.
It incorporates aspects of ADI's older SHARC architecture and Intel's XScale architecture into a single core, combining digital signal processing (DSP) and microcontroller functionality. There are many differences in the core architecture between Blackfin/MSA and XScale/ARM or SHARC, but the combination was designed to improve performance, programmability and power consumption over traditional DSP or RISC architecture designs.
The Blackfin architecture encompasses various CPU models, each targeting particular applications. The BF-7xx series, introduced in 2014, comprise the Blackfin+ architecture, which expands on the Blackfin architecture with some new processor features and instructions.
Architecture features
Core features
What is regarded as the Blackfin "core" is contextually dependent. For some applications, the DSP features are central. Blackfin has two 16-bit hardware MACs, two 40-bit ALUs and accumulators, a 40-bit barrel shifter, and four 8-bit video ALUs; Blackfin+ processors add a 32-bit MAC and 72-bit accumulator. This allows the processor to execute up to three instructions per clock cycle, depending on the level of optimization performed by the compiler or pro
|
https://en.wikipedia.org/wiki/Ambiguous%20viewpoint
|
Object-oriented analysis and design (OOAD) models are often presented without clarifying the viewpoint represented by the model. By default, these models denote an implementation viewpoint that visualises the structure of a computer program. Mixed viewpoints do not support the fundamental separation of interfaces from implementation details, which is one of the primary benefits of the object-oriented paradigm.
In object-oriented analysis and design there are three viewpoints: The business viewpoint (the information that is domain specific and matters to the end user), the specification viewpoint (which defines the exposed interface elements of a class), and the implementation viewpoint (which deals with the actual internal implementation of the class). If the viewpoint becomes mixed then these elements will blend together in a way which makes it difficult to separate out and maintain the internals of an object without changing the interface, one of the core tenets of object-oriented analysis and design.
See also
Class-Responsibility-Collaboration card
Unified Modeling Language
Object-oriented analysis
Object-oriented design
References
Object-oriented programming
Software design
|
https://en.wikipedia.org/wiki/Araldite
|
Araldite is a registered trademark of Huntsman Advanced Materials (previously part of Ciba-Geigy) referring to their range of engineering and structural epoxy, acrylic, and polyurethane adhesives. Swiss manufacturers originally launched Araldite DIY adhesive products in 1946. The first batches of Araldite epoxy resins, for which the brand is best known, were made in Duxford, England in 1950.
Araldite adhesive sets by the interaction of an epoxy resin with a hardener. Mixing the epoxy resin and hardener together starts an exothermic chemical reaction that produces heat.
It is claimed that after curing the bond is impervious to boiling water and to all common organic solvents.
Araldite is available in many different types of pack, the most common containing two different tubes, one each for the resin and the hardener. Other variations include double syringe-type packages which automatically measure equal parts for mixing.
History
Aero Research Limited (ARL), founded in the UK in 1934, developed a new synthetic-resin adhesive for bonding metals, glass, porcelain, china and other materials. The name "Araldite" recalls the ARL brand: ARaLdite.
De Trey Frères SA of Switzerland carried out the first production of epoxy resins. They licensed the process to Ciba AG in the early 1940s and Ciba first demonstrated a product under the tradename "Araldite" at the Swiss Industries Fair in 1945. Ciba went on to become one of the three major epoxy-resin producers worldwide. Ciba's epoxy business was spun off and later sold in the late 1990s and became the advanced materials business unit of Huntsman Corporation of the US.
Notable applications
Despite a widespread myth, Araldite was not used in the production of the De Havilland Mosquito aircraft in the 1940s. Another Aero Research Limited glue was used, called Aerolite, which was not an epoxy resin, but a gap-filling urea-formaldehyde adhesive.
Araldite adhesive is used to join together the two sections of ca
|
https://en.wikipedia.org/wiki/Thomas%20Simpson
|
Thomas Simpson FRS (20 August 1710 – 14 May 1761) was a British mathematician and inventor known for the eponymous Simpson's rule to approximate definite integrals. The attribution, as often in mathematics, can be debated: this rule had been found 100 years earlier by Johannes Kepler, and in German it is called Keplersche Fassregel.
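For reference, the rule that carries his name approximates a definite integral by passing a parabola through the endpoints and the midpoint of the interval:

\[
\int_a^b f(x)\,\mathrm{d}x \;\approx\; \frac{b-a}{6}\left[\,f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right].
\]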
Biography
Simpson was born in Sutton Cheney, Leicestershire. The son of a weaver, Simpson taught himself mathematics. At the age of nineteen, he married a fifty-year-old widow with two children. As a youth, he became interested in astrology after seeing a solar eclipse. He also dabbled in divination and caused fits in a girl after 'raising a devil' from her. After this incident, he and his wife had to flee to Derby. He moved with his wife and children to London at age twenty-five, where he supported his family by weaving during the day and teaching mathematics at night.
From 1743, he taught mathematics at the Royal Military Academy, Woolwich. Simpson was a fellow of the Royal Society. In 1758, Simpson was elected a foreign member of the Royal Swedish Academy of Sciences.
He died in Market Bosworth, and was laid to rest in Sutton Cheney. A plaque inside the church commemorates him.
Early work
Simpson's treatises entitled The Nature and Laws of Chance and The Doctrine of Annuities and Reversions were based on the work of De Moivre and were attempts at making the same material more brief and understandable. Simpson stated this clearly in The Nature and Laws of Chance, referring to De Moivre's Doctrine of Chances: "tho' it neither wants Matter nor Elegance to recommend it, yet the Price must, I am sensible, have put it out of the Power of many to purchase it". In both works, Simpson cited De Moivre's work and did not claim originality beyond the presentation of some more accurate data. While he and De Moivre initially got along, De Moivre eventually felt that his income was threatened by Simpson's work and in his second edition of Ann
|
https://en.wikipedia.org/wiki/Imaging%20radar
|
Imaging radar is an application of radar which is used to create two-dimensional images, typically of landscapes. Imaging radar provides its own illumination: it lights up an area on the ground and takes a picture at radio wavelengths. It uses an antenna and digital computer storage to record its images. In a radar image, one can see only the energy that was reflected back towards the radar antenna. The radar moves along a flight path and the area illuminated by the radar, or footprint, is moved along the surface in a swath, building the image as it does so.
Digital radar images are composed of many dots. Each pixel in the radar image represents the radar backscatter for that area on the ground: brighter areas represent high backscatter, darker areas represent low backscatter.
The traditional application of radar is to display the position and motion of typically highly reflective objects (such as aircraft or ships) by sending out a radiowave signal, and then detecting the direction and delay of the reflected signal. Imaging radar on the other hand attempts to form an image of one object (e.g. a landscape) by furthermore registering the intensity of the reflected signal to determine the amount of scattering. The registered electromagnetic scattering is then mapped onto a two-dimensional plane, with points with a higher reflectivity getting assigned usually a brighter color, thus creating an image.
Several techniques have evolved to do this. Generally they take advantage of the Doppler effect caused by the rotation or other motion of the object and by the changing view of the object brought about by the relative motion between the object and the back-scatter that is perceived by the radar of the object (typically, a plane) flying over the earth. Through recent improvements of the techniques, radar imaging is getting more accurate. Imaging radar has been used to map the Earth, other planets, asteroids, other celestial objects and to categorize targets for military systems.
|
https://en.wikipedia.org/wiki/Test%20automation
|
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
General approaches
There are many approaches to test automation; however, the following general approaches are widely used:
Graphical user interface testing. A testing framework that generates user interface events such as keystrokes and mouse clicks, and observes the changes that result in the user interface, to validate that the observable behavior of the program is correct.
API driven testing. A testing framework that uses a programming interface to the application to validate the behaviour under test. Typically, API driven testing bypasses the application's user interface altogether. It can also test the (usually public) interfaces of classes, modules or libraries with a variety of input arguments, validating that the returned results are correct, as in the sketch below.
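A minimal sketch of the API-driven style follows; the function under test (add_sat) and its expected behaviour are invented for illustration and are not tied to any particular framework.

```c
/* API-driven test sketch: exercise the code's programming interface
 * directly and check returned results, bypassing any user interface. */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* code under test: saturating integer addition (made-up example) */
static int add_sat(int a, int b)
{
    if (a > 0 && b > INT_MAX - a) return INT_MAX;
    if (a < 0 && b < INT_MIN - a) return INT_MIN;
    return a + b;
}

int main(void)
{
    /* representative input arguments and the results expected for them */
    assert(add_sat(2, 3) == 5);
    assert(add_sat(INT_MAX, 1) == INT_MAX);
    assert(add_sat(INT_MIN, -1) == INT_MIN);
    printf("all API-level checks passed\n");
    return 0;
}
```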
Model-based testing
One way to generate test cases automatically is model-based testing, in which a model of the system is used to derive test cases, but research continues into a variety of alternative methodologies for doing so. In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.
Factors to consider for the decision to implement test automation
What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make. A multi-vocal literature review of 52 practitioner a
|
https://en.wikipedia.org/wiki/Code%20reuse
|
In software development (and computer programming in general), code reuse, also called software reuse, is the use of existing software, or software knowledge, to build new software, following reusability principles.
Code reuse may be achieved in different ways depending on the complexity of the programming language chosen, ranging from lower-level approaches such as copy-pasting code (e.g. via snippets), simple functions (procedures or subroutines), and objects or functions organized into modules (e.g. libraries) or custom namespaces and packages, up to frameworks or whole software suites at higher levels.
Code reuse implies dependencies which can make code maintainability harder. At least one study found that code reuse reduces technical debt.
Overview
Ad hoc code reuse has been practiced from the earliest days of programming. Programmers have always reused sections of code, templates, functions, and procedures. Software reuse as a recognized area of study in software engineering, however, dates only from 1968 when Douglas McIlroy of Bell Laboratories proposed basing the software industry on reusable components.
Code reuse aims to save time and resources and reduce redundancy by taking advantage of assets that have already been created in some form within the software product development process. The key idea in reuse is that parts of a computer program written at one time can be or should be used in the construction of other programs written at a later time.
Code reuse may imply the creation of a separately maintained version of the reusable assets. While code is the most common resource selected for reuse, other assets generated during the development cycle may offer opportunities for reuse: software components, test suites, designs, documentation, and so on.
The software library is a good example of code reuse. Programmers may decide to create internal abstractions so that certain parts of their program can be reused, or may create custom libraries for th
|
https://en.wikipedia.org/wiki/Sound%20energy%20density
|
Sound energy density or sound density is the sound energy per unit volume. The SI unit of sound energy density is the pascal (Pa), which is 1 kg⋅m⁻¹⋅s⁻² in SI base units or 1 joule per cubic metre (J/m³).
Mathematical definition
Sound energy density, denoted w, is defined by
\[ w = \frac{p\,v}{c}, \]
where
p is the sound pressure;
v is the particle velocity in the direction of propagation;
c is the speed of sound.
The terms instantaneous energy density, maximum energy density, and peak energy density have meanings analogous to the related terms used for sound pressure. In speaking of average energy density, it is necessary to distinguish between the space average (at a given instant) and the time average (at a given point).
Sound energy density level
The sound energy density level gives the level of a sound energy density relative to the reference value of 1 pPa (= 10⁻¹² Pa). It is a logarithmic measure of the ratio of two sound energy densities. The unit of the sound energy density level is the decibel (dB), a non-SI unit accepted for use with the SI.
The sound energy density level, $L(E)$, for a given sound energy density, $E_1$, in pascals, is
\[ L(E) = 10 \log_{10}\!\left(\frac{E_1}{E_0}\right)\ \mathrm{dB}, \]
where $E_0$ is the standard reference sound energy density,
\[ E_0 = 10^{-12}\ \mathrm{Pa} = 1\ \mathrm{pPa}. \]
See also
Particle velocity level
Sound intensity level
References
External links
Conversion: sound intensity to sound intensity level
Ohm's law as acoustic equivalent - calculations
Relationships of acoustic quantities associated with a plane progressive acoustic sound wave - pdf
Acoustics
Sound
Sound measurements
Physical quantities
|
https://en.wikipedia.org/wiki/Frobenius%20normal%20form
|
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
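The building block of the form is the companion matrix; for a monic polynomial (notation chosen here for illustration) $p(x) = x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$ it is

\[
C(p) =
\begin{pmatrix}
0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & \cdots & 0 & -c_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & -c_{n-1}
\end{pmatrix},
\]

and the Frobenius normal form is the block-diagonal matrix whose blocks are the companion matrices of the invariant factors of $A$.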
Motivation
When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar),
|
https://en.wikipedia.org/wiki/Honeytoken
|
In the field of computer security, honeytokens are honeypots that are not computer systems. Their value lies not in their use, but in their abuse. As such, they are a generalization of such ideas as the honeypot and the canary values often used in stack protection schemes. Honeytokens do not necessarily prevent any tampering with the data, but instead give the administrator a further measure of confidence in the data integrity.
Honeytokens are fictitious words or records that are added to legitimate databases. They allow administrators to track data in situations they wouldn't normally be able to track, such as cloud-based networks. If data is stolen, honeytokens allow administrators to identify whom it was stolen from or how it was leaked. For example, if medical records are kept in three locations, a different honeytoken in the form of a fake medical record could be added to each location; which honeytoken appears in leaked data then identifies the set of records it came from.
If they are chosen to be unique and unlikely to ever appear in legitimate traffic, they can also be detected over the network by an intrusion-detection system (IDS), alerting the system administrator to things that would otherwise go unnoticed. This is one case where they go beyond merely ensuring integrity, and with some reactive security mechanisms, may actually prevent the malicious activity, e.g. by dropping all packets containing the honeytoken at the router. However, such mechanisms have pitfalls because it might cause serious problems if the honeytoken was poorly chosen and appeared in otherwise legitimate network traffic, which was then dropped.
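As a toy illustration of that network-level detection (the token value and function name below are made up, not part of any real IDS), a monitor only needs to scan payloads for the planted value:

```c
/* Scan a packet payload for a known honeytoken string; a hit could
 * raise an alert or cause the packet to be dropped at the router. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static const char HONEYTOKEN[] = "fake.patient.48291@example.invalid";

static bool contains_honeytoken(const unsigned char *payload, size_t len)
{
    const size_t tlen = sizeof HONEYTOKEN - 1;   /* exclude trailing NUL */
    if (len < tlen)
        return false;
    for (size_t i = 0; i + tlen <= len; i++)
        if (memcmp(payload + i, HONEYTOKEN, tlen) == 0)
            return true;
    return false;
}
```

The caveat above applies directly: if the chosen token can occur in legitimate traffic, a dropping rule built on this check will discard valid packets.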
The term was first coined by Augusto Paes de Barros in 2003.
Uses
Honeytokens can exist in many forms, from a dead, fake account to a database entry that would only be selected by malicious queries, making the concept ideally suited to ensuring data integrity. A particular example of a honeytoken is a fake email address used to track if a mailing list has been stolen.
See also
|
https://en.wikipedia.org/wiki/Mahler%20measure
|
In mathematics, the Mahler measure $M(p)$ of a polynomial $p(z)$ with complex coefficients is defined as
\[ M(p) = |a| \prod_{i=1}^{n} \max\{1, |\alpha_i|\}, \]
where $p(z)$ factorizes over the complex numbers as
\[ p(z) = a(z - \alpha_1)(z - \alpha_2) \cdots (z - \alpha_n). \]
The Mahler measure can be viewed as a kind of height function. Using Jensen's formula, it can be proved that this measure is also equal to the geometric mean of $|p(z)|$ for $z$ on the unit circle (i.e., $|z| = 1$):
\[ M(p) = \exp\left(\frac{1}{2\pi} \int_0^{2\pi} \log\bigl|p(e^{i\theta})\bigr| \, d\theta\right). \]
By extension, the Mahler measure of an algebraic number $\alpha$ is defined as the Mahler measure of the minimal polynomial of $\alpha$ over $\mathbb{Q}$. In particular, if $\alpha$ is a Pisot number or a Salem number, then its Mahler measure is simply $\alpha$.
The Mahler measure is named after the German-born Australian mathematician Kurt Mahler.
Properties
The Mahler measure is multiplicative: $M(p\,q) = M(p)\,M(q)$.
$M(p) = \lim_{\tau \to 0} \|p\|_\tau$, where $\|p\|_\tau = \left(\frac{1}{2\pi} \int_0^{2\pi} \bigl|p(e^{i\theta})\bigr|^\tau \, d\theta \right)^{1/\tau}$ is the $L_\tau$ norm of $p$.
Kronecker's Theorem: If $p$ is an irreducible monic integer polynomial with $M(p) = 1$, then either $p(z) = z$ or $p$ is a cyclotomic polynomial.
(Lehmer's conjecture) There is a constant $\mu > 1$ such that if $p$ is an irreducible integer polynomial, then either $M(p) = 1$ or $M(p) \geq \mu$.
The Mahler measure of a monic integer polynomial is a Perron number.
Higher-dimensional Mahler measure
The Mahler measure $M(p)$ of a multi-variable polynomial $p(x_1, \ldots, x_n)$ is defined similarly by the formula
\[ M(p) = \exp\left(\frac{1}{(2\pi)^n} \int_0^{2\pi} \cdots \int_0^{2\pi} \log\bigl|p(e^{i\theta_1}, \ldots, e^{i\theta_n})\bigr| \, d\theta_1 \cdots d\theta_n\right). \]
It inherits the above three properties of the Mahler measure for a one-variable polynomial.
The multi-variable Mahler measure has been shown, in some cases, to be related to special values of zeta-functions and $L$-functions. For example, in 1981, Smyth proved the formulas
\[ m(1 + x + y) = \frac{3\sqrt{3}}{4\pi} L(\chi_{-3}, 2), \]
where $L(\chi_{-3}, s)$ is a Dirichlet L-function, and
\[ m(1 + x + y + z) = \frac{7}{2\pi^2}\,\zeta(3), \]
where $\zeta$ is the Riemann zeta function. Here $m(p) = \log M(p)$ is called the logarithmic Mahler measure.
Some results by Lawton and Boyd
From the definition, the Mahler measure is viewed as the integrated values of polynomials over the torus (also see Lehmer's conjecture). If $p$ vanishes on the torus $(S^1)^n$, then the convergence of the integral defining $M(p)$ is not obvious, but it is known that $M(p)$ does converge and is equal to a limit of one-variable Mahler measures, which had been conjectured by Boyd.
This is formulated as follows: Let denote the int
|
https://en.wikipedia.org/wiki/Army%20One
|
Army One is the callsign of any United States Army aircraft carrying the president of the United States. An Army aircraft carrying the vice president is designated Army Two.
From 1957 until 1976, the callsign was usually used for an Army helicopter transporting the president. Prior to 1976, responsibility for helicopter transportation of the president was divided between the Army and the U.S. Marine Corps, under the call sign Marine One, until the Marine Corps was given the sole responsibility of transporting the president by helicopter.
The best known helicopters that used the Army One callsign were the Sea King VH-3A helicopters that were part of the presidential fleet from 1961 to 1976. During their presidential service, these helicopters were known either as Army One or Marine One, depending on whether Army or Marine pilots were operating the craft. Each helicopter, with seats for sixteen, had a seat reserved for the president and the first lady, and single, smaller seats for the two Secret Service agents who always flew with the presidential party.
See also
Transportation of the president of the United States
References
External links
https://web.archive.org/web/20160906141830/https://www.nixonlibrary.gov/themuseum/helicopter.php
https://web.archive.org/web/20120723163133/http://www.genetboyer.com/Book.html
Presidential aircraft
Call signs
United States Army aviation
Transportation of the president of the United States
|
https://en.wikipedia.org/wiki/Scalable%20Link%20Interface
|
Scalable Link Interface (SLI) is the brand name for a now discontinued multi-GPU technology developed by Nvidia for linking two or more video cards together to produce a single output. SLI is a parallel processing algorithm for computer graphics, meant to increase the available processing power.
The initialism SLI was first used by 3dfx for Scan-Line Interleave, which was introduced to the consumer market in 1998 and used in the Voodoo2 line of video cards. After buying out 3dfx, Nvidia acquired the technology but did not use it. Nvidia later reintroduced the SLI name in 2004 and intended for it to be used in modern computer systems based on the PCI Express (PCIe) bus; however, the technology behind the name SLI has changed dramatically.
Implementation
SLI allows two, three, or four graphics processing units (GPUs) to share the workload when rendering real-time 3D computer graphics. Ideally, identical GPUs are installed on the motherboard that contains enough PCI Express slots, set up in a master–slave configuration. All graphics cards are given an equal workload to render, but the final output of each card is sent to the master card via a connector called the SLI bridge. For example, in a two graphics card setup, the master works on the top half of the scene, the slave the bottom half. Once the slave is done, it sends its render to the master to combine into one image before sending it to the monitor.
The SLI bridge is used to reduce bandwidth constraints and send data between both graphics cards directly. It is possible to run SLI without using the bridge connector on a pair of low-end to mid-range graphics cards (e.g., 7100GS or 6600GT) with Nvidia's Forceware drivers 80.XX or later. Since these graphics cards do not use as much bandwidth, data can be relayed through just the chipsets on the motherboard. However, if there are two high-end graphics cards installed and the SLI bridge is omitted, the performance will suffer severely, as the chipset does not have
|
https://en.wikipedia.org/wiki/Navy%20One
|
Navy One is the call sign of any United States Navy aircraft carrying the president of the United States.
There has only been one aircraft designated as Navy One: a Lockheed S-3 Viking, BuNo 159387, assigned to the "Blue Wolves" of VS-35, which transported President George W. Bush to the aircraft carrier USS Abraham Lincoln off the coast of San Diego, California, on 1 May 2003. The pilot was Commander John "Skip" Lussier, then VS-35's executive officer; and the flight officer was Lieutenant Ryan "Wilson" Phillips. The S-3 used for the flight was retired from service and placed on display at the National Naval Aviation Museum in Pensacola, Florida on 17 July 2003.
A Navy aircraft carrying the vice president would be designated Navy Two.
See also
Transportation of the president of the United States
2003 Mission Accomplished speech
References
Presidential aircraft
United States naval aviation
Call signs
Transportation of the president of the United States
Vehicles of the United States
|
https://en.wikipedia.org/wiki/Miller%20index
|
Miller indices form a notation system in crystallography for lattice planes in crystal (Bravais) lattices.
In particular, a family of lattice planes of a given (direct) Bravais lattice is determined by three integers h, k, and ℓ, the Miller indices. They are written (hkℓ), and denote the family of (parallel) lattice planes (of the given Bravais lattice) orthogonal to the reciprocal lattice vector $\mathbf{g}_{hk\ell} = h\mathbf{b}_1 + k\mathbf{b}_2 + \ell\mathbf{b}_3$, where $\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3$ are the basis or primitive translation vectors of the reciprocal lattice for the given Bravais lattice. (Note that the plane is not always orthogonal to the linear combination of direct or original lattice vectors $h\mathbf{a}_1 + k\mathbf{a}_2 + \ell\mathbf{a}_3$ because the direct lattice vectors need not be mutually orthogonal.) This is based on the fact that a reciprocal lattice vector (the vector indicating a reciprocal lattice point from the reciprocal lattice origin) is the wavevector of a plane wave in the Fourier series of a spatial function (e.g., electronic density function) whose periodicity follows the original Bravais lattice, so wavefronts of the plane wave are coincident with parallel lattice planes of the original lattice. Since a measured scattering vector in X-ray crystallography, $\Delta\mathbf{k} = \mathbf{k}_{\mathrm{out}} - \mathbf{k}_{\mathrm{in}}$, with $\mathbf{k}_{\mathrm{out}}$ as the outgoing (scattered from a crystal lattice) X-ray wavevector and $\mathbf{k}_{\mathrm{in}}$ as the incoming (toward the crystal lattice) X-ray wavevector, is equal to a reciprocal lattice vector as stated by the Laue equations, the measured scattered X-ray peak at each measured scattering vector is marked by Miller indices. By convention, negative integers are written with a bar, as in $\bar{3}$ for −3. The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1. Miller indices are also used to designate reflections in X-ray crystallography. In this case the integers are not necessarily in lowest terms, and can be thought of as corresponding to planes spaced such that the reflections from adjacent planes would have a phase difference of exactly one wavelength (2π), regardless of whether there are atoms on all these planes or not.
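For reference, the standard relations behind this notation can be summarized as follows (stated here under the physics convention $\mathbf{b}_i \cdot \mathbf{a}_j = 2\pi\delta_{ij}$; in the crystallographers' convention the factors of $2\pi$ are dropped):
\mathbf{g}_{hk\ell} = h\,\mathbf{b}_1 + k\,\mathbf{b}_2 + \ell\,\mathbf{b}_3,
\qquad d_{hk\ell} = \frac{2\pi}{\left|\mathbf{g}_{hk\ell}\right|},
\qquad d_{hk\ell} = \frac{a}{\sqrt{h^2 + k^2 + \ell^2}} \ \text{(cubic lattice with lattice constant } a\text{)}.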
|
https://en.wikipedia.org/wiki/GNU%20Radio
|
GNU Radio is a free software development toolkit that provides signal processing blocks to implement software-defined radios and signal processing systems. It can be used with external radio frequency (RF) hardware to create software-defined radios, or without hardware in a simulation-like environment. It is widely used in hobbyist, academic, and commercial environments to support both wireless communications research and real-world radio systems.
Overview
The GNU Radio software provides the framework and tools to build and run software radio or just general signal-processing applications. The GNU Radio applications themselves are generally known as "flowgraphs", which are a series of signal processing blocks connected together, thus describing a data flow.
As with all software-defined radio systems, reconfigurability is a key feature. Instead of using different radios designed for specific but disparate purposes, a single, general-purpose, radio can be used as the radio front-end, and the signal-processing software (here, GNU Radio), handles the processing specific to the radio application.
These flowgraphs can be written in either C++ or Python. The GNU Radio infrastructure is written entirely in C++, and many of the user tools (such as GNU Radio Companion) are written in Python.
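A minimal flowgraph written in Python might look like the following sketch, which assumes a standard GNU Radio 3.x installation; the blocks used (a cosine signal source, a throttle, a head block and a file sink) come from the stock analog and blocks modules, and the output file name is arbitrary.

from gnuradio import gr, analog, blocks

class ToneToFile(gr.top_block):
    """Minimal flowgraph: 1 kHz cosine source -> throttle -> head -> file sink."""
    def __init__(self, samp_rate=32000, seconds=1):
        gr.top_block.__init__(self, "tone_to_file")
        src = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, 1000, 0.5)
        thr = blocks.throttle(gr.sizeof_float, samp_rate)        # pace the graph (no hardware attached)
        hed = blocks.head(gr.sizeof_float, samp_rate * seconds)  # stop after N samples
        snk = blocks.file_sink(gr.sizeof_float, "tone.dat")
        self.connect(src, thr, hed, snk)

if __name__ == "__main__":
    ToneToFile().run()   # run() blocks until the head block ends the flow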
GNU Radio is a signal processing package and part of the GNU Project. It is distributed under the terms of the GNU General Public License (GPL), and most of the project code is copyrighted by the Free Software Foundation.
History
First published in 2001, GNU Radio is an official GNU package. Philanthropist John Gilmore initiated GNU Radio by providing US$320,000 in funding to Eric Blossom for code creation and project-management duties. One of the first applications was building an ATSC receiver in software.
The GNU Radio software began as a fork of the Pspectra code that was developed by the SpectrumWare project at the Massachusetts Institute of Technology (MIT). In 2004, a comple
|
https://en.wikipedia.org/wiki/Random%20sample%20consensus
|
Random sample consensus (RANSAC) is an iterative method to estimate parameters of a mathematical model from a set of observed data that contains outliers, when outliers are to be accorded no influence on the values of the estimates. Therefore, it also can be interpreted as an outlier detection method. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles at SRI International in 1981. They used RANSAC to solve the Location Determination Problem (LDP), where the goal is to determine the points in the space that project onto an image into a set of landmarks with known locations.
RANSAC uses repeated random sub-sampling. A basic assumption is that the data consists of "inliers", i.e., data whose distribution can be explained by some set of model parameters, though may be subject to noise, and "outliers" which are data that do not fit the model. The outliers can come, for example, from extreme values of the noise or from erroneous measurements or incorrect hypotheses about the interpretation of data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data.
Example
A simple example is fitting a line in two dimensions to a set of observations. Assuming that this set contains both inliers, i.e., points which approximately can be fitted to a line, and outliers, points which cannot be fitted to this line, a simple least squares method for line fitting will generally produce a line with a bad fit to the data including inliers and outliers. The reason is that it is optimally fitted to all points, including the outliers. RANSAC, on the other hand, attempts to exclude the outliers and find a linear model that only uses the inliers in its calculation. This is done by fitting
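A minimal sketch of this line-fitting procedure, written with NumPy, is shown below; the iteration count and the inlier tolerance are illustrative choices rather than prescribed values.

import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=1.0, rng=None):
    """Fit y = m*x + b by RANSAC: repeatedly fit a line to 2 random points,
    keep the model with the largest inlier consensus, then refit to its inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):                # skip a degenerate (vertical) sample
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + b))
        inliers = residuals < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the consensus set only
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    m, b = np.polyfit(x, y, 1)
    return m, b, best_inliers

# toy data: a noisy line plus gross outliers
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[::7] += rng.uniform(10, 30, y[::7].size)    # inject outliers
m, b, mask = ransac_line(np.column_stack([x, y]))
print(round(m, 2), round(b, 2))               # close to 2.0 and 1.0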
|
https://en.wikipedia.org/wiki/Modular%20lattice
|
In the branch of mathematics called order theory, a modular lattice is a lattice that satisfies the following self-dual condition, the modular law:
$a \le b$ implies $a \vee (x \wedge b) = (a \vee x) \wedge b$,
where $a$, $b$, $x$ are arbitrary elements in the lattice, ≤ is the partial order, and ∨ and ∧ (called join and meet respectively) are the operations of the lattice. This phrasing emphasizes an interpretation in terms of projection onto the sublattice $[a, b]$, a fact known as the diamond isomorphism theorem. An alternative but equivalent condition stated as an equation (see below) emphasizes that modular lattices form a variety in the sense of universal algebra.
Modular lattices arise naturally in algebra and in many other areas of mathematics. In these scenarios, modularity is an abstraction of the 2nd Isomorphism Theorem. For example, the subspaces of a vector space (and more generally the submodules of a module over a ring) form a modular lattice.
In a not necessarily modular lattice, there may still be elements for which the modular law holds in connection with arbitrary elements and (for ). Such an element is called a modular element. Even more generally, the modular law may hold for any and a fixed pair . Such a pair is called a modular pair, and there are various generalizations of modularity related to this notion and to semimodularity.
Modular lattices are sometimes called Dedekind lattices after Richard Dedekind, who discovered the modular identity in several motivating examples.
Introduction
The modular law can be seen as a restricted associative law that connects the two lattice operations similarly to the way in which the associative law λ(μx) = (λμ)x for vector spaces connects multiplication in the field and scalar multiplication.
The restriction is clearly necessary, since $a \le b$ follows from the unrestricted identity $a \vee (x \wedge b) = (a \vee x) \wedge b$. In other words, no lattice with more than one element satisfies the unrestricted consequent of the modular law.
It is easy to see that $a \le b$ implies $a \vee (x \wedge b) \le (a \vee x) \wedge b$ in every lattice. Therefore, the modular law can also be stated as
|
https://en.wikipedia.org/wiki/Mark%20Cerny
|
Mark Evan Cerny (born August 24, 1964) is an American video game designer, programmer, producer and media proprietor.
Raised in the San Francisco Bay Area, Cerny attended UC Berkeley before dropping out to pursue a career in video games. In his early years, he spent time at Atari, Sega, Crystal Dynamics and Universal Interactive Studios before becoming an independent consultant under his own company Cerny Games in 1998. While at Sega, he established Sega Technical Institute, working on games including Sonic the Hedgehog 2 (1992).
Cerny has since frequently collaborated with Sony Interactive Entertainment as a consultant, including serving as the lead hardware designer for several PlayStation consoles; he has been called the architect of the PlayStation Vita, PS4 and PS5. He has also consulted with Naughty Dog and Insomniac Games since their creation in the 1990s, as well as with other Sony first-party studios like Sucker Punch Productions. He has also developed several games, notably the arcade game Marble Madness and the Knack series, and is credited on many more for his consulting work.
In 2004, he was the recipient of the Lifetime Achievement Award from the International Game Developers Association, and was inducted into the Academy of Interactive Arts & Sciences Hall of Fame in 2010.
Career
1982–1996: First years
Mark Cerny grew up in San Francisco, and was a fan of computer programming and arcade games as a youth. He had attended University of California, Berkeley, but when he was 17 in 1982, he was invited to join Atari, and dropped out of school for the opportunity. He started working in Atari's arcade division on January 18, 1982. In those earlier days of professional game development, teams were small and each member was responsible for a wider range of roles than today. He first worked with Ed Logg on Millipede and Owen Rubin on Major Havoc.
Cerny's first major success was the arcade game Marble Madness in which he, at age 18, acted as designer and co-programmer. D
|
https://en.wikipedia.org/wiki/Independent%20software%20vendor
|
An independent software vendor (ISV), also known as a software publisher, is an organization specializing in making and selling software, as opposed to computer hardware, designed for mass or niche markets. This is in contrast to in-house software, which is developed by the organization that will use it, or custom software, which is designed or adapted for a single, specific third party. Although ISV-provided software is consumed by end users, it remains the property of the vendor.
Software products developed by ISVs serve a wide variety of purposes. Examples include software for real estate brokers, scheduling for healthcare personnel, barcode scanning, stock maintenance, gambling, retailing, energy exploration, vehicle fleet management, and child care management software.
An ISV makes and sells software products that run on one or more computer hardware or operating system platforms. Companies that make the platforms, such as Microsoft, AWS, Cisco, IBM, Hewlett-Packard, Red Hat, Google, Oracle, VMware, Lenovo, Apple, SAP, Salesforce and ServiceNow encourage and lend support to ISVs, often with special "business partner" programs. These programs enable the platform provider and the ISV to leverage joint strengths and convert them into incremental business opportunities.
Independent software vendors have become one of the primary groups in the IT industry, often serving as relays to disseminate new technologies and solutions.
See also
Commercial off-the-shelf
Software company
Micro ISV
References
Software industry
|
https://en.wikipedia.org/wiki/Irish%20grid%20reference%20system
|
The Irish grid reference system is a system of geographic grid references used for paper mapping in Ireland (both Northern Ireland and the Republic of Ireland). The Irish grid partially overlaps the British grid, and uses a similar co-ordinate system but with a meridian more suited to its westerly location.
Usage
In general, neither Ireland nor Great Britain uses latitude or longitude in describing internal geographic locations. Instead grid reference systems are used for mapping.
The national grid referencing system was devised by the Ordnance Survey, and is heavily used in their survey data, and in maps (whether published by the Ordnance Survey of Ireland, the Ordnance Survey of Northern Ireland or commercial map producers) based on those surveys. Additionally grid references are commonly quoted in other publications and data sources, such as guide books or government planning documents.
2001 recasting: the ITM grid
In 2001, the Ordnance Survey of Ireland and the Ordnance Survey of Northern Ireland jointly implemented a new coordinate system for Ireland called Irish Transverse Mercator, or ITM, a location-specific optimisation of UTM, which runs in parallel with the existing Irish grid system. In both systems, the true origin is at 53° 30' N, 8° W — a point in Lough Ree, close to the western (Co. Roscommon) shore, whose grid reference is . The ITM system was specified so as to provide precise alignment with modern high-precision global positioning receivers.
Grid letters
The area of Ireland is divided into 25 squares, each measuring 100 km by 100 km and identified by a single letter. The squares are lettered A to Z, with I omitted. Seven of the squares do not actually cover any land in Ireland: A, E, K, P, U, Y and Z.
Eastings and northings
Within each square, eastings and northings from the origin (south west corner) of the square are given numerically. For example, G0305 means 'square G, east, north'. A location can be indicated to varying resolutions numericall
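As an illustrative sketch (assuming the usual arrangement of the 100 km squares in a 5 × 5 block, lettered A–Z with I omitted, running from A in the north-west to Z in the south-east), a letter-plus-digits reference can be converted to full metre coordinates as follows; the example reference and the function name are illustrative.

GRID_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"   # I is omitted; A is the NW square

def irish_grid_to_en(ref: str):
    """Convert e.g. 'O1534' (square O, 15 km east, 34 km north within the square)
    to full easting/northing in metres from the false origin."""
    letter, digits = ref[0].upper(), ref[1:]
    idx = GRID_LETTERS.index(letter)
    col, row = idx % 5, idx // 5             # 5x5 layout, rows counted from the north
    half = len(digits) // 2
    scale = 10 ** (5 - half)                 # 2+2 digits -> km, 3+3 -> 100 m, etc.
    east = col * 100_000 + int(digits[:half]) * scale
    north = (4 - row) * 100_000 + int(digits[half:]) * scale
    return east, north

print(irish_grid_to_en("O1534"))   # (315000, 234000) -- central Dublin area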
|
https://en.wikipedia.org/wiki/Software%20prototyping
|
Software prototyping is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing.
A prototype typically simulates only a few aspects of, and may be completely different from, the final product.
Prototyping has several benefits: the software designer and implementer can get valuable feedback from the users early in the project, and the client and the contractor can check whether the software being made matches the software specification according to which the program is built. It also gives the software engineer some insight into the accuracy of initial project estimates and into whether the proposed deadlines and milestones can be successfully met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s.
Overview
The purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Software prototyping provides an understanding of the software's functions and potential threats or issues. Prototyping can also be used by end users to describe and prove requirements that have not been considered, and that can be a key factor in the commercial relationship between developers and their clients. Interaction design in particular makes heavy use of prototyping with that goal.
This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying t
|
https://en.wikipedia.org/wiki/DFT%20matrix
|
In applied mathematics, a DFT matrix is an expression of a discrete Fourier transform (DFT) as a transformation matrix, which can be applied to a signal through matrix multiplication.
Definition
An N-point DFT is expressed as the multiplication $X = W x$, where $x$ is the original input signal, $W$ is the N-by-N square DFT matrix, and $X$ is the DFT of the signal.
The transformation matrix $W$ can be defined as $W = \left(\frac{\omega^{jk}}{\sqrt{N}}\right)_{j,k=0,\ldots,N-1}$, or equivalently:
$W = \frac{1}{\sqrt{N}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^{2} & \cdots & \omega^{N-1} \\ 1 & \omega^{2} & \omega^{4} & \cdots & \omega^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{N-1} & \omega^{2(N-1)} & \cdots & \omega^{(N-1)(N-1)} \end{bmatrix},$
where $\omega = e^{-2\pi i/N}$ is a primitive Nth root of unity in which $\omega^{N} = 1$. We can avoid writing large exponents for $\omega$ using the fact that for any exponent $x$ we have the identity $\omega^{x} = \omega^{x \bmod N}$. This is the Vandermonde matrix for the roots of unity, up to the normalization factor. Note that the normalization factor in front of the sum ($1/\sqrt{N}$) and the sign of the exponent in ω are merely conventions, and differ in some treatments. All of the following discussion applies regardless of the convention, with at most minor adjustments. The only important thing is that the forward and inverse transforms have opposite-sign exponents, and that the product of their normalization factors be 1/N. However, the choice here makes the resulting DFT matrix unitary, which is convenient in many circumstances.
Fast Fourier transform algorithms utilize the symmetries of the matrix to reduce the time of multiplying a vector by this matrix, from the usual . Similar techniques can be applied for multiplications by matrices such as Hadamard matrix and the Walsh matrix.
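A short NumPy sketch that builds the unitary DFT matrix in the convention described above (negative exponent, $1/\sqrt{N}$ normalization) and checks it against NumPy's own FFT; the function name is arbitrary.

import numpy as np

def dft_matrix(N):
    """Unitary N-point DFT matrix W with entries omega**(j*k) / sqrt(N),
    where omega = exp(-2j*pi/N); sign and normalization are conventions."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    omega = np.exp(-2j * np.pi / N)
    return omega ** (j * k) / np.sqrt(N)

N = 8
W = dft_matrix(N)
x = np.random.default_rng(0).normal(size=N)
print(np.allclose(W @ x, np.fft.fft(x) / np.sqrt(N)))   # matches NumPy's FFT
print(np.allclose(W.conj().T @ W, np.eye(N)))            # W is unitary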
Examples
Two-point
The two-point DFT is a simple case, in which the first entry is the DC (sum) and the second entry is the AC (difference).
The first row performs the sum, and the second row performs the difference.
The factor of $1/\sqrt{2}$ is to make the transform unitary (see below).
Four-point
The four-point clockwise DFT matrix is as follows:
where .
Eight-point
The first non-trivial integer power of two case is for eight points:
where
(Note that .)
Evaluating for the value of , gives:
The following image depicts the DFT as a matrix multiplication,
|
https://en.wikipedia.org/wiki/Lath%20and%20plaster
|
Lath and plaster is a building process used to finish mainly interior dividing walls and ceilings. It consists of narrow strips of wood (laths) which are nailed horizontally across the wall studs or ceiling joists and then coated in plaster. The technique derives from an earlier, more primitive process called wattle and daub.
Lath and plaster largely fell out of favour in the U.K. after the introduction of plasterboard in the 1930s. In Canada and the United States, wood lath and plaster remained in use until the process was replaced by transitional methods followed by drywall (the North American term for plasterboard) in the mid-twentieth century.
Description
The wall or ceiling finishing process begins with wood or metal laths. These are narrow strips of wood, extruded metal, or split boards, nailed horizontally across the wall studs or ceiling joists. Each wall frame is covered in lath, tacked at the studs. Wood lath is typically about wide by long by thick. Each horizontal course of lath is spaced about away from its neighboring courses. Metal lath is available in by sheets.
In Canada and the United States the laths were generally sawn, but in the United Kingdom and its colonies, riven or split hardwood laths of random lengths and sizes were often used. Early American examples featured split beam construction, as did examples put up in rural areas of the U.S. and Canada well into the second half of the 19th century. Splitting the timber along its grain greatly improved the laths' strength and durability. As Americans and Canadians expanded west, saw mills were not always available to create neatly planed boards and the first crop of buildings in any new western or northern settlement would be put up with split beam lath. In some areas of the U.K. reed mat was also used as a lath.
Temporary lath guides are then placed vertically to the wall, usually at the studs. Lime or gypsum plaster is then applied, typically using a wooden board as the application
|
https://en.wikipedia.org/wiki/Von%20Mises%20yield%20criterion
|
In continuum mechanics, the maximum distortion energy criterion (also von Mises yield criterion) states that yielding of a ductile material begins when the second invariant of deviatoric stress reaches a critical value. It is a part of plasticity theory that mostly applies to ductile materials, such as some metals. Prior to yield, material response can be assumed to be of a nonlinear elastic, viscoelastic, or linear elastic behavior.
In materials science and engineering, the von Mises yield criterion is also formulated in terms of the von Mises stress or equivalent tensile stress, . This is a scalar value of stress that can be computed from the Cauchy stress tensor. In this case, a material is said to start yielding when the von Mises stress reaches a value known as yield strength, . The von Mises stress is used to predict yielding of materials under complex loading from the results of uniaxial tensile tests. The von Mises stress satisfies the property that two stress states with equal distortion energy have an equal von Mises stress.
Because the von Mises yield criterion is independent of the first stress invariant, , it is applicable for the analysis of plastic deformation for ductile materials such as metals, as onset of yield for these materials does not depend on the hydrostatic component of the stress tensor.
Although it has been believed it was formulated by James Clerk Maxwell in 1865, Maxwell only described the general conditions in a letter to William Thomson (Lord Kelvin). Richard Edler von Mises rigorously formulated it in 1913. Tytus Maksymilian Huber (1904), in a paper written in Polish, anticipated to some extent this criterion by properly relying on the distortion strain energy, not on the total strain energy as his predecessors. Heinrich Hencky formulated the same criterion as von Mises independently in 1924. For the above reasons this criterion is also referred to as the "Maxwell–Huber–Hencky–von Mises theory".
Mathematical formulation
Mathe
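For reference, the criterion is commonly written in terms of the Cauchy stress components as follows (a standard form, given here as a sketch rather than quoted from this article): yielding begins when the equivalent stress $\sigma_v$ reaches the yield strength $\sigma_y$, with
\sigma_v = \sqrt{\tfrac{1}{2}\left[(\sigma_{11}-\sigma_{22})^2 + (\sigma_{22}-\sigma_{33})^2 + (\sigma_{33}-\sigma_{11})^2\right] + 3\left(\sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2\right)} = \sqrt{3 J_2},
where $J_2$ is the second invariant of the deviatoric stress tensor.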
|
https://en.wikipedia.org/wiki/Dolichol
|
Dolichol refers to any of a group of long-chain mostly unsaturated organic compounds that are made up of varying numbers of isoprene units terminating in an α-saturated isoprenoid group, containing an alcohol functional group.
Functions
Dolichols play a role in the co-translational modification of proteins known as N-glycosylation in the form of dolichol phosphate. Dolichols function as a membrane anchor for the formation of the oligosaccharide Glc3–Man9–GlcNAc2 (where Glc is glucose, Man is mannose, and GlcNAc is N-acetylglucosamine). This oligosaccharide is transferred from the dolichol donor onto certain asparagine residues (onto a specific sequence that is "Asn–X–Ser/Thr") of newly forming polypeptide chains. Dolichol is also involved in transfer of the monosaccharides to the forming Glc3–Man9–GlcNAc2–Dolichol carrier.
In addition, dolichols can be adducted to proteins as a posttranslational modification, a process in which branched carbohydrate trees are formed on a dolichol moiety and then transferred to an assembly of proteins to form a large glycoprotein in the rough endoplasmic reticulum.
Dolichols are the major lipid component (14% by mass) of human substantia nigra (SN) neuromelanin.
Dolichol phosphate was discovered at the University of Liverpool in the 1960s, although researchers did not know its function at the time of discovery.
Role in aging
Dolichol has been suggested to be used as a biomarker for aging. During aging, the human brain shows a progressive increase in levels of dolichol, a reduction in levels of ubiquinone, but relatively unchanged concentrations of cholesterol and dolichyl phosphate. In the neurodegenerative disease Alzheimer's disease, the situation is reversed, with decreased levels of dolichol and increased levels of ubiquinone. The concentrations of dolichyl phosphate are also increased, while cholesterol remains unchanged. The isoprenoid changes in Alzheimer's disease differ from those occurring during normal aging, and, t
|
https://en.wikipedia.org/wiki/Ghostbusters%20%281984%20video%20game%29
|
Ghostbusters is a licensed game by Activision based on the film of the same name. It was designed by David Crane and released for several home computer platforms in 1984, and later for video game console systems, including the Atari 2600, Master System and Nintendo Entertainment System. The primary target was the Commodore 64 and the programmer for the initial version of the game was Adam Bellin. All versions of the game were released in the USA except for the Amstrad CPC and ZX Spectrum versions, which were released only in Europe, and the MSX version, which was released only in Europe, South America, and Japan.
In 1984, after the film Ghostbusters had been launched, John Dolgen, VP of Business Development at Columbia Pictures, approached Gregory Fischbach (President of Activision International and subsequently CEO and Co-founder of Acclaim Entertainment) and offered to license the game rights to Activision without specific rules or requests for the design or content of the game, only stipulating that it was to be finished as quickly as possible in order to be released while the movie was at peak popularity. Activision was forced to complete the programming work in only six weeks, in contrast to their usual several months of development time for a game. Activision had at the time a rough concept for a driving/maze game to be called "Car Wars", and it was decided to build the Ghostbusters game from it. Both the movie and the game proved to be huge successes, with the game selling over two million copies by 1989.
Gameplay
The player sets up a Ghostbusters franchise in a city whose psychokinetic (PK) energy levels have begun to rise. At the start of the game, the player is given a set amount of money and must use it to buy a vehicle and equipment for detecting/catching ghosts. They must then move through a grid representing the city, with flashing red blocks indicating sites of ghost activity.
When the player moves to a flashing block, the game shifts to an overhead
|
https://en.wikipedia.org/wiki/Non-well-founded%20set%20theory
|
Non-well-founded set theories are variants of axiomatic set theory that allow sets to be elements of themselves and otherwise violate the rule of well-foundedness. In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation.
The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards, they did not find much in the way of applications until Peter Aczel’s hyperset theory in 1988.
The theory of non-well-founded sets has been applied in the logical modelling of non-terminating computational processes in computer science (process algebra and final semantics), linguistics and natural language semantics (situation theory), philosophy (work on the Liar Paradox), and in a different setting, non-standard analysis.
Details
In 1917, Dmitry Mirimanoff introduced the concept of well-foundedness of a set:
A set, x0, is well-founded if it has no infinite descending membership sequence ⋯ ∈ x2 ∈ x1 ∈ x0.
In ZFC, there is no infinite descending ∈-sequence by the axiom of regularity. In fact, the axiom of regularity is often called the foundation axiom since it can be proved within ZFC− (that is, ZFC without the axiom of regularity) that well-foundedness implies regularity. In variants of ZFC without the axiom of regularity, the possibility of non-well-founded sets with set-like ∈-chains arises. For example, a set A such that A ∈ A is non-well-founded.
Although Mirimanoff also introduced a notion of isomorphism between possibly non-well-founded sets, he considered neither an axiom of foundation nor of anti-foundation. In 1926, Paul Finsler introduced the first axiom that allowed non-well-founded sets. After Zermelo adopted Foundation into his own system in 1930 (from previous w
|
https://en.wikipedia.org/wiki/Remote%20direct%20memory%20access
|
In computing, remote direct memory access (RDMA) is a direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters.
Overview
RDMA supports zero-copy networking by enabling the network adapter to transfer data from the wire directly to application memory or from application memory directly to the wire, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work from the CPUs, make no use of the CPU caches, and involve no context switches; they continue in parallel with other system operations. This reduces latency in message transfer.
However, this strategy presents several problems related to the fact that the target node is not notified of the completion of the request (single-sided communications).
Acceptance
As of 2018, RDMA had achieved broader acceptance as a result of implementation enhancements that enable good performance over ordinary networking infrastructure. For example, RDMA over Converged Ethernet (RoCE) is now able to run over either lossy or lossless infrastructure. In addition, iWARP enables an Ethernet RDMA implementation using TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution. The RDMA Consortium and the DAT Collaborative have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium.
Hardware vendors have started working on higher-capacity RDMA-based network adapters, with rates of 100 Gbit/s reported. Software vendors, such as IBM, Red Hat and Oracle Corporation, support these APIs in their latest products, and engineers have started developing network adapters that implement RDMA over
|
https://en.wikipedia.org/wiki/BIOS%20parameter%20block
|
In computing, the BIOS parameter block, often shortened to BPB, is a data structure in the volume boot record (VBR) describing the physical layout of a data storage volume. On partitioned devices, such as hard disks, the BPB describes the volume partition, whereas, on unpartitioned devices, such as floppy disks, it describes the entire medium. A basic BPB can appear and be used on any partition, including floppy disks where its presence is often necessary; however, certain filesystems also make use of it in describing basic filesystem structures. Filesystems making use of a BIOS parameter block include FAT12 (except for in DOS 1.x), FAT16, FAT32, HPFS, and NTFS. Due to different types of fields and the amount of data they contain, the length of the BPB is different for FAT16, FAT32, and NTFS boot sectors. (A detailed discussion of the various FAT BPB versions and their entries can be found in the FAT article.) Combined with the 11-byte data structure at the very start of volume boot records immediately preceding the BPB or EBPB, this is also called FDC descriptor or extended FDC descriptor in ECMA-107 or ISO/IEC 9293 (which describes FAT as for flexible/floppy and optical disk cartridges).
FAT12 / FAT16
DOS 2.0 BPB
Format of standard DOS 2.0 BPB for FAT12 (13 bytes):
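(The field table itself is not reproduced here. As an illustrative sketch, the 13 bytes starting at offset 0x0B of the boot sector are commonly interpreted as in the following Python snippet; the field names are descriptive rather than official.)

import struct

# Sketch: unpack the 13-byte DOS 2.0 BPB found at offset 0x0B of a FAT12 boot
# sector (little-endian).
DOS20_BPB = struct.Struct("<HBHBHHBH")   # 2+1+2+1+2+2+1+2 = 13 bytes

def parse_dos20_bpb(boot_sector: bytes):
    (bytes_per_sector, sectors_per_cluster, reserved_sectors,
     num_fats, root_dir_entries, total_sectors,
     media_descriptor, sectors_per_fat) = DOS20_BPB.unpack_from(boot_sector, 0x0B)
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "reserved_sectors": reserved_sectors,
        "num_fats": num_fats,
        "root_dir_entries": root_dir_entries,
        "total_sectors": total_sectors,
        "media_descriptor": media_descriptor,
        "sectors_per_fat": sectors_per_fat,
    }

# Example usage (hypothetical image file):
# with open("floppy.img", "rb") as f:
#     print(parse_dos20_bpb(f.read(512)))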
DOS 3.0 BPB
Format of standard DOS 3.0 BPB for FAT12 and FAT16 (19 bytes), already supported by some versions of MS-DOS 2.11:
DOS 3.2 BPB
Format of standard DOS 3.2 BPB for FAT12 and FAT16 (21 bytes):
DOS 3.31 BPB
Format of standard DOS 3.31 BPB for FAT12, FAT16 and FAT16B (25 bytes):
DOS 3.4 EBPB
Format of PC DOS 3.4 and OS/2 1.0-1.1 Extended BPB for FAT12, FAT16 and FAT16B (32 bytes):
FAT12 / FAT16 / HPFS
DOS 4.0 EBPB
Format of DOS 4.0 and OS/2 1.2 Extended BPB for FAT12, FAT16, FAT16B and HPFS (51 bytes):
FAT32
DOS 7.1 EBPB
Format of short DOS 7.1 Extended BIOS Parameter Block (60 bytes) for FAT32:
Format of full DOS 7.1 Extended BIOS Parameter Block (79 bytes) for FAT32:
NT
|
https://en.wikipedia.org/wiki/Scorer%27s%20function
|
In mathematics, Scorer's functions are special functions studied by R. S. Scorer (1950) and denoted Gi(x) and Hi(x).
Hi(x) and -Gi(x) solve the inhomogeneous Airy equation
$y''(x) - x\,y(x) = \frac{1}{\pi}$
and are given by the integral representations
$\mathrm{Gi}(x) = \frac{1}{\pi}\int_0^{\infty} \sin\left(\tfrac{t^3}{3} + xt\right)\,dt, \qquad \mathrm{Hi}(x) = \frac{1}{\pi}\int_0^{\infty} \exp\left(-\tfrac{t^3}{3} + xt\right)\,dt.$
The Scorer's functions can also be defined in terms of Airy functions:
References
Special functions
|
https://en.wikipedia.org/wiki/Google
|
Google LLC () is an American multinational technology company focusing on artificial intelligence, online advertising, search engine technology, cloud computing, computer software, quantum computing, e-commerce, and consumer electronics. It has been referred to as "the most powerful company in the world" and as one of the world's most valuable brands due to its market dominance, data collection, and technological advantages in the field of artificial intelligence. Alongside Amazon, Apple Inc., Meta Platforms, and Microsoft, Google's parent company Alphabet Inc. is one of the five Big Tech companies.
Google was founded on September 4, 1998, by American computer scientists Larry Page and Sergey Brin while they were PhD students at Stanford University in California. Together they own about 14% of its publicly listed shares and control 56% of its stockholder voting power through super-voting stock. The company went public via an initial public offering (IPO) in 2004. In 2015, Google was reorganized as a wholly owned subsidiary of Alphabet Inc. Google is Alphabet's largest subsidiary and is a holding company for Alphabet's internet properties and interests. Sundar Pichai was appointed CEO of Google on October 24, 2015, replacing Larry Page, who became the CEO of Alphabet. On December 3, 2019, Pichai also became the CEO of Alphabet.
The company has since rapidly grown to offer a multitude of products and services beyond Google Search, many of which hold dominant market positions. These products address a wide range of use cases, including email (Gmail), navigation (Waze & Maps), cloud computing (Cloud), web browsing (Chrome), video sharing (YouTube), productivity (Workspace), operating systems (Android), cloud storage (Drive), language translation (Translate), photo storage (Photos), video calling (Meet), smart home (Nest), smartphones (Pixel), wearable technology (Pixel Watch & Fitbit), music streaming (YouTube Music), video on demand (YouTube TV), artificial intellige
|
https://en.wikipedia.org/wiki/Pedal%20keyboard
|
A pedalboard (also called a pedal keyboard, pedal clavier, or, with electronic instruments, a bass pedalboard) is a keyboard played with the feet that is usually used to produce the low-pitched bass line of a piece of music. A pedalboard has long, narrow lever-style keys laid out in the same semitone scalar pattern as a manual keyboard, with longer keys for C, D, E, F, G, A, and B, and shorter, raised keys for C♯, D♯, F♯, G♯ and A♯. Training in pedal technique is part of standard organ pedagogy in church music and art music.
Pedalboards are found at the base of the console of most pipe organs, pedal pianos, theatre organs, and electronic organs. Standalone pedalboards such as the 1970s-era Moog Taurus bass pedals are occasionally used in progressive rock and fusion music. In the 21st century, MIDI pedalboard controllers are used with synthesizers, electronic Hammond-style organs, and with digital pipe organs. Pedalboards are also used with pedal pianos and with some harpsichords, clavichords, and carillons (church bells).
History
13th century to 16th century
The first use of pedals on a pipe organ grew out of the need to hold bass drone notes, to support the polyphonic musical styles that predominated in the Renaissance. Indeed, the term pedal point, which refers to a prolonged bass tone under changing upper harmonies, derives from the use of the organ pedalboard to hold sustained bass notes. These earliest pedals were wooden stubs nicknamed mushrooms, which were placed at the height of the feet. These pedals, which used simple pull-downs connected directly to the manual keys, are found in organs dating to the 13th century. The pedals on French organs were composed of short stubs of wood projecting out of the floor, which were mounted in pedalboards that could be either flat or tilted. Organists were unable to play anything but simple bass lines or slow-moving plainsong melodies on these short stub-type pedals. Organist E. Power Biggs, in the liner notes for his albu
|
https://en.wikipedia.org/wiki/Bradfield%20Scheme
|
The Bradfield Scheme, a proposed Australian water diversion scheme, is an inland irrigation project that was designed to irrigate and drought-proof much of the western Queensland interior, as well as large areas of South Australia. It was devised by Dr John Bradfield (1867–1943), a Queensland born civil engineer, who also designed the Sydney Harbour Bridge and Brisbane's Story Bridge.
The scheme that Bradfield proposed in 1938 required large pipes, tunnels, pumps and dams. It involved diverting water from the upper reaches of the Tully, Herbert and Burdekin rivers. These Queensland rivers are fed by the monsoon, and flow east to the Coral Sea. It was proposed that the water would enter the Thomson River on the western side of the Great Dividing Range and eventually flow south west to Lake Eyre. An alternative plan was to divert water into the Flinders River.
G. W. Leeper of the school of agricultural science at the University of Melbourne considered the plan to be lacking in scientific justification.
In 1981, a Queensland NPA subcommittee proposed a variation of the scheme.
Possible benefits
The water was expected to provide irrigation for more than of agricultural land in Queensland. The scheme would reduce the massive natural erosion problems in areas of Central Queensland. The scheme had the ability to generate of power and the potential to double that amount.
It is claimed that extra water and vegetation in the interior may then produce changes to the climate of Australia, however various studies have concluded that this is unlikely. This may increase the rainfall in areas of southern Queensland and northern New South Wales. Extra rainfall may drought-proof Eastern Queensland, and thereby improve river inflows to the Murray-Darling River system. It is claimed that a full Lake Eyre would moderate the air temperature in the region by the absorption of sunlight by the water instead of heat radiation from dry land into the air. No evidence to support the theor
|
https://en.wikipedia.org/wiki/Stanley%27s%20reciprocity%20theorem
|
In combinatorial mathematics, Stanley's reciprocity theorem, named after MIT mathematician Richard P. Stanley, states that a certain functional equation is satisfied by the generating function of any rational cone (defined below) and the generating function of the cone's interior.
Definitions
A rational cone is the set of all d-tuples
(a1, ..., ad)
of nonnegative integers satisfying a system of inequalities
$M \begin{bmatrix} a_1 \\ \vdots \\ a_d \end{bmatrix} \ge \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix},$
where M is a matrix of integers. A d-tuple satisfying the corresponding strict inequalities, i.e., with ">" rather than "≥", is in the interior of the cone.
The generating function of such a cone is
$F(x_1, \ldots, x_d) = \sum_{(a_1, \ldots, a_d) \in \mathrm{cone}} x_1^{a_1} \cdots x_d^{a_d}.$
The generating function Fint(x1, ..., xd) of the interior of the cone is defined in the same way, but one sums over d-tuples in the interior rather than in the whole cone.
It can be shown that these are rational functions.
Formulation
Stanley's reciprocity theorem states that for a rational cone as above, we have
$F(1/x_1, \ldots, 1/x_d) = (-1)^d F_{\mathrm{int}}(x_1, \ldots, x_d).$
Matthias Beck and Mike Develin have shown how to prove this by using the calculus of residues. Develin has said that this amounts to proving the result "without doing any work".
Stanley's reciprocity theorem generalizes Ehrhart-Macdonald reciprocity for Ehrhart polynomials of rational convex polytopes.
See also
Ehrhart polynomial
References
Algebraic combinatorics
Theorems in combinatorics
|
https://en.wikipedia.org/wiki/AIB%20International
|
The American Institute of Baking, now known as AIB International, was founded in 1919 as a technology and information transfer center for bakers and food processors.
Organization
Staff includes experts in the fields of baking production, experimental baking, cereal science, nutrition, food safety, and hygiene. AIB is headquartered in Manhattan, Kansas.
References
Baking industry
Buildings and structures in Riley County, Kansas
Libraries in Kansas
Education in Riley County, Kansas
Manhattan, Kansas
Food science institutes
|
https://en.wikipedia.org/wiki/Z4%20%28computer%29
|
The Z4 was arguably the world's first commercial digital computer, and is the oldest surviving programmable computer. It was designed and manufactured by early computer scientist Konrad Zuse's company Zuse Apparatebau for an order placed by Henschel & Son in 1942; it was only partially assembled in Berlin, then completed in Göttingen, and was not delivered before the defeat of Nazi Germany in 1945. The Z4 was Zuse's final target for the Z3 design. Like the earlier Z2, it comprised a combination of mechanical memory and electromechanical logic, so was not a true electronic computer.
Construction
The Z4 was very similar to the Z3 in its design but was significantly enhanced in a number of respects. The memory consisted of 32-bit rather than 22-bit floating point words. The Program Construction Unit (Planfertigungsteil) punched the program tapes, making programming and correcting programs for the machine much easier by the use of symbolic operations and memory cells. Numbers were entered and output as decimal floating-point even though the internal working was in binary. The machine had a large repertoire of instructions including square root, MAX, MIN and sine. Conditional tests included tests for infinity. When delivered to ETH Zurich in 1950 the machine had a conditional branch facility added and could print on a Mercedes typewriter. There were two program tapes where the second could be used to hold a subroutine. (Originally six were planned.)
In 1944, Zuse was working on the Z4 with around two dozen people, including Wilfried de Beauclair. Some engineers who worked at the telecommunications facility of the OKW also worked for Zuse as a secondary occupation. Also in 1944 Zuse transformed his company to the Zuse KG (Kommanditgesellschaft, i.e. a limited partnership) and planned to manufacture 300 computers. This way he could also request additional staff and scientists as a contractor in the Emergency Fighter Program. Zuse's company also cooperated with Alwin Wal
|
https://en.wikipedia.org/wiki/NCR%20304
|
The NCR 304 computer, announced in 1957 and first delivered in 1959, was National Cash Register (NCR)'s first transistor-based computer. The 304 was developed and manufactured in cooperation with General Electric, where it was also used internally.
Its follow-on was the NCR 315.
See also
Computer architecture
Electronic hardware
Glossary of computer hardware terms
History of computing hardware
List of computer hardware manufacturers
Open-source computing hardware
Open-source hardware
Transistor
References
Transistorized computers
NCR Corporation products
Decimal computers
|
https://en.wikipedia.org/wiki/Hough%20function
|
In applied mathematics, the Hough functions are the eigenfunctions of Laplace's tidal equations which govern fluid motion on a rotating sphere. As such, they are relevant in geophysics and meteorology where they form part of the solutions for atmospheric and ocean waves. These functions are named in honour of Sydney Samuel Hough.
Each Hough mode is a function of latitude and may be expressed as an infinite sum of associated Legendre polynomials; the functions are orthogonal over the sphere in the continuous case. Thus they can also be thought of as a generalized Fourier series in which the basis functions are the normal modes of an atmosphere at rest.
See also
Secondary circulation
Legendre polynomials
Primitive equations
References
Further reading
Atmospheric dynamics
Physical oceanography
Fluid mechanics
Special functions
|
https://en.wikipedia.org/wiki/Whipple%27s%20triad
|
Whipple's triad is a collection of three signs (called Whipple's criteria) that suggests that a patient's symptoms result from hypoglycaemia that may indicate insulinoma. The essential conditions are symptoms of hypoglycaemia, low blood plasma glucose concentration, and relief of symptoms when plasma glucose concentration is increased. It was first described by the pancreatic surgeon Allen Whipple, who aimed to establish criteria for exploratory pancreatic surgery to look for insulinoma.
Definition
Whipple's triad is stated in various versions. The essential conditions are:
Symptoms known or likely to be caused by hypoglycaemia, especially after fasting or intense exercise. These symptoms include tremor, tachycardia, anxiety, dizziness, and loss of consciousness.
A low blood plasma glucose concentration measured at the time of the symptoms. This may be measured as a blood plasma glucose concentration of less than 550 milligrams per litre (55 mg/dL).
Relief of symptoms when glucose level is increased.
The use and significance of the criteria have evolved over the last century as understanding of the many forms of hypoglycaemia has increased and diagnostic tests and imaging procedures have improved. Whipple's criteria are no longer used to justify surgical exploration for an insulinoma, but to separate "true hypoglycaemia" (in which a low glucose can be demonstrated) from a variety of other conditions (e.g., idiopathic postprandial syndrome) in which symptoms suggestive of hypoglycaemia occur, but low glucose levels cannot be demonstrated. The criteria are now invoked far more often by endocrinologists than by surgeons. The radiological investigation of choice now is endoscopic and/or intraoperative ultrasonography.
Differential diagnosis
Whipple's triad is not exclusive to insulinoma, and other conditions will also be considered. The same signs may also result from hyperinsulinism that is not caused by an insulinoma.
History
The criteria date back to the 1930s, when a few patients wi
|
https://en.wikipedia.org/wiki/Commodore%20DOS
|
Commodore DOS, also known as CBM DOS, is the disk operating system used with Commodore's 8-bit computers. Unlike most other DOSes, which are loaded from disk into the computer's own RAM and executed there, CBM DOS is executed internally in the drive: the DOS resides in ROM chips inside the drive, and is run there by one or more dedicated MOS 6502 family CPUs. Thus, data transfer between Commodore 8-bit computers and their disk drives more closely resembles a local area network connection than typical disk/host transfers.
CBM DOS versions
At least seven distinctly numbered versions of Commodore DOS are known to exist; the following list gives the version numbers and related disk drives. Unless otherwise noted, drives are 5¼-inch format. The "lp" code designates "low-profile" drives. Drives whose model numbers start with 15 connect via Commodore's unique serial adaptation of the IEEE-488 bus (the IEC bus) and its serial TALK/LISTEN protocols; all others use the parallel IEEE-488.
1.0 – found in the 2040 and 3040 floppy drives
2.0 – found in the 4040 and 3040 floppy drives
2.5 – found in the 8050 floppy drives
2.6 – found in the 1540, 1541 including the one built into the SX-64, 1551, 2031 (+"lp"), and 4031 floppy drives
2.7 – found in the 8050, 8250 (+"lp"), and SFD-1001 floppy drives
3.0 – found in the 1570, external 1571, and 8280 floppy drives (8280: 8-inch), as well as the 9060 and 9090 hard drives
3.1 – found in the built-in 1571 drive of C128DCR computers
10.0 – found in the 1581 (3½-inch) floppy drive
Version 2.6 was by far the most commonly used and widely known DOS version, due to its use in the 1541 as part of C64 systems.
Note: The revised firmware for the 1571 which fixed the relative file bug was also identified as V3.0. Thus it is not possible to differentiate the two versions using the version number alone.
Technical overview
1541 directory and file types
The 1541 Commodore floppy disk can contain up to 144 files in a flat namespace (no subdirectories
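An illustrative sketch of reading such a directory from a standard 35-track .d64 disk image is shown below; the track/sector layout and the 32-byte entry offsets used here are the commonly published 1541 values, and the image file name is hypothetical.

# Sketch: walk the flat directory of a standard 35-track .d64 image.
# The directory chain starts at track 18, sector 1; each 256-byte sector
# holds eight 32-byte entries; a file name is 16 PETSCII bytes padded with 0xA0.
SECTORS_PER_TRACK = [21]*17 + [19]*7 + [18]*6 + [17]*5   # tracks 1..35

def sector_offset(track, sector):
    return (sum(SECTORS_PER_TRACK[:track - 1]) + sector) * 256

def list_d64(path):
    with open(path, "rb") as f:
        image = f.read()
    track, sector = 18, 1                     # first directory sector
    files = []
    while track != 0:
        block = image[sector_offset(track, sector):][:256]
        for i in range(0, 256, 32):
            entry = block[i:i + 32]
            if entry[2] == 0:                 # unused (scratched) slot
                continue
            file_type = entry[2] & 0x07       # 0=DEL 1=SEQ 2=PRG 3=USR 4=REL
            name = entry[5:21].rstrip(b"\xa0").decode("ascii", "replace")
            blocks = entry[30] + 256 * entry[31]
            files.append((name, file_type, blocks))
        track, sector = block[0], block[1]    # link to next directory sector (track 0 = end)
    return files

if __name__ == "__main__":
    for name, ftype, blocks in list_d64("game.d64"):   # hypothetical image
        print(f"{blocks:4d}  {name!r}  type={ftype}")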
|
https://en.wikipedia.org/wiki/Smart%20bullet
|
A smart bullet is a bullet that is able to do something other than simply follow its given trajectory, such as turning, changing speed or sending data. Such a projectile may be fired from a precision-guided firearm capable of programming its behavior. It is a miniaturized type of precision-guided munition.
Types of smart bullets
In 2008 the EXACTO program began under DARPA to develop a "fire and forget" smart sniper rifle system including a guided smart bullet and improved scope. The exact technologies of this smart bullet have yet to be released. EXACTO was test fired in 2014 and 2015 and results showing the bullet alter course to correct its path to its target were released.
In 2012 Sandia National Laboratories announced a self-guided bullet prototype that could track a target illuminated with a laser designator. The bullet is capable of updating its position 30 times a second and hitting targets over a mile away.
In mid-2016, Russia revealed it was developing a similar "smart bullet" weapon designed to hit targets at a distance of up to .
Guided bullet
The guided bullet was conceptualized by Dr. Rolin F. Barrett, Jr. and patented in August 1998.
As first designed, the bullet would have three fiber-optic based eyes (at minimum, for three-dimensionality), evenly distributed about its circumference. To activate its guided nature, a laser is pointed at a target. As the bullet approaches its final destination, it adjusts its flight path in real time to allow an equivalent amount of light from the laser to enter each eye. The bullet would not travel in multiple directions as though it were an autonomous vehicle, but instead, would make small adjustments to its flight path to hit the target precisely where the laser was placed. Moreover, the laser would not have to originate from the source of the bullet, allowing the projectile to be fired at a target beyond visual range.
To allow the bullet to modify its flight path, the body was designed as a metal and polym
|
https://en.wikipedia.org/wiki/Side%20stitch
|
A side stitch (or "stitch in one's side") is an intense stabbing abdominal pain under the lower edge of the ribcage that occurs during exercise. It is also called a side ache, side cramp, muscle stitch, or simply stitch, and the medical term is exercise-related transient abdominal pain (ETAP). It sometimes extends to shoulder tip pain, and commonly occurs during running, swimming, and horseback riding. Approximately two-thirds of runners will experience at least one episode of a stitch each year. The precise cause is unclear, although it most likely involves irritation of the abdominal lining, and the condition is more likely after consuming a meal or a sugary beverage. If the pain is present only when exercising and is completely absent at rest, in an otherwise healthy person, it does not require investigation. Typical treatment strategies involve deep breathing and/or manual pressure on the affected area.
Causes
The precise cause of ETAP is unclear. Proposed mechanisms include diaphragmatic ischemia (insufficient oxygen); stress on the supportive visceral ligaments that attach the abdominal organs to the diaphragm; gastrointestinal ischemia or distension; cramping of the abdominal musculature; ischemic pain resulting from compression of the celiac artery by the median arcuate ligament under the diaphragm; aggravation of the spinal nerves; or, most likely, irritation of the parietal peritoneum (abdominal lining).
Although the diaphragm is mostly innervated by the phrenic nerve, and thus could explain referred pain to the shoulder tip region, the main evidence against diaphragmatic ischemia is that ETAP can be induced by activities of low respiratory demand, such as horse, camel, and motorbike riding, where ischemia of the diaphragm is unlikely. In a study using a fluoroscopic technique, diaphragmatic movements during an ETAP episode have been shown to be full and unrestricted. In another study, researchers analyzed flow-volume loops from subjects who were experi
|
https://en.wikipedia.org/wiki/Mill%20%28grinding%29
|
A mill is a device, often a structure, machine or kitchen appliance, that breaks solid materials into smaller pieces by grinding, crushing, or cutting. Such comminution is an important unit operation in many processes. There are many different types of mills and many types of materials processed in them. Historically, mills were powered by hand (e.g., via a hand crank), working animal (e.g., horse mill), wind (windmill) or water (watermill). In the modern era, they are usually powered by electricity.
The grinding of solid materials occurs through mechanical forces that break up the structure by overcoming the interior bonding forces. After grinding, the state of the solid is changed: the grain size, the grain size distribution and the grain shape.
Milling also refers to the process of breaking down, separating, sizing, or classifying aggregate material (e.g. mining ore). For instance rock crushing or grinding to produce uniform aggregate size for construction purposes, or separation of rock, soil or aggregate material for the purposes of structural fill or land reclamation activities. Aggregate milling processes are also used to remove or separate contamination or moisture from aggregate or soil and to produce "dry fills" prior to transport or structural filling.
Grinding may serve the following purposes in engineering:
increase of the surface area of a solid
manufacturing of a solid with a desired grain size
pulping of resources
Grinding laws
In spite of a great number of studies in the field of fracture schemes, there is no known formula that connects the technical grinding work with the grinding results. The mining engineers Peter von Rittinger, Friedrich Kick and Fred Chester Bond independently produced equations relating the needed grinding work to the grain size produced, and a fourth engineer, R. T. Hukki, suggested that these three equations might each describe a narrow range of grain sizes and proposed uniting them along a single curve describing wha
|
https://en.wikipedia.org/wiki/Sega%20Net%20Link
|
Sega Net Link (also called Sega Saturn Net Link) is an attachment for the Sega Saturn game console to provide Saturn users with internet access and access to email through their console. The unit was released in October 1996. The Sega Net Link fit into the Sega Saturn cartridge port and consisted of a 28.8 kbit/s modem, a custom chip to allow it to interface with the Saturn, and a browser developed by Planetweb, Inc. The unit sold for US$199, or US$400 bundled with a Sega Saturn. In 1997 Sega began selling the NetLink Bundle, which included the standard NetLink plus the compatible games Sega Rally Championship and Virtual On: Cyber Troopers NetLink Edition, for $99.
The Net Link connected to the internet through standard dial-up services. Unlike other online gaming services in the US, one does not connect to a central service, but instead tells the dial-up modem connected to the Saturn's cartridge slot to call the person with whom one wishes to play. Since it requires no servers to operate, the service can operate as long as at least two users have the necessary hardware and software, as well as a phone line.
In Japan, however, gamers did connect through a centralized service known as SegaNet, which would later be taken offline and converted for Dreamcast usage.
History
According to Yutaka Yamamoto, Sega of America's director of new technology, the Saturn's design allowed it to access the internet purely through software: "Sega engineers always felt the Saturn would be good for multimedia applications as well as game playing. So they developed a kernel in the operating system to support communications tasks."
While the Net Link was not the first accessory which allowed console gamers in North America to play video games online (see online console gaming), it was the first to allow players to use their own Internet Service Provider (ISP) to connect. While Sega recommended that players use Concentric, the Sega Net Link enabled players to choose any ISP that wa
|
https://en.wikipedia.org/wiki/Earthworks%20%28engineering%29
|
Earthworks are engineering works created through the processing of parts of the earth's surface involving quantities of soil or unformed rock.
Shoring structures
An incomplete list of possible temporary or permanent geotechnical shoring structures that may be designed and utilised as part of earthworks:
Mechanically stabilized earth
Earth anchor
Cliff stabilization
Grout curtain
Retaining wall
Slurry wall
Soil nailing
Tieback (geotechnical)
Trench shoring
Caisson
Dam
Gabion
Ground freezing
Excavation
Excavation may be classified by type of material:
Topsoil excavation
Earth excavation
Rock excavation
Muck excavation – this usually contains excess water and unsuitable soil
Unclassified excavation – this is any combination of material types
Excavation may be classified by the purpose:
Stripping
Roadway excavation
Drainage or structure excavation
Bridge excavation
Channel excavation
Footing excavation
Borrow excavation
Dredge excavation
Underground excavation
Civil engineering use
Typical earthworks include road construction, railway beds, causeways, dams, levees, canals, and berms. Other common earthworks are land grading to reconfigure the topography of a site, or to stabilize slopes.
Military use
In military engineering, earthworks are, more specifically, types of fortifications constructed from soil. Although soil is not very strong, it is cheap enough that huge quantities can be used, generating formidable structures. Examples of older earthwork fortifications include moats, sod walls, motte-and-bailey castles, and hill forts. Modern examples include trenches and berms.
Equipment
Heavy construction equipment is usually used due to the amounts of material to be moved — up to millions of cubic metres. Earthwork construction was revolutionized by the development of the (Fresno) scraper and other earth-moving machines such as the loader, the dump truck, the grader, the bulldozer, the backhoe, and the dragline excavator.
Mass haul plan
|
https://en.wikipedia.org/wiki/Zoospore
|
A zoospore is a motile asexual spore that uses a flagellum for locomotion. Also called a swarm spore, these spores are created by some protists, bacteria, and fungi to propagate themselves.
Diversity
Flagella types
Zoospores may possess one or more distinct types of flagella - tinsel or "decorated", and whiplash, in various combinations.
Tinsellated (straminipilous) flagella have lateral filaments known as mastigonemes perpendicular to their main axis, which provide more surface area and disturb the medium, giving them the property of a rudder used for steering.
Whiplash flagella are straight, to power the zoospore through its medium. Also, the "default" zoospore only has the propelling, whiplash flagella.
Both tinsel and whiplash flagella beat in a sinusoidal wave pattern, but when both are present, the tinsel beats in the opposite direction of the whiplash, to give two axes of control of motility.
Morphological types
In eukaryotes, the four main types of zoospore are illustrated in Fig. 1 at right:
Posterior whiplash flagella are a characteristic of the Chytridiomycota, and a proposed uniting trait of the opisthokonts, a large clade of eukaryotes containing animals and fungi. Most of these have a single posterior flagellum (Fig. 1a), but the Neocallimastigales have up to 16 (Fig. 1b).
Anisokonts are biflagellated zoospores with two whiplash flagella of unequal length (Fig. 1c). These are found in some of the Myxomycota and Plasmodiophoromycota.
Zoospores with a single anterior flagellum (Fig. 1d) of the tinsel type are characteristic of Hyphochytriomycetes.
Heterokont zoospores are biflagellated (Fig. 1e, f), with both whiplash (smooth) and tinsel-type (bearing fine outgrowths called mastigonemes) flagella attached anteriorly or laterally. These zoospores are characteristic of the Oomycota and other heterokonts.
Zoosporangium
A zoosporangium is the asexual structure (sporangium) in which the zoospores develop in plants, fungi, or protists
|
https://en.wikipedia.org/wiki/Categorical%20theory
|
In mathematical logic, a theory is categorical if it has exactly one model (up to isomorphism). Such a theory can be viewed as defining its model, uniquely characterizing the model's structure.
In first-order logic, only theories with a finite model can be categorical. Higher-order logic contains categorical theories with an infinite model. For example, the second-order Peano axioms are categorical, having a unique model whose domain is the set of natural numbers.
In model theory, the notion of a categorical theory is refined with respect to cardinality. A theory is κ-categorical (or categorical in κ) if it has exactly one model of cardinality κ up to isomorphism. Morley's categoricity theorem is a theorem of Michael D. Morley stating that if a first-order theory in a countable language is categorical in some uncountable cardinality, then it is categorical in all uncountable cardinalities.
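Two classical illustrations of this notion (standard examples, not taken from this article):

\[
\begin{aligned}
&\text{The theory of dense linear orders without endpoints is } \aleph_0\text{-categorical but not } \kappa\text{-categorical for any uncountable } \kappa;\\
&\mathrm{ACF}_0\text{, the theory of algebraically closed fields of characteristic } 0\text{, is } \kappa\text{-categorical for every uncountable } \kappa \text{ but not } \aleph_0\text{-categorical.}
\end{aligned}
\]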
Saharon Shelah extended Morley's theorem to uncountable languages: if the language has cardinality κ and a theory is categorical in some uncountable cardinal greater than or equal to κ, then it is categorical in all cardinalities greater than κ.
History and motivation
Oswald Veblen in 1904 defined a theory to be categorical if all of its models are isomorphic. It follows from the definition above and the Löwenheim–Skolem theorem that any first-order theory with a model of infinite cardinality cannot be categorical. One is then immediately led to the more subtle notion of κ-categoricity, which asks: for which cardinals κ is there exactly one model of cardinality κ of the given theory T up to isomorphism? This is a deep question and significant progress was only made in 1954 when Jerzy Łoś noticed that, at least for complete theories T over countable languages with at least one infinite model, he could only find three ways for T to be κ-categorical at some κ:
T is totally categorical, i.e. T is κ-categorical for all infinite cardinals κ.
T is uncountably categorical, i.e. T is κ-categorical if and only if κ is an uncou
|
https://en.wikipedia.org/wiki/BSD/OS
|
BSD/OS (originally called BSD/386 and sometimes known as BSDi) is a discontinued proprietary version of the BSD operating system developed by Berkeley Software Design, Inc. (BSDi).
BSD/OS had a reputation for reliability in server roles; the renowned Unix programmer and author W. Richard Stevens used it for his own personal web server.
History
BSDi was formed in 1991 by members of the Computer Systems Research Group (CSRG) at UC Berkeley to develop and sell a proprietary version of BSD Unix for PC compatible systems with Intel 386 (or later) processors. This made use of work previously done by Bill Jolitz to port BSD to the PC platform.
BSD/386 1.0 was released in March 1993. The company sold licenses and support for it, taking advantage of terms in the BSD License which permit use of the BSD software in proprietary systems, as long as the author is credited. The company in turn contributed code and resources to the development of non-proprietary BSD operating systems. In the meantime, Jolitz had left BSDi and independently released an open source BSD for PCs, called 386BSD. The BSDi system features complete and thorough manpage documentation for the entire system, including complete syntax and argument explanations, examples, file usage, authors, and cross-references to other commands.
BSD/386 licenses (including source code) were priced at $995, lower than AT&T UNIX System V source licenses, a fact highlighted in their advertisements. As part of the settlement of USL v. BSDi, BSDI substituted code that had been written for the University's 4.4 BSD-Lite release for disputed code in their OS, effective with release 2.0. By the time of this release, the "386" designation had become dated, and BSD/386 was renamed "BSD/OS". Later releases of BSD/OS also support Sun SPARC-based systems. BSD/OS 5.x versions are available for PowerPC too.
The marketing of BSD/OS became increasingly focused on Internet server applications. However, the increasingly tight market for Unix
|
https://en.wikipedia.org/wiki/WTBS-LD
|
WTBS-LD, virtual and VHF digital channel 6, is a low-power television station licensed to Atlanta, Georgia, United States. The station has been owned by Prism Broadcasting since 1991. The station's transmitter and antenna are located in downtown Atlanta atop the American Tower Site at 315 Chester Avenue.
The original WTBS-LD, virtual channel 26 (UHF digital channel 30), was a low-powered Estrella TV-affiliated television station also licensed to Atlanta. The station, which broadcast six subchannels, was a digital satellite of the current WTBS-LD, then known as WTBS-LP.
The digital transmitter, which signed on in early January 2011, was, with its sister station WANN-CD, located just northeast of the city on one of the two large towers on Briarcliff Road, at the same site along with Independent station WPCH-TV (channel 17), Univision affiliate WUVG-DT (channel 34), CBS affiliate WANF (channel 46), TBN affiliate WHSG-TV (channel 63), and several other stations.
The original WTBS-LD's license was cancelled by the Federal Communications Commission on March 17, 2021.
History
The station signed on as W56CD in Rome, Georgia; then W26BT; WANX-LP in January 2000; and WTBS-LP on October 15, 2007. The WANX call letters were formerly used by CBS affiliate WANF.
In 2014, the analog WTBS-LP reappeared under special temporary authority on TV channel 6, which can also be received on 87.75 MHz of the FM dial. All analog television channels had been scheduled to cease broadcasting in September 2015; this was suspended by the FCC in April of that year. In 2017, it was announced that July 13, 2021 would be the new analog low-power television transmission shutoff date. Analog channel 6 later broadcast an urban AC format branded as "Mix 87.7", which is also heard via stereo audio on 87.75 FM. Steve Hegwood, the operator of Mix 87.7 announced it would cease using the WTBS frequency on January 31, 2019 due to financial shortfalls and an overcompetitive market for radio
|
https://en.wikipedia.org/wiki/Conserved%20current
|
In physics a conserved current is a current, j^μ, that satisfies the continuity equation ∂_μ j^μ = 0. The continuity equation represents a conservation law, hence the name.
Indeed, integrating the continuity equation over a volume V, large enough to have no net currents through its surface, leads to the conservation law dQ/dt = 0,
where Q = ∫_V j^0 dV is the conserved quantity.
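Spelling out that step (a standard divergence-theorem argument, not quoted from the article):

\[
0 = \int_V \left( \frac{\partial j^0}{\partial t} + \nabla \cdot \vec{j} \right) dV
  = \frac{d}{dt} \int_V j^0 \, dV + \oint_{\partial V} \vec{j} \cdot d\vec{A}
  = \frac{dQ}{dt},
\]

since, by assumption, no net current flows through the boundary surface.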
In gauge theories the gauge fields couple to conserved currents. For example, the electromagnetic field couples to the conserved electric current.
Conserved quantities and symmetries
Conserved current is the flow of the canonical conjugate of a quantity possessing a continuous symmetry (for instance, a translational symmetry). The continuity equation for the conserved current is a statement of a conservation law.
Examples of canonical conjugate quantities are:
Time and energy - the continuous translational symmetry of time implies the conservation of energy
Space and momentum - the continuous translational symmetry of space implies the conservation of momentum
Space and angular momentum - the continuous rotational symmetry of space implies the conservation of angular momentum
Wave function phase and electric charge - the continuous phase angle symmetry of the wave function implies the conservation of electric charge
Conserved currents play an extremely important role in theoretical physics, because Noether's theorem connects the existence of a conserved current to the existence of a symmetry of some quantity in the system under study. In practical terms, all conserved currents are the Noether currents, as the existence of a conserved current implies the existence of a symmetry. Conserved currents play an important role in the theory of partial differential equations, as the existence of a conserved current points to the existence of constants of motion, which are required to define a foliation and thus an integrable system. The conservation law is expressed as the vanishing of a 4-divergence, where the Noether charge forms the zeroth component of th
|
https://en.wikipedia.org/wiki/Relict
|
A relict is a surviving remnant of a natural phenomenon.
Biology
A relict (or relic) is an organism that at an earlier time was abundant in a large area but now occurs at only one or a few small areas.
Geology and geomorphology
In geology, a relict is a structure or mineral from a parent rock that did not undergo metamorphosis when the surrounding rock did, or a rock that survived a destructive geologic process.
In geomorphology, a relict landform is a landform formed by either erosive or constructive surficial processes that are no longer active as they were in the past.
A glacial relict is a cold-adapted organism that is a remnant of a larger distribution that existed in the ice ages.
Human populations
As revealed by DNA testing, a relict population is an ancient people in an area, who have been largely supplanted by a later group of migrants and their descendants.
In various places around the world, minority ethnic groups represent lineages of ancient human migrations in places now occupied by more populous ethnic groups, whose ancestors arrived later. For example, the first human groups to inhabit the Caribbean islands were hunter-gatherer tribes from South and Central America. Genetic testing of natives of Cuba show that, in late pre-Columbian times, the island was home to agriculturalists of Taino ethnicity. In addition, a relict population of the original hunter-gatherers remained in western Cuba as the Ciboney people.
Other uses
In ecology, an ecosystem which originally ranged over a large expanse, but is now narrowly confined, may be termed a relict.
In agronomy, a relict crop is a crop which was previously grown extensively, but is now only used in one limited region, or a small number of isolated regions.
In real estate law, reliction is the gradual recession of water from its usual high-water mark so that the newly uncovered land becomes the property of the adjoining riparian property owner.
"Relict" was an ancient term still used in coloni
|
https://en.wikipedia.org/wiki/Cyprus%20Mathematical%20Society
|
The Cyprus Mathematical Society (CMS) aims to promote mathematical education and science. It was founded in 1983. The CMS is a non-profit organization supported by the voluntary work of its members and counts over 600 members. In order to promote its aims, the CMS organizes mathematical competitions for students all over Cyprus and takes part in international mathematics competitions (BMO, Junior BMO, PMWC, IMO). The CMS organizes a series of mathematics competitions as part of the selection process for the national teams in the international mathematics competitions and the Cyprus Mathematical Olympiad. The new selection process is described below.
High School (Lyceum) competitions
As of 2005, four provincial competitions are held in Cyprus in November, one in every district capital. The competition in Lefkosia is called Iakovos Patatsos, in Lemesos Andreas Vlamis, in Larnaka and Ammochostos Petrakis Kyprianou, and in Pafos Andreas Hadjitheoris. Every grade has different problems. Afterwards, ten high-school (Lyceum) students from each grade (1st, 2nd and 3rd) of every district are selected.
Total: 4 districts * 3 grades * 10 students = 120 students.
Then a National (Pancyprian) competition is held in December and is called "Zeno". Every grade still has different problems. Afterwards ten students from every grade are selected.
Total: 3 grades * 10 students = 30 students.
These students are usually divided into two groups according to the district they come from.
Each group attends about eight to ten four-hour preparation lessons for the olympiad. During the lessons, four Team Selection Tests, called Euclides, are held; these are considered the four parts of the Selection Competition above 15.5. All the students take the same test. In each of the competitions five students are eliminated, so after the fourth competition the six members of the national team for the IMO and BMO and the four runners-up are selected.
In every compet
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20societies
|
This article provides a list of mathematical societies.
International
African Mathematical Union
Association for Women in Mathematics
Circolo Matematico di Palermo
European Mathematical Society
European Women in Mathematics
Foundations of Computational Mathematics
International Association for Cryptologic Research
International Association of Mathematical Physics
International Linear Algebra Society
International Mathematical Union
International Statistical Institute
International Society for Analysis, its Applications and Computation
International Society for Mathematical Sciences
Kurt Gödel Society
Mathematical Council of the Americas (MCofA)
Mathematical Society of South Eastern Europe (MASSEE)
Mathematical Optimization Society
Maths Society
Ramanujan Mathematical Society
Quaternion Society
Society for Industrial and Applied Mathematics
Southeast Asian Mathematical Society (SEAMS)
Spectra (mathematical association)
Unión Matemática de América Latina y el Caribe (UMALCA)
Young Mathematicians Network
Honor societies
Kappa Mu Epsilon
Mu Alpha Theta
Pi Mu Epsilon
National and subnational
Arranged as follows: Society name in English (Society name in home-language; Abbreviation if used), Country and/or subregion/city if not specified in name.
This list is sorted by continent.
Africa
Algeria Mathematical Society
Gabon Mathematical Society
South African Mathematical Society
Asia
Bangladesh Mathematical Society
Calcutta Mathematical Society (CalMathSoc), Kolkata, India
Chinese Mathematical Society
Indian Mathematical Society
Iranian Mathematical Society
Israel Mathematical Union
Jadavpur University Mathematical Society (JMS), Jadavpur, India
Kerala Mathematical Association, Kerala State, India
Korean Mathematical Society, South Korea
Mathematical Society of Japan
Mathematical Society of the Philippines
Pakistan Mathematical Society
Turkish Mathematical Society
Europe
Albanian Mathematical Association
Armenian Mathematic
|
https://en.wikipedia.org/wiki/Simeticone
|
Simeticone (INN), also known as simethicone (USAN), is an anti-foaming agent used to reduce bloating, discomfort or pain caused by excessive gas.
Medical uses
Simeticone is used to relieve the symptoms of excessive gas in the gastrointestinal tract, namely bloating, burping, and flatulence. While there is a lack of conclusive evidence that simeticone is effective for this use, studies have shown that it can relieve symptoms of functional dyspepsia and functional bloating.
It has not been fully established that simeticone is useful to treat colic in babies, and it is not recommended for this purpose. A study in the United Kingdom reported that according to parental perception simeticone helped infant colic in some cases.
Simeticone can also be used for suspected postoperative abdominal discomfort in infants.
Side effects
Simeticone does not have any serious side effects. Two uncommon side effects (occurring in 1 in 100 to 1 in 1,000 patients) are constipation and nausea.
Pharmacology
Simeticone is a non-systemic surfactant which decreases the surface tension of gas bubbles in the GI tract. This allows gas bubbles to leave the GI tract as flatulence or belching. Simeticone does not reduce or prevent the formation of gas. Its effectiveness has been shown in several in vitro studies.
Chemistry
Simethicone is a mixture of dimethicone and silicon dioxide.
Names
The INN name is "simeticone", which was added to the INN recommended list in 1999.
Simeticone is marketed under many brand names and in many combination drugs; it is also marketed as a veterinary drug.
Brand names include A.F., Acid Off, Aero Red, Aero-OM, Aero-Sim, Aerocol, Aerox, Aesim, Aflat, Air-X, Anaflat, Antiflat, Baby Rest, Bicarsim, Bicarsim Forte, Blow-X, Bobotic, Bobotik, Carbogasol, Colic E, Colin, Cuplaton, Degas, Dentinox, Dermatix, Digesta, Dimetikon Meda, Disflatyl, Disolgas, Elugan N, Elzym, Endo-Paractol, Enterosilicona, Espaven Antigas, Espumisan, Espumisan L, Flacol, Flapex, Flatidyl,
|
https://en.wikipedia.org/wiki/Transmission%20security
|
Transmission security (TRANSEC) is the component of communications security (COMSEC) that results from the application of measures designed to protect transmissions from interception and exploitation by means other than cryptanalysis. Goals of transmission security include:
Low probability of interception (LPI)
Low probability of detection (LPD)
Antijam — resistance to jamming (EPM or ECCM)
Methods used to achieve transmission security include frequency hopping and spread spectrum where the required pseudorandom sequence generation is controlled by a cryptographic algorithm and key. Such keys are known as transmission security keys (TSK). Modern U.S. and NATO TRANSEC-equipped radios include SINCGARS and HAVE QUICK.
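As a rough illustration of a key-driven hop sequence (this is not any fielded TRANSEC scheme; HMAC-SHA256 merely stands in for the cryptographic sequence generator, and the channel count is an arbitrary placeholder):

import hmac, hashlib

def hop_channel(tsk: bytes, slot: int, num_channels: int = 100) -> int:
    # A keyed pseudorandom function maps each time slot to a channel number;
    # both radios holding the same TSK derive the same hopping sequence.
    digest = hmac.new(tsk, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % num_channels

tsk = b"example transmission security key"
print([hop_channel(tsk, s) for s in range(8)])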
See also
Electronic warfare
Cryptography
Military communications
|
https://en.wikipedia.org/wiki/Morphology%20%28biology%29
|
Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features.
This includes aspects of the outward appearance (shape, structure, colour, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of the internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of gross structure of an organism or taxon and its component parts.
History
The etymology of the word "morphology" is from the Ancient Greek (), meaning "form", and (), meaning "word, study, research".
While the concept of form in biology, opposed to function, dates back to Aristotle (see Aristotle's biology), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800).
Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Karl Gegenbaur and Ernst Haeckel.
In 1830, Cuvier and E.G.Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution.
Divisions of morphology
Comparative morphology is analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.
Functional morphology is the study of the relationship between the structure and function of morphological features.
Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.
Anatomy is a "branch of morphology that deals with the structure of organisms".
Molecular morphology is a rarely used term, usually r
|
https://en.wikipedia.org/wiki/B%C3%B4cher%20Memorial%20Prize
|
The Bôcher Memorial Prize was founded by the American Mathematical Society in 1923 in memory of Maxime Bôcher with an initial endowment of $1,450 (contributed by members of that society). It is awarded every three years (formerly every five years) for a notable research work in analysis that has appeared during the past six years. The work must be published in a recognized, peer-reviewed venue. The current award is $5,000.
There have been thirty-seven prize recipients. The first woman to win the award, Laure Saint-Raymond, did so in 2020. About eighty percent of the journal articles recognized since 2000 have been from Annals of Mathematics, the Journal of the American Mathematical Society, Inventiones Mathematicae, and Acta Mathematica.
Past winners
1923 George David Birkhoff for
Dynamical systems with two degrees of freedom. Trans. Amer. Math. Soc. 18 (1917), 119-300.
1924 Eric Temple Bell for
Arithmetical paraphrases. I,II. Trans. Amer. Math. Soc. 22 (1921), 1-30, 198-219.
1924 Solomon Lefschetz for
On certain numerical invariants with applications to Abelian varieties. Trans. Amer. Math. Soc. 22 (1921), 407-482.
1928 James W. Alexander II for
Combinatorial analysis situs. Trans. Amer. Math. Soc. 28 (1926), 301-329.
1933 Marston Morse for
The foundations of a theory of the calculus of variations in the large in m-space. Trans. Amer. Math. Soc. 31 (1929), 379-404.
1933 Norbert Wiener for
Tauberian theorems. Ann. Math. 33 (1932), 1-100.
1938 John von Neumann for
Almost periodic functions. I. Trans. Amer. Math. Soc. 36 (1934), 445-492
Almost periodic functions. II. Trans. Amer. Math. Soc. 37 (1935), 21-50
1943 Jesse Douglas for
Green's function and the problem of Plateau. Amer. J. Math. 61 (1939), 545-589
The most general form of the problem of Plateau. Amer. J. Math. 61 (1939), 590-608
Solution of the inverse problem of the calculus of variations. Proc. Natl. Acad. Sci. U.S.A. 25 (1939), 631-637.
1948 Albert Schaeffer and Donald Spencer for
Coefficient
|
https://en.wikipedia.org/wiki/Branching%20fraction
|
In particle physics and nuclear physics, the branching fraction (or branching ratio) for a decay is the fraction of particles which decay by an individual decay mode with respect to the total number of particles which decay. It applies to either the radioactive decay of atoms or the decay of elementary particles. It is equal to the ratio of the partial decay constant to the overall decay constant. Sometimes a partial half-life is given, but this term is misleading; due to competing modes, it is not true that half of the particles will decay through a particular decay mode after its partial half-life. The partial half-life is merely an alternate way to specify the partial decay constant λ, the two being related through: t1/2 = ln(2) / λ.
For example, for spontaneous decays of 132Cs, 98.1% are ε (electron capture) or β+ (positron) decays, and 1.9% are β− (electron) decays. The partial decay constants can be calculated from the branching fraction and the half-life of 132Cs (6.479 d); they are 0.10 d−1 (ε + β+) and 0.0020 d−1 (β−). The partial half-lives are 6.60 d (ε + β+) and 341 d (β−). Here the problem with the term partial half-life is evident: after (341+6.60) days almost all the nuclei will have decayed, not only half as one may initially think.
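The quoted 132Cs numbers can be reproduced directly from the definitions above (a small arithmetic check, illustrative only):

import math

half_life = 6.479                      # days, total half-life of 132Cs
lam_total = math.log(2) / half_life    # total decay constant, about 0.107 per day
for mode, fraction in {"EC + beta+": 0.981, "beta-": 0.019}.items():
    lam = fraction * lam_total         # partial decay constant for this mode
    print(mode, round(lam, 4), "per day; partial half-life",
          round(math.log(2) / lam, 1), "days")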
Isotopes with significant branching of decay modes include copper-64, arsenic-74, rhodium-102, indium-112, iodine-126 and holmium-164.
External links
LBNL Isotopes Project
Particle Data Group (listings for particle physics)
Nuclear Structure and Decay Data - IAEA for nuclear decays
|
https://en.wikipedia.org/wiki/List%20of%20analyses%20of%20categorical%20data
|
This is a list of statistical procedures which can be used for the analysis of categorical data, also known as data on the nominal scale and as categorical variables.
General tests
Bowker's test of symmetry
Categorical distribution, general model
Chi-squared test
Cochran–Armitage test for trend
Cochran–Mantel–Haenszel statistics
Correspondence analysis
Cronbach's alpha
Diagnostic odds ratio
G-test
Generalized estimating equations
Generalized linear models
Krichevsky–Trofimov estimator
Kuder–Richardson Formula 20
Linear discriminant analysis
Multinomial distribution
Multinomial logit
Multinomial probit
Multiple correspondence analysis
Odds ratio
Poisson regression
Powered partial least squares discriminant analysis
Qualitative variation
Randomization test for goodness of fit
Relative risk
Stratified analysis
Tetrachoric correlation
Uncertainty coefficient
Wald test
Binomial data
Bernstein inequalities (probability theory)
Binomial regression
Binomial proportion confidence interval
Chebyshev's inequality
Chernoff bound
Gauss's inequality
Markov's inequality
Rule of succession
Rule of three (medicine)
Vysochanskiï–Petunin inequality
2 × 2 tables
Chi-squared test
Diagnostic odds ratio
Fisher's exact test
G-test
Odds ratio
Relative risk
McNemar's test
Yates's correction for continuity
Measures of association
Aickin's α
Andres and Marzo's delta
Bangdiwala's B
Bennett, Alpert, and Goldstein’s S
Brennan and Prediger’s κ
Coefficient of colligation - Yule's Y
Coefficient of consistency
Coefficient of raw agreement
Conger's Kappa
Contingency coefficient – Pearson's C
Cramér's V
Dice's coefficient
Fleiss' kappa
Goodman and Kruskal's lambda
Guilford’s G
Gwet's AC1
Hanssen–Kuipers discriminant
Heidke skill score
Jaccard index
Janson and Vegelius' C
Kappa statistics
Klecka's tau
Krippendorff's Alpha
Kuipers performance index
Matthews correlation coefficient
Phi coefficient
Press' Q
Renkonen similarity i
|
https://en.wikipedia.org/wiki/Sphingolipid
|
Sphingolipids are a class of lipids containing a backbone of sphingoid bases, which are a set of aliphatic amino alcohols that includes sphingosine. They were discovered in brain extracts in the 1870s and were named after the mythological sphinx because of their enigmatic nature. These compounds play important roles in signal transduction and cell recognition. Sphingolipidoses, or disorders of sphingolipid metabolism, have particular impact on neural tissue. A sphingolipid with a terminal hydroxyl group is a ceramide. Other common groups bonded to the terminal oxygen atom include phosphocholine, yielding a sphingomyelin, and various sugar monomers or dimers, yielding cerebrosides and globosides, respectively. Cerebrosides and globosides are collectively known as glycosphingolipids.
Structure
The long-chain bases, sometimes simply known as sphingoid bases, are the first non-transient products of de novo sphingolipid synthesis in both yeast and mammals. These compounds, specifically known as phytosphingosine and dihydrosphingosine (also known as sphinganine, although this term is less common), are mainly C18 compounds, with somewhat lower levels of C20 bases. Ceramides and glycosphingolipids are N-acyl derivatives of these compounds.
The sphingosine backbone is O-linked to a (usually) charged head group such as ethanolamine, serine, or choline.
The backbone is also amide-linked to an acyl group, such as a fatty acid.
Types
Simple sphingolipids, which include the sphingoid bases and ceramides, make up the early products of the sphingolipid synthetic pathways.
Sphingoid bases are the fundamental building blocks of all sphingolipids. The main mammalian sphingoid bases are dihydrosphingosine and sphingosine, while dihydrosphingosine and phytosphingosine are the principal sphingoid bases in yeast. Sphingosine, dihydrosphingosine, and phytosphingosine may be phosphorylated.
Ceramides, as a general class, are N-acylated sphingoid bases lacking additional head groups.
|
https://en.wikipedia.org/wiki/Model%20predictive%20control
|
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time-horizon, but implementing only the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
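The receding-horizon loop described above can be sketched in a few lines of Python. The scalar plant, horizon, weights and input bounds below are invented for illustration; practical MPC implementations use dedicated QP solvers and full process models.

import numpy as np
from scipy.optimize import minimize

a, b = 1.0, 0.5              # assumed scalar plant: x[k+1] = a*x[k] + b*u[k]
N, q, r = 10, 1.0, 0.1       # horizon length, state weight, input weight

def horizon_cost(u_seq, x0):
    x, cost = x0, 0.0
    for u in u_seq:
        x = a * x + b * u                # predict with the model
        cost += q * x**2 + r * u**2      # penalise deviation and control effort
    return cost

x = 5.0                                  # current measured state
for _ in range(30):
    res = minimize(horizon_cost, np.zeros(N), args=(x,),
                   bounds=[(-1.0, 1.0)] * N)     # input constraints over the horizon
    x = a * x + b * res.x[0]             # apply only the first move, then re-optimise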
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.
Overview
The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control ele
|
https://en.wikipedia.org/wiki/VU%20meter
|
A volume unit (VU) meter or standard volume indicator (SVI) is a device displaying a representation of the signal level in audio equipment.
The original design was proposed in the 1940 IRE paper, A New Standard Volume Indicator and Reference Level, written by experts from CBS, NBC, and Bell Telephone Laboratories. The Acoustical Society of America then standardized it in 1942 (ANSI C16.5-1942) for use in telephone installation and radio broadcast stations.
Consumer audio equipment often features VU meters, both for utility purposes (e.g. in recording equipment) and for aesthetics (in playback devices).
The original VU meter is a passive electromechanical device, namely a 200 µA DC d'Arsonval movement ammeter fed from a full-wave copper-oxide rectifier mounted within the meter case. The mass of the needle causes a relatively slow response, which in effect integrates or smooths the signal, with a rise time of 300 ms. This has the effect of averaging out peaks and troughs of short duration, and reflects the perceived loudness of the material more closely than the more modern and initially more expensive PPM meters. For this reason many audio practitioners prefer the VU meter to its alternatives, though the meter indication does not reflect some of the key features of the signal, most notably its peak level, which in many cases, must not pass a defined limit.
0 VU is equal to +4 dBu, or 1.228 volts RMS, a power of about 2.5 milliwatts when applied across a 600-ohm load. 0 VU is often referred to as "0 dB". The meter was designed not to measure the signal, but to let users aim the signal level to a target level of 0 VU (sometimes labelled 100%), so it is not important that the device is non-linear and imprecise for low levels. In effect, the scale ranges from −20 VU to +3 VU, with −3 VU right in the middle (half the power of 0 VU). Purely electronic devices may emulate the response of the needle; they are VU-meters in as much as they respect the standard.
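The reference-level arithmetic quoted above is easy to verify; the only value assumed beyond the text is the 0 dBu reference of about 0.7746 V RMS:

import math

v_0dbu = math.sqrt(0.6)            # 0 dBu reference, about 0.7746 V RMS
v_rms = v_0dbu * 10 ** (4 / 20)    # +4 dBu, about 1.228 V RMS
power_mw = v_rms ** 2 / 600 * 1e3  # into a 600-ohm load, about 2.5 mW
print(round(v_rms, 3), "V RMS,", round(power_mw, 2), "mW")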
In the b
|
https://en.wikipedia.org/wiki/Tudor%20rose
|
The Tudor rose (sometimes called the Union rose) is the traditional floral heraldic emblem of England and takes its name and origins from the House of Tudor, which united the House of Lancaster and the House of York. The Tudor rose consists of five white inner petals, representing the House of York, and five red outer petals to represent the House of Lancaster.
Origins
In the Battle of Bosworth Field (1485), Henry VII, of the House of Lancaster, took the crown of England from Richard III, of the House of York. He thus brought to an end the retrospectively dubbed "Wars of the Roses". Kings of the House of Lancaster had sometimes used a red or gold rose as a badge; and the House of York had used a white rose as a badge. Henry's father was Edmund Tudor, and his mother was Margaret Beaufort from the House of Lancaster; in January 1486 he married Elizabeth of York to bring the two factions together. (In battle, Richard III fought under the banner of the boar, and Henry under the banner of the dragon of his native Wales.) The white rose versus red rose juxtaposition was mostly Henry's invention, created to exploit his appeal as a 'peacemaker king'. The historian Thomas Penn writes:
On his marriage, Henry VII adopted the Tudor rose badge conjoining the White Rose of York and the Red Rose of Lancaster. The Tudor rose is occasionally seen divided in quarters (heraldically as "quartered") and vertically (in heraldic terms per pale) red and white. More often, the Tudor rose is depicted as a double rose, white on red and is always described, heraldically, as "proper" (that is, naturally-coloured, despite not actually existing in nature).
Historical uses
Henry VII was reserved in his usage of the Tudor rose. He regularly used the Lancastrian rose by itself, being the house to which he descended. His successor Henry VIII, descended from the House of York as well through his mother, would use the rose more often.
When Arthur, Prince of Wales, died in 1502, his tomb in Worc
|
https://en.wikipedia.org/wiki/Human%20waste
|
Human waste (or human excreta) refers to the waste products of the human digestive system, menses, and human metabolism including urine and feces. As part of a sanitation system that is in place, human waste is collected, transported, treated and disposed of or reused by one method or another, depending on the type of toilet being used, ability by the users to pay for services and other factors. Fecal sludge management is used to deal with fecal matter collected in on-site sanitation systems such as pit latrines and septic tanks.
The sanitation systems in place differ vastly around the world, with many people in developing countries having to resort to open defecation, where human waste is deposited in the environment, for lack of other options. Improving "water, sanitation and hygiene" (WASH) around the world is a key public health issue within international development and is the focus of Sustainable Development Goal 6.
People in developed countries tend to use flush toilets where the human waste is mixed with water and transported to sewage treatment plants.
Children's excreta can be disposed of in diapers and mixed with municipal solid waste. Diapers are also sometimes dumped directly into the environment, leading to public health risks.
Terminology
The term "human waste" is used in the general media to mean several things, such as sewage, sewage sludge, blackwater - in fact anything that may contain some human feces. In the stricter sense of the term, human waste is in fact human excreta, i.e. urine and feces, with or without water being mixed in. For example, dry toilets collect human waste without the addition of water.
Health aspects
Human waste is considered a biowaste, as it is a vector for both viral and bacterial diseases. It can be a serious health hazard if it gets into sources of drinking water. The World Health Organization (WHO) reports that nearly 2.2 million people die annually from diseases caused by contaminated water, such as cho
|
https://en.wikipedia.org/wiki/Email%20marketing
|
Email marketing is the act of sending a commercial message, typically to a group of people, using email. In its broadest sense, every email sent to a potential or current customer could be considered email marketing. It involves using email to send advertisements, request business, or solicit sales or donations. Email marketing strategies commonly seek to achieve one or more of three primary objectives: building loyalty, trust, or brand awareness. The term usually refers to sending email messages with the purpose of enhancing a merchant's relationship with current or previous customers, encouraging customer loyalty and repeat business, acquiring new customers or convincing current customers to purchase something immediately, and sharing third-party ads.
History
Email marketing has evolved rapidly alongside the technological growth of the 21st century. Before this growth, when emails were novelties to most customers, email marketing was not as effective. In 1978, Gary Thuerk of Digital Equipment Corporation (DEC) sent out the first mass email to approximately 400 potential clients via the Advanced Research Projects Agency Network (ARPANET). He claimed that this resulted in $13 million worth of sales of DEC products, and highlighted the potential of marketing through mass emails.
However, as email marketing developed as an effective means of direct communication, in the 1990s, users increasingly began referring to it as "spam" and began blocking out content from emails with filters and blocking programs. To effectively communicate a message through email, marketers had to develop a way of pushing content through to the end user without being cut out by automatic filters and spam removing software.
Historically, it has not been easy to measure the effectiveness of marketing campaigns because target markets cannot be adequately defined. Email marketing carries the benefit of allowing marketers to identify returns on investment and measure and improve efficiency.
|
https://en.wikipedia.org/wiki/IQue%20Player
|
The iQue Player (神游机, stylised as iQue PLAYER) is a handheld TV game version of the Nintendo 64 console that was manufactured by iQue, a joint venture between Nintendo and Taiwanese-American scientist Wei Yen after China had banned the sale of home video games. Its Chinese name is Shén Yóu Ji (神游机), literally "God Gaming Machine". Shényóu (神游) is a double entendre as "to make a mental journey". It was never released in any English-speaking countries, but the name "iQue Player" appears in the instruction manual. The console and its controller are one unit, plugging directly into the television. A box accessory allows multiplayer gaming.
History
Development
China has a large black market for video games and usually only a few games officially make it to the Chinese market. Many Chinese gamers tend to purchase pirated cartridge or disc copies or download copied game files to play via emulator. Nintendo wanted to curb software piracy in China, and bypass the ban that the Chinese government has implemented on home game consoles since 2000. Nintendo partnered with Wei Yen, who had led past Nintendo product development such as the Nintendo 64 console and the Wii Remote. Originally, the system would support games released on Nintendo consoles prior to the GameCube, including the NES, Super NES, and Nintendo 64, but it was decided only to include Nintendo 64 games. Additionally, The Legend of Zelda: Majora's Mask was planned and is shown on the back of the box, but was later cancelled.
The iQue Player was announced at Tokyo Game Show 2003. It was originally planned to play Super NES in addition to Nintendo 64 games, and had a release date set for mid-October with debut markets including Shanghai, Guangzhou, and Chengdu, expanding into the rest of China by the following spring. The system missed its mid-October launch. By November 21, it was available for purchase at Lik Sang.
Release
The iQue Player was released on November 18, 2003 with few launch games. Nintendo's strategy to
|
https://en.wikipedia.org/wiki/Marx%20generator
|
A Marx generator is an electrical circuit first described by Erwin Otto Marx in 1924. Its purpose is to generate a high-voltage pulse from a low-voltage DC supply. Marx generators are used in high-energy physics experiments, as well as to simulate the effects of lightning on power-line gear and aviation equipment. A bank of 36 Marx generators is used by Sandia National Laboratories to generate X-rays in their Z Machine.
Principle of operation
The circuit generates a high-voltage pulse by charging a number of capacitors in parallel, then suddenly connecting them in series. See the circuit diagram on the right. At first, n capacitors (C) are charged in parallel to a voltage VC by a DC power supply through the resistors (RC). The spark gaps used as switches have the voltage VC across them, but the gaps have a breakdown voltage greater than VC, so they all behave as open circuits while the capacitors charge. The last gap isolates the output of the generator from the load; without that gap, the load would prevent the capacitors from charging. To create the output pulse, the first spark gap is caused to break down (triggered); the breakdown effectively shorts the gap, placing the first two capacitors in series, applying a voltage of about 2VC across the second spark gap. Consequently, the second gap breaks down to add the third capacitor to the "stack", and the process continues to sequentially break down all of the gaps. This process of the spark gaps connecting the capacitors in series to create the high voltage is called erection. The last gap connects the output of the series "stack" of capacitors to the load. Ideally, the output voltage will be nVC, the number of capacitors times the charging voltage, but in practice the value is less. Note that none of the charging resistors Rc are subjected to more than the charging voltage even when the capacitors have been erected. The charge available is limited to the charge on the capacitors, so the output is a brief pulse as
|
https://en.wikipedia.org/wiki/Difference%20of%20two%20squares
|
In mathematics, the difference of two squares is a squared (multiplied by itself) number subtracted from another squared number. Every difference of squares may be factored according to the identity
a² − b² = (a + b)(a − b)
in elementary algebra.
Proof
The proof of the factorization identity is straightforward. Starting from the right-hand side, apply the distributive law to get
(a + b)(a − b) = a² − ab + ba − b².
By the commutative law, the middle two terms cancel:
−ab + ba = 0,
leaving
a² − b².
The resulting identity is one of the most commonly used in mathematics. Among many uses, it gives a simple proof of the AM–GM inequality in two variables.
The proof holds in any commutative ring.
Conversely, if this identity holds in a ring R for all pairs of elements a and b, then R is commutative. To see this, apply the distributive law to the right-hand side of the equation and get
a² − ab + ba − b².
For this to be equal to a² − b², we must have
ab = ba
for all pairs a, b, so R is commutative.
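As an aside on the AM–GM application mentioned above, a standard two-line derivation (not quoted from the article): for non-negative x and y,

\[
(x + y)^2 - (x - y)^2 = \bigl((x+y)+(x-y)\bigr)\bigl((x+y)-(x-y)\bigr) = 4xy,
\]

so 4xy ≤ (x + y)² and hence √(xy) ≤ (x + y)/2.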
Geometrical demonstrations
The difference of two squares can also be illustrated geometrically as the difference of two square areas in a plane. In the diagram, the shaded part represents the difference between the areas of the two squares, i.e. a² − b². The area of the shaded part can be found by adding the areas of the two rectangles: a(a − b) + b(a − b), which can be factorized to (a + b)(a − b). Therefore, a² − b² = (a + b)(a − b).
Another geometric proof proceeds as follows: We start with the figure shown in the first diagram below, a large square with a smaller square removed from it. The side of the entire square is a, and the side of the small removed square is b. The area of the shaded region is a² − b². A cut is made, splitting the region into two rectangular pieces, as shown in the second diagram. The larger piece, at the top, has width a and height a − b. The smaller piece, at the bottom, has width a − b and height b. Now the smaller piece can be detached, rotated, and placed to the right of the larger piece. In this new arrangement, shown in the last diagram below, the two pieces together form a rectangle, whose width is a + b and whose height is a − b.
|
https://en.wikipedia.org/wiki/Kuroda%20normal%20form
|
In formal language theory, a noncontracting grammar is in Kuroda normal form if all production rules are of the form:
AB → CD or
A → BC or
A → B or
A → a
where A, B, C and D are nonterminal symbols and a is a terminal symbol. Some sources omit the A → B pattern.
It is named after Sige-Yuki Kuroda, who originally called it a linear bounded grammar, a terminology that was also used by a few other authors thereafter.
Every grammar in Kuroda normal form is noncontracting, and therefore, generates a context-sensitive language. Conversely, every noncontracting grammar that does not generate the empty string can be converted to Kuroda normal form.
A straightforward technique attributed to György Révész transforms a grammar in Kuroda normal form to a context-sensitive grammar: AB → CD is replaced by four context-sensitive rules AB → AZ, AZ → WZ, WZ → WD and WD → CD. This proves that every noncontracting grammar generates a context-sensitive language.
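A small sketch of that replacement step, purely for illustration (the function name and the fresh nonterminals W and Z are invented here):

def revesz_transform(rules):
    """rules: list of ((A, B), (C, D)) pairs representing AB -> CD productions."""
    out, counter = [], 0
    for (A, B), (C, D) in rules:
        W, Z = f"W{counter}", f"Z{counter}"   # fresh nonterminals for this rule
        counter += 1
        out += [((A, B), (A, Z)),   # AB -> AZ
                ((A, Z), (W, Z)),   # AZ -> WZ
                ((W, Z), (W, D)),   # WZ -> WD
                ((W, D), (C, D))]   # WD -> CD
    return out

print(revesz_transform([(("A", "B"), ("C", "D"))]))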
There is a similar normal form for unrestricted grammars as well, which at least some authors call "Kuroda normal form" too:
AB → CD or
A → BC or
A → a or
A → ε
where ε is the empty string. Every unrestricted grammar is weakly equivalent to one using only productions of this form.
If the rule AB → CD is eliminated from the above, one obtains context-free grammars in Chomsky normal form. The Penttonen normal form (for unrestricted grammars) is a special case where the first rule above is AB → AD. Similarly, for context-sensitive grammars, the Penttonen normal form, also called the one-sided normal form (following Penttonen's own terminology), is:
AB → AD or
A → BC or
A → a
For every context-sensitive grammar, there exists a weakly equivalent one-sided normal form.
See also
Backus–Naur form
Chomsky normal form
Greibach normal form
Further reading
G. Révész, "Comment on the paper 'Error detection in formal languages,'" Journal of Computer and System Sciences, vol. 8, no. 2, pp. 238–242, Apr. 1974. (Révész'
|
https://en.wikipedia.org/wiki/Ultra-high%20vacuum
|
Ultra-high vacuum (often spelled ultrahigh in American English, UHV) is the vacuum regime characterised by pressures lower than about 100 nanopascals (10⁻⁷ Pa; 10⁻⁹ mbar). UHV conditions are created by pumping the gas out of a UHV chamber. At these low pressures the mean free path of a gas molecule is greater than approximately 40 km, so the gas is in free molecular flow, and gas molecules will collide with the chamber walls many times before colliding with each other. Almost all molecular interactions therefore take place on various surfaces in the chamber.
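A rough kinetic-theory estimate of the mean free path at such a pressure (the molecular diameter below is an assumed, representative value, roughly that of nitrogen):

import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 295.0              # K, room temperature
d = 3.7e-10            # m, assumed effective molecular diameter
p = 1e-7               # Pa, i.e. about 1e-9 mbar

mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(round(mean_free_path / 1000), "km")   # on the order of tens of kilometres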
UHV conditions are integral to scientific research. Surface science experiments often require a chemically clean sample surface with the absence of any unwanted adsorbates. Surface analysis tools such as X-ray photoelectron spectroscopy and low energy ion scattering require UHV conditions for the transmission of electron or ion beams. For the same reason, beam pipes in particle accelerators such as the Large Hadron Collider are kept at UHV.
Overview
Maintaining UHV conditions requires the use of unusual materials for equipment. Useful concepts for UHV include:
Sorption of gases
Kinetic theory of gases
Gas transport and pumping
Vacuum pumps and systems
Vapour pressure
Typically, UHV requires:
High pumping speed — possibly multiple vacuum pumps in series and/or parallel
Minimized surface area in the chamber
High conductance tubing to pumps — short and fat, without obstruction
Use of low-outgassing materials such as certain stainless steels
Avoid creating pits of trapped gas behind bolts, welding voids, etc.
Electropolishing of all metal parts after machining or welding
Use of low vapor pressure materials (ceramics, glass, metals, teflon if unbaked)
Baking of the system to remove water or hydrocarbons adsorbed to the walls
Chilling of chamber walls to cryogenic temperatures during use
Avoiding all traces of hydrocarbons, including skin oils in a fingerprint — gloves must always be used
Hydrogen and carbon monoxide are the most common backg
|
https://en.wikipedia.org/wiki/Planimeter
|
A planimeter, also known as a platometer, is a measuring instrument used to determine the area of an arbitrary two-dimensional shape.
Construction
There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or "hatchet" planimeters. The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1814. Many developments followed Amsler's famous planimeter, including electronic versions.
The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area.
When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates.
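Mathematically, this measuring principle amounts to evaluating a line integral around the traced boundary (a consequence of Green's theorem). The following Python sketch evaluates that integral numerically for a sampled boundary; the function name and the sampled circle are illustrative choices, not part of the original text.

```python
import math

def area_from_boundary(points):
    """Approximate the enclosed area from a closed, counter-clockwise boundary trace
    using the line integral A = 1/2 * integral(x dy - y dx) from Green's theorem,
    the same quantity a planimeter accumulates mechanically."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return area / 2.0

# Trace a circle of radius 2; the result should approach pi * r^2 = 12.566...
circle = [(2 * math.cos(2 * math.pi * k / 1000), 2 * math.sin(2 * math.pi * k / 1000))
          for k in range(1000)]
print(area_from_boundary(circle))
```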
The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its
|
https://en.wikipedia.org/wiki/Line%20Mode%20Browser
|
The Line Mode Browser (also known as LMB, WWWLib, or just www) is the second web browser ever created.
The browser was the first demonstrated to be portable to several different operating systems.
Operated from a simple command-line interface, it could be widely used on many computers and computer terminals throughout the Internet.
The browser was developed starting in 1990, and then supported by the World Wide Web Consortium (W3C) as an example and test application for the libwww library.
History
One of the fundamental concepts of the "World Wide Web" projects at CERN was "universal readership". In 1990, Tim Berners-Lee had already written the first browser, WorldWideWeb (later renamed to Nexus), but that program only worked on the proprietary software of NeXT computers, which were in limited use. Berners-Lee and his team could not port the WorldWideWeb application, with its features (including the graphical WYSIWYG editor), to the more widely deployed X Window System, since they had no experience in programming it. The team recruited Nicola Pellow, a math student intern working at CERN, to write a "passive browser" so basic that it could run on most computers of that time.
The name "Line Mode Browser" refers to the fact that, to ensure compatibility with the earliest computer terminals such as Teletype machines, the program only displayed text, (no images) and had only line-by-line text input (no cursor positioning).
Development started in November 1990 and the browser was demonstrated in December 1990.
The development environment used resources from the PRIAM project, a French language acronym for "PRojet Interdivisionnaire d'Assistance aux Microprocesseurs", a project to standardise microprocessor development across CERN.
The short development time produced software in a simplified dialect of the C programming language. The official standard ANSI C was not yet available on all platforms.
The Line Mode Browser was released to a limited audience on VAX, RS/6000 a
|
https://en.wikipedia.org/wiki/Single%20instruction%2C%20single%20data
|
In computing, single instruction stream, single data stream (SISD) is a computer architecture in which a single uni-core processor executes a single instruction stream, to operate on data stored in a single memory. This corresponds to the von Neumann architecture.
SISD is one of the four main classifications as defined in Flynn's taxonomy. In this system, classifications are based upon the number of concurrent instructions and data streams present in the computer architecture. According to Michael J. Flynn, SISD can have concurrent processing characteristics. Pipelined processors and superscalar processors are common examples found in most modern SISD computers.
Instructions are fetched from the memory module by the control unit, which decodes them and sends them to the processing unit; the processing unit operates on data retrieved from the memory module and writes the results back to it.
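As a rough illustration of this single-stream flow, here is a toy Python fetch-decode-execute loop. The miniature instruction set (LOAD/ADD/STORE/HALT) is invented purely for this sketch and does not correspond to any particular SISD machine.

```python
# One instruction stream, one data memory, no concurrency: the SISD idea in miniature.
def run_sisd(program, memory):
    acc = 0          # single accumulator in the processing unit
    pc = 0           # program counter in the control unit
    while True:
        op, arg = program[pc]      # fetch the next instruction from the single stream
        pc += 1
        if op == "LOAD":           # decode, then execute against the single data memory
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Computes memory[2] = memory[0] + memory[1]
print(run_sisd([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], [3, 4, 0]))
```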
|
https://en.wikipedia.org/wiki/Fractional%20Fourier%20transform
|
In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the n-th power, where n need not be an integer — thus, it can transform a function to any intermediate domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition.
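One way to make the "n-th power" idea concrete numerically is to raise the unitary DFT matrix to a fractional power via an eigendecomposition, as in the Python sketch below. This is only one possible discretization of the FRFT (the eigenvalue branch and eigenvector choice are not unique), so treat it as an illustration rather than a standard definition.

```python
import numpy as np

def discrete_frft_matrix(n, a):
    """Fractional power of the unitary DFT matrix, one illustrative discrete FRFT."""
    f = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
    eigvals, eigvecs = np.linalg.eig(f)        # F is normal, hence diagonalizable
    return eigvecs @ np.diag(eigvals ** a) @ np.linalg.inv(eigvecs)

n = 64
x = np.random.default_rng(0).standard_normal(n)

half = discrete_frft_matrix(n, 0.5)
full = discrete_frft_matrix(n, 1.0)

# Applying the half-order transform twice should reproduce the ordinary (unitary) DFT.
print(np.allclose(half @ (half @ x), full @ x))
```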
The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon, by solving for the Green's function for phase-space rotations, and also by Namias, generalizing work of Wiener on Hermite polynomials.
However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups. Since then, there has been a surge of interest in extending Shannon's sampling theorem for signals which are band-limited in the fractional Fourier domain.
A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) This terminology has fallen out of use in most of the technical literature, however, in preference to the FRFT. The remainder of this article describes the FRFT.
Introduction
The continuous Fourier transform of a function is a unitary operator of space that maps the function to its frequential version (all expressions are taken in the sense, rather than point
|
https://en.wikipedia.org/wiki/International%20Union%20for%20the%20Protection%20of%20New%20Varieties%20of%20Plants
|
The International Union for the Protection of New Varieties of Plants or UPOV (French: Union internationale pour la protection des obtentions végétales) is a treaty body (non-United Nations intergovernmental organization) with headquarters in Geneva, Switzerland. Its objective is to provide an effective system for plant variety protection. It does so by defining a blueprint regulation to be implemented by its members in national law. The expression UPOV Convention also refers to one of the three instruments that relate to the union, namely the 1991 Act of the UPOV Convention (UPOV 91), the 1978 Act of the UPOV Convention (UPOV 78) and the 1961 Act of the UPOV Convention with Amendments of 1972 (UPOV 61).
History
UPOV was established by the International Convention for the Protection of New Varieties of Plants (UPOV 61). The convention was adopted in Paris in 1961 and revised in 1972, 1978 and 1991.
The initiative for the foundation of UPOV came from European breeding companies, which in 1956 called for a conference to define basic principles for plant variety protection. The first version of the UPOV Convention was agreed in 1961 by 12 European countries. By 1990, only 19 countries had joined the convention, with South Africa being the only member from the Southern Hemisphere. From the mid-1990s, more and more countries from Latin America, Asia and Africa joined the convention. One reason for this development might be the TRIPS Agreement, which obliged WTO members to introduce plant variety protection in national law. Later, many countries were obliged to join UPOV through specific clauses in bilateral trade agreements, in particular with the EU, USA, Japan and EFTA. The TRIPS Agreement does not require adherence to UPOV but allows members to define a sui generis system for plant variety protection. In contrast, clauses in free trade agreements are more comprehensive and typically require adherence to UPOV.
While the earlier versions of the convention have been replaced, UPOV 78 and UPOV 91 coexist. Existing members are free to d
|
https://en.wikipedia.org/wiki/Plant%20breeders%27%20rights
|
Plant breeders' rights (PBR), also known as plant variety rights (PVR), are rights granted in certain places to the breeder of a new variety of plant that give the breeder exclusive control over the propagating material (including seed, cuttings, divisions, tissue culture) and harvested material (cut flowers, fruit, foliage) of a new variety for a number of years.
With these rights, the breeder can choose to become the exclusive marketer of the variety, or to license the variety to others. In order to qualify for these exclusive rights, a variety must be new, distinct, uniform, and stable. A variety is:
new if it has not been commercialized for more than one year in the country of protection;
distinct if it differs from all other known varieties by one or more important botanical characteristics, such as height, maturity, color, etc.;
uniform if the plant characteristics are consistent from plant to plant within the variety;
stable if the plant characteristics are genetically fixed and therefore remain the same from generation to generation, or after a cycle of reproduction in the case of hybrid varieties.
The breeder must also give the variety an acceptable "denomination", which becomes its generic name and must be used by anyone who markets the variety.
Typically, plant variety rights are granted by national offices after examination. Seed is submitted to the plant variety office, which grows it for one or more seasons to check that the variety is distinct, uniform, and stable. If these tests are passed, exclusive rights are granted for a specified period (typically 20/25 years, or 25/30 years for trees and vines). Renewal fees (often annual) are required to maintain the rights.
Breeders can bring suit to enforce their rights and can recover damages. Plant breeders' rights contain exemptions that are not recognized under other legal doctrines such as patent law. Commonly, there is an exemption for farm-saved seed. Farmers may store this production in their own bins for
|
https://en.wikipedia.org/wiki/Covariance%20and%20contravariance%20%28computer%20science%29
|
Many programming language type systems support subtyping. For instance, if the type Cat is a subtype of Animal, then an expression of type Cat should be substitutable wherever an expression of type Animal is used.
Variance is how subtyping between more complex types relates to subtyping between their components. For example, how should a list of Cats relate to a list of Animals? Or how should a function that returns Cat relate to a function that returns Animal?
Depending on the variance of the type constructor, the subtyping relation of the simple types may be either preserved, reversed, or ignored for the respective complex types. In the OCaml programming language, for example, "list of Cat" is a subtype of "list of Animal" because the list type constructor is covariant. This means that the subtyping relation of the simple types is preserved for the complex types.
On the other hand, "function from Animal to String" is a subtype of "function from Cat to String" because the function type constructor is contravariant in the parameter type. Here, the subtyping relation of the simple types is reversed for the complex types.
A programming language designer will consider variance when devising typing rules for language features such as arrays, inheritance, and generic datatypes. By making type constructors covariant or contravariant instead of invariant, more programs will be accepted as well-typed. On the other hand, programmers often find contravariance unintuitive, and accurately tracking variance to avoid runtime type errors can lead to complex typing rules.
In order to keep the type system simple and allow useful programs, a language may treat a type constructor as invariant even if it would be safe to consider it variant, or treat it as covariant even though that could violate type safety.
Formal definition
Suppose A and B are types, and I<U> denotes application of a type constructor I with type argument U.
Within the type system of a programming language, a typing rule for a type constructor
|
https://en.wikipedia.org/wiki/Grigory%20Margulis
|
Grigory Aleksandrovich Margulis (, first name often given as Gregory, Grigori or Gregori; born February 24, 1946) is a Russian-American mathematician known for his work on lattices in Lie groups, and the introduction of methods from ergodic theory into diophantine approximation. He was awarded a Fields Medal in 1978, a Wolf Prize in Mathematics in 2005, and an Abel Prize in 2020, becoming the fifth mathematician to receive the three prizes. In 1991, he joined the faculty of Yale University, where he is currently the Erastus L. De Forest Professor of Mathematics.
Biography
Margulis was born to a Russian family of Lithuanian Jewish descent in Moscow, Soviet Union. At age 16 in 1962 he won the silver medal at the International Mathematical Olympiad. He received his PhD in 1970 from the Moscow State University, starting research in ergodic theory under the supervision of Yakov Sinai. Early work with David Kazhdan produced the Kazhdan–Margulis theorem, a basic result on discrete groups. His superrigidity theorem from 1975 clarified an area of classical conjectures about the characterisation of arithmetic groups amongst lattices in Lie groups.
He was awarded the Fields Medal in 1978, but was not permitted to travel to Helsinki to accept it in person, allegedly due to antisemitism against Jewish mathematicians in the Soviet Union. His position improved, and in 1979 he visited Bonn, and was later able to travel freely, though he still worked in the Institute of Problems of Information Transmission, a research institute rather than a university. In 1991, Margulis accepted a professorial position at Yale University.
Margulis was elected a member of the U.S. National Academy of Sciences in 2001. In 2012 he became a fellow of the American Mathematical Society.
In 2005, Margulis received the Wolf Prize for his contributions to theory of lattices and applications to ergodic theory, representation theory, number theory, combinatorics, and measure theory.
In 2020, Margulis rec
|
https://en.wikipedia.org/wiki/Super%20FX
|
[Image: Super FX 2 chip on Super Mario World 2: Yoshi's Island]
The Super FX is a coprocessor on the Graphics Support Unit (GSU) added to select Super Nintendo Entertainment System (SNES) video game cartridges, primarily to facilitate advanced 2D and 3D graphics. The Super FX chip was designed by Argonaut Games, who also co-developed the 3D space rail shooter video game Star Fox with Nintendo to demonstrate the additional polygon rendering capabilities that the chip had introduced to the SNES.
History
The Super FX chip design team included engineers Ben Cheese, Rob Macaulay, and James Hakewill. While in development, the Super FX chip was codenamed "Super Mario FX" and "MARIO". "MARIO", a backronym for "Mathematical, Argonaut, Rotation, & Input/Output", is printed on the face of the final production chip.
Because of high manufacturing costs and increased development time, few Super FX based games were made compared to the rest of the SNES library. Due to these increased costs, Super FX games often retailed at a higher MSRP compared to other SNES games.
According to Argonaut Games founder Jez San, Argonaut had initially intended to develop the Super FX chip for the Nintendo Entertainment System. The team programmed an NES version of the first-person combat flight simulator Starglider, which Argonaut had developed for the Atari ST and other home computers a few years earlier, and showed it to Nintendo in 1990. The prototype impressed the company, but they suggested that they develop games for the then-unreleased Super Famicom due to the NES's hardware becoming outdated in light of newer systems such as the Sega Genesis/Mega Drive and the TurboGrafx-16/PC Engine. Shortly after the 1990 Consumer Electronics Show held in Chicago, Illinois, Argonaut ported the NES version of Starglider to the Super Famicom, a process which took roughly one week according to San.
Function
The Super FX chip is used to render 3D polygons and t
|