https://en.wikipedia.org/wiki/Ophcrack
Ophcrack is a free open-source (GPL licensed) program that cracks Windows log-in passwords by using LM hashes through rainbow tables. The program includes the ability to import the hashes from a variety of formats, including dumping directly from the SAM files of Windows. On most computers, ophcrack can crack most passwords within a few minutes. Rainbow tables for LM hashes are provided for free by the developers. By default, ophcrack is bundled with tables that allow it to crack passwords no longer than 14 characters using only alphanumeric characters. Available for free download are four Windows XP tables and four Windows Vista tables. Objectif Sécurité has even larger tables for purchase that are intended for professional use; these larger rainbow tables include NTLM tables for cracking Windows Vista/Windows 7 passwords. Ophcrack is also available as Live CD distributions, which automate the retrieval, decryption, and cracking of passwords from a Windows system. One Live CD distribution is available for Windows XP and lower and another for Windows Vista and Windows 7. The Live CD distributions of ophcrack are built with SliTaz GNU/Linux. Starting with version 2.3, Ophcrack also cracks NTLM hashes. This is necessary if the generation of the LM hash is disabled (this is the default for Windows Vista) or if the password is longer than 14 characters (in which case the LM hash is not stored). Starting with version 3.7.0, the source code has been moved from SourceForge to GitLab. See also Aircrack-ng Cain and Abel Crack DaveGrohl Hashcat John the Ripper L0phtCrack NMap RainbowCrack References External links Ophcrack Online Demo - form to submit hashes and instantly crack passwords Ophcrack no table found - how to fix if Ophcrack says "no tables found". Ophcrack LiveCD 2 Tutorial Quick fix if Ophcrack doesn't work OPHCRACK (the time-memory-trade-off-cracker) - École Polytechnique Fédérale de Lausanne Free security software Password cracking software Cryptanalytic
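The time-memory trade-off behind rainbow tables can be illustrated with a toy hash-chain table. The sketch below is a simplified illustration only, not Ophcrack's actual table format: it uses MD5 as a stand-in hash (real tables target LM or NTLM), a tiny password space, and a naive reduction function; CHARSET, PW_LEN and CHAIN_LEN are arbitrary illustrative choices.

```python
import hashlib

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
PW_LEN = 4       # toy password length
CHAIN_LEN = 100  # hashes covered per stored chain

def h(pw: str) -> str:
    # Stand-in hash function; real tables target LM or NTLM hashes.
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_(digest: str, step: int) -> str:
    # Map a digest back into the password space; the chain position is
    # mixed in so each step uses a different reduction function.
    n = int(digest, 16) + step
    chars = []
    for _ in range(PW_LEN):
        chars.append(CHARSET[n % len(CHARSET)])
        n //= len(CHARSET)
    return "".join(chars)

def build_table(seeds):
    # Only each chain's start and end are stored: memory is traded
    # for the recomputation done at lookup time.
    table = {}
    for start in seeds:
        pw = start
        for step in range(CHAIN_LEN):
            pw = reduce_(h(pw), step)
        table[pw] = start
    return table

def lookup(table, target):
    # Hypothesize each position of the target hash within a chain,
    # roll forward to an endpoint, then replay the chain from its start.
    for pos in range(CHAIN_LEN - 1, -1, -1):
        pw = reduce_(target, pos)
        for step in range(pos + 1, CHAIN_LEN):
            pw = reduce_(h(pw), step)
        if pw in table:
            candidate = table[pw]
            for step in range(CHAIN_LEN):
                if h(candidate) == target:
                    return candidate
                candidate = reduce_(h(candidate), step)
    return None

table = build_table(["test", "abcd", "pass"])
print(lookup(table, h("pass")))  # pass
print(lookup(table, h("zzzz")))  # None (outside these toy chains)
```

A production table differs mainly in scale and in using carefully chosen reduction functions to limit chain merges.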
https://en.wikipedia.org/wiki/List%20of%20numerical-analysis%20software
Listed here are notable end-user computer applications intended for use with numerical or data analysis: Numerical-software packages General-purpose computer algebra systems Interface-oriented Language-oriented Historically significant Expensive Desk Calculator, written for the TX-0 and PDP-1 in the late 1950s or early 1960s. S is an (array-based) programming language with strong numerical support. R is an implementation of the S language. See also References Lists of software Mathematics-related lists Software
https://en.wikipedia.org/wiki/Carnage%20Heart
Carnage Heart is a video game for the PlayStation, developed by Artdink. It is a mecha-based, turn-based strategy game in which the player takes the role of a commander in a war fought by robots. The robots, called Overkill Engines (OKEs), cannot be directly controlled in battle; they must be programmed beforehand to behave in a certain way under certain conditions using a flow diagram system. Gameplay The game features a fairly complex negotiation system that allows the player to purchase, research, or upgrade new equipment and parts. The OKEs themselves can be upgraded as well through this system, allowing for extended use of the same model for as long as possible. The various companies involved in the negotiation process can also provide valuable information about the purchases of the enemy, allowing the player to better plan for the next advance in enemy technology. To aid the player in learning the gameplay, Carnage Heart was packaged with an unusually large amount of materials for a console game of the time: in addition to the game disc and an above-average-length manual, the jewel case contained a 58-page strategy guide and a tutorial disc with a 30-minute, fully voice-guided overview of most aspects of the game. OKE production The main focus of the game is the design and programming of the OKEs. The OKEs will only be ready to produce once they have a complete hardware and software profile. Both of these profiles are stored in the form of a "card" that can be named as the player likes. It is possible for there to be a total of 28 cards, but in reality the player may use only 25, as there are three pre-made cards that cannot be deleted. Before a software profile can be created for a card, there must be a hardware profile. The first choice a player must make in the hardware creation process is that of a body type and style. There are four styles of OKE bodies and three designs in each style to choose from. These styles include a two-legged type, a tank type, a m
https://en.wikipedia.org/wiki/Proizvolov%27s%20identity
In mathematics, Proizvolov's identity is an identity concerning sums of differences of positive integers. The identity was posed by Vyacheslav Proizvolov as a problem in the 1985 All-Union Soviet Student Olympiads. To state the identity, take the first 2N positive integers, 1, 2, 3, ..., 2N − 1, 2N, and partition them into two subsets of N numbers each. Arrange one subset in increasing order: A1 < A2 < ... < AN. Arrange the other subset in decreasing order: B1 > B2 > ... > BN. Then the sum |A1 − B1| + |A2 − B2| + ... + |AN − BN| is always equal to N². Example Take for example N = 3. The set of numbers is then {1, 2, 3, 4, 5, 6}. Select three numbers of this set, say 2, 3 and 5. Then the sequences A and B are: A1 = 2, A2 = 3, and A3 = 5; B1 = 6, B2 = 4, and B3 = 1. The sum is |2 − 6| + |3 − 4| + |5 − 1| = 4 + 1 + 4 = 9, which indeed equals 3². Proof A slick proof of the identity is as follows. Note that for any i, we have |Ai − Bi| = max(Ai, Bi) − min(Ai, Bi). For this reason, it suffices to establish that the set {max(Ai, Bi) : 1 ≤ i ≤ N} coincides with {N + 1, N + 2, ..., 2N}, so that the minima make up {1, 2, ..., N} and the sum becomes ((N + 1) + ... + 2N) − (1 + ... + N) = N². Since the numbers max(Ai, Bi) are all distinct, it therefore suffices to show that for any i, max(Ai, Bi) > N. Assume on the contrary that this is false for some i, that is, both Ai ≤ N and Bi ≤ N, and consider the positive integers A1, ..., Ai, Bi, Bi+1, ..., BN. Clearly, these numbers are all distinct (due to the construction), but they are at most N; since there are i + (N − i + 1) = N + 1 of them, a contradiction is reached. Notes References External links Proizvolov's identity at cut-the-knot.org A video illustration (and proof outline) of Proizvolov's identity by Dr. James Grime Recreational mathematics Theorems in number theory
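Since the identity is elementary, it is easy to check exhaustively for small N. The following sketch (any language would do; Python is used here) verifies the identity over every partition of {1, ..., 2N}:

```python
from itertools import combinations

def proizvolov_holds(N: int) -> bool:
    nums = range(1, 2 * N + 1)
    for subset in combinations(nums, N):
        A = sorted(subset)                                  # increasing order
        B = sorted(set(nums) - set(subset), reverse=True)   # decreasing order
        if sum(abs(a - b) for a, b in zip(A, B)) != N * N:
            return False
    return True

print(all(proizvolov_holds(N) for N in range(1, 7)))  # True
```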
https://en.wikipedia.org/wiki/NDISwrapper
NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64. Native drivers for some network adapters are not available on Linux as some manufacturers maintain proprietary interfaces and do not write cross-platform drivers. NDISwrapper allows the use of Windows drivers, which are available for virtually all modern PC network adapters. Use There are three steps: creating a Linux driver, installing it, and using it. NDISwrapper is composed of two main parts, a command-line tool used at installation time and a Windows subsystem used when an application calls the Wi-Fi subsystem. As the outcome of an NDISwrapper installation should be a Linux driver able to work with Linux applications, the first thing the user does is "compile" two or more Windows driver files, together with NDISwrapper's version of the Windows DDK, into a Linux kernel module. This is done with a tool named "ndiswrapper". The resulting Linux driver is then installed (often manually) in the OS. A Linux application can then send requests to this Linux driver, which automatically performs the adaptations needed to call the Windows driver and DDK now embedded within it. To achieve this "compilation", NDISwrapper requires at least the ".inf" and ".sys" files invariably supplied as parts of the Windows driver. For example, if the driver is called "mydriver", with the files mydriver.inf and mydriver.sys and vendorid:productid 0000:0000, then NDISwrapper installs the driver to /etc/ndiswrapper/mydriver/. This directory contains three files: 0000:0000.conf, which contains information extracted from the inf file mydr
https://en.wikipedia.org/wiki/Immunocompetence
In immunology, immunocompetence is the ability of the body to produce a normal immune response following exposure to an antigen. Immunocompetence is the opposite of immunodeficiency (also known as immuno-incompetence or being immuno-compromised). Examples include: a newborn who does not yet have a fully functioning immune system but may have maternally transmitted antibodies – immunodeficient; a late stage AIDS patient with a failed or failing immune system – immuno-incompetent; or a transplant recipient taking medication so their body will not reject the donated organ – immunocompromised. There may be cases of overlap, but these terms all describe an immune system that is not fully functioning. The US Centers for Disease Control and Prevention (CDC) recommends that household and other close contacts of persons with altered immunocompetence receive the MMR, varicella, and rotavirus vaccines according to the standard schedule of vaccines, as well as receiving an annual flu shot. All other vaccines may be administered to contacts without alteration to the vaccine schedule, with the exception of the smallpox vaccine. Persons with altered immunocompetence should not receive live, attenuated vaccines (viral or bacterial), and may not receive the full benefit of inactivated vaccines. In reference to lymphocytes, immunocompetence means that a B cell or T cell is mature and can recognize antigens and allow a person to mount an immune response. In order for lymphocytes such as T cells to become immunocompetent, which refers to the ability of lymphocyte cell receptors to recognize MHC molecules, they must undergo positive selection. Adaptive immunocompetence is regulated by growth hormone (GH), prolactin (PRL), and vasopressin (VP) – hormones secreted by the pituitary gland. See also Immunopharmacology and Immunotoxicology (medical journal) Parasite-stress theory References Immunology de:Immunkompetenz
https://en.wikipedia.org/wiki/Glasstron
The Sony Glasstron was a family of portable head-mounted displays, first released in 1996 with the model PLM-50. The products included two LCD screens and two earphones for video and audio respectively. These products are no longer manufactured nor supported by Sony. The Glasstron was not the first head-mounted display by Sony; the Visortron was a previously exhibited unit. The Sony HMZ-T1 can be considered a successor to the Glasstron. The head-mounted display developed for Sony during the mid-1990s by Virtual i-o is completely unrelated to the Glasstron. One application of this technology was in the game MechWarrior 2, which permitted users to adopt a visual perspective from inside the cockpit of the craft, seeing the battlefield through the craft's cockpit with their own eyes. Models Five models were released. Supported video inputs included PC (15-pin VGA interface), composite and S-Video. A brief list of the models follows: References Sony products Display technology Eyewear Products introduced in 1996 Computer peripherals Head-mounted displays
https://en.wikipedia.org/wiki/Trinomial
In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials. Examples of trinomial expressions 3x + 5y + 8z, with x, y, z variables 3t + 9s² + 3y³, with t, s, y variables 3ts + 9t²s − 7tsy, with t, s, y variables ax² + bx + c, the quadratic polynomial in standard form, with x variable and a, b, c constants ax^m + bx^n + c, with x variable, m and n nonnegative integers, and a, b, c any constants ax^m y^n + bx^p y^q + c, where x and y are variables, the exponents are nonnegative integers, and a, b, c are any constants. Trinomial equation A trinomial equation is a polynomial equation involving three terms. An example is the equation x = q + x^m studied by Johann Heinrich Lambert in the 18th century. Some notable trinomials The quadratic trinomial in standard form (as from above): ax² + bx + c sum or difference of two cubes: a³ ± b³ = (a ± b)(a² ∓ ab + b²) A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x^n below). This form x^(2n) + rx^n + s is factored as: (x^n + a₁)(x^n + a₂), where a₁ + a₂ = r and a₁a₂ = s. For instance, the polynomial x⁶ + 7x³ − 8 is an example of this type of trinomial with n = 3. The solution a₁ = −1 and a₂ = 8 of the above system gives the trinomial factorization: x⁶ + 7x³ − 8 = (x³ − 1)(x³ + 8). The same result can be provided by Ruffini's rule, but with a more complex and time-consuming process. See also Trinomial expansion Monomial Binomial Multinomial Simple expression Compound expression Sparse polynomial Notes References Elementary algebra Polynomials
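The factorization of the example above can be checked with a computer algebra system; the sketch below uses SymPy and makes the substitution t = x³ explicit:

```python
import sympy as sp

x, t = sp.symbols('x t')
poly = x**6 + 7*x**3 - 8

# View the trinomial as a quadratic in t = x**3.
print(sp.factor(poly.subs(x**3, t)))  # (t - 1)*(t + 8)

# Full factorization over the rationals, after resubstituting.
print(sp.factor(poly))  # (x - 1)*(x + 2)*(x**2 + x + 1)*(x**2 - 2*x + 4), up to ordering
```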
https://en.wikipedia.org/wiki/Round-robin%20DNS
Round-robin DNS is a technique of load distribution, load balancing, or fault-tolerance provisioning multiple, redundant Internet Protocol service hosts, e.g., Web servers, FTP servers, by managing the Domain Name System's (DNS) responses to address requests from client computers according to an appropriate statistical model. In its simplest implementation, round-robin DNS works by responding to DNS requests not only with a single potential IP address, but with a list of potential IP addresses corresponding to several servers that host identical services. The order in which IP addresses from the list are returned is the basis for the term round robin. With each DNS response, the IP address sequence in the list is permuted. Traditionally, IP clients initially attempt connections with the first address returned from a DNS query, so that on different connection attempts, clients would receive service from different providers, thus distributing the overall load among servers. Some resolvers attempt to re-order the list to give priority to numerically "closer" networks. This behaviour was standardized during the definition of IPv6, and has been blamed for defeating round-robin load-balancing. Some desktop clients do try alternate addresses after a connection timeout of up to 30 seconds. Round-robin DNS is often used to load balance requests among a number of Web servers. For example, a company has one domain name and three identical copies of the same web site residing on three servers with three IP addresses. The DNS server will be set up so that domain name has multiple A records, one for each IP address. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and
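The rotation described above can be modeled in a few lines. This is a toy model of the DNS server's behavior, not a real resolver; the host name and the 192.0.2.x documentation addresses are placeholders:

```python
from collections import deque

class RoundRobinZone:
    """Toy DNS view: rotate a name's A-record list on every query."""
    def __init__(self, zone):
        self.zone = {name: deque(addrs) for name, addrs in zone.items()}

    def resolve(self, name):
        addrs = self.zone[name]
        answer = list(addrs)  # full record set in its current order
        addrs.rotate(-1)      # the address just served moves to the end
        return answer

zone = RoundRobinZone({"www.example.com": ["192.0.2.1", "192.0.2.2", "192.0.2.3"]})
for _ in range(4):
    # Clients traditionally connect to the first address returned.
    print(zone.resolve("www.example.com")[0])
# 192.0.2.1, 192.0.2.2, 192.0.2.3, then 192.0.2.1 again
```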
https://en.wikipedia.org/wiki/Unbounded%20nondeterminism
In computer science, unbounded nondeterminism or unbounded indeterminacy is a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency, and later became part of research into the theoretical concept of hypercomputation. Fairness Discussion of unbounded nondeterminism tends to get involved with discussions of fairness. The basic concept is that all computation paths must be "fair" in the sense that if the machine enters a state infinitely often, it must take every possible transition from that state. This amounts to requiring that the machine be guaranteed to service a request if it can, since an infinite sequence of states will only be allowed if there is no transition that leads to the request being serviced. Equivalently, every possible transition must occur eventually in an infinite computation, although it may take an unbounded amount of time for the transition to occur. This concept is to be distinguished from the local fairness of flipping a "fair" coin, by which it is understood that it is possible for the outcome to always be heads for any finite number of steps, although as the number of steps increases, this will almost surely not happen. An example of the role of fair or unbounded nondeterminism in the merging of strings was given by William D. Clinger, in his 1981 thesis. He defined a "fair merge" of two strings to be a third string in which each character of each string must occur eventually. He then considered the set of all fair merges of two strings s and t, merge(s, t), assuming it to be a monotone function. Then he argued that merge(⊥, 1^ω) ⊆ merge(0, 1^ω), where ⊥ is the empty stream and 1^ω is the infinite stream of ones. Now merge(⊥, 1^ω) = {1^ω}, so it must be that 1^ω is an element of merge(0, 1^ω): a contradiction, since every fair merge of the one-character stream 0 with 1^ω must contain a 0. He concluded that: It appears that a fair merge cannot be written
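Fairness in merging is easy to see operationally. The sketch below contrasts a round-robin merge of two infinite streams, in which every element of each stream eventually appears, with an unfair merge that starves one stream; it illustrates the fairness concept only and is not a rendering of Clinger's dataflow semantics:

```python
from itertools import count, islice

def fair_merge(s, t):
    # Round-robin between the streams, dropping one when it is exhausted,
    # so every element of each stream is eventually produced.
    pending = [iter(s), iter(t)]
    while pending:
        for stream in list(pending):
            try:
                yield next(stream)
            except StopIteration:
                pending.remove(stream)

zeros = (0 for _ in count())  # infinite stream of zeros
ones = (1 for _ in count())   # infinite stream of ones
print(list(islice(fair_merge(zeros, ones), 8)))    # [0, 1, 0, 1, 0, 1, 0, 1]

def unfair_merge(s, t):
    yield from s  # if s is infinite, t is never served
    yield from t

print(list(islice(unfair_merge(ones, zeros), 8)))  # [1, 1, 1, 1, 1, 1, 1, 1]
```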
https://en.wikipedia.org/wiki/Open%20Rights%20Group
The Open Rights Group (ORG) is a UK-based organisation that works to preserve digital rights and freedoms by campaigning on digital rights issues and by fostering a community of grassroots activists. It campaigns on numerous issues including mass surveillance, internet filtering and censorship, and intellectual property rights. History The organisation was started by Danny O'Brien, Cory Doctorow, Ian Brown, Rufus Pollock, James Cronin, Stefan Magdalinski, Louise Ferguson and Suw Charman after a panel discussion at Open Tech 2005. O'Brien created a pledge on PledgeBank, placed on 23 July 2005, with a deadline of 25 December 2005: "I will create a standing order of 5 pounds per month to support an organisation that will campaign for digital rights in the UK but only if 1,000 other people will too." The pledge reached 1000 people on 29 November 2005. The Open Rights Group was launched at a "sell-out" meeting in Soho, London. Work The group has made submissions to the All Party Internet Group (APIG) inquiry into digital rights management and the Gowers Review of Intellectual Property. The group was honoured in the 2008 Privacy International Big Brother Awards alongside No2ID, Liberty, Genewatch UK and others, in recognition of their efforts to keep state and corporate mass surveillance at bay. In 2010 the group worked with 38 Degrees to oppose the introduction of the Digital Economy Act, which was passed in April 2010. The group opposes measures in the draft Online Safety Bill introduced in 2021, that it sees as infringing free speech rights and online anonymity. The group campaigns against the Department for Digital, Culture, Media and Sport's plan to switch to an opt-out model for cookies. The group spokesperson stated that "[t]he UK government propose to make online spying the default option" in response to the proposed switch. Goals To collaborate with other digital rights and related organisations. To nurture a community of campaigning volunteers, from
https://en.wikipedia.org/wiki/DASK
The DASK was the first computer in Denmark. It was commissioned in 1955, designed and constructed by Regnecentralen, and began operation in September 1957. DASK is an acronym for Dansk Aritmetisk Sekvens Kalkulator or Danish Arithmetic Sequence Calculator. Regnecentralen almost did not allow the name, as the word dask means "slap" in Danish. In the end, however, it was so named because it fit the pattern of the name BESK, the Swedish computer which provided the initial architecture for DASK. DASK traces its origins to 1947 and a goal set by Akademiet for de Tekniske Videnskaber (Academy for the Technical Sciences or Academy of Applied Sciences), which was to follow the development of modern computing devices. Initial funding was obtained through the Ministry of Defence (Denmark), as the Danish military had been given a grant through the Marshall Plan for cipher machines for which the military saw no immediate need. Originally conceived to be a copy of BESK, the rapid advancement in the field allowed improvements to be made during development, such that in the end it was not a copy of BESK. The DASK was a one-off design whose construction took place in a villa. The machine became so big that the floor had to be reinforced to support its mass of 3.5 metric tons. DASK is notable for being the subject of one of the earliest ALGOL implementations, referred to as DASK ALGOL, which counted Jørn Jensen and Peter Naur among its contributors. Architecture The DASK was a vacuum tube machine based on the Swedish BESK design. As described in 1956, it contained 2500 vacuum tubes, 1500 solid-state elements, and required a three-phase power supply of at least 15 kW. Fast storage was 1024 40-bit words of magnetic-core memory (cycle time 5 µs), directly addressable as 1024 full or 2048 half-words. This was complemented by an additional 8192 words of backing store on magnetic drum (3000 rpm). A full word stored 40-bit numbers in two's-complement form, or two 20-bit instructions. In addition t
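The word layout is straightforward to model. The sketch below illustrates only the arithmetic of a 40-bit two's-complement word split into two 20-bit halves; it does not model DASK's actual instruction encoding:

```python
WORD_BITS = 40
HALF_BITS = 20

def to_word(n: int) -> int:
    """Encode a signed integer as a 40-bit two's-complement pattern."""
    assert -(1 << (WORD_BITS - 1)) <= n < (1 << (WORD_BITS - 1))
    return n & ((1 << WORD_BITS) - 1)

def from_word(pattern: int) -> int:
    """Decode a 40-bit pattern back to a signed integer."""
    if pattern & (1 << (WORD_BITS - 1)):  # sign bit set
        return pattern - (1 << WORD_BITS)
    return pattern

def halves(word: int):
    """A full word could instead hold two 20-bit half-words."""
    return word >> HALF_BITS, word & ((1 << HALF_BITS) - 1)

w = to_word(-1)
print(hex(w))        # 0xffffffffff (forty one-bits)
print(from_word(w))  # -1
print(halves(w))     # (1048575, 1048575)
```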
https://en.wikipedia.org/wiki/Why%20the%20lucky%20stiff
Jonathan Gillette, known by the pseudonym why the lucky stiff (often abbreviated as _why), is a writer, cartoonist, artist, and programmer notable for his work with the Ruby programming language. Annie Lowrey described him as "one of the most unusual, and beloved, computer programmers" in the world. Along with Yukihiro Matsumoto and David Heinemeier Hansson, he was seen as one of the key figures in the Ruby community. His pseudonym might allude to the exclamation "Why, the lucky stiff!" from The Fountainhead by Ayn Rand. _why made a presentation enigmatically titled "A Starry Afternoon, a Sinking Symphony, and the Polo Champ Who Gave It All Up for No Reason Whatsoever" at the 2005 O'Reilly Open Source Convention. It explored how to teach programming and make the subject more appealing to adolescents. _why gave a presentation and performed with his band, the Thirsty Cups, at RailsConf in 2006. On 19 August 2009, _why's accounts on Twitter and GitHub and his personally maintained websites went offline. Shortly before he disappeared, _why tweeted, "programming is rather thankless. u see your works become replaced by superior ones in a year. unable to run at all in a few more." _why's colleagues have assembled collections of his writings and projects. In 2012, his website briefly went back online with a detailed explanation of his plans for the future. Works Books His best known work is Why's (poignant) Guide to Ruby, which "teaches Ruby with stories." Paul Adams of Webmonkey describes its eclectic style as resembling a "collaboration between Stan Lem and Ed Lear". Chapter three was published in The Best Software Writing I: Selected and Introduced by Joel Spolsky. In April 2013, a complete book attributed to Jonathan Gillette was digitally released via the website whytheluckystiff.net (which has since changed ownership) and the GitHub repository cwales. It was presented as individual files of PCL (Printer Command Language) without any instructions on how to asse
https://en.wikipedia.org/wiki/Superposition%20calculus
The superposition calculus is a calculus for reasoning in equational logic. It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion. It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses, i.e. it performs proofs by refutation. Superposition is refutation complete: given unlimited resources and a fair derivation strategy, from any unsatisfiable clause set a contradiction will eventually be derived. Most state-of-the-art theorem provers for first-order logic are based on superposition (e.g. the E equational theorem prover), although only a few implement the pure calculus. Implementations E SPASS Vampire Waldmeister References Rewrite-Based Equational Theorem Proving with Selection and Simplification, Leo Bachmair and Harald Ganzinger, Journal of Logic and Computation 3(4), 1994. Paramodulation-Based Theorem Proving, Robert Nieuwenhuis and Alberto Rubio, Handbook of Automated Reasoning I(7), Elsevier Science and MIT Press, 2001. Mathematical logic Logical calculi
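For orientation, one of the calculus's core inference rules can be stated as follows; this is a simplified rendering that omits the ordering and selection side conditions, not the full rule from the literature:

```latex
% Positive superposition (simplified): rewrite inside the second
% clause using the oriented equation of the first.
\[
\frac{C \lor s \approx t \qquad D \lor u[s'] \approx v}
     {\bigl(C \lor D \lor u[t] \approx v\bigr)\sigma}
\qquad \sigma = \operatorname{mgu}(s, s'),\ s' \text{ not a variable}
\]
```

The omitted ordering restrictions (for example, that sσ must not be smaller than tσ in the term ordering) are what connect the calculus to Knuth–Bendix-style completion.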
https://en.wikipedia.org/wiki/Integral%20operator
An integral operator is an operator that involves integration. Special instances are: The operator of integration itself, denoted by the integral symbol Integral linear operators, which are linear operators induced by bilinear forms involving integrals Integral transforms, which are maps between two function spaces, which involve integrals Integral calculus
https://en.wikipedia.org/wiki/Highway%20Addressable%20Remote%20Transducer%20Protocol
The HART Communication Protocol (Highway Addressable Remote Transducer) is a hybrid analog+digital industrial automation open protocol. Its most notable advantage is that it can communicate over legacy 4–20 mA analog instrumentation current loops, sharing the pair of wires used by the analog-only host systems. HART is widely used in process and instrumentation systems ranging from small automation applications up to highly sophisticated industrial applications. In the OSI model, HART is a Layer 7 (application) protocol; layers 3–6 are not used. When sent over 4–20 mA current loops it uses Bell 202 frequency-shift keying for layer 1, but it is often converted to RS-485 or RS-232. According to Emerson, due to the huge installation base of 4–20 mA systems throughout the world, the HART Protocol is one of the most popular industrial protocols today. The HART protocol has been a good transition protocol for users who wished to keep using legacy 4–20 mA signals but wanted to implement a "smart" protocol. History The protocol was developed by Rosemount Inc., built on the early Bell 202 communications standard, in the mid-1980s as a proprietary digital communication protocol for their smart field instruments. Soon it evolved into HART, and in 1986 it was made an open protocol. Since then, the capabilities of the protocol have been enhanced by successive revisions to the specification. Modes There are two main operational modes of HART instruments: point-to-point (analog/digital) mode, and multi-drop mode. Point to point In point-to-point mode the digital signals are overlaid on the 4–20 mA loop current. Both the 4–20 mA current and the digital signal are valid signalling protocols between the controller and measuring instrument or final control element. The polling address of the instrument is set to "0". Only one instrument can be put on each instrument cable signal pair. One signal, generally specified by the user, is specified to be the 4–20 mA signal. Other signals are sent digitally on top of the 4
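HART frames protect their contents with a simple longitudinal parity check. The sketch below builds a simplified short-frame request to the device at polling address 0; the field layout is abridged from the specification, and the preamble length and command value are illustrative choices:

```python
def hart_checksum(frame: bytes) -> int:
    """Longitudinal parity: XOR of all covered frame bytes."""
    check = 0
    for b in frame:
        check ^= b
    return check

preamble = bytes([0xFF] * 5)  # synchronization preamble
delimiter = bytes([0x02])     # short frame, master-to-field-device
address = bytes([0x80])       # primary master bit set, polling address 0
command = bytes([0x01])       # e.g. command 1, "read primary variable"
byte_count = bytes([0x00])    # no data bytes follow

body = delimiter + address + command + byte_count
frame = preamble + body + bytes([hart_checksum(body)])
print(frame.hex())  # ffffffffff0280010083
```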
https://en.wikipedia.org/wiki/BelAZ
BelAZ is a Belarusian automobile plant and one of the world's largest manufacturers of large and extra-large dump trucks, as well as other heavy transport equipment for the mining and construction industries. BelAZ is the site of one of the largest investment projects in the Commonwealth of Independent States. The factory has completed two of the three scheduled phases of its technical re-equipment and upgrades. The quality management system applied in research and development, fabrication, installation and after-sale service of the equipment complies with the international ISO 9000 standards. History In 1948, a peat extraction machinery plant was constructed near the Žodzina railroad station. In 1958 it was renamed BelAZ. Initially it produced MAZ trucks. In 1961 the first 27-tonne BelAZ pit and quarry dump truck was manufactured. In 2006 the independent Mogilev Automobile Plant (MoAZ) was merged into BelAZ. In the fall of 2006 the first BelAZ-75600 was delivered. In April 2012, BelAZ announced it would hold an IPO – the first in Belarus. In September 2013, BelAZ presented the first sample of the mining dump truck BelAZ-75710, the world's largest dump truck, with a load capacity of 450 tonnes. Political repressions, international sanctions On 21 June 2021, BelAZ was added to the sanctions list of the European Union for repressions against workers who participated in mass protests against the authoritarian regime of Alexander Lukashenka following the controversial presidential election of 2020. According to the official decision of the EU, "[BelAZ] is a source of significant revenue for the Lukashenka regime. Lukashenka stated that the government will always support the company, and described it as “Belarusian brand” and “part of the national legacy”. OJSC BelAZ has offered its premises and equipment to stage a political rally in support of the regime. Therefore OJSC “Belaz” benefits from and supports the Lukashenka regime." Moreover, "the employees of OJSC “Belaz” who took part in strik
https://en.wikipedia.org/wiki/Edge%20enhancement
Edge enhancement is an image processing filter that enhances the edge contrast of an image or video in an attempt to improve its acutance (apparent sharpness). The filter works by identifying sharp edge boundaries in the image, such as the edge between a subject and a background of a contrasting color, and increasing the image contrast in the area immediately around the edge. This has the effect of creating subtle bright and dark highlights on either side of any edges in the image, called overshoot and undershoot, leading the edge to look more defined when viewed from a typical viewing distance. The process is prevalent in the video field, appearing to some degree in the majority of TV broadcasts and DVDs. A modern television set's "sharpness" control is an example of edge enhancement. It is also widely used in computer printers, especially for fonts and graphics, to obtain better print quality. Most digital cameras also perform some edge enhancement, which in some cases cannot be adjusted. Edge enhancement can be either an analog or a digital process. Analog edge enhancement may be used, for example, in all-analog video equipment such as modern CRT televisions. Properties Edge enhancement applied to an image can vary according to a number of properties; the most common algorithm is unsharp masking, which has the following parameters: Amount. This controls the extent to which contrast in the edge detected area is enhanced. Radius or aperture. This affects the size of the edges to be detected or enhanced, and the size of the area surrounding the edge that will be altered by the enhancement. A smaller radius will result in enhancement being applied only to sharper, finer edges, and the enhancement being confined to a smaller area around the edge. Threshold. Where available, this adjusts the sensitivity of the edge detection mechanism. A lower threshold results in more subtle boundaries of colour being identified as edges. A threshold that is too lo
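Most image libraries expose unsharp masking with exactly these three parameters. A minimal sketch using Pillow (the input file name is a placeholder):

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")

# radius ~ edge size, percent ~ amount, threshold ~ minimum tonal
# difference that counts as an edge.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("photo_sharpened.jpg")
```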
https://en.wikipedia.org/wiki/AVR%20Butterfly
The AVR Butterfly is a battery-powered single-board microcontroller developed by Atmel. It consists of an Atmel ATmega169PV microcontroller, a liquid crystal display, joystick, speaker, serial port, real-time clock (RTC), internal flash memory, and sensors for temperature and voltage. The board is the size of a name tag and has a clothing pin on the back so it can be worn as such after the user enters their name onto the LCD. Feature set LCD The AVR Butterfly demonstrates LCD driving by running a 14-segment, six-character alphanumeric display. However, the LCD interface consumes many of the I/O pins. CPU & Speed The Butterfly's ATmega169 CPU is capable of speeds up to 8 MHz; however, it is factory set by software to 2 MHz to preserve the button battery life. There are free replacement bootloaders available that will launch programs at 1, 2, 4 or 8 MHz speeds. Alternatively, this may be accomplished by changing the CPU prescaler in the application code. Features ATmega169V AVR 8-bit CPU, including 16 Kbyte of Flash memory for code storage and 512 bytes of EEPROM for data storage 100-segment LCD (without backlight) 4-Mbit (512-Kbyte) AT45 flash memory 4-way mini-joystick with center push-button Light, temperature, and voltage (0–5 V range) sensors (light sensor no longer included due to the RoHS directive) Piezo speaker Solder pads for user-supplied connectors: 2 8-bit I/O ports, ISP, USI, JTAG RS-232 level converter & interface (cable and connector provided by end user) 3 V battery holder (CR2450 battery included) Software The Butterfly comes preloaded with software that demonstrates many features of the ATmega169, including reading of the ambient light level and temperature and playback of musical notes. The device has a clothing pin attached to the back, so it may be worn as a name tag: the "name" may be entered via the joystick or over the RS-232 port, and will scroll across the LCD. Reprogramming The Butterfly can be freely reprogrammed using
https://en.wikipedia.org/wiki/Tautonym
A tautonym is a scientific name of a species in which both parts of the name have the same spelling, such as Rattus rattus. The first part of the name is the name of the genus and the second part is referred to as the specific epithet in the International Code of Nomenclature for algae, fungi, and plants and the specific name in the International Code of Zoological Nomenclature. Tautonymy (i.e., the usage of tautonymous names) is permissible in zoological nomenclature (see List of tautonyms for examples). In past editions of the zoological Code, the term tautonym was used, but it has now been replaced by the more inclusive "tautonymous names"; these include trinomial names such as Gorilla gorilla gorilla and Bison bison bison. For animals, a tautonym implicitly (though not always) indicates that the species is the type species of its genus. This can also be indicated by a species name with the specific epithet typus or typicus, although more commonly the type species is designated another way. Botanical nomenclature In the current rules for botanical nomenclature (which apply retroactively), tautonyms are explicitly prohibited. One example of a botanical tautonym is 'Larix larix'. The earliest name for the European larch is Pinus larix L. (1753) but Gustav Karl Wilhelm Hermann Karsten did not agree with the placement of the species in Pinus and decided to move it to Larix in 1880. His proposed name created a tautonym. Under rules first established in 1906, which are applied retroactively, Larix larix cannot exist as a formal name. In such a case either the next earliest validly published name must be found, in this case Larix decidua Mill. (1768), or (in its absence) a new epithet must be published. However, it is allowed for both parts of the name of a species to mean the same (pleonasm), without being identical in spelling. For instance, Arctostaphylos uva-ursi means bearberry twice, in Greek and Latin respectively; Picea omorika uses the Latin and Serbian t
https://en.wikipedia.org/wiki/Imaging%20genetics
Imaging genetics refers to the use of anatomical or physiological imaging technologies as phenotypic assays to evaluate genetic variation. Scientists who first used the term imaging genetics were interested in how genes influence psychopathology and used functional neuroimaging to investigate genes that are expressed in the brain (neuroimaging genetics). Imaging genetics uses research approaches in which genetic information and fMRI data from the same subjects are combined to define neuro-mechanisms linked to genetic variation. With the images and genetic information, it can be determined how individual differences in single nucleotide polymorphisms, or SNPs, lead to differences in brain wiring, structure, and intellectual function. Imaging genetics allows the direct observation of the link between genes and brain activity, in which the overall idea is that common variants in SNPs lead to common diseases. A neuroimaging phenotype is attractive because it is closer to the biology of genetic function than illnesses or cognitive phenotypes. Alzheimer's disease By combining the polygenic and neuroimaging outputs within a linear model, it has been shown that genetic information provides additive value in the task of predicting Alzheimer's disease (AD). AD has traditionally been considered a disease marked by neuronal cell loss and widespread gray matter atrophy, and the apolipoprotein E allele (APOE4) is a widely confirmed genetic risk factor for late-onset AD. Another gene risk variant associated with Alzheimer's is the CLU gene risk variant. The CLU gene risk variant showed a distinct profile of lower white matter integrity that may increase vulnerability to developing AD later in life. Each CLU-C allele was associated with lower fractional anisotropy (FA) in frontal, temporal, parietal, occipital, and subcortical white matter. Brain regions with lower FA included corticocortical pathways previously demonstrated to have lower FA in AD patients and APOE4 carriers
https://en.wikipedia.org/wiki/Irreducible%20component
In algebraic geometry, an irreducible algebraic set or irreducible variety is an algebraic set that cannot be written as the union of two proper algebraic subsets. An irreducible component is an algebraic subset that is irreducible and maximal (for set inclusion) for this property. For example, the set of solutions of the equation xy = 0 is not irreducible, and its irreducible components are the two lines of equations x = 0 and y = 0. It is a fundamental theorem of classical algebraic geometry that every algebraic set may be written in a unique way as a finite union of irreducible components. These concepts can be reformulated in purely topological terms, using the Zariski topology, for which the closed sets are the algebraic subsets: A topological space is irreducible if it is not the union of two proper closed subsets, and an irreducible component is a maximal subspace (necessarily closed) that is irreducible for the induced topology. Although these concepts may be considered for every topological space, this is rarely done outside algebraic geometry, since most common topological spaces are Hausdorff spaces, and, in a Hausdorff space, the irreducible components are the singletons. In topology A topological space X is reducible if it can be written as a union X = X₁ ∪ X₂ of two closed proper subsets X₁, X₂ of X. A topological space is irreducible (or hyperconnected) if it is not reducible. Equivalently, X is irreducible if all nonempty open subsets of X are dense, or if any two nonempty open sets have nonempty intersection. A subset F of a topological space X is called irreducible or reducible, if F considered as a topological space via the subspace topology has the corresponding property in the above sense. That is, F is reducible if it can be written as a union F = (G₁ ∩ F) ∪ (G₂ ∩ F), where G₁ and G₂ are closed subsets of X, neither of which contains F. An irreducible component of a topological space is a maximal irreducible subset. If a subset is irreducible, its closure is also irreducible, so irreducible components are
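For a hypersurface (an algebraic set defined by a single equation), the irreducible components correspond to the irreducible factors of the defining polynomial, so a computer algebra system can exhibit them; a small SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Each irreducible factor defines one component of the solution set.
print(sp.factor(x*y))          # x*y: components x = 0 and y = 0
print(sp.factor(x**2 - y**2))  # (x - y)*(x + y): the lines y = x and y = -x
```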
https://en.wikipedia.org/wiki/Personalized%20medicine
Personalized medicine, also referred to as precision medicine, is a medical model that separates people into different groups—with medical decisions, practices, interventions and/or products being tailored to the individual patient based on their predicted response or risk of disease. The terms personalized medicine, precision medicine, stratified medicine and P4 medicine are used interchangeably to describe this concept though some authors and organisations use these expressions separately to indicate particular nuances. While the tailoring of treatment to patients dates back at least to the time of Hippocrates, the term has risen in usage in recent years given the growth of new diagnostic and informatics approaches that provide understanding of the molecular basis of disease, particularly genomics. This provides a clear evidence base on which to stratify (group) related patients. Among the 14 Grand Challenges for Engineering, an initiative sponsored by National Academy of Engineering (NAE), personalized medicine has been identified as a key and prospective approach to "achieve optimal individual health decisions", therefore overcoming the challenge to "Engineer better medicines". Development of concept In personalised medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis. The use of genetic information has played a major role in certain aspects of personalized medicine (e.g. pharmacogenomics), and the term was first coined in the context of genetics, though it has since broadened to encompass all sorts of personalization measures, including the use of proteomics, imaging analysis, nanoparticle-based theranostics, among others. Relationship to personalized medicine Precision medicine (PM) is a medical model that proposes the customization of healthcare, with medical decisions, treatments, practices, or products being tailored to
https://en.wikipedia.org/wiki/UTM%20theorem
In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function. The universal function is an abstract version of the universal Turing machine, thus the name of the theorem. Rogers' equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem. Theorem The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that u(e, x) ≃ f(x) for all x. This means that, for each x, either f(x) and u(e, x) are both defined and are equal, or both are undefined. The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, … is an enumeration of the partial computable functions. The function u in the statement of the theorem is called a universal function. References Theorems in theory of computation Computability theory
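The flavor of the theorem can be conveyed by letting program text play the role of the index e; this is an informal sketch in which Python's interpreter stands in for the universal function (a genuine Gödel numbering would use natural numbers and assign behavior to every index):

```python
def u(e: str, x: int):
    """Toy universal function: e is the source of a one-argument
    function named f, and u(e, x) runs it on x. If the encoded
    program does not halt, u(e, x) is undefined, as in the theorem."""
    env = {}
    exec(e, env)        # define f from its source text
    return env["f"](x)

square = "def f(x):\n    return x * x"
print(u(square, 7))     # 49
```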
https://en.wikipedia.org/wiki/Gunning%20transceiver%20logic
Gunning transceiver logic (GTL) is a type of logic signaling used to drive electronic backplane buses. It has a voltage swing between 0.4 volts and 1.2 volts—much lower than that used in TTL and CMOS logic—and symmetrical parallel resistive termination. The maximum signaling frequency is specified to be 100 MHz, although some applications use higher frequencies. GTL is defined by JEDEC standard JESD 8-3 (1993) and was invented by William Gunning while working for Xerox at the Palo Alto Research Center. All Intel front-side buses use GTL. As of 2008, GTL in these FSBs has a maximum frequency of 1.6 GHz. The front-side bus of the Intel Pentium Pro, Pentium II and Pentium III microprocessors uses GTL+ (or GTLP) developed by Fairchild Semiconductor, an upgraded version of GTL which has defined slew rates and higher voltage levels. AGTL+ stands for either assisted Gunning transceiver logic or advanced Gunning transceiver logic. These are GTL signaling derivatives used by Intel microprocessors. References Computer buses JEDEC standards Logic families
https://en.wikipedia.org/wiki/Failure%20analysis
Failure analysis is the process of collecting and analyzing data to determine the cause of a failure, often with the goal of determining corrective actions or liability. According to Bloch and Geitner, "machinery failures reveal a reaction chain of cause and effect… usually a deficiency commonly referred to as the symptom…". Failure analysis can save money, lives, and resources if done correctly and acted upon. It is an important discipline in many branches of manufacturing industry, such as the electronics industry, where it is a vital tool used in the development of new products and for the improvement of existing products. The failure analysis process relies on collecting failed components for subsequent examination of the cause or causes of failure using a wide array of methods, especially microscopy and spectroscopy. Nondestructive testing (NDT) methods (such as industrial computed tomography scanning) are valuable because the failed products are unaffected by analysis, so inspection sometimes starts using these methods. Forensic investigation Forensic inquiry into the failed process or product is the starting point of failure analysis. Such inquiry is conducted using scientific analytical methods such as electrical and mechanical measurements, or by analyzing failure data such as product reject reports or examples of previous failures of the same kind. The methods of forensic engineering are especially valuable in tracing product defects and flaws. They may include fatigue cracks, brittle cracks produced by stress corrosion cracking or environmental stress cracking, for example. Witness statements can be valuable for reconstructing the likely sequence of events and hence the chain of cause and effect. Human factors can also be assessed when the cause of the failure is determined. There are several useful methods to prevent product failures occurring in the first place, including failure mode and effects analysis (FMEA) and fault tree analysis (FTA), methods wh
https://en.wikipedia.org/wiki/Leonid%20Bunimovich
Leonid Abramowich Bunimovich (born August 1, 1947) is a Soviet and American mathematician who made fundamental contributions to the theory of dynamical systems, statistical physics and various applications. Bunimovich received his bachelor's degree in 1967, master's degree in 1969 and PhD in 1973 from the University of Moscow. His master's and PhD thesis advisor was Yakov G. Sinai. In 1986 (after Perestroika started) he finally received the Doctor of Sciences degree in "Theoretical and Mathematical Physics". Bunimovich is a Regents' Professor of Mathematics at the Georgia Institute of Technology. Bunimovich is a Fellow of the Institute of Physics and was awarded the Humboldt Prize in Physics. Biography His master's thesis proved that some classes of quadratic maps of an interval have an absolutely continuous invariant measure and strong stochastic properties. Bunimovich is mostly known for the discovery of a fundamental mechanism of chaos in dynamical systems called the mechanism of defocusing. This discovery came as a striking surprise not only to the mathematics community but to the physics community as well. Physicists could not believe that such a (physical!) phenomenon is possible (even though a rigorous mathematical proof was provided) until they conducted massive numerical experiments. The most famous chaotic dynamical systems of this type, dynamical billiards, are focusing chaotic billiards such as the Bunimovich stadium ("Bunimovich flowers", elliptic flowers, etc.). Later Bunimovich proved that his mechanism of defocusing works in all dimensions despite the phenomenon of astigmatism. Bunimovich introduced absolutely focusing mirrors, which is a new notion in geometric optics, and proved that only such mirrors could be focusing parts of chaotic billiards. He also constructed the so-called Bunimovich mushrooms, which are visual examples of billiards with mixed regular and chaotic dynamics. Physical realizations of Bunimovich stadia have been constructed for both classical and quantum i
https://en.wikipedia.org/wiki/Kuso
Kuso is a term used in East Asia for the internet culture that generally includes all types of camp and parody. In Japanese, kuso is a word that is commonly translated to English as curse words such as fuck, shit, damn, and bullshit (both kuso and shit refer to feces), and is often said as an interjection. It is also used to describe outrageous matters and objects of poor quality. This usage of kuso was brought into Taiwan around 2000 by young people who frequently visited Japanese websites and quickly became an internet phenomenon, spreading to Taiwan and Hong Kong and subsequently to Mainland China. From Japanese kusogē to Taiwanese kuso The root of Taiwanese "kuso" was not the Japanese word kuso itself but kusogē. The word kusogē is a clipped compound of kuso and gēmu (game), which means, quite literally, "crappy (video) games". This term was eventually brought outside of Japan and its meaning shifted in the West, becoming a term of endearment (and even a category) towards either bad games of nostalgic value and/or poorly-developed games that still remain enjoyable as a whole. This philosophy soon spread to Taiwan, where people would share the games and often satirical comments on BBSes, and the term was further shortened. Games generally branded as kuso in Taiwan include Hong Kong 97 and the Death Crimson series. Because kusogē were often unintentionally funny, soon the definition of kuso in Taiwan shifted to "anything hilarious", and people started to brand anything outrageous and funny as kuso. Parodies, such as the Chinese robot Xianxingzhe ridiculed by a Japanese website, were marked as kuso. Mo lei tau films by Stephen Chow are often said to be kuso as well. The Cultural Revolution is often a subject of parody too, with songs such as I Love Beijing Tiananmen spread around the internet for laughs. Some, however, limit the definition of kuso to "humour limited to those about Hong Kong comics or Japanese anime, manga, and games". Kuso by such definitions are primarily doujin or f
https://en.wikipedia.org/wiki/Millieme
Millieme is a French word meaning one thousandth of something. In English it may refer to: Millieme (angle), a French unit of plane angle similar to a milliradian. One thousandth of an Egyptian pound, Tunisian dinar, or Libyan pound. Numbers
https://en.wikipedia.org/wiki/Pyongyang%20TV%20Tower
Pyongyang TV Tower is a free-standing concrete TV tower with an observation deck and a panorama restaurant at a height of in Pyongyang, North Korea. The tower stands in Kaeson Park in Moranbong-guyok, north of Kim Il-sung Stadium. The tower broadcasts signals for Korean Central Television. History It was built in 1967 to enhance the broadcasting coverage area, which was very poor at the time, and to start colour TV broadcasts. The Pyongyang TV Tower is chiefly based on the design of the Ostankino Tower in Moscow, which was built at the same time. Features There are broadcast antennas and technical equipment at the height of , located at circular platforms. An observation deck is located above the ground, and the tower is topped by a antenna. It uses its high-gain reflector antennas and panel antennas to provide wide coverage for analog and digital TV reception, as well as for radio reception. See also List of towers Television in North Korea References Towers in North Korea Buildings and structures in Pyongyang Radio masts and towers Observation towers Restaurant towers Towers completed in 1968 1968 establishments in North Korea 20th-century architecture in North Korea
https://en.wikipedia.org/wiki/Uniform%20boundedness
In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family. Definition Real line and complex plane Let F = {f_i : X → K, i ∈ I} be a family of functions indexed by I, where X is an arbitrary set and K is the set of real or complex numbers. We call F uniformly bounded if there exists a real number M such that |f_i(x)| ≤ M for all i ∈ I and all x ∈ X. Metric space In general let Y be a metric space with metric d. Then the set F = {f_i : X → Y, i ∈ I} is called uniformly bounded if there exists an element a of Y and a real number M such that d(f_i(x), a) ≤ M for all i ∈ I and all x ∈ X. Examples Every uniformly convergent sequence of bounded functions is uniformly bounded. The family of functions f_n(x) = sin(nx), defined for real x with n traveling through the integers, is uniformly bounded by 1. The family of derivatives of the above family, f_n′(x) = n cos(nx), is not uniformly bounded. Each f_n′ is bounded by |n|, but there is no real number M such that |n| ≤ M for all integers n. References Mathematical analysis
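The two families above are easy to probe numerically; a quick check with NumPy (the grid resolution is chosen arbitrarily):

```python
import numpy as np

xs = np.linspace(-10, 10, 10001)

# |sin(n x)| <= 1 for every integer n: one constant bounds the whole family.
print(max(np.abs(np.sin(n * xs)).max() for n in range(1, 50)))  # ~1.0

# The derivatives n*cos(n x) admit no common bound: the sup grows with n.
print([round(np.abs(n * np.cos(n * xs)).max(), 1) for n in (1, 10, 40)])  # ~[1.0, 10.0, 40.0]
```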
https://en.wikipedia.org/wiki/Biopharmaceutical
A biopharmaceutical, also known as a biological medical product, or biologic, is any pharmaceutical drug product manufactured in, extracted from, or semisynthesized from biological sources. Different from totally synthesized pharmaceuticals, they include vaccines, whole blood, blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic protein, and living medicines used in cell therapy. Biologics can be composed of sugars, proteins, nucleic acids, or complex combinations of these substances, or may be living cells or tissues. They (or their precursors or components) are isolated from living sources—human, animal, plant, fungal, or microbial. They can be used in both human and animal medicine. Terminology surrounding biopharmaceuticals varies between groups and entities, with different terms referring to different subsets of therapeutics within the general biopharmaceutical category. Some regulatory agencies use the terms biological medicinal products or therapeutic biological product to refer specifically to engineered macromolecular products like protein- and nucleic acid-based drugs, distinguishing them from products like blood, blood components, or vaccines, which are usually extracted directly from a biological source. Biopharmaceutics is pharmaceutics that works with biopharmaceuticals. Biopharmacology is the branch of pharmacology that studies biopharmaceuticals. Specialty drugs, a recent classification of pharmaceuticals, are high-cost drugs that are often biologics. The European Medicines Agency uses the term advanced therapy medicinal products (ATMPs) for medicines for human use that are "based on genes, cells, or tissue engineering", including gene therapy medicines, somatic-cell therapy medicines, tissue-engineered medicines, and combinations thereof. Within EMA contexts, the term advanced therapies refers specifically to ATMPs, although that term is rather nonspecific outside those contexts. Gene-based and cellular bi
https://en.wikipedia.org/wiki/Southwestern%20blot
The southwestern blot is a lab technique that involves identifying as well as characterizing DNA-binding proteins by their ability to bind to specific oligonucleotide probes. Determination of the molecular weight of proteins binding to DNA is also made possible by the technique. The name originates from a combination of ideas underlying the Southern blotting and Western blotting techniques, which detect DNA and protein respectively. Similar to other types of blotting, proteins are separated by SDS-PAGE and are subsequently transferred to nitrocellulose membranes. Beyond this point, southwestern blotting procedures vary: since the first blots, many more protocols have been proposed with the goal of enhancing results. Former protocols were hampered by the need for large amounts of protein and the proteins' susceptibility to degradation while being isolated. Southwestern blotting was first described by Brian Bowen, Jay Steinberg, U.K. Laemmli, and Harold Weintraub in 1979. At the time, the technique was called "protein blotting". While there were existing techniques for the purification of proteins associated with DNA, they often had to be used together to yield the desired results. Thus, Bowen and colleagues sought to describe a procedure that could simplify the methods current at their time. Method Original Method To begin, proteins of interest are prepared for the SDS-PAGE technique and subsequently loaded onto the gel for separation on the basis of molecular size. Large proteins have difficulty navigating through the mesh-like structure of the gel, as they cannot fit through the pores with the ease that smaller proteins can. As a result, large proteins do not travel very far on the gel in comparison to smaller proteins, which travel further. After enough time, this results in distinct bands that can be visualized by a number of post-electrophoresis staining procedures. The bands are at different positions on the gel relative
https://en.wikipedia.org/wiki/Image%20histogram
An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance. Image histograms are present on many modern digital cameras. Photographers can use them as an aid to show the distribution of tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows. This is less useful when using a raw image format, as the dynamic range of the displayed image may only be an approximation to that in the raw file. The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the total number of pixels in that particular tone. The left side of the horizontal axis represents the dark areas, the middle represents mid-tone values and the right hand side represents light areas. The vertical axis represents the size of the area (total number of pixels) that is captured in each one of these zones. Thus, the histogram for a very dark image will have most of its data points on the left side and center of the graph. Conversely, the histogram for a very bright image with few dark areas and/or shadows will have most of its data points on the right side and center of the graph. Image manipulation and histograms Image editors typically create a histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness or tonal value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Histogram equalization is a popular example of these algorithms. Improvements in picture brightness and contrast can thus be obtained. In the field of computer vision, image histograms can be useful tools f
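Computing such a histogram takes a few lines. A sketch with Pillow and NumPy (the file name is a placeholder; the three-way shadow/mid-tone/highlight split is an arbitrary illustration):

```python
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("L")  # 8-bit grayscale

# Count how many pixels fall at each of the 256 tonal values.
counts, _ = np.histogram(np.asarray(img), bins=256, range=(0, 256))

shadows = counts[:85].sum()      # left side of the histogram
midtones = counts[85:170].sum()
highlights = counts[170:].sum()  # right side
print(shadows, midtones, highlights)
```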
https://en.wikipedia.org/wiki/Image%20noise
Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the image sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that obscures the desired information. Typically the term “image noise” is used to refer to noise in 2D images, not 3D images. The original meaning of "noise" was "unwanted signal"; unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise ("static"). By analogy, unwanted electrical fluctuations are also called "noise". Image noise can range from almost imperceptible specks on a digital photograph taken in good light, to optical and radioastronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing. Such a noise level would be unacceptable in a photograph since it would be impossible even to determine the subject. Types Gaussian noise Principal sources of Gaussian noise in digital images arise during acquisition. The sensor has inherent noise due to the level of illumination and its own temperature, and the electronic circuits connected to the sensor inject their own share of electronic circuit noise. A typical model of image noise is Gaussian, additive, independent at each pixel, and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors ("kTC noise"). Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image. In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel. At higher exposures, however, image senso
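A minimal sketch of the additive, signal-independent Gaussian noise model described above, applied to an image stored as a float array in [0, 1] (the function name and the sigma value are illustrative choices, not taken from any particular sensor model):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(image, sigma=0.05):
    # Same zero-mean Gaussian law at every pixel, independent of the signal.
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid intensity range.
    return np.clip(noisy, 0.0, 1.0)

noisy = add_gaussian_noise(np.full((64, 64), 0.5))  # mid-gray test image
```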
https://en.wikipedia.org/wiki/Lists%20of%20biologists%20by%20author%20abbreviation
Lists of biologists by author abbreviation include lists of botanists and of zoologists. The abbreviations are typically used in articles on species described or named by the biologist. Botanists Zoologists List of authors of names published under the ICZN Lists of biology lists
https://en.wikipedia.org/wiki/Security%20association
A security association (SA) is the establishment of shared security attributes between two network entities to support secure communication. An SA may include attributes such as: cryptographic algorithm and mode; traffic encryption key; and parameters for the network data to be passed over the connection. The framework for establishing security associations is provided by the Internet Security Association and Key Management Protocol (ISAKMP). Protocols such as Internet Key Exchange (IKE) and Kerberized Internet Negotiation of Keys (KINK) provide authenticated keying material. An SA is a simplex (one-way channel) and logical connection which endorses and provides a secure data connection between the network devices. The fundamental requirement for an SA arises when the two entities communicate over more than one channel. Take, for example, a mobile subscriber and a base station. The subscriber may subscribe to more than one service. Therefore, each service may have different service primitives, such as a data encryption algorithm, public key, or initialization vector. To make things easier, all of this security information is grouped logically, and the logical group itself is a security association. Each SA has its own ID called a SAID. Both the base station and the mobile subscriber share the SAID, and from it they derive all the security parameters. In other words, an SA is a logical group of security parameters that enable the sharing of information with another entity. See also IPsec Virtual private network (VPN) Notes References Internet Key Exchange (IKEv2) Protocol - RFC 5996 IPsec Cryptography
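As a rough illustration (the field names below are assumptions chosen for exposition, not drawn from any RFC), the logical grouping an SA represents can be pictured as a record keyed by its SAID:

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    said: int            # SA identifier shared by both endpoints
    cipher: str          # cryptographic algorithm and mode, e.g. "AES-256-GCM"
    traffic_key: bytes   # traffic encryption key
    init_vector: bytes   # initialization vector / nonce material

# Each service a subscriber uses gets its own SA; both sides look up the
# same parameters from the shared SAID.
sa = SecurityAssociation(said=42, cipher="AES-256-GCM",
                         traffic_key=b"\x00" * 32, init_vector=b"\x00" * 12)
```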
https://en.wikipedia.org/wiki/User%20story
In software development and product management, a user story is an informal, natural language description of features of a software system. They are written from the perspective of an end user or user of a system, and may be recorded on index cards, Post-it notes, or digitally in project management software. Depending on the project, user stories may be written by different stakeholders like client, user, manager, or development team. User stories are a type of boundary object. They facilitate sensemaking and communication; and may help software teams document their understanding of the system and its context. History 1997: Kent Beck introduces user stories at the Chrysler C3 project in Detroit. 1998: Alistair Cockburn visited the C3 project and coined the phrase "A user story is a promise for a conversation." 1999: Kent Beck published the first edition of the book Extreme Programming Explained, introducing Extreme Programming (XP), and the usage of user stories in the planning game. 2001: Ron Jeffries proposed a "Three Cs" formula for user story creation: The Card (or often a post-it note) is a tangible physical token to hold the concepts; The Conversation is between the stakeholders (customers, users, developers, testers, etc.). It is verbal and often supplemented by documentation; The Confirmation ensures that the objectives of the conversation have been reached. 2001: The XP team at Connextra in London devised the user story format and shared examples with others. 2004: Mike Cohn generalized the principles of user stories beyond the usage of cards in his book User Stories Applied: For Agile Software Development that is now considered the standard reference for the topic according to Martin Fowler. Cohn names Rachel Davies as the inventor of user stories. While Davies was a team member at Connextra she credits the team as a whole with the invention. 2014: After a first article in 2005 and a blog post in 2008, in 2014 Jeff Patton published the user-st
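The user story format devised at Connextra, mentioned above, is commonly rendered as: "As a <role>, I want <capability> so that <benefit>." For example: "As a registered user, I want to reset my password so that I can regain access to my account" (an illustrative story, not one taken from the sources above).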
https://en.wikipedia.org/wiki/After%20Man
After Man: A Zoology of the Future is a 1981 speculative evolution book written by Scottish geologist and palaeontologist Dougal Dixon and illustrated by several illustrators including Diz Wallis, John Butler, Brian McIntyre, Philip Hood, Roy Woodard and Gary Marsh. The book features a foreword by Desmond Morris. After Man explores a hypothetical future set 50 million years after extinction of humanity, a time period Dixon dubs the "Posthomic", which is inhabited by animals that have evolved from survivors of a mass extinction succeeding our own time. After Man used a fictional setting and hypothetical animals to explain the natural processes behind evolution and natural selection. In total, over a hundred different invented animal species are featured in the book, described as part of fleshed-out fictional future ecosystems. Reviews for After Man were highly positive and its success spawned two follow-up speculative evolution books which used new fictional settings and creatures to explain other natural processes: The New Dinosaurs (1988) and Man After Man (1990). After Man and Dixon's following books inspired the speculative evolution artistic movement which focuses on speculative scenarios in the evolution of life, often possible future scenarios (such as After Man) or alternative paths in the past (such as The New Dinosaurs). Dixon is often considered the founder of the modern speculative evolution movement. Summary After Man explores an imagined future Earth, set 50 million years from the present, hypothesizing what new animals might evolve in the timespan between its setting and the present day. Ecology and evolutionary theory are applied to create believable creatures, all of which have their own binomial names and text describing their behaviour and interactions with other contemporary animals. In this new period of the Cenozoic, which Dixon calls the "Posthomic", Europe and Africa have fused, closing the Mediterranean Sea; whereas Asia and North Ameri
https://en.wikipedia.org/wiki/Front-end%20processor
A front-end processor (FEP), or a communications processor, is a small-sized computer which interfaces a number of networks, such as SNA, or a number of peripheral devices, such as terminals, disk units, printers and tape units, to the host computer. Data is transferred between the host computer and the front-end processor using a high-speed parallel interface. The front-end processor communicates with peripheral devices using slower serial interfaces, usually also through communication networks. The purpose is to off-load from the host computer the work of managing the peripheral devices, transmitting and receiving messages, packet assembly and disassembly, error detection, and error correction. Two examples are the IBM 3705 Communications Controller and the Burroughs Data Communications Processor. Sometimes FEP is synonymous with a communications controller, although the latter is not necessarily as flexible. Early communications controllers such as the IBM 270x series were hard wired, but later units were programmable devices. Front-end processor is also used in a more general sense in asymmetric multi-processor systems. The FEP is a processing device (usually a computer) which is closer to the input source than is the main processor. It performs some task such as telemetry control, data collection, reduction of raw sensor data, analysis of keyboard input, etc. The term front-end process also refers to the software interface between the user (client) and the application processes (server) in the client/server architecture. The user enters input (data) into the front-end process, where it is collected and processed in such a way that it conforms to what the receiving application (back end) on the server can accept and process. As an example, the user enters a URL into a GUI (front-end process) such as Microsoft Internet Explorer. The GUI then processes the URL in such a way that the user is able to reach or access the intended web pages on the web server (application serve
https://en.wikipedia.org/wiki/Hyperbolic%20set
In dynamical systems theory, a subset Λ of a smooth manifold M is said to have a hyperbolic structure with respect to a smooth map f if its tangent bundle may be split into two invariant subbundles, one of which is contracting and the other is expanding under f, with respect to some Riemannian metric on M. An analogous definition applies to the case of flows. In the special case when the entire manifold M is hyperbolic, the map f is called an Anosov diffeomorphism. The dynamics of f on a hyperbolic set, or hyperbolic dynamics, exhibits features of local structural stability and has been much studied, cf. Axiom A. Definition Let M be a compact smooth manifold, f: M → M a diffeomorphism, and Df: TM → TM the differential of f. An f-invariant subset Λ of M is said to be hyperbolic, or to have a hyperbolic structure, if the restriction to Λ of the tangent bundle of M admits a splitting into a Whitney sum of two Df-invariant subbundles, called the stable bundle and the unstable bundle and denoted Es and Eu. With respect to some Riemannian metric on M, the restriction of Df to Es must be a contraction and the restriction of Df to Eu must be an expansion. Thus, there exist constants 0 < λ < 1 and c > 0 such that TM|Λ = Es ⊕ Eu, with ‖Dfⁿ(v)‖ ≤ cλⁿ‖v‖ for all v ∈ Es and n > 0, and ‖Df⁻ⁿ(v)‖ ≤ cλⁿ‖v‖ for all v ∈ Eu and n > 0. If Λ is hyperbolic then there exists a Riemannian metric for which c = 1 (such a metric is called adapted). Examples A hyperbolic equilibrium point p is a fixed point, or equilibrium point, of f, such that (Df)p has no eigenvalue with absolute value 1. In this case, Λ = {p}. More generally, a periodic orbit of f with period n is hyperbolic if and only if Dfⁿ at any point of the orbit has no eigenvalue with absolute value 1, and it is enough to check this condition at a single point of the orbit. References Dynamical systems Limit sets
https://en.wikipedia.org/wiki/Creamery
A creamery is a place where milk and cream are processed and where butter and cheese are produced. Cream is separated from whole milk; the skimmed milk and the cream are then pasteurized separately. Whole milk for sale has had some cream returned to the skimmed milk. The creamery is the source of butter from a dairy. Cream is an emulsion of fat-in-water; the process of churning causes a phase inversion to butter, which is an emulsion of water-in-fat. Excess liquid, as buttermilk, is drained off in the process. Modern creameries are automatically controlled industries, but the traditional creamery needed skilled workers. Traditional tools included the butter churn and Scotch hands. The term "creamery" is sometimes used in retail trade as a place to buy milk products such as yogurt and ice cream. Under the banner of a creamery one might find a store also stocking pies and cakes, or even a coffeehouse with confectionery. See also List of cheesemakers List of dairy products References Kanes K. Rajah & Ken J. Burgess, editors (1991) Milk Fat: Production, Technology, Utilization, Society of Dairy Technology. R.K. Robinson, editor (1994) Modern Dairy Technology, 2nd edition, Chapman & Hall. R.A. Wilbey (1994) "Production of butter and dairy based spreads", in Robinson (1994). Butter Food processing Industrial processes
https://en.wikipedia.org/wiki/Norton%20Personal%20Firewall
Norton Personal Firewall, developed by Symantec, is a discontinued personal firewall with ad blocking, program control and privacy protection capabilities. Norton Personal Firewall's program control module is able to allow or deny individual applications access to the Internet. Programs are automatically allowed or denied Internet access; the firewall uses a blacklist and a whitelist to determine whether a given program should be allowed access. The advertisement-blocking feature of this software rewrites the HTML that one's browser uses to display Web pages. It checks code related to advertisements against a blacklist and prevents matching content from being displayed. The Privacy Control component blocks browser cookies and active content, and prevents the transmission of sensitive data through standard POP3 e-mail clients, Microsoft Office e-mail attachments and Instant Messaging services such as MSN Messenger, Windows Messenger and AOL Instant Messenger without the user's consent. See also Internet Security Comparison of antivirus software Comparison of firewalls References Firewall software Personal Firewall Gen Digital software Proprietary software
https://en.wikipedia.org/wiki/Power%20MOSFET
A power MOSFET is a specific type of metal–oxide–semiconductor field-effect transistor (MOSFET) designed to handle significant power levels. Compared to the other power semiconductor devices, such as an insulated-gate bipolar transistor (IGBT) or a thyristor, its main advantages are high switching speed and good efficiency at low voltages. It shares with the IGBT an isolated gate that makes it easy to drive. They can be subject to low gain, sometimes to a degree that the gate voltage needs to be higher than the voltage under control. The design of power MOSFETs was made possible by the evolution of MOSFET and CMOS technology, used for manufacturing integrated circuits since the 1960s. The power MOSFET shares its operating principle with its low-power counterpart, the lateral MOSFET. The power MOSFET, which is commonly used in power electronics, was adapted from the standard MOSFET and commercially introduced in the 1970s. The power MOSFET is the most common power semiconductor device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair. In particular, it is the most widely used low-voltage (less than 200 V) switch. It can be found in a wide range of applications, such as most power supplies, DC-to-DC converters, low-voltage motor controllers, and many other applications. History The MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. It was a breakthrough in power electronics. Generations of MOSFETs enabled power designers to achieve performance and density levels not possible with bipolar transistors. In 1969, Hitachi introduced the first vertical power MOSFET, which would later be known as the VMOS (V-groove MOSFET). The same year, the DMOS (double-diffused MOSFET) with self-aligned gate was first reported by Y. Tarui, Y. Hayashi and Toshihiro Sekigawa of the Electrotechnical Laboratory
https://en.wikipedia.org/wiki/Shortlex%20order
In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order. Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering. In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is [ε, a, b, c, ..., z, aa, ab, ac, ..., zz, aaa, aab, aac, ..., zzz, ...], where ε denotes the empty string. The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the natural numbers, giving the bijective numeration system for representing numbers. The shortlex ordering is also important in the theory of automatic groups. See also Graded lexicographic order References Order theory
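Because shortlex compares length first and falls back to lexicographic order only between sequences of equal length, it can be expressed as an ordinary sort key. A minimal Python sketch:

```python
# Shortlex: sort primarily by length, then lexicographically within a length.
words = ["b", "aa", "a", "ab", "ba", ""]
print(sorted(words, key=lambda s: (len(s), s)))
# ['', 'a', 'b', 'aa', 'ab', 'ba']  -- the empty string (ε) comes first
```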
https://en.wikipedia.org/wiki/WUPV
WUPV (channel 65) is a television station licensed to Ashland, Virginia, United States, serving the Richmond area as an affiliate of The CW. It is owned by Gray Television alongside Richmond-licensed NBC affiliate WWBT (channel 12) and WRID-LD (channel 48). The stations share studios on Midlothian Turnpike (US 60) in Richmond, while WUPV's transmitter is located northeast of Richmond in King William County, west of Enfield. WRID repeats its main channel from the WWBT transmitter behind the studios in the inner ring of Richmond on its third subchannel, mapped to WUPV-DT6. Established as a religious TV station in 1990, WZXK joined The WB in 1995 (as WAWB) and switched to UPN in 1997, adopting its present call sign. The result of the switch was to leave The WB without a full-time outlet in Richmond. The station joined The CW on its 2006 launch and today serves as one of two ATSC 3.0 (Next Gen TV) transmitters in central Virginia. The station airs one local newscast from the WWBT newsroom. History Christel Inc., run by James E. Campana, had broadcast leased-access religious programming on cable systems in Henrico County since 1978. In 1986, he was granted a construction permit for channel 65 in Ashland after settling with a competing applicant and began a years-long construction process that would involve more than $2 million in funds. The call letters WZXK were chosen at the suggestion of an attorney who knew they'd be available and after 10 suggestions were turned down by the FCC. Meanwhile, a tower was built in King William County in 1989 after Hanover County refused to concede a zoning variance to build the mast. Construction was almost halted on the rest of the project due to a sudden cash crunch; the transmitter was left sitting in a warehouse in Kentucky for a time because Christel needed to pay another $118,000. Channel 65 finally appeared on March 9, 1990. It aired primarily religious programming with some secular shows. However, it also dealt with financia
https://en.wikipedia.org/wiki/Sine%20and%20cosine%20transforms
In mathematics, the Fourier sine and cosine transforms are forms of the Fourier transform that do not use complex numbers or require negative frequency. They are the forms originally used by Joseph Fourier and are still preferred in some applications, such as signal processing or statistics. Definition The Fourier sine transform of f(t), sometimes denoted by either Fs(f) or f̂ˢ, is Fs(f)(ν) = 2 ∫_{−∞}^{∞} f(t) sin(2πνt) dt. If t means time, then ν is frequency in cycles per unit time, but in the abstract, they can be any pair of variables which are dual to each other. This transform is necessarily an odd function of frequency, i.e. for all ν: Fs(f)(−ν) = −Fs(f)(ν). The numerical factors in the Fourier transforms are defined uniquely only by their product. Here, in order that the Fourier inversion formula not have any numerical factor, the factor of 2 appears because the sine function has L² norm of 1/√2. The Fourier cosine transform of f(t), sometimes denoted by either Fc(f) or f̂ᶜ, is Fc(f)(ν) = 2 ∫_{−∞}^{∞} f(t) cos(2πνt) dt. It is necessarily an even function of frequency, i.e. for all ν: Fc(f)(−ν) = Fc(f)(ν). Since positive frequencies can fully express the transform, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. Simplification to avoid negative t Some authors only define the cosine transform for even functions of t, in which case its sine transform is zero. Since cosine is also even, a simpler formula can be used, Fc(f)(ν) = 4 ∫_{0}^{∞} f(t) cos(2πνt) dt. Similarly, if f is an odd function, then the cosine transform is zero and the sine transform can be simplified to Fs(f)(ν) = 4 ∫_{0}^{∞} f(t) sin(2πνt) dt. Other conventions Just like the Fourier transform takes the form of different equations with different constant factors (see Fourier transform), other authors also define the cosine transform as Fc(f)(ω) = √(2/π) ∫_{0}^{∞} f(t) cos(ωt) dt and sine as Fs(f)(ω) = √(2/π) ∫_{0}^{∞} f(t) sin(ωt) dt, or, the cosine transform as Fc(f)(ω) = ∫_{0}^{∞} f(t) cos(ωt) dt and the sine transform as Fs(f)(ω) = ∫_{0}^{∞} f(t) sin(ωt) dt, using ω as the transformation variable. And while t is typically used to represent the time domain, x is often used alternatively, particularly when representing frequencies in a spatial domain. Fourier inversion The original function can be recovered from its transform under the usual hypoth
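A quick numerical sanity check of the convention above (a sketch using NumPy quadrature, assuming the factor-2, full-line definition): the Gaussian f(t) = exp(−πt²) has cosine transform 2·exp(−πν²) and, being even, zero sine transform.

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 200_001)
f = np.exp(-np.pi * t**2)

def cosine_transform(nu):
    # Fc(f)(nu) = 2 * integral over the real line of f(t) cos(2 pi nu t) dt
    return 2.0 * np.trapz(f * np.cos(2 * np.pi * nu * t), t)

for nu in (0.0, 0.5, 1.0):
    print(nu, cosine_transform(nu), 2.0 * np.exp(-np.pi * nu**2))
# The two columns agree to high precision.
```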
https://en.wikipedia.org/wiki/Classification%20of%20discontinuities
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity, the oscillation measures the failure of a limit to exist. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). Classification For each of the following, consider a real-valued function f of a real variable x, defined in a neighborhood of the point x₀ at which f is discontinuous. Removable discontinuity Consider the piecewise function f(x) = x² for x < 1, f(1) = 0, and f(x) = 2 − x for x > 1. The point x₀ = 1 is a removable discontinuity. For this kind of discontinuity: The one-sided limit from the negative direction, L⁻ = lim_{x→x₀⁻} f(x), and the one-sided limit from the positive direction, L⁺ = lim_{x→x₀⁺} f(x), both exist, are finite, and are equal to L = L⁻ = L⁺. In other words, since the two one-sided limits exist and are equal, the limit L of f(x) as x approaches x₀ exists and is equal to this same value. If the actual value of f(x₀) is not equal to L, then x₀ is called a removable discontinuity. This discontinuity can be removed to make f continuous at x₀, or more precisely, the function g(x) = f(x) for x ≠ x₀, g(x₀) = L, is continuous at x = x₀. The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point x₀. This use is an abuse of terminology because continuity and discontinuity of a function are concepts defined only for points in the function's domain.
https://en.wikipedia.org/wiki/Auto-Tune
Auto-Tune (or autotune) is an audio processor introduced in 1997 by the American company Antares Audio Technologies. It uses a proprietary device to measure and alter pitch in vocal and instrumental music recording and performances. Auto-Tune was originally intended to disguise or correct off-key inaccuracies, allowing vocal tracks to be perfectly tuned. The 1998 Cher song "Believe" popularized the technique of using Auto-Tune to distort vocals. In 2018, the music critic Simon Reynolds observed that Auto-Tune had "revolutionized popular music", calling its use for effects "the fad that just wouldn't fade. Its use is now more entrenched than ever." In its role distorting vocals, Auto-Tune operates on different principles from the vocoder or talk box and produces different results. Function Auto-Tune is available as a plug-in for digital audio workstations used in a studio setting and as a stand-alone, rack-mounted unit for live performance processing. The processor slightly shifts pitches to the nearest true, correct semitone (to the exact pitch of the nearest note in traditional equal temperament). Auto-Tune can also be used as an effect to distort the human voice when pitch is raised or lowered significantly, such that the voice is heard to leap from note to note stepwise, like a synthesizer. Auto-Tune has become standard equipment in professional recording studios. Instruments such as the Peavey AT-200 guitar seamlessly use Auto-Tune technology for real-time pitch correction. Development Auto-Tune was developed by Andy Hildebrand, a Ph.D. research engineer who specialized in stochastic estimation theory and digital signal processing. Hildebrand conceived the vocal pitch correction technology on the suggestion of a colleague's wife, who had joked that she could benefit from a device to help her sing in tune. Over several months in early 1996, he implemented the algorithm on a custom Macintosh computer and presented the result at the NAMM Show later that yea
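The basic retuning operation described above, shifting a pitch to the nearest equal-temperament semitone, can be illustrated with a short sketch (a toy illustration of the arithmetic, not Antares' proprietary algorithm):

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    # Nearest whole number of semitones away from A4 (12 per octave).
    n = round(12 * math.log2(freq_hz / a4))
    # Exact equal-temperament frequency of that semitone.
    return a4 * 2 ** (n / 12)

print(snap_to_semitone(452.0))  # a sharp A4 is pulled back to 440.0 Hz
```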
https://en.wikipedia.org/wiki/Fly%20%28pentop%20computer%29
The Fly Pentop Computer and FLY Fusion Pentop Computer are personal electronics products manufactured by LeapFrog Enterprises Inc. They are called a "pentop" computer by their manufacturer because they consist of a pen with a computer inside. In 2009, LeapFrog discontinued both the manufacture and support of the device and all accessory products, such as notepads and ink refills, which are required for continued use. The inventor of the FLY Pentop, Jim Marggraff, left LeapFrog and founded Livescribe in January 2007. Description The Fly, released in 2005, is a customizable pen that is intended to assist children with schoolwork. There are several bundled and add-on applications available, including a notepad, calculator, language and writing assistant, and educational games; many of these require the use of a small cartridge that can be inserted into a port built into the rear of the pen. The Fly only works on its own proprietary digital paper, which is lightly printed with a pattern of dots to provide positioning information to the pen via a tiny infrared camera. The ink tip itself can be retracted into the body of the pen when no physical notes are desired. The pen uses digital paper and pattern decoding technology developed by Anoto to track where the user writes on the page. It uses Vision Objects' MyScript character recognition technology to read what's been written, and can read aloud nearly any word in U.S. English. Notably, the Fly works only with capital letters. To start the main menu of the base pen, the user writes "M" and circles it. After recognizing the circled "M", the pen switches to "menu mode". There are several different circle-letter codes for activating different applications; these codes are officially known as "Fly-Cons." Once an application is activated, the user uses the pen to draw on the paper to interact with the application. In many of the applications, users are told what to draw, rather than having the freedom to d
https://en.wikipedia.org/wiki/Cray%20T3E
The Cray T3E was Cray Research's second-generation massively parallel supercomputer architecture, launched in late November 1995. The first T3E was installed at the Pittsburgh Supercomputing Center in 1996. Like the previous Cray T3D, it was a fully distributed memory machine using a 3D torus topology interconnection network. The T3E initially used the DEC Alpha 21164 (EV5) microprocessor and was designed to scale from 8 to 2,176 Processing Elements (PEs). Each PE had between 64 MB and 2 GB of DRAM and a 6-way interconnect router with a payload bandwidth of 480 MB/s in each direction. Unlike many other MPP systems, including the T3D, the T3E was fully self-hosted and ran the UNICOS/mk distributed operating system with a GigaRing I/O subsystem integrated into the torus for network, disk and tape I/O. The original T3E (retrospectively known as the T3E-600) had a 300 MHz processor clock. Later variants, using the faster 21164A (EV56) processor, comprised the T3E-900 (450 MHz), T3E-1200 (600 MHz), T3E-1200E (with improved memory and interconnect performance) and T3E-1350 (675 MHz). The T3E was available in both air-cooled (AC) and liquid-cooled (LC) configurations. AC systems were available with 16 to 128 user PEs, LC systems with 64 to 2048 user PEs. A 1480-processor T3E-1200 was the first supercomputer to achieve a performance of more than 1 teraflops running a computational science application, in 1998. After Cray Research was acquired by Silicon Graphics in February 1996, development of new Alpha-based systems was stopped. While providing the -900, -1200 and -1200E upgrades to the T3E, in the long term Silicon Graphics intended Cray T3E users to migrate to the Origin 3000, a MIPS-based distributed shared memory computer, introduced in 2000. However, the T3E continued in production after SGI sold the Cray business the same year. See also History of supercomputing References External links Top500 description of T3E Inside Cray T3E-900 Serial Number 6702,
https://en.wikipedia.org/wiki/Traffic%20announcement%20%28radio%20data%20systems%29
Traffic announcement (TA) refers to the broadcasting of a specific type of traffic report on the Radio Data System. It is generally used by motorists, to assist with route planning, and for the avoidance of traffic congestion. The RDS-enabled receiver can be set to pay special attention to this TA flag and e.g. stop the tape/pause the CD or retune to receive a Traffic bulletin. The related TP (Traffic Programme) flag is used to allow the user to find only those stations that regularly broadcast traffic bulletins, whereas the TA flag is used to stop the tape or raise the volume during a traffic bulletin. On some modern units, traffic reports can also be recorded and stored within the unit, both while the unit is switched on, but also for a pre-set period after the unit is turned off. It may also have in-built timers to seek and record the same for two separate daily occasions, i.e., one setting for before the morning commute, and the second for the evening return journey. These messages may then subsequently be recalled on demand by the driver. This specific function is known as TIM, or traffic information message. See also Traffic Message Channel – an automated service operational in Europe. External links BBC factsheet – Radio Data System (RDS) in HTML format RDSList.com Broadcast engineering Radio technology
https://en.wikipedia.org/wiki/Netopia
Farallon, later renamed Netopia, was a computer networking company headquartered in Berkeley, and subsequently Emeryville, California, that produced a wide variety of products including bridges, repeaters and switches, and in their later Netopia incarnation, modems, routers, gateways, and Wi-Fi devices. The company also produced the NBBS (Netopia Broadband Server Software) and, as Farallon, Timbuktu remote administration software, as well as the MacRecorder, the first audio capture and manipulation products for the Macintosh (later sold to Macromedia). The company was founded in 1986 and changed its name to Netopia in 1998. Farallon originated several notable technologies, including: PhoneNet, an implementation of AppleTalk over plain ("Cat-3") telephone wiring or, more commonly, EIA-TIA 568A/B structured cabling systems. Many versions of the product were produced, but the original product was a commercialized version of a kit developed and produced by BMUG, the Berkeley Macintosh Users Group, in 1986. The StarController, a line of LocalTalk and Ethernet bridges and switches released in 1988 which integrated directly with EIA-TIA 568A/B structured cabling systems. EtherWave, an ADB-powered serial-to-Ethernet bridge in a dongle form factor which looked something like a manta ray. The two external ports were 10BASE-T and the serial pigtail spoke an overclocked 690 kbps version of LocalTalk. This served both to allow devices without expansion busses (commonly early Macintosh computers and LaserWriter printers) to connect directly to Ethernet networks, and also to allow the daisy-chaining of multiple devices from a single Ethernet switch or bridge port. Later versions used Apple's "AAUI" version of the Attachment Unit Interface to achieve full 10 Mbps host connections. AirDock, a serial-to-IrDA gateway which allowed devices with LocalTalk ports to communicate on IrDA infrared wireless networks. Netopia acquired multiple companies in the home networking space inclu
https://en.wikipedia.org/wiki/Center%20of%20percussion
The center of percussion is the point on an extended massive object attached to a pivot where a perpendicular impact will produce no reactive shock at the pivot. Translational and rotational motions cancel at the pivot when an impulsive blow is struck at the center of percussion. The center of percussion is often discussed in the context of a bat, racquet, door, sword or other extended object held at one end. The same point is called the center of oscillation for the object suspended from the pivot as a pendulum, meaning that a simple pendulum with all its mass concentrated at that point will have the same period of oscillation as the compound pendulum. In sports, the center of percussion of a bat, racquet, or club is related to the so-called "sweet spot", but the latter is also related to vibrational bending of the object. Explanation Imagine a rigid beam suspended from a wire by a fixture that can slide freely along the wire at point P, as shown in the Figure. An impulsive blow is applied from the left. If it is below the center of mass (CM) it will cause the beam to rotate counterclockwise around the CM and also cause the CM to move to the right. The center of percussion (CP) is below the CM. If the blow falls above the CP, the rightward translational motion will be bigger than the leftward rotational motion at P, causing the net initial motion of the fixture to be rightward. If the blow falls below the CP the opposite will occur, rotational motion at P will be larger than translational motion and the fixture will move initially leftward. Only if the blow falls exactly on the CP will the two components of motion cancel out to produce zero net initial movement at point P. When the sliding fixture is replaced with a pivot that cannot move left or right, an impulsive blow anywhere but at the CP results in an initial reactive force at the pivot. Calculating the center of percussion For a free, rigid beam, an impulse applied at right angle at a distance
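As a worked example (using the standard result that the center of oscillation lies a distance I/(m·d) from the pivot, where I is the moment of inertia about the pivot and d is the distance from the pivot to the center of mass): for a uniform rod of length L and mass m pivoted at one end, I = mL²/3 and d = L/2, so the center of percussion lies at (mL²/3)/(m·L/2) = 2L/3 from the pivot, two-thirds of the way down the rod.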
https://en.wikipedia.org/wiki/Fire%20whirl
A fire whirl or fire devil (sometimes referred to as a fire tornado) is a whirlwind induced by a fire and often (at least partially) composed of flame or ash. These start with a whirl of wind, often made visible by smoke, and may occur when intense rising heat and turbulent wind conditions combine to form whirling eddies of air. These eddies can contract into a tornado-like vortex that sucks in debris and combustible gases. The phenomenon is sometimes labeled a fire tornado, firenado, fire swirl, or fire twister, but these terms usually refer to a separate phenomenon where a fire has such intensity that it generates an actual tornado. Fire whirls are not usually classifiable as tornadoes as the vortex in most cases does not extend from the surface to cloud base. Also, even in such cases, those fire whirls very rarely are classic tornadoes, as their vorticity derives from surface winds and heat-induced lifting, rather than from a tornadic mesocyclone aloft. The phenomenon was first verified in the 2003 Canberra bushfires and has since been verified in the 2018 Carr Fire in California and the 2020 Loyalton Fire in California and Nevada. Formation A fire whirl consists of a burning core and a rotating pocket of air. A fire whirl can reach up to . Fire whirls become frequent when a wildfire, or especially a firestorm, creates its own wind, which can spawn large vortices. Even bonfires often have whirls on a smaller scale, and tiny fire whirls have been generated by very small fires in laboratories. Most of the largest fire whirls are spawned from wildfires. They form when a warm updraft and convergence from the wildfire are present. They are usually tall, a few meters (several feet) wide, and last only a few minutes. Some, however, can be more than tall, contain wind speeds over , and persist for more than 20 minutes. Fire whirls can uproot trees that are tall or more. These can also aid the 'spotting' ability of wildfires to propagate and start new fires as they lift burn
https://en.wikipedia.org/wiki/Galerkin%20method
In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used: Ritz–Galerkin method (after Walther Ritz) typically assumes symmetric and positive definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions. Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator. Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. Petrov–Galerkin method can be viewed as an extension of Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation. Examples of Galerkin methods are: the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method, the boundary element method for solving integral equations, Krylov subspace methods. Example: Matrix linear system We first introduce and illustrate the Galerkin method as being
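A concrete sketch of the matrix case the article introduces (the data below are randomly generated stand-ins, not from the article): the Galerkin condition forces the residual of Ax = b to be orthogonal to the subspace spanned by the basis vectors, reducing the n-dimensional problem to a small k-dimensional one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite operator
b = rng.standard_normal(n)

# Orthonormal basis of a k-dimensional trial subspace.
V = np.linalg.qr(rng.standard_normal((n, k)))[0]

# Galerkin condition: V.T (b - A V c) = 0, a small k-by-k linear system.
c = np.linalg.solve(V.T @ A @ V, V.T @ b)
x_star = V @ c                    # approximate solution in span(V)

print(np.linalg.norm(V.T @ (b - A @ x_star)))  # ~0: residual orthogonal to span(V)
```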
https://en.wikipedia.org/wiki/Modified%20triadan%20system
The modified Triadan system is a scheme of dental nomenclature that can be used widely across different animal species. It is used worldwide among veterinary surgeons. Each tooth is given a three-digit number. The first digit denotes the quadrant of the mouth in which the tooth lies: 1 for the upper right, 2 for the upper left, 3 for the lower left, and 4 for the lower right. If a deciduous tooth is being referred to, a different set of quadrant numbers is used: 5 for the upper right, 6 for the upper left, 7 for the lower left, and 8 for the lower right. The second and third digits refer to the location of the tooth from front to back (or rostral to caudal). This starts at 01 and goes up to 11 for many species, depending on the total number of teeth. References Mammal anatomy Horse anatomy Zoology
https://en.wikipedia.org/wiki/Chemotype
A chemotype (sometimes chemovar) is a chemically distinct entity in a plant or microorganism, with differences in the composition of the secondary metabolites. Minor genetic and epigenetic changes with little or no effect on morphology or anatomy may produce large changes in the chemical phenotype. Chemotypes are often defined by the most abundant chemical produced by that individual and the concept has been useful in work done by chemical ecologists and natural product chemists. With respect to plant biology, the term "chemotype" was first coined by Rolf Santesson and his son Johan in 1968, defined as, "...chemically characterized parts of a population of morphologically indistinguishable individuals." In microbiology, the term "chemoform" or "chemovar" is preferred in the 1990 edition of the International Code of Nomenclature of Bacteria (ICNB), the former referring to the chemical constitution of an organism and the latter meaning "production or amount of production of a particular chemical." Terms with the suffix -type are discouraged so as to avoid confusion with type specimens. The terms chemotype and chemovar were originally introduced to the ICNB in a proposed revision to one of the nomenclatural rules dealing with infrasubspecific taxonomic subdivisions at the 1962 meeting of the International Microbiological Congress in Montreal. The proposed change argued that nomenclatural regulation of these ranks, such as serotype and morphotype, is necessary to avoid confusion. In proposed recommendation 8a(7), it was asked that "authorization be given for the use of the terms chemovar and chemotype," defining the terms as being "used to designate an infrasubspecific subdivision to include infrasubspecific forms or strains characterized by the production of some chemical not normally produced by the type strain of the species." The change to the Code was approved in August 1962 by the Judicial Commission of the International Committee of Bacteriological Nomenclature
https://en.wikipedia.org/wiki/Sysctl
sysctl is a software utility of some Unix-like operating systems that reads and modifies the attributes of the system kernel such as its version number, maximum limits, and security settings. It is available both as a system call for compiled programs, and an administrator command for interactive use and scripting. Linux additionally exposes sysctl as a virtual file system. BSD In BSD, these parameters are generally objects in a management information base (MIB) that describe tunable limits such as the size of a shared memory segment, the number of threads the operating system will use as an NFS client, or the maximum number of processes on the system; or describe, enable or disable behaviors such as IP forwarding, security restrictions on the superuser (the "securelevel"), or debugging output. In OpenBSD and DragonFly BSD, sysctl is also used as the transport layer for the hw.sensors framework for hardware monitoring, whereas NetBSD uses the ioctl system call for its sysmon envsys counterpart. Both sysctl and ioctl are the two system calls which can be used to add extra functionality to the kernel without adding yet another system call; for example, in 2004 with OpenBSD 3.6, when the tcpdrop utility was introduced, sysctl was used as the underlying system call. In FreeBSD, although there is no sensors framework, the individual temperature and other sensors are still commonly exported through the sysctl tree through Newbus, for example, as is the case with the aibs(4) driver that's available in all the 4 BSD systems, including FreeBSD. In BSD, a system call or system call wrapper is usually provided for use by programs, as well as an administrative program and a configuration file (for setting the tunable parameters when the system boots). This feature first appeared in 4.4BSD. It has the advantage over hardcoded constants that changes to the parameters can be made dynamically without recompiling the kernel. Historically, although kernel variables themselves
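On Linux, where sysctl is additionally exposed as a virtual file system, a parameter name maps directly onto a path under /proc/sys, so parameters can be read without the command-line utility. A minimal sketch (Linux-only; the helper name is illustrative):

```python
from pathlib import Path

def read_sysctl(name: str) -> str:
    # "kernel.ostype" -> /proc/sys/kernel/ostype
    return Path("/proc/sys", *name.split(".")).read_text().strip()

print(read_sysctl("kernel.ostype"))     # e.g. "Linux"
print(read_sysctl("kernel.osrelease"))  # kernel version string
```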
https://en.wikipedia.org/wiki/Rare%20species
A rare species is a group of organisms that are very uncommon, scarce, or infrequently encountered. This designation may be applied to either a plant or animal taxon, and is distinct from the terms endangered or threatened. Designation of a rare species may be made by an official body, such as a national government, state, or province. The term more commonly appears without reference to specific criteria. The International Union for Conservation of Nature does not normally make such designations, but may use the term in scientific discussion. Rarity rests on a specific species being represented by a small number of organisms worldwide, usually fewer than 10,000. However, a species having a very narrow endemic range or fragmented habitat also influences the concept. Almost 75% of known species can be classified as "rare". Rare species are species with small populations. Many will move into the endangered or vulnerable category if the negative factors affecting them continue to operate. Well-known examples of rare species (familiar because they are large terrestrial animals) include the Himalayan brown bear, the fennec fox, the wild Asiatic buffalo, and the hornbill. They are not endangered yet, but classified as "at risk", although the frontier between these categories is increasingly difficult to draw given the general paucity of data on rare species. This is especially the case in the world's oceans, where many 'rare' species not seen for decades may well have gone extinct unnoticed, if they are not already on the verge of extinction like the Mexican vaquita. A species may be endangered or vulnerable, but not considered rare, if it has a large, dispersed population. IUCN uses the term "rare" as a designation for species found in isolated geographical locations. Rare species are generally considered threatened because a small population is less likely to recover from ecological disasters. A rare plant's legal status can be observed through the USDA's Plants Database
https://en.wikipedia.org/wiki/Nitro%20blue%20tetrazolium%20chloride
Nitro blue tetrazolium is a chemical compound composed of two tetrazole moieties. It is used in immunology for sensitive detection of alkaline phosphatase (with BCIP). NBT serves as the oxidant and BCIP is the AP substrate (and also gives a dark blue dye). Clinical significance In immunohistochemistry, alkaline phosphatase is often used as a marker, conjugated to an antibody. The colored product of the NBT/BCIP reaction reveals where the antibody is bound; alternatively, the marker can be detected by immunofluorescence. The NBT/BCIP reaction is also used for colorimetric/spectrophotometric activity assays of oxidoreductases. One application is in activity stains in gel electrophoresis, such as with the mitochondrial electron transport chain complexes. Nitro blue tetrazolium is used in a diagnostic test, particularly for chronic granulomatous disease and other diseases of phagocyte function. When there is an NADPH oxidase defect, the phagocyte is unable to make the reactive oxygen species or radicals required for bacterial killing. As a result, bacteria may thrive within the phagocyte. The higher the blue score, the better the cell is at producing reactive oxygen species. References Biochemistry detection reactions Immunologic tests Nitrobenzenes Phenol ethers Tetrazoles
https://en.wikipedia.org/wiki/SIGSOFT
The Association for Computing Machinery's Special Interest Group on Software Engineering provides a forum for computing professionals from industry, government and academia to examine principles, practices, and new research results in software engineering. SIGSOFT focuses on issues related to all aspects of software development and maintenance, with emphasis on requirements, specification and design, software architecture, validation, verification, debugging, software safety, software processes, software management, measurement, user interfaces, configuration management, software engineering environments, and CASE tools. SIGSOFT (co-)sponsors conferences and symposia including the International Conference on Software Engineering (ICSE), the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) and other events. SIGSOFT publishes the informal bimonthly newsletter Software Engineering Notes (SEN), with papers, reports and other material related to the cost-effective, timely development and maintenance of high-quality software. SIGSOFT's mission is to improve the ability to engineer software by stimulating interaction among practitioners, researchers, and educators; by fostering the professional development of software engineers; and by representing software engineers to professional, legal, and political entities. References External links SIGSOFT website Association for Computing Machinery Special Interest Groups Software engineering organizations Organizations established in 1976
https://en.wikipedia.org/wiki/Software%20Engineering%20Notes
The ACM SIGSOFT Software Engineering Notes (SEN) is published by the Association for Computing Machinery (ACM) for the Special Interest Group on Software Engineering (SIGSOFT). It was established in 1976, and the first issue appeared in May 1976. It provides a forum for informal articles and other information on software engineering. The headquarters is in New York City. Since 1990, it has been published five times a year. References External links ACM SIGSOFT Software Engineering Notes homepage Computer magazines published in the United States Association for Computing Machinery magazines Engineering magazines Magazines established in 1976 Magazines published in New York City Software engineering publications
https://en.wikipedia.org/wiki/Fermat%20point
In Euclidean geometry, the Fermat point of a triangle, also called the Torricelli point or Fermat–Torricelli point, is a point such that the sum of the three distances from each of the three vertices of the triangle to the point is the smallest possible or, equivalently, the geometric median of the three vertices. It is so named because this problem was first raised by Fermat in a private letter to Evangelista Torricelli, who solved it. The Fermat point gives a solution to the geometric median and Steiner tree problems for three points. Construction The Fermat point of a triangle with largest angle at most 120° is simply its first isogonic center or X(13), which is constructed as follows: Construct an equilateral triangle on each of two arbitrarily chosen sides of the given triangle. Draw a line from each new vertex to the opposite vertex of the original triangle. The two lines intersect at the Fermat point. An alternative method is the following: On each of two arbitrarily chosen sides, construct an isosceles triangle, with base the side in question, 30-degree angles at the base, and the third vertex of each isosceles triangle lying outside the original triangle. For each isosceles triangle draw a circle, in each case with center on the new vertex of the isosceles triangle and with radius equal to each of the two new sides of that isosceles triangle. The intersection inside the original triangle between the two circles is the Fermat point. When a triangle has an angle greater than 120°, the Fermat point is sited at the obtuse-angled vertex. In what follows "Case 1" means the triangle has an angle exceeding 120°. "Case 2" means no angle of the triangle exceeds 120°. Location of X(13) Fig. 2 shows the equilateral triangles attached to the sides of the arbitrary triangle . Here is a proof using properties of concyclic points to show that the three lines in Fig 2 all intersect at the point and cut one another at angles of 60°. The triangles are congr
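Since the Fermat point is the geometric median of the three vertices, it can also be approximated numerically, for instance with Weiszfeld's iteration (a sketch; it assumes no angle reaches 120°, so the optimum is interior, and that the iterate never lands exactly on a vertex):

```python
import numpy as np

def fermat_point(vertices, iterations=200):
    pts = np.asarray(vertices, dtype=float)
    x = pts.mean(axis=0)                     # start at the centroid
    for _ in range(iterations):
        d = np.linalg.norm(pts - x, axis=1)  # distances to the three vertices
        w = 1.0 / d
        x = (w[:, None] * pts).sum(axis=0) / w.sum()
    return x

# For an equilateral triangle the Fermat point is the centroid:
print(fermat_point([(0, 0), (1, 0), (0.5, 3**0.5 / 2)]))
```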
https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy%20deconvolution
The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon B. Lucy, who described it independently. Description When an image is produced using an optical system and detected using photographic film or a charge-coupled device, for instance, it is inevitably blurred, with an ideal point source not appearing as a point but being spread out into what is known as the point spread function. Extended sources can be decomposed into the sum of many individual point sources, thus the observed image can be represented in terms of a transition matrix p operating on an underlying image: dᵢ = Σⱼ pᵢⱼ uⱼ, where uⱼ is the intensity of the underlying image at pixel j and dᵢ is the detected intensity at pixel i. In general, a matrix whose elements are pᵢⱼ describes the portion of light from source pixel j that is detected in pixel i. In most good optical systems (or in general, linear systems that are described as shift invariant) the transfer function p can be expressed simply in terms of the spatial offset between the source pixel j and the observation pixel i: pᵢⱼ = P(i − j), where P(i − j) is called a point spread function. In that case the above equation becomes a convolution. This has been written for one spatial dimension, but of course most imaging systems are two dimensional, with the source, detected image, and point spread function all having two indices. So a two dimensional detected image is a convolution of the underlying image with a two dimensional point spread function plus added detection noise. In order to estimate uⱼ given the observed dᵢ and a known P, the following iterative procedure is employed, in which the estimate of uⱼ (called ûⱼ⁽ᵗ⁾) for iteration number t is updated as follows: ûⱼ⁽ᵗ⁺¹⁾ = ûⱼ⁽ᵗ⁾ Σᵢ (dᵢ / cᵢ) pᵢⱼ, where cᵢ = Σⱼ pᵢⱼ ûⱼ⁽ᵗ⁾. It has been shown empirically that if this iteration converges, it converges to the maximum likelihood solution for uⱼ. Writing this more generally f
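A minimal sketch of the update rule above for a two-dimensional image d with a known PSF, using FFT-based convolution (the flat initial estimate and the small epsilon guard are implementation choices, not part of the original formulation):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(d, psf, iterations=30):
    d = np.asarray(d, dtype=float)
    u = np.full_like(d, d.mean())       # flat, positive initial estimate
    psf_mirror = psf[::-1, ::-1]        # flipped PSF for the correction step
    for _ in range(iterations):
        c = fftconvolve(u, psf, mode="same")                 # c_i = sum_j p_ij u_j
        ratio = d / np.maximum(c, 1e-12)                     # d_i / c_i, guarded
        u = u * fftconvolve(ratio, psf_mirror, mode="same")  # multiplicative update
    return u
```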
https://en.wikipedia.org/wiki/WRBW
WRBW (channel 65), branded on-air as Fox 35 Plus, is a television station in Orlando, Florida, United States, serving as the local outlet for the MyNetworkTV programming service. It is owned and operated by Fox Television Stations alongside Fox outlet WOFL (channel 35). Both stations share studios on Skyline Drive in Lake Mary, while WRBW's transmitter is located in unincorporated Bithlo, Florida. History WRBW began operation as an independent station on June 6, 1994, airing vintage sitcoms, cartoons and older movies. It was owned by Rainbow Media, a subsidiary of Cablevision Systems Corporation. It originally operated from studio facilities located on the backlot of Universal Studios Florida. WRBW became the Orlando area affiliate of the United Paramount Network (UPN) (a network created by BHC and Paramount), when the network debuted on January 16, 1995. Since UPN only provided two hours of network programming two nights a week at launch, WRBW essentially still programmed itself as an independent station. During the late 1990s, especially during the wildfire-plagued summer of 1998, there were occasions on which ABC Sports programming was moved to channel 65 in order for the market's ABC affiliate WFTV (channel 9) to provide wall-to-wall news coverage. Some of ABC's Saturday morning children's programs also aired on WRBW, until WRDQ signed on the air in April 2000. Chris-Craft Industries, part-owner of UPN (through its United Television unit), bought the station in 1998, making WRBW the first owned-and-operated station of a major network in the Orlando market. Fox Television Stations acquired most of Chris-Craft's television stations, including WRBW, in 2001. Fox did not consider moving its affiliation from WOFL to WRBW, however; not only was WOFL one of Fox's strongest affiliates, but WRBW was located on a very high channel number. The buyout of Chris-Craft's stake in UPN by Viacom (which had owned 50% of UPN since 1996) and the subsequent purchase of WRBW by Fox effe
https://en.wikipedia.org/wiki/Hacker%20%28video%20game%29
Hacker is a 1985 video game by Activision. It was designed by Steve Cartwright and released for the Amiga, Amstrad CPC, Apple II, Atari 8-bit family, Atari ST, Commodore 64, Macintosh, DOS, MSX2, and ZX Spectrum. Plot Activision executive Jim Levy introduced Hacker to reporters by pretending that something had gone wrong during his attempt to connect on line to company headquarters to demonstrate a new game. After several attempts he logged into a mysterious non-Activision computer, before explaining, "That, ladies and gentlemen, is the game". The player assumes the role of a hacker, a person experienced in breaking into secure computer systems, who accidentally acquires access to a non-public system. The game was shipped with no information on how to play, thus building the concept that the player did hack into a system. BPL2020 The player must attempt to hack into the Magma Ltd. computer system at the beginning of the game by guessing the logon password. The password becomes obvious only after gaining access, through another means of entry, to the later stage of the game, but typing help or h in the initial command line gives a clue. Since initial attempts consist of guessing (and likely failing), access is eventually granted due to a supposed malfunction in the security system. Once the player is in, the player is asked to identify various parts of a robot unit by pointing the cursor at the relevant parts and pressing the joystick button. Most parts have exotic and technical names, such as "asynchronous data compactor" or "phlamson joint"—this again allows more room for error by initially trying to guess which part each name belongs to. Failure to identify each part correctly forces the player to take a retest until a 100 percent identification is made, at which point the player is then allowed to continue. The player gains control of the robot which can travel around the globe via secret tunnels, deep within the earth. The game's text states that the robot is
https://en.wikipedia.org/wiki/Pressure%20Equipment%20Directive%20%28EU%29
The Pressure Equipment Directive (PED) 2014/68/EU (formerly 97/23/EC) of the EU sets out the standards for the design and fabrication of pressure equipment ("pressure equipment" means steam boilers, pressure vessels, piping, safety valves and other components and assemblies subject to pressure loading) generally over one liter in volume and having a maximum pressure more than 0.5 bar gauge. It also sets out the administrative requirements for the "conformity assessment" of pressure equipment, allowing it to be placed freely on the European market without local legislative barriers. It has been mandatory throughout the EU since 30 May 2002, with the 2014 revision fully effective as of 19 July 2016. The standards and regulations regarding pressure vessels and boiler safety are also very close to the US standards defined by the American Society of Mechanical Engineers (ASME). This enables most international inspection agencies to provide both verification and certification services to assess compliance with the different pressure equipment directives. Unlike ASME, the PED does not generally require pressure vessel manufacturers to obtain a prior manufacturing permit/certificate/stamp. Contents Scope and Definitions (including exemptions of its scope) Market surveillance Technical requirements: classification of pressure equipment according to type and content. Free movement Presumption of conformity Committee on technical standards and regulations Committee on Pressure Equipment Safeguard clause Classification of pressure equipment Conformity assessment European approval for materials Notified bodies Recognized third-party organisations User inspectorates CE marking Unduly affixed CE marking International co-operation Decisions entailing refusal or restriction Repeal Transposition and transitional provisions Addressees of the Directive: the EU member states for implementation in national laws and/or regulations. Annex I: Essential safety requirements General
https://en.wikipedia.org/wiki/MailSlot
A Mailslot is a one-way interprocess communication mechanism, available on the Microsoft Windows operating system, that allows communication between processes both locally and over a network. The use of Mailslots is generally simpler than named pipes or sockets when a relatively small number of relatively short messages are expected to be transmitted, such as for example infrequent state-change messages, or as part of a peer-discovery protocol. The Mailslot mechanism allows for short message broadcasts ("datagrams") to all listening computers across a given network domain. Features Mailslots function as a server-client interface. A server can create a Mailslot, and a client can write to it by name. Only the server can read the mailslot; as such, mailslots represent a one-way communication mechanism. A server-client interface could consist of two processes communicating locally or across a network. Remote mailslot messages are carried as unreliable datagrams over the SMB protocol, and mailslots work across all computers in the same network domain. Mailslots offer no confirmation that a message has been received. Mailslots are generally a good choice when one client process must broadcast a message to multiple server processes. Uses The most widely known use of the Mailslot IPC mechanism is the Windows Messenger service that is part of the Windows NT-line of products, including Windows XP. The Messenger Service, not to be confused with the MSN Messenger internet chat service, is essentially a Mailslot server that waits for a message to arrive. When a message arrives, it is displayed in a popup onscreen. The NET SEND command is therefore a type of Mailslot client, because it writes to specified mailslots on a network. A number of programs also use Mailslots to communicate. Generally these are amateur chat clients and other such programs. Commercial programs usually prefer pipes or sockets. Mailslots are implemented as files in a mailslot file system (MSFS). Examples of Mailslots include: MAILSLOT\Messngr - Microsoft N
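A minimal sketch of the server/client pattern described above, using the documented Win32 calls (CreateMailslot on the reading side, CreateFile/WriteFile on the writing side). The slot name demo_slot is a hypothetical example, and error handling is reduced to bare checks:

```c
/* Minimal mailslot sketch (Win32). Run one instance with the argument
   "server" first, then another with no arguments as the client. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define SLOT_NAME "\\\\.\\mailslot\\demo_slot"   /* hypothetical name */

/* Server: creates the mailslot and blocks until a message arrives.
   Only this process can read from the slot (one-way IPC). */
static int server(void)
{
    HANDLE slot = CreateMailslotA(SLOT_NAME, 0 /* any message size */,
                                  MAILSLOT_WAIT_FOREVER, NULL);
    char buf[425];   /* remote mailslot datagrams are limited to 424 bytes */
    DWORD bytesRead;

    if (slot == INVALID_HANDLE_VALUE) return 1;
    if (ReadFile(slot, buf, sizeof(buf) - 1, &bytesRead, NULL)) {
        buf[bytesRead] = '\0';
        printf("received: %s\n", buf);
    }
    CloseHandle(slot);
    return 0;
}

/* Client: opens the slot by name, as if it were a file, and writes to it. */
static int client(void)
{
    HANDLE slot = CreateFileA(SLOT_NAME, GENERIC_WRITE, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    const char *msg = "hello from the client";
    DWORD bytesWritten;

    if (slot == INVALID_HANDLE_VALUE) return 1;
    WriteFile(slot, msg, (DWORD)strlen(msg), &bytesWritten, NULL);
    CloseHandle(slot);
    return 0;
}

int main(int argc, char **argv)
{
    return (argc > 1 && strcmp(argv[1], "server") == 0) ? server() : client();
}
```

Opening "\\\\*\\mailslot\\demo_slot" on the client side instead would broadcast the message to every listening server in the domain, which is the pattern NET SEND relied on.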
https://en.wikipedia.org/wiki/The%20Silver%20Spade
The Silver Spade was a giant power shovel used for strip mining in southeastern Ohio. Manufactured by Bucyrus-Erie of South Milwaukee, Wisconsin, the model 1950-B was one of two of this model built, the other being the GEM of Egypt. Its sole function was to remove the earth and rock overburden from the coal seam. Attempts to purchase the shovel from Consol for $2.6 million and preserve it as the centerpiece of a mining museum exhibit fell short, and the shovel was dismantled in February 2007. Facts and figures Began operations – November 1965 Speed – 1/4 mph (400 m/h) Bucket capacity – 105 cu yd (80 m3) Operating weight – 14,000,000 lb (7,000 short tons, 6,400 metric tons) Height – 220 ft to top of boom (67 m) Boom length – 200 feet (61 m) Width – 59 ft (18 m) Height of crawlers – 8 ft (2.5 m) Length of crawlers – 34 ft (10 m) Maximum dumping height – 139 ft (42 m) Maximum dumping radius – 195 ft (59 m) Rating on A.C. motors – 13,500 hp (10.1 MW) peak The entire operation of the shovel is controlled by two hand levers and a pair of foot pedals. Digs 315,000 lb (143 metric tons) of earth in a single bite, swings 180° and deposits the load up to 390 ft (119 m) away from the digging points at heights up to 140 ft (42.5 m). The machine's four hoist ropes total 3,000 ft (914 m) in length. Fourteen main digging cycle motors are capable of developing a combined peak of 13,500 hp (10.1 MW) at peak load. Automatically leveled through four hydraulic jacks. Swings a 105 cubic yard (80 m3) dipper from a 200 ft (61 m) boom and a 122 ft (37 m) dipper handle. The "GEM of Egypt", the other large shovel, has similar size and weight statistics. The primary difference is the bucket and boom: the GEM had a 130 cubic-yard (99.4 m3) bucket and a 170 ft (52 m) boom, while the Spade had a 105 cubic-yard (80 m3) bucket and a 200 ft (61 m) boom. Dipper arm The design is unusual, as it uses a knee-action crowd, and only these two Bucyrus-Erie 1950-Bs were fitt
https://en.wikipedia.org/wiki/Controversy%20over%20Cantor%27s%20theory
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers. Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory. Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum). Cantor's argument Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof. In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers." His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x1, x2, x3, ...), where each xn is either m or w. Each of these elements corresponds to a subset of N—namely, the element (x1, x2, x3, ...) corresponds to {n ∈ N:  xn = w}. So C
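The diagonal step that drives the 1891 proof can be written compactly. A sketch in LaTeX, using the article's m/w notation; the enumeration E_1, E_2, ... is a hypothetical list assumed, for contradiction, to exhaust the set:

```latex
% Suppose E_1, E_2, E_3, \dots were a list of all sequences over
% \{m, w\}, and write (E_k)_n for the n-th letter of E_k. Flip the
% diagonal:
b_n \;=\;
\begin{cases}
  w & \text{if } (E_n)_n = m,\\
  m & \text{if } (E_n)_n = w.
\end{cases}
% The sequence b = (b_1, b_2, b_3, \dots) differs from each E_n in its
% n-th place, so it appears nowhere in the list.
```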
https://en.wikipedia.org/wiki/Retransmission%20%28data%20networks%29
Retransmission, essentially identical with automatic repeat request (ARQ), is the resending of packets which have been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a packet switched computer network to provide reliable communication (such as that provided by a reliable byte stream, for example TCP). Such networks are usually "unreliable", meaning they offer no guarantees that they will not delay, damage, or lose packets, or deliver them out of order. Protocols which provide reliable communication over such networks use a combination of acknowledgments (i.e. an explicit receipt from the destination of the data), retransmission of missing or damaged packets (usually initiated by a time-out), and checksums to provide that reliability. Acknowledgment There are several forms of acknowledgement which can be used alone or together in networking protocols: Positive Acknowledgement: the receiver explicitly notifies the sender which packets, messages, or segments were received correctly. Positive Acknowledgement therefore also implicitly informs the sender which packets were not received and provides detail on packets which need to be retransmitted. Negative Acknowledgment (NACK): the receiver explicitly notifies the sender which packets, messages, or segments were received incorrectly and thus may need to be retransmitted (RFC 4077). Selective Acknowledgment (SACK): the receiver explicitly lists which packets, messages, or segments in a stream are acknowledged (either negatively or positively). Positive selective acknowledgment is an option in TCP (RFC 2018) that is useful in Satellite Internet access (RFC 2488). Cumulative Acknowledgment: the receiver acknowledges that it correctly received a packet, message, or segment in a stream which implicitly informs the sender that the previous packets were received correctly. TCP uses cumulative acknowledgment with its TCP sliding window. Retransmission Retransmiss
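In its simplest form, the timeout-driven retransmission described above reduces to a stop-and-wait loop. A minimal C sketch, in which send_frame and wait_for_ack_ms are hypothetical stand-ins for a real link layer (here they just simulate a lossy channel):

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical link-layer hooks, for illustration only: this "network"
   simply loses the frame (no acknowledgment) 30% of the time. */
static bool send_frame(int seq, const void *data, int len)
{
    (void)data; (void)len;
    printf("sending frame seq=%d\n", seq);
    return true;
}

static bool wait_for_ack_ms(int seq, int timeout_ms)
{
    (void)seq; (void)timeout_ms;
    return rand() % 100 >= 30;   /* ack arrives unless the frame was "lost" */
}

/* Stop-and-wait ARQ: send, wait for a positive acknowledgment, and
   retransmit on timeout; give up after MAX_RETRIES attempts. */
static bool reliable_send(int seq, const void *data, int len)
{
    enum { TIMEOUT_MS = 200, MAX_RETRIES = 5 };
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        if (attempt > 0)
            printf("timeout, retransmitting seq=%d (attempt %d)\n", seq, attempt);
        if (send_frame(seq, data, len) && wait_for_ack_ms(seq, TIMEOUT_MS))
            return true;          /* positive acknowledgment received */
    }
    return false;                 /* receiver presumed unreachable */
}

int main(void)
{
    const char payload[] = "segment 0";
    return reliable_send(0, payload, sizeof payload) ? 0 : 1;
}
```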
https://en.wikipedia.org/wiki/Mondrian%20OLAP%20server
Mondrian is an open source OLAP (online analytical processing) server, written in Java. It supports the MDX (multidimensional expressions) query language and the XML for Analysis and olap4j interface specifications. It reads from SQL and other data sources and aggregates data in a memory cache. Mondrian is used for: High performance, interactive analysis of large or small volumes of information Dimensional exploration of data, for example analyzing sales by product line, by region, by time period Parsing the MDX language into Structured Query Language (SQL) to retrieve answers to dimensional queries High-speed queries through the use of aggregate tables in the RDBMS Advanced calculations using the calculation expressions of the MDX language Mondrian History The first public release of Mondrian was on August 9, 2002. See also Business intelligence Comparison of OLAP servers Online analytical processing Software using the Eclipse license
https://en.wikipedia.org/wiki/Java%20Modeling%20Language
The Java Modeling Language (JML) is a specification language for Java programs, using Hoare-style pre- and postconditions and invariants, that follows the design by contract paradigm. Specifications are written as Java annotation comments to the source files, which hence can be compiled with any Java compiler. Various verification tools, such as a runtime assertion checker and the Extended Static Checker (ESC/Java), aid development. Overview JML is a behavioral interface specification language for Java modules. JML provides semantics to formally describe the behavior of a Java module, preventing ambiguity with regard to the module designers' intentions. JML inherits ideas from Eiffel, Larch and the Refinement Calculus, with the goal of providing rigorous formal semantics while still being accessible to any Java programmer. Various tools are available that make use of JML's behavioral specifications. Because specifications can be written as annotations in Java program files, or stored in separate specification files, Java modules with JML specifications can be compiled unchanged with any Java compiler. Syntax JML specifications are added to Java code in the form of annotations in comments. Java comments are interpreted as JML annotations when they begin with an @ sign. That is, comments of the form //@ <JML specification> or /*@ <JML specification> @*/ Basic JML syntax provides the following keywords: requires Defines a precondition on the method that follows. ensures Defines a postcondition on the method that follows. signals Defines a postcondition for when a given Exception is thrown by the method that follows. signals_only Defines what exceptions may be thrown when the given precondition holds. assignable Defines which fields are allowed to be assigned to by the method that follows. pure Declares a method to be side-effect free (like assignable \nothing but can also throw exceptions). Furthermore, a pure method is supposed to always either
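A small illustrative example in JML's own Java annotation-comment syntax, combining several of the keywords above; the Account class and its field are invented for illustration:

```java
public class Account {
    private /*@ spec_public @*/ int balance;

    // Precondition, postcondition and frame condition for deposit:
    //@ requires amount > 0;
    //@ ensures balance == \old(balance) + amount;
    //@ assignable balance;
    public void deposit(int amount) {
        balance = balance + amount;
    }

    // A side-effect-free query method:
    //@ ensures \result == balance;
    public /*@ pure @*/ int getBalance() {
        return balance;
    }
}
```

Because the specifications live inside comments, this file compiles with any ordinary Java compiler; only JML-aware tools interpret the annotations.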
https://en.wikipedia.org/wiki/Community%20Z%20Tools
The Community Z Tools (CZT) initiative is based around a SourceForge project to build a set of tools for the Z notation, a formal method useful in software engineering. Tools include support for editing, typechecking and animating Z specifications. There is some support for extensions such as Object-Z and TCOZ. The tools are built using the Java programming language. CZT was proposed by Andrew Martin of Oxford University in 2001. References External links CZT SourceForge website CZT initiative information by Andrew Martin Softpedia information CZT: A Framework for Z Tools by Petra Malik and Mark Utting (PDF) 2001 establishments in England 2001 software Z notation Research projects Free software programmed in Java (programming language) Department of Computer Science, University of Oxford
https://en.wikipedia.org/wiki/Michael%20J.%20C.%20Gordon
Michael John Caldwell Gordon (28 February 1948 – 22 August 2017) was a British computer scientist. Life Mike Gordon was born in Ripon, Yorkshire, England. He attended Dartington Hall School and Bedales School. In 1966, he was accepted to study engineering at Gonville and Caius College, University of Cambridge, but transferred to mathematics. During his studies, in 1969 he worked at the National Physical Laboratory in London during the summer, gaining his first exposure to computers. Gordon studied for his PhD degree at University of Edinburgh, supervised by Rod Burstall, finishing in 1973 with a thesis entitled Evaluation and Denotation of Pure LISP Programs. He was invited to Stanford University in California by John McCarthy, the inventor of LISP, to work in his Artificial Intelligence Laboratory there. Gordon worked at the Cambridge University Computer Laboratory from 1981, initially as a lecturer, promoted to Reader in 1988 and Professor in 1996. He was elected a Fellow of the Royal Society in 1994, and in 2008 a two-day research meeting on Tools and Techniques for Verification of System Infrastructure was held there in honour of his 60th birthday. Mike Gordon was married to Avra Cohn, a PhD student of Robin Milner at the University of Edinburgh, and they undertook research together. He died in Cambridge after a brief illness and is survived by his wife and two sons. Work Gordon led the development of the HOL theorem prover. The HOL system is an environment for interactive theorem proving in a higher-order logic. Its most outstanding feature is its high degree of programmability through the meta-language ML. The system has a wide variety of uses, from formalising pure mathematics to verification of industrial hardware. There has been a series of international conferences on the HOL system, TPHOLs. The first three were informal users' meetings with no published proceedings. The tradition now is for an annual conference in a continent different from the lo
https://en.wikipedia.org/wiki/Bokosuka%20Wars
is a 1983 action-strategy role-playing video game developed by Kōji Sumii (住井浩司) and released by ASCII for the Sharp X1 computer, followed by ports to the MSX, FM-7, NEC PC-6001, NEC PC-8801 and NEC PC-9801 computer platforms, as well as an altered version released for the Family Computer console and later the Virtual Console service. It revolves around a leader who must lead an army in phalanx formation across a battlefield in real-time against overwhelming enemy forces while freeing and recruiting soldiers along the way, with each unit able to gain experience and level up through battle. The player must make sure that the leader stays alive, until the army reaches the enemy castle to defeat the leader of the opposing forces. The game was responsible for laying the foundations for the tactical role-playing game genre, or the "simulation RPG" genre as it is known in Japan, with its blend of role-playing and strategy game elements. The game has also variously been described as an early example of an action role-playing game, an early prototype real-time strategy game, and a unique reverse tower defense game. In its time, the game was considered a major success in Japan. Release Originally developed in 1983 for the Sharp X1 computer, it won ASCII Entertainment's first "Software Contest" and was sold boxed by them that year. An MSX port was then released in 1984, followed in 1985 by versions for the S1, PC-6000mkII, PC-8801, PC-9801, FM-7 and the Family Computer (the latter released on December 14, 1985). LOGiN Magazine's November 1984 issue featured a sequel for the X1 entitled New Bokosuka Wars with the source code included. With all-new enemy characters and redesigned items and traps, the level of difficulty became more balanced. It was also included in Tape Login Magazine's November 1984 issue, but never sold in any other form. The PC-8801 version used to be sold as a download from Enterbrain and was ported for the i-Mode service in 2004. The Famicom version wa
https://en.wikipedia.org/wiki/Pulsed-field%20gel%20electrophoresis
Pulsed-field gel electrophoresis (PFGE) is a technique used for the separation of large DNA molecules by applying to a gel matrix an electric field that periodically changes direction. Pulsed-field gel electrophoresis is a method used to separate large segments of DNA using alternating, crossed fields. Because the field periodically changes direction, DNA fragments larger than 50 kb repeatedly reorient and move through the gel in a zigzag pattern, allowing for more effective separation of DNA molecules. This method is commonly used in microbiology for typing bacteria and is a valuable tool for epidemiological studies and gene mapping in microbes and mammalian cells. It also played a role in the development of large-insert cloning systems such as bacterial and yeast artificial chromosomes. PFGE can be used to determine the genetic similarity between bacteria, as close and similar species will have similar profiles while dissimilar ones will have different profiles. This feature is useful in identifying the prevalent agent of a disease. Additionally, it can be used to monitor and evaluate micro-organisms in clinical samples, soil and water. It is also considered a reliable and standard method in vaccine preparation. In recent years, PFGE has been widely used as a powerful tool for controlling, preventing and monitoring diseases in different populations. Discovery The discovery of PFGE can be traced back to the late 1970s and early 1980s. One of the earliest references to the use of PFGE for DNA analysis is a 1977 paper by Dr. David Burke and colleagues at the University of Colorado, where they described a method of separating DNA molecules based on their size using conventional gel electrophoresis. The first reference to the use of the term "pulsed-field gel electrophoresis" appears in a 1983 paper by Dr. Richard L. Sweeley and colleagues at the DuPont Company, where they described a method of separating large DNA molecules (over 50 kb) by applying a series of alternating electric fields to a gel matrix. In the
https://en.wikipedia.org/wiki/David%20May%20%28computer%20scientist%29
Michael David May FRS FREng (born 24 February 1951) is a British computer scientist. He is a Professor in the Department of Computer Science at the University of Bristol and founder of XMOS Semiconductor, serving until February 2014 as the chief technology officer. May was lead architect for the transputer. As of 2017, he holds 56 patents, all in microprocessors and multi-processing. Life and career May was born in Holmfirth, Yorkshire, England and attended Queen Elizabeth Grammar School, Wakefield. From 1969 to 1972 he was a student at King's College, Cambridge, University of Cambridge, at first studying Mathematics and then Computer Science in the University of Cambridge Mathematical Laboratory, now the University of Cambridge Computer Laboratory. He moved to the University of Warwick and started research in robotics. The challenges of implementing sensing and control systems led him to design and implement an early concurrent programming language, EPL, which ran on a cluster of single-board microcomputers connected by serial communication links. This early work brought him into contact with Tony Hoare and Iann Barron: one of the founders of Inmos. When Inmos was formed in 1978, May joined to work on microcomputer architecture, becoming lead architect of the transputer and designer of the associated programming language Occam. This extended his earlier work and was also influenced by Tony Hoare, who was at the time working on CSP and acting as a consultant to Inmos. The prototype of the transputer was called the Simple 42 and was completed in 1982. The first production transputers, the T212 and T414, followed in 1985; the T800 floating point transputer in 1987. May initiated the design of one of the first VLSI packet switches, the C104, together with the communications system of the T9000 transputer. Working closely with Tony Hoare and the Programming Research Group at Oxford University, May introduced formal verification techniques into the design of the
https://en.wikipedia.org/wiki/Pythagorean%20prime
A Pythagorean prime is a prime number of the form 4n + 1. Pythagorean primes are exactly the odd prime numbers that are the sum of two squares; this characterization is Fermat's theorem on sums of two squares. Equivalently, by the Pythagorean theorem, they are the odd prime numbers p for which √p is the length of the hypotenuse of a right triangle with integer legs, and they are also the prime numbers p for which p itself is the hypotenuse of a primitive Pythagorean triangle. For instance, the number 5 is a Pythagorean prime; √5 is the hypotenuse of a right triangle with legs 1 and 2, and 5 itself is the hypotenuse of a right triangle with legs 3 and 4. Values and density The first few Pythagorean primes are 5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, ... By Dirichlet's theorem on arithmetic progressions, this sequence is infinite. More strongly, for each n, the numbers of Pythagorean and non-Pythagorean primes up to n are approximately equal. However, the number of Pythagorean primes up to n is frequently somewhat smaller than the number of non-Pythagorean primes; this phenomenon is known as Chebyshev's bias. For example, the only values of n up to 600000 for which there are more Pythagorean than non-Pythagorean odd primes less than or equal to n are 26861 and 26862. Representation as a sum of two squares The sum of one odd square and one even square is congruent to 1 mod 4, but there exist composite numbers such as 21 that are congruent to 1 mod 4 and yet cannot be represented as sums of two squares. Fermat's theorem on sums of two squares states that the prime numbers that can be represented as sums of two squares are exactly 2 and the odd primes congruent to 1 mod 4. The representation of each such number is unique, up to the ordering of the two squares. By using the Pythagorean theorem, this representation can be interpreted geometrically: the Pythagorean primes are exactly the odd prime numbers p such that there exists a right triangle, with integer legs, whose hypotenuse has length √p. They are also exactly the prime numbers p such that there exists a right triangle with integer sides wh
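A brute-force sketch of both characterizations above: it checks p ≡ 1 (mod 4) and searches directly for the (unique) representation as a sum of two squares:

```c
#include <stdbool.h>
#include <stdio.h>

/* Trial-division primality test; adequate for small illustrative inputs. */
static bool is_prime(unsigned n)
{
    if (n < 2) return false;
    for (unsigned d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

/* A Pythagorean prime is a prime of the form 4n + 1. */
static bool is_pythagorean_prime(unsigned p)
{
    return is_prime(p) && p % 4 == 1;
}

int main(void)
{
    /* Print the first few Pythagorean primes with their (unique)
       representation as a sum of two squares, e.g. 5 = 1^2 + 2^2. */
    for (unsigned p = 2, found = 0; found < 11; p++) {
        if (!is_pythagorean_prime(p)) continue;
        for (unsigned x = 1; 2 * x * x <= p; x++)
            for (unsigned y = x; x * x + y * y <= p; y++)
                if (x * x + y * y == p) {
                    printf("%u = %u^2 + %u^2\n", p, x, y);
                    found++;
                }
    }
    return 0;
}
```

Running it reproduces the sequence quoted above: 5 = 1² + 2², 13 = 2² + 3², 17 = 1² + 4², and so on.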
https://en.wikipedia.org/wiki/Primitive%20%28phylogenetics%29
In phylogenetics, a primitive (or ancestral) character, trait, or feature of a lineage or taxon is one that is inherited from the common ancestor of a clade (or clade group) and has undergone little change since. Conversely, a trait that appears within the clade group (that is, is present in any subgroup within the clade but not all) is called advanced or derived. A clade is a group of organisms that consists of a common ancestor and all its lineal descendants. A primitive trait is the original condition of that trait in the common ancestor; advanced indicates a notable change from the original condition. These terms in biology contain no judgement about the sophistication, superiority, value or adaptiveness of the named trait. "Primitive" in biology means only that the character appeared first in the common ancestor of a clade group and has been passed on largely intact to more recent members of the clade. "Advanced" means the character has evolved within a later subgroup of the clade. Phylogenetics is used to determine evolutionary relationships and relatedness, and ultimately to depict accurate evolutionary lineages. Evolutionary relatedness between living species can be traced through descent from common ancestry. These evolutionary lineages can thereby be portrayed through a phylogenetic tree, or cladogram, in which the varying degrees of relatedness amongst species are depicted. Through this tree, organisms can be categorized by divergence from the common ancestor, and from primitive characters, into clades of organisms with shared derived character states. Furthermore, cladograms allow researchers to view the changes and evolutionary alterations occurring in a species over time as it moves from primitive characters to varying derived character states. Cladograms are important for scientists, as they allow them to classify organisms and to form hypotheses about their origins and later evolution. Cladograms allow scientists to propose their evolutionary scenarios about the lineage from a prim
https://en.wikipedia.org/wiki/Domainz
Domainz Limited was the original .nz registry operator and is now an ICANN accredited domain name registrar and web host. IANA delegated the .nz namespace to John Houlker on 19 January 1987, and the University of Waikato issued .nz domain names and maintained the .nz registry during the early part of Internet availability in New Zealand. During 1996, as Internet use was flourishing in New Zealand, and operation of the .nz registry was becoming burdensome on the University of Waikato, John Houlker, IANA and The Internet Society of New Zealand (Isocnz) agreed to a redelegation of the .nz name to Isocnz. The University of Waikato was contracted to continue hosting the .nz namespace until Isocnz was in a position to assume full responsibility for the Domain Name System (DNS). Isocnz established a subsidiary company, "The New Zealand Internet Registry Ltd", trading as Domainz, to run the .nz registry, on 15 April 1997. Domainz commenced allocating domain names to both companies and individuals, developing what became known as the Domainz Registration System (DRS). Concern over a new online registry system, which was suffering a welter of problems, and opposition to a lawsuit (against Alan Brown, the founder of ORBS), both being championed by Domainz CEO Patrick O'Brien, saw all available Isocnz council seats (and subsequently the Domainz board) filled by "rebel" members in elections in July 2000. The Shared Registry System (SRS) was implemented and became live on 14 October 2002, with Domainz as the sole registrar, acting in a stabilising role, until the first competitive registrar connected to the shared registry on 7 December 2002. Domainz remained as the stabilising registrar until September 2003. In September 2003, Domainz was acquired by Australian-based registrar Melbourne IT Limited. In October 2003 there were in excess of 40 registrars interacting with the .nz Shared Registry System. References and sources History from New Zealand Commerce Commission (PDF) External links Internet New Z
https://en.wikipedia.org/wiki/Gordon%20Plotkin
Gordon David Plotkin (born 9 September 1946) is a theoretical computer scientist in the School of Informatics at the University of Edinburgh. Plotkin is probably best known for his introduction of structural operational semantics (SOS) and his work on denotational semantics. In particular, his notes on A Structural Approach to Operational Semantics were very influential. He has contributed to many other areas of computer science. Education Plotkin was educated at the University of Glasgow and the University of Edinburgh, gaining his Bachelor of Science degree in 1967 and PhD in 1972 supervised by Rod Burstall. Career and research Plotkin has remained at Edinburgh, and was, with Burstall and Robin Milner, a co-founder of the Laboratory for Foundations of Computer Science (LFCS). His former doctoral students include Luca Cardelli, Philippa Gardner, Doug Gurr, Eugenio Moggi, and Lǐ Wèi. Awards and honours Plotkin was elected a Fellow of the Royal Society (FRS) in 1992, and a Fellow of the Royal Society of Edinburgh (FRSE), and is a Member of the Academia Europæa and the American Academy of Arts and Sciences. He is also a winner of the Royal Society Wolfson Research Merit Award. Plotkin received the Milner Award in 2012 for "his fundamental research into programming semantics with lasting impact on both the principles and design of programming languages." References 1946 births Living people British computer scientists Fellows of the Royal Society Members of Academia Europaea Fellows of the American Academy of Arts and Sciences Royal Society Wolfson Research Merit Award holders Formal methods people Programming language researchers Scottish Jews Jewish scientists Alumni of the University of Edinburgh Academics of the University of Edinburgh Fellows of the Royal Society of Edinburgh Jewish British scientists Alumni of the University of Glasgow
https://en.wikipedia.org/wiki/Microsoft%20Windows%20SDK
Microsoft Windows SDK, and its predecessors Platform SDK and .NET Framework SDK, are software development kits (SDKs) from Microsoft that contain documentation, header files, libraries, samples and tools required to develop applications for Microsoft Windows and .NET Framework. Platform SDK specializes in developing applications for Windows 2000, XP and Windows Server 2003. .NET Framework SDK is dedicated to developing applications for .NET Framework 1.1 and .NET Framework 2.0. Windows SDK is the successor of the two and supports developing applications for Windows XP and later, as well as .NET Framework 3.0 and later. Features Platform SDK is the successor of the original Microsoft Windows SDK for Windows 3.1x and Microsoft Win32 SDK for Windows 9x. It was released in 1999 and is the oldest SDK. Platform SDK contains compilers, tools, documentation, header files, libraries and samples needed for software development on IA-32, x64 and IA-64 CPU architectures. The .NET Framework SDK, however, came into being with the .NET Framework. Starting with Windows Vista, the Platform SDK, .NET Framework SDK, Tablet PC SDK and Windows Media SDK are replaced by a new unified kit called Windows SDK. However, the .NET Framework 1.1 SDK is not included, since the .NET Framework 1.1 does not ship with Windows Vista. (Windows Media Center SDK for Windows Vista ships separately.) DirectX SDK was merged into Windows SDK with the release of Windows 8. Windows SDK allows the user to specify the components to be installed and where to install them. It integrates with Visual Studio, so that multiple copies of the components that both have are not installed; however, there are compatibility caveats if the two are not from the same era. Information shown can be filtered by content, such as showing only new Windows Vista content, only .NET Framework content, or showing content for a specific language or technology. Windows SDKs are available for free; they were once available on Microsoft Download Center bu
https://en.wikipedia.org/wiki/Enon%20%28robot%29
Enon is a personal assistant robot first offered for sale in September 2005. Enon was developed by two companies: Fujitsu Frontech Limited and Fujitsu Laboratories Ltd. The six-million-yen (US$60,000) rolling robot is self-guiding, with limited speech recognition and synthesis. According to Fujitsu, Enon can be used to, among other things, provide guidance, transport objects, escort guests and perform security patrols. Enon, an English acronym for "Exciting Nova On Network", can pick up and carry objects in its arms, and comes without software. External links Fujitsu References Robotics at Fujitsu Personal assistant robots Humanoid robots 2005 robots Rolling robots
https://en.wikipedia.org/wiki/BGP%20hijacking
BGP hijacking (sometimes referred to as prefix hijacking, route hijacking or IP hijacking) is the illegitimate takeover of groups of IP addresses by corrupting Internet routing tables maintained using the Border Gateway Protocol (BGP). Background The Internet is a global network that enables any connected host, identified by its unique IP address, to talk to any other, anywhere in the world. This is achieved by passing data from one router to another, repeatedly moving each packet closer to its destination, until it is hopefully delivered. To do this, each router must be regularly supplied with up-to-date routing tables. At the global level, individual IP addresses are grouped together into prefixes. These prefixes will be originated, or owned, by an autonomous system (AS), and the routing tables between ASes are maintained using the Border Gateway Protocol (BGP). A group of networks that operates under a single external routing policy is known as an autonomous system. For example, Sprint, Verizon, and AT&T each are an AS. Each AS has its own unique AS identifier number. BGP is the standard routing protocol used to exchange information about IP routing between autonomous systems. Each AS uses BGP to advertise prefixes that it can deliver traffic to. For example, if the network prefix 192.0.2.0/24 is inside AS 64496, then that AS will advertise to its provider(s) and/or peer(s) that it can deliver any traffic destined for 192.0.2.0/24. Although security extensions are available for BGP, and third-party route DB resources exist for validating routes, by default the BGP protocol is designed to trust all route announcements sent by peers, and few ISPs rigorously enforce checks on BGP sessions. Mechanism IP hijacking can occur deliberately or by accident in one of several ways: An AS announces that it originates a prefix that it does not actually originate. An AS announces a more specific prefix than what may be announced by the true originating AS. An AS announces that it can ro
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Software%20Engineering
The IEEE Transactions on Software Engineering is a monthly peer-reviewed scientific journal published by the IEEE Computer Society. It was established in 1975 and covers the area of software engineering. It is considered the leading journal in this field. Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded and Current Contents/Engineering, Computing & Technology. According to the Journal Citation Reports, the journal has a 2021 impact factor of 9.322. Past editors-in-chief See also IEEE Software IET Software References External links Transactions on Software Engineering Computer science journals Software engineering publications Monthly journals Academic journals established in 1975 English-language journals
https://en.wikipedia.org/wiki/Floer%20homology
In mathematics, Floer homology is a tool for studying symplectic geometry and low-dimensional topology. Floer homology is a novel invariant that arises as an infinite-dimensional analogue of finite-dimensional Morse homology. Andreas Floer introduced the first version of Floer homology, now called Lagrangian Floer homology, in his proof of the Arnold conjecture in symplectic geometry. Floer also developed a closely related theory for Lagrangian submanifolds of a symplectic manifold. A third construction, also due to Floer, associates homology groups to closed three-dimensional manifolds using the Yang–Mills functional. These constructions and their descendants play a fundamental role in current investigations into the topology of symplectic and contact manifolds as well as (smooth) three- and four-dimensional manifolds. Floer homology is typically defined by associating to the object of interest an infinite-dimensional manifold and a real valued function on it. In the symplectic version, this is the free loop space of a symplectic manifold with the symplectic action functional. For the (instanton) version for three-manifolds, it is the space of SU(2)-connections on a three-dimensional manifold with the Chern–Simons functional. Loosely speaking, Floer homology is the Morse homology of the function on the infinite-dimensional manifold. A Floer chain complex is formed from the abelian group spanned by the critical points of the function (or possibly certain collections of critical points). The differential of the chain complex is defined by counting the function's gradient flow lines connecting certain pairs of critical points (or collections thereof). Floer homology is the homology of this chain complex. The gradient flow line equation, in a situation where Floer's ideas can be successfully applied, is typically a geometrically meaningful and analytically tractable equation. For symplectic Floer homology, the gradient flow equation for a path in the loop
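Schematically, the Morse-type chain complex just described can be written as follows. This is the standard shape of the definition, suppressing the analytic details (transversality, compactness) needed to make the counts well defined; here μ is the index of a critical point and M(x, y) the space of isolated flow lines:

```latex
% Chain group spanned by the critical points of the functional A; the
% differential counts isolated gradient flow lines between critical
% points whose indices differ by one (counts taken mod 2, or with
% signs when orientations are available):
CF_* \;=\; \bigoplus_{x \,\in\, \operatorname{Crit}(\mathcal{A})} \mathbb{Z}_2 \langle x \rangle,
\qquad
\partial x \;=\; \sum_{\mu(x) - \mu(y) = 1} \#\,\mathcal{M}(x, y)\; y,
% and Floer homology is HF_* = ker(partial) / im(partial).
```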
https://en.wikipedia.org/wiki/Set%20Theory%3A%20An%20Introduction%20to%20Independence%20Proofs
Set Theory: An Introduction to Independence Proofs is a textbook and reference work in set theory by Kenneth Kunen. It starts from basic notions, including the ZFC axioms, and quickly develops combinatorial notions such as trees, Suslin's problem, ◊, and Martin's axiom. It develops some basic model theory (rather specifically aimed at models of set theory) and the theory of Gödel's constructible universe L. The book then proceeds to describe the method of forcing. Kunen completely rewrote the book for the 2011 edition (under the title "Set Theory"), including more model theory. References 1980 non-fiction books Mathematics textbooks Set theory
https://en.wikipedia.org/wiki/Low-dropout%20regulator
A low-dropout regulator (LDO regulator) is a DC linear voltage regulator that can operate even when the supply voltage is very close to the output voltage. The advantages of an LDO regulator over other DC-to-DC voltage regulators include: the absence of switching noise (in contrast to switching regulators); smaller device size (as neither large inductors nor transformers are needed); and greater design simplicity (usually consists of a reference, an amplifier, and a pass element). The disadvantage is that linear DC regulators must dissipate heat in order to operate. History The adjustable low-dropout regulator debuted on April 12, 1977 in an Electronic Design article entitled "Break Loose from Fixed IC Regulators". The article was written by Robert Dobkin, an IC designer then working for National Semiconductor. Because of this, National Semiconductor claims the title of "LDO inventor". Dobkin later left National Semiconductor in 1981 and founded Linear Technology where he was the chief technology officer. Components The main components are a power FET and a differential amplifier (error amplifier). One input of the differential amplifier monitors the fraction of the output determined by the resistor ratio of R1 and R2. The second input to the differential amplifier is from a stable voltage reference (bandgap reference). If the output voltage rises too high relative to the reference voltage, the drive to the power FET changes to maintain a constant output voltage. Regulation Low-dropout (LDO) regulators operate similarly to all linear voltage regulators. The main difference between LDO and non-LDO regulators is their schematic topology. Instead of an emitter follower topology, low-dropout regulators consist of an open collector or open drain topology, where the transistor may be easily driven into saturation with the voltages available to the regulator. This allows the voltage drop from the unregulated voltage to the regulated voltage to be as low as (limited t
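The feedback arrangement described above fixes the output by the resistor divider. Assuming R1 sits between the output and the feedback node and R2 between that node and ground (the text does not specify the orientation), the error amplifier drives the pass FET until the divided output equals the reference, giving the standard relation:

```latex
% Feedback divider: the error amplifier forces the divided output to
% equal the reference (amsmath assumed):
V_{\text{fb}} \;=\; V_{\text{out}}\,\frac{R_2}{R_1 + R_2} \;=\; V_{\text{ref}}
\quad\Longrightarrow\quad
V_{\text{out}} \;=\; V_{\text{ref}}\left(1 + \frac{R_1}{R_2}\right).
```

For example, a 1.2 V reference with R1 = 2·R2 would regulate the output to 3.6 V.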
https://en.wikipedia.org/wiki/Ultraviolet%20fixed%20point
In a quantum field theory, one may calculate an effective or running coupling constant that defines the coupling of the theory measured at a given momentum scale. One example of such a coupling constant is the electric charge. In approximate calculations in several quantum field theories, notably quantum electrodynamics and theories of the Higgs particle, the running coupling appears to become infinite at a finite momentum scale. This is sometimes called the Landau pole problem. It is not known whether the appearance of these inconsistencies is an artifact of the approximation, or a real fundamental problem in the theory. However, the problem can be avoided if an ultraviolet or UV fixed point appears in the theory. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. This is related to zeroes of the beta-function appearing in the Callan–Symanzik equation. The large length scale/small energy limit counterpart is the infrared fixed point. Specific cases and details Among other things, it means that a theory possessing a UV fixed point need not be an effective field theory, because it is well-defined at arbitrarily small distance scales. At the UV fixed point itself, the theory can behave as a conformal field theory. The converse statement, that any QFT which is valid at all distance scales (i.e. isn't an effective field theory) has a UV fixed point, is false. See, for example, cascading gauge theory. Noncommutative quantum field theories have a UV cutoff even though they are not effective field theories. Physicists distinguish between trivial and nontrivial fixed points. If a UV fixed point is trivial (generally known as a Gaussian fixed point), the theory is said to be asymptotically free. On the other hand, a scenario where a non-Gaussian (i.e. nontrivial) fixed point is approached in the UV limit is referred to as asymptotic safety. Asymptotically
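In the notation of the Callan–Symanzik equation mentioned above, the fixed-point condition can be stated as follows; this is a standard textbook formulation, not specific to any one theory:

```latex
% Running of the coupling g with the momentum scale \mu, and the
% fixed-point condition (amsmath assumed):
\mu \frac{\partial g}{\partial \mu} \;=\; \beta(g),
\qquad \beta(g_*) \;=\; 0,
% the fixed point is ultraviolet if g(\mu) -> g_* as \mu -> \infty.
```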
https://en.wikipedia.org/wiki/Pod%20slurping
Pod slurping is the act of using a portable data storage device such as an iPod digital audio player to illicitly download large quantities of confidential data by directly plugging it into a computer where the data are held, and which may be on the inside of a firewall. There has been some work in the development of fixes to the problem, including a number of third-party security products that allow companies to set security policies related to USB device use, and features within operating systems that allow IT administrators or users to disable the USB port altogether. Unix-based or Unix-like systems can easily prevent users from mounting storage devices, and Microsoft has released instructions for preventing users from installing USB mass storage devices on its operating systems. Additional measures include physical obstruction of the USB ports, with measures ranging from the simple filling of ports with epoxy resin to commercial solutions which deposit a lockable plug into the port. See also Data theft Bluesnarfing Sneakernet References External links The following external links act as an indirect mechanism of further learning on this topic (e.g., detailed descriptions, examples, and implementations). How To: Simple Podslurping Script Podslurping and Bluesnarfing – The latest IT threats Summary of Podslurping Podslurping and related risks Pod Slurping - an easy technique for stealing data (PDF file) Pod Slurping or Podslurping Early description of pod slurping activity Pod Slurping example and presentation Data security
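One widely documented operating-system-level countermeasure of the kind mentioned above is disabling the Windows USB mass-storage driver by setting its service Start value to 4 (SERVICE_DISABLED). A C sketch using the Win32 registry API, shown only as an illustration; it requires administrator rights, must be linked against advapi32, and disables all USB storage on the machine:

```c
#include <windows.h>

/* Sketch: disable the USB mass-storage driver (USBSTOR) by setting
   its Start value to 4 (SERVICE_DISABLED). Link with advapi32.lib. */
int main(void)
{
    HKEY key;
    DWORD start = SERVICE_DISABLED;   /* numeric value 4 */

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Services\\USBSTOR",
                      0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return 1;                     /* likely: not running as admin */
    RegSetValueExA(key, "Start", 0, REG_DWORD,
                   (const BYTE *)&start, sizeof start);
    RegCloseKey(key);
    return 0;
}
```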
https://en.wikipedia.org/wiki/Tinkerbell%20map
The Tinkerbell map is a discrete-time dynamical system given by x_{n+1} = x_n^2 − y_n^2 + a·x_n + b·y_n and y_{n+1} = 2·x_n·y_n + c·x_n + d·y_n. Some commonly used values of a, b, c, and d are a = 0.9, b = −0.6013, c = 2.0, d = 0.50, and a = 0.3, b = 0.6000, c = 2.0, d = 0.27. Like all chaotic maps, the Tinkerbell map has also been shown to have periods; after a certain number of mapping iterations any given point shown in the map to the right will find itself once again at its starting location. The origin of the name is uncertain; however, the graphical picture of the system (as shown to the right) shows a similarity to the movement of Tinker Bell over Cinderella Castle, as shown at the beginning of all films produced by Disney. See also List of chaotic maps References C.L. Bremer & D.T. Kaplan, Markov Chain Monte Carlo Estimation of Nonlinear Dynamics from Time Series K.T. Alligood, T.D. Sauer & J.A. Yorke, Chaos: An Introduction to Dynamical Systems, Berlin: Springer-Verlag, 1996. P.E. McSharry & P.R.C. Ruffino, Asymptotic angular stability in non-linear systems: rotation numbers and winding numbers R.L. Davidchack, Y.-C. Lai, A. Klebanoff & E.M. Bollt, Towards complete detection of unstable periodic orbits in chaotic systems B. R. Hunt, Judy A. Kennedy, Tien-Yien Li, Helena E. Nusse, "SLYRB measures: natural invariant measures for chaotic systems" A. Goldsztejn, W. Hayes, P. Collins "Tinkerbell is Chaotic" SIAM J. Applied Dynamical Systems 10, n.4 1480-1501, 2011 External links Tinkerbell map visualization with interactive source code Chaotic maps
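A direct iteration of the map is enough to reproduce the attractor picture. A sketch using the first parameter set quoted above and a commonly used starting point; pipe the output to any plotting tool:

```c
#include <stdio.h>

/* Iterate the Tinkerbell map and print (x, y) pairs. */
int main(void)
{
    const double a = 0.9, b = -0.6013, c = 2.0, d = 0.50;
    double x = -0.72, y = -0.64;   /* a point in the attractor's basin */

    for (int n = 0; n < 10000; n++) {
        double xn = x * x - y * y + a * x + b * y;
        double yn = 2.0 * x * y + c * x + d * y;
        x = xn;
        y = yn;
        printf("%f %f\n", x, y);
    }
    return 0;
}
```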
https://en.wikipedia.org/wiki/UTEC
UTEC (University of Toronto Electronic Computer Mark I) was a computer built at the University of Toronto (UofT) in the early 1950s. It was the first computer in Canada and one of the first working computers in the world, although it was built only in prototype form while awaiting funding for expansion into a full-scale version. This funding was eventually used to purchase a surplus Manchester Mark 1 from Ferranti in the UK instead, and UTEC quickly disappeared. Background Immediately after the end of World War II, several members of the UofT staff met informally as the Committee on Computing Machines to discuss their computation needs over the next few years. In 1946 a small $1,000 grant was used to send one of the group's members on a tour of several US research labs to see their progress on computers and to gauge what was possible given UofT's likely funding. Due to UofT's preeminent position in the Canadian research world, the tour was also followed by members of the Canadian Research Council. In January 1947 the committee delivered a report suggesting the creation of a formal Computing Center, primarily as a service bureau to provide computing services both to the university and commercial interests, as well as the nucleus of a research group into computing machinery. Specifically, they recommended the immediate renting of an IBM mechanical punched card-based calculator, building a simple differential analyzer, and the eventual purchase or construction of an electronic computer. The report noted that funding should be expected from both the National Research Council (NRC) and the Defense Research Board (DRB). The DRB soon provided a grant of $6,500 to set up the Computation Center, with the Committee eventually selecting Kelly Gotlieb to run it. Additional funding followed in February 1948 with a $20,000-a-year grant from a combined pool set up by the DRB and NRC. Although this was less than was hoped for, the IBM machinery was soon in place and being used to calculate s
https://en.wikipedia.org/wiki/Numeronym
A numeronym is a number-based word. Anne H. Soukhanov, editor of the Microsoft Encarta College Dictionary, gives the original meaning of the term as "a telephone number that spells a word or a name" on a telephone dial/numpad. A number may also denote how many times the character before or after it is repeated. This is typically used to represent a name or phrase in which several consecutive words start with the same letter, as in W3 (World Wide Web) or W3C (World Wide Web Consortium). Types Homophone Most commonly, a numeronym is a word where a number is used to form an abbreviation (albeit not an acronym or an initialism). Pronouncing the letters and numbers may sound similar to the full word, as in "K9" (pronounced "kay-nine") for "canine, relating to dogs". Examples sk8r: Skater B4: Before l8r: Later; L8R, also sometimes abbreviated as L8ER, is commonly used in chat rooms and other text based communications as a way of saying goodbye. G2G: "Good to go", "got to go", or "get together" P2P: "pay to play" or "peer-to-peer" F2P: "free to play" T2UL/T2YL: "talk to you later" B2B: "business to business" B2C: "business to consumer" Numerical contractions Alternatively, letters between the first and last letters of a word may be replaced by a number representing the number of letters omitted, such as in "i18n" for "internationalization", where "18" stands in for the word's middle eighteen letters ("nternationalizatio"). Sometimes the last letter is also counted and omitted. These word shortenings are sometimes called alphanumeric acronyms, alphanumeric abbreviations, or numerical contractions. According to Tex Texin, the first numeronym of this kind was "S12n", the electronic mail account name given to Digital Equipment Corporation (DEC) employee Jan Scherpenhuizen by a system administrator because his surname was too long to be an account name. By 1985, colleagues who found Jan's name unpronounceable often referred to him verbally as "S12n" (ess-twe
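The "numerical contraction" rule above is mechanical: keep the first and last letters and count the letters between. A sketch that derives i18n-style abbreviations this way:

```c
#include <stdio.h>
#include <string.h>

/* Contract a word to first letter + count of middle letters + last
   letter, e.g. "internationalization" -> "i18n". Words of three or
   fewer letters are left unchanged (nothing to elide). */
static void numeronym(const char *word, char *out, size_t outsize)
{
    size_t len = strlen(word);
    if (len <= 3)
        snprintf(out, outsize, "%s", word);
    else
        snprintf(out, outsize, "%c%zu%c", word[0], len - 2, word[len - 1]);
}

int main(void)
{
    char buf[32];
    numeronym("internationalization", buf, sizeof buf);
    printf("%s\n", buf);   /* prints: i18n */
    numeronym("localization", buf, sizeof buf);
    printf("%s\n", buf);   /* prints: l10n */
    return 0;
}
```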
https://en.wikipedia.org/wiki/Extremal%20combinatorics
Extremal combinatorics is a field of combinatorics, which is itself a part of mathematics. Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of sets; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory. Another kind of example: How many people can be invited to a party where among each three people there are two who know each other and two who don't know each other? Ramsey theory shows that at most five persons can attend such a party. Or, suppose we are given a finite set of nonzero integers, and are asked to mark as large a subset as possible of this set under the restriction that the sum of any two marked integers cannot be marked. It appears that (independent of what the given integers actually are) we can always mark at least one-third of them. See also Extremal graph theory Sauer–Shelah lemma Erdős–Ko–Rado theorem Kruskal–Katona theorem Fisher's inequality Union-closed sets conjecture Combinatorial optimization
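The "mark one-third" claim above can be checked by brute force on small instances. A sketch that finds the largest subset in which no two marked integers sum to a marked integer; interpreting "any two" as allowing a number to be added to itself, and exponential in n, so for illustration only:

```c
#include <stdio.h>

/* Exhaustively search all 2^n subsets of a small integer set for the
   largest "sum-free" one: no x, y in the subset (x == y allowed) with
   x + y also in the subset. */
static int largest_sum_free(const int *v, int n)
{
    int best = 0;
    for (unsigned mask = 1; mask < (1u << n); mask++) {
        int ok = 1, size = 0;
        for (int i = 0; ok && i < n; i++) {
            if (!(mask & (1u << i))) continue;
            size++;
            for (int j = i; ok && j < n; j++) {
                if (!(mask & (1u << j))) continue;
                for (int k = 0; k < n; k++)
                    if ((mask & (1u << k)) && v[i] + v[j] == v[k]) { ok = 0; break; }
            }
        }
        if (ok && size > best) best = size;
    }
    return best;
}

int main(void)
{
    int v[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    /* The claim promises a markable subset of size at least 10/3;
       here the answer is 5, e.g. {6, 7, 8, 9, 10}. */
    printf("largest sum-free subset size: %d\n", largest_sum_free(v, 10));
    return 0;
}
```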
https://en.wikipedia.org/wiki/Parapatric%20speciation
In parapatric speciation, two subpopulations of a species evolve reproductive isolation from one another while continuing to exchange genes. This mode of speciation has three distinguishing characteristics: 1) mating occurs non-randomly, 2) gene flow occurs unequally, and 3) populations exist in either continuous or discontinuous geographic ranges. This distribution pattern may be the result of unequal dispersal, incomplete geographical barriers, or divergent expressions of behavior, among other things. Parapatric speciation predicts that hybrid zones will often exist at the junction between the two populations. In biogeography, the terms parapatric and parapatry are often used to describe the relationship between organisms whose ranges do not significantly overlap but are immediately adjacent to each other; they do not occur together except in a narrow contact zone. Parapatry is a geographical distribution opposed to sympatry (same area) and allopatry or peripatry (two similar cases of distinct areas). Various "forms" of parapatry have been proposed and are discussed below. Coyne and Orr in Speciation categorise these forms into three groups: clinal (environmental gradients), "stepping-stone" (discrete populations), and stasipatric speciation, in concordance with most of the parapatric speciation literature. Henceforth, the models are subdivided following a similar format. Charles Darwin was the first to propose this mode of speciation. It was not until 1930 that Ronald Fisher, in The Genetical Theory of Natural Selection, outlined a verbal theoretical model of clinal speciation. In 1981, Joseph Felsenstein proposed an alternative, "discrete population" model (the "stepping-stone" model). Since Darwin, a great deal of research has been conducted on parapatric speciation, concluding that its mechanisms are theoretically plausible and that parapatric speciation "has most certainly occurred in nature". Models Mathematical models, laboratory studies, and observational evidence
https://en.wikipedia.org/wiki/Massey%20product
In algebraic topology, the Massey product is a cohomology operation of higher order introduced by Massey (1958), which generalizes the cup product. The Massey product was created by William S. Massey, an American algebraic topologist. Massey triple product Let a, b, c be elements of the cohomology algebra H*(Γ) of a differential graded algebra Γ. If ab = bc = 0, the Massey product ⟨a, b, c⟩ is a subset of H^n(Γ), where n = deg(a) + deg(b) + deg(c) − 1. The Massey product is defined algebraically, by lifting the elements a, b, c to equivalence classes of elements u, v, w of Γ, taking the Massey products of these, and then pushing down to cohomology. This may result in a well-defined cohomology class, or may result in indeterminacy. Define ū to be (−1)^(1+deg(u)) u. The cohomology class of an element u of Γ will be denoted by [u]. The Massey triple product of three cohomology classes is defined by ⟨[u], [v], [w]⟩ = { [ū·s + r·w̄] : d(s) = ū·v and d(r) = v̄·w }. The Massey product of three cohomology classes is not an element of H*(Γ), but a set of elements of H*(Γ), possibly empty and possibly containing more than one element. If u, v, w have degrees i, j, k, then the Massey product has degree i + j + k − 1, with the −1 coming from the differential d. The Massey product is nonempty if the products uv and vw are both exact, in which case all its elements are in the same element of the quotient group H*(Γ)/([u]·H*(Γ) + H*(Γ)·[w]). So the Massey product can be regarded as a function defined on triples of classes such that the product of the first or last two is zero, taking values in the above quotient group. More casually, if the two pairwise products [u][v] and [v][w] both vanish in homology ([u·v] = [v·w] = 0), i.e., u·v = d(s) and v·w = d(r) for some chains s and r, then the triple product [u][v][w] vanishes "for two different reasons" — it is the boundary of s·w and of u·r (since d(s·w) = d(s)·w = u·v·w and d(u·r) = ±u·d(r) = ±u·v·w, because elements of homology are cycles, so d(u) = d(w) = 0). The bounding chains s and r have indeterminacy, which disappears when one moves to homology, and since s·w and u·r have the same boundary, subtracting them (the sign convention is to correctly handle the grading) gives a cocycle (the boundary of the difference vanishes), and one thus obtains a well-defined element of cohomology — this step is analogous to defining t
https://en.wikipedia.org/wiki/ATM%20%28computer%29
The ATM Turbo (ru: "АТМ-ТУРБО"), also known simply as ATM (from ru: "Ассоциация Творческой Молодёжи", meaning "Association of Creative Youth"), is a ZX Spectrum clone, developed in Moscow in 1991 by two firms, MicroArt and ATM. It offers enhanced characteristics compared to the original Spectrum, such as more RAM and ROM, an AY-8910 (two chips in upgraded models), an 8-bit DAC, an 8-bit 8-channel ADC, RS-232, a parallel port, the Beta Disk Interface, an IDE interface, an AT/XT keyboard, text modes, and three new graphics modes. The ATM can be emulated in Unreal Speccy v0.27 and higher. History The ATM was developed in 1991 based on the Pentagon, a ZX Spectrum clone popular in Russia. In 1992 an upgraded model was introduced, named ATM Turbo 2. Up to 1994 the computer was produced by ATM and MicroArt; later the firms separated and production ended. In 2004 NedoPC from Moscow resumed production. New versions called ATM Turbo 2+ and ZX Evolution were introduced. Characteristics Graphics modes For compatibility purposes, the original ZX Spectrum mode is available. New graphics modes offer expanded abilities: a mode with 2 out of 16 colors per 8x1 pixels (the Profi offers a similar mode, but the ATM can use the full set for both ink and paper), and a raster mode (a two-pixel chunky mode, not planar like EGA). Two games for this raster mode were converted directly from PC: Prince of Persia and Goblins, and one from Sony PlayStation: Time Gal. Other games that use this mode exist, like Ball Quest, released in August 2006. Palette: colors from a 64-color palette (6-bit RGB) can be set for all modes. Operating systems 48K Sinclair BASIC, 128K Sinclair BASIC, TR-DOS, CP/M, iS-DOS, TASiS, DNA OS, Mr Gluk Reset Service. Software ATM Turbo Virtual TR-DOS Models Many models exist. Models before version 6.00 are called ATM 1; later models are called ATM 2(2+) or ATM Turbo 2(2+) or simply Turbo 2+. An IDE interface has been available since v6.00 (the latest model is 7.18
https://en.wikipedia.org/wiki/Drilling%20and%20blasting
Drilling and blasting is the controlled use of explosives and other methods, such as gas pressure blasting pyrotechnics, to break rock for excavation. It is practiced most often in mining, quarrying and civil engineering such as dam, tunnel or road construction. The result of rock blasting is often known as a rock cut. Drilling and blasting currently utilizes many different varieties of explosives with different compositions and performance properties. Higher velocity explosives are used for relatively hard rock in order to shatter and break the rock, while low velocity explosives are used in soft rocks to generate more gas pressure and a greater heaving effect. For instance, an early 20th-century blasting manual compared the effects of black powder to that of a wedge, and dynamite to that of a hammer. The most commonly used explosives in mining today are ANFO based blends due to lower cost than dynamite. Before the advent of tunnel boring machines (TBMs), drilling and blasting was the only economical way of excavating long tunnels through hard rock, where digging is not possible. Even today, the method is still used in the construction of tunnels, such as in the construction of the Lötschberg Base Tunnel. The decision whether to construct a tunnel using a TBM or using a drill and blast method includes a number of factors. Tunnel length is a key issue that needs to be addressed because large TBMs for a rock tunnel have a high capital cost, but because they are usually quicker than a drill and blast tunnel the price per metre of tunnel is lower. This means that shorter tunnels tend to be less economical to construct with a TBM and are therefore usually constructed by drill and blast. Managing ground conditions can also have a significant effect on the choice with different methods suited to different hazards in the ground. History The use of explosives in mining goes back to the year 1627, when gunpowder was first used in place of mechanical tools in the Hungar