source | text
---|---|
https://en.wikipedia.org/wiki/Enterprise%20unified%20process
|
The enterprise unified process (EUP) is an extended variant of the unified process, developed by Scott W. Ambler and Larry Constantine in 2000 and reworked in 2005 by Ambler, John Nalbone and Michael Vizdos. EUP was originally introduced to overcome some shortcomings of RUP, namely its lack of coverage of the production and eventual retirement of a software system; to address this, two phases and several new disciplines were added. EUP sees software development not as a standalone activity, but as embedded in the lifecycle of the system (to be built, enhanced or replaced), the IT lifecycle of the enterprise and the organization/business lifecycle of the enterprise itself. It deals with software development as seen from the customer's point of view.
In 2013 work began to evolve EUP to be based on disciplined agile delivery instead of the unified process.
Phases
The unified process defines four project phases
Inception
Elaboration
Construction
Transition
To these EUP adds two additional phases
Production
Retirement
Disciplines
The rational unified process defines nine project disciplines
Business modeling
Requirements
Analysis and design
Implementation
Test
Deployment
Configuration and change management
Project management
Environment
To these EUP adds one additional project discipline
Operations and support
and seven enterprise disciplines
Enterprise business modeling
Portfolio management
Enterprise architecture
Strategic reuse
People management
Enterprise administration
Software process improvement
Best practices of EUP
The EUP provides the following best practices:
Develop iteratively
Manage requirements
Proven architecture
Modeling
Continuously verify quality.
Manage change
Collaborative development
Look beyond development.
Deliver working software on a regular basis
Manage risk
See also
Disciplined agile delivery
Rational unified process
Software development process
Extreme programming
References
Bibliography
External links
Scott W. Ambler's website on t
|
https://en.wikipedia.org/wiki/Arnold%20tongue
|
In mathematics, particularly in dynamical systems, Arnold tongues (named after Vladimir Arnold) are a pictorial phenomenon that occur when visualizing how the rotation number of a dynamical system, or other related invariant property thereof, changes according to two or more of its parameters. The regions of constant rotation number have been observed, for some dynamical systems, to form geometric shapes that resemble tongues, in which case they are called Arnold tongues.
Arnold tongues are observed in a large variety of natural phenomena that involve oscillating quantities, such as the concentration of enzymes and substrates in biological processes and cardiac electric waves. Sometimes the frequency of oscillation depends on, or is constrained by (i.e., phase-locked or mode-locked to, in some contexts), some other quantity, and it is often of interest to study this relation. For instance, the onset of a tumor triggers in the surrounding area a series of oscillations of substances (mainly proteins) that interact with each other; simulations show that these interactions cause Arnold tongues to appear, that is, the frequency of some oscillations constrains the others, and this can be used to control tumor growth.
Other examples where Arnold tongues can be found include the inharmonicity of musical instruments, orbital resonance and tidal locking of orbiting moons, mode-locking in fiber optics and phase-locked loops and other electronic oscillators, as well as in cardiac rhythms, heart arrhythmias and cell cycle.
One of the simplest physical models that exhibits mode-locking consists of two rotating disks connected by a weak spring. One disk is allowed to spin freely, and the other is driven by a motor. Mode locking occurs when the freely-spinning disk turns at a frequency that is a rational multiple of that of the driven rotator.
The simplest mathematical model that exhibits mode-locking is the circle map, which attempts to capture the motion of the spinning disks at discrete time interv
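As an illustration (not part of the original article), a minimal Python sketch of the standard circle map and a numerically estimated rotation number might look like the following; the parameter names omega (the bare frequency) and k (the coupling strength) are the conventional ones, and the values scanned at the end are arbitrary.

    import math

    def rotation_number(omega, k, n_iter=100000, theta0=0.0):
        """Estimate the rotation number of the circle map
           theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n)
        by iterating the lift (no reduction mod 1) and averaging the drift."""
        theta = theta0
        for _ in range(n_iter):
            theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        return (theta - theta0) / n_iter

    # Scanning omega at fixed coupling k: parameter values whose computed rotation
    # number sticks at a rational such as 1/2 lie inside the corresponding Arnold tongue.
    for omega in (0.48, 0.50, 0.52):
        print(omega, round(rotation_number(omega, k=0.9), 4))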
|
https://en.wikipedia.org/wiki/Type%20%28model%20theory%29
|
In model theory and related areas of mathematics, a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x1, x2,…, xn that are true of a set of n-tuples of an L-structure ℳ. Depending on the context, types can be complete or partial and they may use a fixed set of constants, A, from the structure ℳ. The question of which types represent actual elements of ℳ leads to the ideas of saturated models and omitting types.
Formal definition
Consider a structure ℳ for a language L. Let M be the universe of the structure. For every A ⊆ M, let L(A) be the language obtained from L by adding a constant ca for every a ∈ A. In other words, L(A) = L ∪ {ca : a ∈ A}.
A 1-type (of ℳ) over A is a set p(x) of formulas in L(A) with at most one free variable x (therefore 1-type) such that for every finite subset p0(x) ⊆ p(x) there is some b ∈ M, depending on p0(x), with ℳ ⊨ p0(b) (i.e. all formulas in p0(x) are true in ℳ when x is replaced by b).
Similarly an n-type (of ℳ) over A is defined to be a set p(x1,…,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,…,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,…,bn ∈ M with ℳ ⊨ p0(b1,…,bn).
A complete type of ℳ over A is one that is maximal with respect to inclusion. Equivalently, for every formula φ(x) of L(A), either φ(x) ∈ p(x) or ¬φ(x) ∈ p(x). Any non-complete type is called a partial type.
So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set).
An n-type p(x) is said to be realized in ℳ if there is an element b ∈ M^n such that ℳ ⊨ p(b). The existence of such a realization is guaranteed for any type by the compactness theorem, although the realization might take place in some elementary extension of ℳ, rather than in ℳ itself.
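As a standard illustration (not drawn from the excerpt above), the following LaTeX fragment describes a 1-type that is finitely satisfiable yet not realized in the structure itself; the structure, parameter set and constant symbols are chosen for the example.

    % Work in the structure R = (\mathbb{R}, <) with parameter set A = \mathbb{N},
    % so L(A) contains a constant c_n for every natural number n.
    \[
       p(x) \;=\; \{\, x > c_n : n \in \mathbb{N} \,\}
    \]
    % Every finite subset of p(x) mentions only finitely many constants, so any
    % sufficiently large real number satisfies it; hence p(x) is a 1-type over A.
    % No real number exceeds every natural number, so p(x) is not realized in R;
    % by the compactness theorem it is realized in a proper elementary extension.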
If a complete type is realized by b in , then the type is typically d
|
https://en.wikipedia.org/wiki/SCSI%20connector
|
A SCSI connector ( ) is used to connect computer parts that use a system called SCSI to communicate with each other. Generally, two connectors, designated male and female, plug together to form a connection which allows two components, such as a computer and a disk drive, to communicate with each other. SCSI connectors can be electrical connectors or optical connectors. There have been a large variety of SCSI connectors in use at one time or another in the computer industry. Twenty-five years of evolution and three major revisions of the standards resulted in requirements for Parallel SCSI connectors that could handle an 8, 16 or 32 bit wide bus running at 5, 10 or 20 megatransfer/s, with conventional or differential signaling. Serial SCSI added another three transport types, each with one or more connector types. Manufacturers have frequently chosen connectors based on factors of size, cost, or convenience at the expense of compatibility.
SCSI makes use of cables to connect devices. In a typical example, a socket on a computer motherboard would have one end of a cable plugged into it, while the other end of the cable plugged into a disk drive or other device. Some cables have different types of connectors on them, and some cables can have as many as 16 connectors (allowing 16 devices to be wired together). Different types of connectors may be used for devices inside a computer cabinet, than for external devices such as scanners or external disk drives.
Nomenclature
Many connector designations consist of an abbreviation for the connector family, followed by a number indicating the number of pins. For example, "CN36" (also written "CN-36" or "CN 36") would be a 36-pin Centronics-style connector. For some connectors (such as the D-subminiature family) use of the hyphen or space is more common, for others (like the "DD50") less so.
Parallel SCSI
Parallel SCSI (SCSI Parallel Interface SPI) allows for attachment of up to 8 devices (8-bit Narrow SCSI) or 16 devices
|
https://en.wikipedia.org/wiki/Galaksija%20BASIC
|
Galaksija BASIC was the BASIC interpreter of the Galaksija build-it-yourself home computer from Yugoslavia. While partially based on code taken from TRS-80 Level 1 BASIC, which the creator believed to have been a Microsoft BASIC, the extensive modifications of Galaksija BASIC—such as rudimentary array support, video generation code (the CPU itself generated the video signal in the absence of dedicated video circuitry) and general improvements to the programming language—are said to have left little more than the flow-control and floating-point code from the original.
The core implementation of the interpreter was fully contained in the 4 KiB ROM "A" or "1". The computer's original mainboard had a reserved slot for an extension ROM "B" or "2" that added more commands and features such as a built-in Zilog Z80 assembler.
ROM "A"/"1" symbols and keywords
The core implementation, in ROM "A" or "1", contained 3 special symbols and 32 keywords:
begins a comment (equivalent of standard BASIC REM command)
Equivalent of standard BASIC DATA statement
prefix for hex numbers
Allocates an array of strings, like DIM, but can only allocate an array named A$
serves as PEEK when used as a function (e.g. PRINT BYTE(11123)) and POKE when used as a command (e.g. BYTE 11123,123).
Calls BASIC subroutine as GOSUB in most other BASICs (e.g. CALL 100+4*X)
converts an ASCII numeric code into a corresponding character (string)
draws (command) or inspects (function) a pixel at given coordinates (0<=x<=63, 0<=y<=47).
displays the clock or time controlled by content of Y$ variable. Not in standard ROM
causes specified program line to be edited
standard part of IF-ELSE construct (Galaksija did not use THEN)
compares alphanumeric values X$ and Y$
standard FOR loop
standard GOTO command
equivalent of standard BASIC CLS command - clears the screen
protects n characters from the top of the screen from being scrolled away
standard part of IF-EL
|
https://en.wikipedia.org/wiki/Dysan
|
Dysan Corporation was an American storage media manufacturing corporation, formed in 1973 in San Jose, California, by CEO and former president C. Norman Dion. It was instrumental in the development of the 5.25" floppy disk, which appeared in 1976.
History
In 1983, Jerry Pournelle reported in BYTE that a software-publisher friend of his "distributes all his software on Dysan disks. It costs more to begin with, but saves [the cost of replacing defective media] in the long run, or so he says". By that year Dysan was a Fortune 500 company, had over 1200 employees, and was ranked among the top ten private-sector employers in Silicon Valley by the San Jose Mercury News in terms of number of employees. In addition, some of Dysan's administrative and disk production facilities, located on the company's Santa Clara manufacturing campus, were regarded as architecturally remarkable. For example, some of Dysan's Santa Clara campus magnetic media manufacturing facilities included architectural features such as large indoor employee lounge atriums incorporating glass-encased ceilings and walls, live indoor lush landscaping, waterfalls, running water creeks, and ponds with live fish.
In addition to manufacturing floppies, tape drives and hard disk drives, Dysan also produced hardware and storage containers for the disks.
Dysan merged with Xidex Magnetics in the spring of 1984. In 1997, under the direction of Jerry Ticerelli, Xidex declared bankruptcy. Xidex was absorbed by Anacomp and later spun off as a wholly owned subsidiary under the Dysan name.
After a brief re-opening in 2003, the company closed six months later under the direction of Dylan Campbell.
Recycling service
It is possible that Dysan was one of the first tech-based companies to offer a service for recycling used products. Some Dysan packaging included the following label:
References
1973 establishments in California
1984 disestablishments in California
Companies based in San Jose, California
Computer st
|
https://en.wikipedia.org/wiki/Cyril%20Hilsum
|
Cyril Hilsum (born 17 May 1925) is a British physicist and academic.
Hilsum was elected a member of the National Academy of Engineering in 1983 for his inventiveness and leadership in introducing III-V semiconductors into electronic technology.
Life
Hilsum entered Raine's Foundation School in 1936 as the middle of three brothers, leaving in 1943 after being accepted into University College London, where he did his BSc. In 1945, he joined the Royal Naval Scientific Service, moving in 1947 to the Admiralty Research Laboratory. In 1950, he transferred again to the Services Electronics Research Laboratory (SERL) where he remained until 1964 before again moving, this time to the Royal Radar Establishment. He won the Welker Award in 1978, was elected a Fellow of the Royal Academy of Engineering, a Fellow of the Royal Society in 1979 and an honorary member of the American National Academy of Engineering. In 1983, he was appointed Chief Scientist at GEC Hirst Research Centre. He was awarded the Max Born Prize in 1987, the 1988 Faraday Medal, and from then until 1990 served as President of the Institute of Physics. In the 1990 Queen's Birthday Honours, he was appointed a Commander of the Order of the British Empire (CBE) for "services to the Electrical and Electronics Industry". He was the subject of a photograph by Nick Sinclair in 1993 that is currently held by the National Portrait Gallery. In 1997, he was awarded the Glazebrook Medal and Prize from the Institute of Physics, and is notable as the only scientist to hold both this and the Faraday Medal together. He has served as a corporate research advisor for various entities, including Cambridge Display Technology, the European Commission and Unilever. In 2007, he was awarded the Royal Society's Royal Medal 'for his many outstanding contributions and for continuing to use his prodigious talents on behalf of industry, government and academe to this day'.
Hilsum serves as chairman of the scientific board for Peratech a
|
https://en.wikipedia.org/wiki/Ohmic%20contact
|
An ohmic contact is a non-rectifying electrical junction: a junction between two conductors that has a linear current–voltage (I–V) curve as with Ohm's law. Low-resistance ohmic contacts are used to allow charge to flow easily in both directions between the two conductors, without blocking due to rectification or excess power dissipation due to voltage thresholds.
By contrast, a junction or contact that does not demonstrate a linear I–V curve is called non-ohmic. Non-ohmic contacts come in a number of forms, such as p–n junction, Schottky barrier, rectifying heterojunction, or breakdown junction.
Generally the term "ohmic contact" implicitly refers to an ohmic contact of a metal to a semiconductor, where achieving ohmic contact resistance is possible but requires careful technique. Metal–metal ohmic contacts are relatively simpler to make, by ensuring direct contact between the metals without intervening layers of insulating contamination, excessive roughness or oxidation; various techniques are used to create ohmic metal–metal junctions (soldering, welding, crimping, deposition, electroplating, etc.). This article focuses on metal–semiconductor ohmic contacts.
Stable contacts at semiconductor interfaces, with low contact resistance and linear I–V behavior, are critical for the performance and reliability of semiconductor devices, and their preparation and characterization are major efforts in circuit fabrication. Poorly prepared junctions to semiconductors can easily show rectifying behaviour by causing depletion of the semiconductor near the junction, rendering the device useless by blocking the flow of charge between those devices and the external circuitry. Ohmic contacts to semiconductors are typically constructed by depositing thin metal films of a carefully chosen composition, possibly followed by annealing to alter the semiconductor–metal bond.
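As an illustration (not from the article), the "linear I–V" criterion above can be checked numerically by fitting measured current–voltage pairs to Ohm's law, I = V/R, and inspecting the residuals. A minimal Python sketch, using purely hypothetical probe data:

    def fit_resistance(voltages, currents):
        """Return (resistance_ohms, worst_relative_residual) for the model I = V / R."""
        # Least-squares conductance G = sum(V*I) / sum(V*V), then R = 1/G.
        g = sum(v * i for v, i in zip(voltages, currents)) / sum(v * v for v in voltages)
        residuals = [abs(i - g * v) / max(abs(i), 1e-30) for v, i in zip(voltages, currents)]
        return 1.0 / g, max(residuals)

    # Hypothetical probe data (volts, amps) for a low-resistance contact:
    V = [-0.2, -0.1, 0.1, 0.2]
    I = [-0.0198, -0.0101, 0.0099, 0.0202]
    print(fit_resistance(V, I))    # roughly 10 ohms with small residuals -> ohmic behavior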
Physics of formation of metal–semiconductor ohmic contacts
Both ohmic contacts and Schottky barriers are depen
|
https://en.wikipedia.org/wiki/Ponsse
|
Ponsse Plc () is a company domiciled in Vieremä, Finland, a forest machine manufacturer run by the Vidgrén family. Ponsse manufactures, sells and maintains cut-to-length forest machines such as forwarders and harvesters and also builds their information systems. Harvesters operate digitally: for example, production data is reported daily to the client.
Ponsse is one of the world’s largest forest machine manufacturers. It is the market leader in Finland: more than 40 per cent of all forest machines in Finland are made by Ponsse. All machines are manufactured in the company’s birthplace in Vieremä. Ponsse Group employs about 1,800 people in 10 countries, and the company’s shares are quoted on the NASDAQ OMX Nordic List.
History
1970s – The early years
In 1969, forest machine entrepreneur Einari Vidgrén was working at Tehdaspuu Oy’s logging site, where he built a forest machine from a wheel loader’s components. The forwarder was named PONSSE Paz after a crossbreed hunting dog.
Tehdaspuu requested Vidgrén to build more forest machines, and Vidgrén asked the municipality of Vieremä to build him a machine shop where he could build his machines. In 1970, a decision was made to build a 300 m2 machine shop, which was then leased out to Vidgrén. Ponsse Oy was established in the same year.
In 1971, Vidgrén placed an ad in the Helsingin Sanomat newspaper in order to recruit the first engineer for the factory. Jouko Kelppe, a young engineer from Salo was hired. The manufacturing of the first machine was started in the spring of 1971. A forest machine contractor Eero Vainikainen was in the process of purchasing a Volvo or Valmet machine. However, after strong persuasion and visiting the Ponsse factory, he decided to buy the first ever Ponsse machine. The machine was driven out of the factory by Vidgrén in the autumn of 1971.
Ponsse premiered in the international markets in 1974. A massive storm hit Germany, and a large amount of foreign equipment was needed for
|
https://en.wikipedia.org/wiki/Atmospheric%20dispersion%20modeling
|
Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that include algorithms to solve the mathematical equations that govern the pollutant dispersion. The dispersion models are used to estimate the downwind ambient concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases. They can also be used to predict future concentrations under specific scenarios (i.e. changes in emission sources). Therefore, they are the dominant type of model used in air quality policy making. They are most useful for pollutants that are dispersed over large distances and that may react in the atmosphere. For pollutants that have a very high spatio-temporal variability (i.e. have very steep distance to source decay such as black carbon) and for epidemiological studies statistical land-use regression models are also used.
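As an illustration of the kind of calculation such models perform (not taken from the article), a minimal Python sketch of a steady-state Gaussian plume estimate of downwind concentration follows; the linear growth of the dispersion coefficients with distance and the values of a_y and a_z are simplifying assumptions chosen for readability, not a specific regulatory parameterization.

    import math

    def gaussian_plume(Q, u, x, y, z, H, a_y=0.11, a_z=0.08):
        """Steady-state Gaussian plume concentration at (x, y, z) downwind of a point
        source. Q: emission rate (g/s), u: wind speed (m/s), H: effective stack
        height (m). sigma_y and sigma_z grow linearly with downwind distance here
        (an illustrative simplification, not a stability-class fit)."""
        sigma_y = a_y * x
        sigma_z = a_z * x
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        # The second exponential is the usual ground-reflection term.
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                    math.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level centreline concentration 1 km downwind of a 50 m stack emitting 100 g/s.
    print(gaussian_plume(Q=100, u=5, x=1000, y=0, z=0, H=50))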
Dispersion models are important to governmental agencies tasked with protecting and managing ambient air quality. The models are typically employed to determine whether existing or proposed new industrial facilities are or will be in compliance with the National Ambient Air Quality Standards (NAAQS) in the United States and with similar standards in other nations. The models also serve to assist in the design of effective control strategies to reduce emissions of harmful air pollutants. During the late 1960s, the Air Pollution Control Office of the U.S. EPA initiated research projects that would lead to the development of models for use by urban and transportation planners. A major application of a roadway dispersion model that resulted from such research was to the Spadina Expressway of Canada in 1971.
Air dispersion models are also used by public safety responders and emergency management personnel for emergency planning of accidental chemical releases. Models are used to determin
|
https://en.wikipedia.org/wiki/Continuously%20compounded%20nominal%20and%20real%20returns
|
Return rate is a corporate finance and accounting tool which calculates the gain or loss of an investment over a certain period of time.
Nominal return
Let Pt be the price of a security at time t, including any cash dividends or interest, and let Pt−1 be its price at t−1. Let RSt be the simple rate of return on the security from t−1 to t. Then
RSt = (Pt − Pt−1) / Pt−1.
The continuously compounded rate of return or instantaneous rate of return RCt obtained during that period is
RCt = ln(Pt / Pt−1).
If this instantaneous return is received continuously for one period, then the initial value Pt−1 will grow to Pt−1 · exp(RCt) = Pt during that period. See also continuous compounding.
Since this analysis did not adjust for the effects of inflation on the purchasing power of Pt, RS and RC are referred to as nominal rates of return.
Real return
Let Ft be the purchasing power of a dollar at time t (the number of bundles of consumption that can be purchased for $1). Then Ft = 1/PLt, where PLt is the price level at t (the dollar price of a bundle of consumption goods). The simple inflation rate ISt from t−1 to t is ISt = (PLt − PLt−1) / PLt−1. Thus, continuing the above nominal example, the final value of the investment expressed in real terms is
Pt(real) = Pt · (PLt−1 / PLt).
Then the continuously compounded real rate of return is
RCt(real) = ln(Pt(real) / Pt−1).
The continuously compounded real rate of return is just the continuously compounded nominal rate of return minus the continuously compounded inflation rate: RCt(real) = RCt − ln(PLt / PLt−1).
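A minimal Python sketch of these formulas (illustrative only; the function names and sample prices are arbitrary):

    import math

    def nominal_returns(p_prev, p_now):
        simple = (p_now - p_prev) / p_prev            # RSt
        continuous = math.log(p_now / p_prev)         # RCt = ln(Pt / Pt-1)
        return simple, continuous

    def real_continuous_return(p_prev, p_now, pl_prev, pl_now):
        # Continuously compounded real return = nominal RCt minus the
        # continuously compounded inflation rate ln(PLt / PLt-1).
        return math.log(p_now / p_prev) - math.log(pl_now / pl_prev)

    rs, rc = nominal_returns(100.0, 105.0)
    print(rs, rc)                                              # 0.05 and about 0.0488
    print(real_continuous_return(100.0, 105.0, 200.0, 204.0))  # about 0.0290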
Sources
Applied mathematics
|
https://en.wikipedia.org/wiki/Windows%20Native%20API
|
The Native API is a lightweight application programming interface (API) used by Windows NT and user mode applications. This API is used in the early stages of the Windows NT startup process, when other components and APIs are still unavailable. Therefore, a few Windows components, such as the Client/Server Runtime Subsystem (CSRSS), are implemented using the Native API. The Native API is also used by subroutines such as those in kernel32.dll that implement the Windows API, the API on which most Windows components are built.
Most of the Native API calls are implemented in ntoskrnl.exe and are exposed to user mode by ntdll.dll. The entry point of ntdll.dll is LdrInitializeThunk. Native API calls are handled by the kernel via the System Service Descriptor Table (SSDT).
Function groups
The Native API comprises many functions. They include C runtime functions that are needed for a very basic C runtime execution, such as strlen(), sprintf(), memcpy() and floor(). Other common procedures like malloc(), printf(), scanf() are missing (the first because it does not specify a heap to allocate memory from and the second and third because they use the console, accessed only via KERNEL32.DLL). The vast majority of other Native API routines, by convention, have a 2 or 3 letter prefix, which is:
Nt or Zw are system calls declared in ntdll.dll and ntoskrnl.exe. When called from ntdll.dll in user mode, these groups are almost exactly the same; they execute an interrupt into kernel mode and call the equivalent function in ntoskrnl.exe via the SSDT. When calling the functions directly in ntoskrnl.exe (only possible in kernel mode), the Zw variants ensure kernel mode, whereas the Nt variants do not. The Zw prefix does not stand for anything.
Rtl is the second largest group of ntdll calls. These comprise the (extended) C Run-Time Library, which includes many utility functions that can be used by native applications, yet don't directly involve kernel support.
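As a user-mode illustration (not from the article), one documented Rtl routine exported by ntdll.dll, RtlGetVersion, can be called directly; the Python ctypes sketch below is Windows-only and follows the documented RTL_OSVERSIONINFOW layout.

    import ctypes
    from ctypes import wintypes

    class RTL_OSVERSIONINFOW(ctypes.Structure):
        _fields_ = [
            ("dwOSVersionInfoSize", wintypes.DWORD),
            ("dwMajorVersion",      wintypes.DWORD),
            ("dwMinorVersion",      wintypes.DWORD),
            ("dwBuildNumber",       wintypes.DWORD),
            ("dwPlatformId",        wintypes.DWORD),
            ("szCSDVersion",        wintypes.WCHAR * 128),
        ]

    def windows_version():
        osvi = RTL_OSVERSIONINFOW()
        osvi.dwOSVersionInfoSize = ctypes.sizeof(osvi)
        # RtlGetVersion lives in ntdll.dll and returns an NTSTATUS (0 on success).
        status = ctypes.WinDLL("ntdll").RtlGetVersion(ctypes.byref(osvi))
        if status != 0:
            raise OSError(f"RtlGetVersion failed with NTSTATUS {status:#x}")
        return osvi.dwMajorVersion, osvi.dwMinorVersion, osvi.dwBuildNumber

    print(windows_version())   # e.g. (10, 0, 19045)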
Csr are cli
|
https://en.wikipedia.org/wiki/Intruder%20detection
|
In information security, intruder detection is the process of detecting intruders behind attacks as unique persons. This technique tries to identify the person behind an attack by analyzing their computational behaviour. This concept is sometimes confused with Intrusion Detection (also known as IDS) techniques which are the art of detecting intruder actions.
History
Earlier works reference the concepts of Intruder Authentication, Intruder Verification, or Intruder Classification, but the Si6 project was one of the first projects to deal with the full scope of the concept.
Theory
Intruder Detection Systems try to detect who is attacking a system by analyzing his or her computational behaviour or biometric behaviour.
Some of the parameters used to identify an intruder include:
Keystroke Dynamics (aka keystroke patterns, typing pattern, typing behaviour)
Patterns using an interactive command interpreter:
Commands used
Commands sequence
Accessed directories
Character deletion
Patterns on the network usage:
IP address used
ISP
Country
City
Ports used
TTL analysis
Operating system used to attack
Protocols used
Connection times patterns
Keystroke dynamics
Keystroke dynamics is paramount in Intruder Detection techniques because it is the only parameter that has been classified as a real 'behavioural biometric pattern'.
Keystroke dynamics analyzes the times between keystrokes issued on a computer keyboard or cellular phone keypad, searching for patterns. Early techniques used statistics and probability concepts such as standard deviations and means; later approaches use data mining, neural networks, support vector machines, etc.
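As an illustration of the statistical approach (not from the article), a minimal Python sketch compares a typing sample against an enrolled profile using only the mean and standard deviation of inter-key intervals; the profile numbers and the threshold are hypothetical.

    import statistics

    def inter_key_intervals(timestamps):
        """Milliseconds between successive key-down events."""
        return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

    def matches_profile(timestamps, profile_mean, profile_stdev, z_threshold=2.0):
        """Crude check: does the sample's mean inter-key interval fall within
        z_threshold standard deviations of the enrolled user's profile?"""
        sample_mean = statistics.mean(inter_key_intervals(timestamps))
        return abs(sample_mean - profile_mean) <= z_threshold * profile_stdev

    # Hypothetical enrolled profile: mean 180 ms between keystrokes, stdev 40 ms.
    sample = [0, 150, 340, 500, 690, 880]      # key-down times in ms
    print(matches_profile(sample, 180.0, 40.0))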
Translation confusion
There is a confusion with the Spanish translation of 'Intrusion detection system', also known as IDS. Some people translate it as 'Sistemas de Detección de Intrusiones', but others translate it as 'Sistemas de Detección de Intrusos'. Only the former is correct.
See also
Intrusion Detection
Intrusion-de
|
https://en.wikipedia.org/wiki/Experimental%20software%20engineering
|
Experimental software engineering involves running experiments on the processes and procedures involved in the creation of software systems, with the intent that the data be used as the basis of theories about the processes involved in software engineering (theory backed by data is a fundamental tenet of the scientific method). A number of research groups primarily use empirical and experimental techniques.
The term empirical software engineering emphasizes the use of empirical studies of all kinds to accumulate knowledge. Methods used include experiments, case studies, surveys, and using whatever data is available.
Empirical software engineering research
In a keynote at the International Symposium on Empirical Software Engineering and Measurement, Prof. Wohlin recommended ten commitments that the research community should follow to increase the relevance and impact of empirical software engineering research. However, at the same conference, Dr. Ali argued that following these alone will not be enough: beyond showing evidence that substantiates the claimed benefits of our interventions, practical relevance and potential impact require evidence of cost-effectiveness.
The International Software Engineering Research Network (ISERN) is a global community of research groups who are active in experimental software engineering. Its purpose is to advance the practice of and foster university and industry collaborations within experimental software engineering. ISERN holds annual meetings in conjunction with the International Symposium on Empirical Software Engineering and Measurement (ESEM) conference.
References
Bibliography
Victor Basili, Richard W. Selby, David H. Hutchens, "Experimentation in Software Engineering", IEEE Transactions on Software Engineering, Vol. SE-12, No.7, July 1986
Basili, V.; Rombach, D.; Schneider, K.; Kitchenham, B.; Pfahl, D.; Selby, R. (Eds.),Empirical Software Engineerin
|
https://en.wikipedia.org/wiki/List%20of%20ICD-9%20codes%20740%E2%80%93759%3A%20congenital%20anomalies
|
This is a shortened version of the fourteenth chapter of the ICD-9: Congenital Anomalies. It covers ICD codes 740 to 759. The full chapter can be found on pages 417 to 437 of Volume 1, which contains all (sub)categories of the ICD-9. Volume 2 is an alphabetical index of Volume 1. Both volumes can be downloaded for free from the website of the World Health Organization.
Nervous system (740–742)
Anencephalus and similar anomalies
Anencephalus
Spina bifida
Other congenital anomalies of nervous system
Microcephalus
Hydrocephalus
Eye, ear, face and neck (743–744)
Congenital anomalies of eye
Anophthalmos
Clinical anophthalmos unspecified
Cystic eyeball congenital
Cryptophthalmos
Microphthalmos
Buphthalmos
Congenital cataract and lens anomalies
Coloboma and other anomalies of anterior segment
Aniridia
Congenital anomalies of posterior segment
Congenital anomalies of eyelids, lacrimal system, and orbit
Congenital anomalies of ear, face, and neck
Anomalies of ear causing impairment of hearing
Accessory auricle
Other specified congenital anomalies of ear
Macrotia
Microtia
Unspecified congenital anomaly of ear
Branchial cleft cyst or fistula; preauricular sinus
Webbing of neck
Other specified congenital anomalies of face and neck
Macrocheilia
Microcheilia
Macrostomia
Microstomia
Circulatory system (745–747)
Bulbus cordis anomalies and anomalies of cardiac septal closure
Common truncus
Transposition of great vessels
Tetralogy of Fallot
Common ventricle
Ventricular septal defect
Atrial septal defect
Endocardial cushion defects
Cor biloculare
Other congenital anomalies of heart
Tricuspid atresia and stenosis congenital
Ebstein's anomaly
Congenital stenosis of aortic valve
Congenital insufficiency of aortic valve
Congenital mitral stenosis
Congenital mitral insufficiency
Hypoplastic left heart syndrome
Other specified congenital anomalies of heart
Subaortic stenosis congenital
|
https://en.wikipedia.org/wiki/Entrance%20facility
|
In telecommunications, an entrance facility refers to the entrance to a building for both public and private network service cables (including antenna transmission lines, where applicable), including the entrance point at the building wall or floor, and continuing to the entrance room or entrance space.
Entrance facilities are the transmission facilities (typically wires or cables) that connect competitive LECs’ networks with incumbent LECs’ networks.
Computer networking
|
https://en.wikipedia.org/wiki/Black%20rose%20symbolism
|
Black roses do not occur naturally, but the black rose is used as a symbol with a variety of meanings.
Flowers
The flowers commonly called black roses do not really exist in that color; instead, they have a very dark shade, such as the "Black Magic", "Barkarole", "Black Beauty" and "Baccara" varieties. They can also be artificially colored.
In the language of flowers, roses have many different meanings. Black roses symbolize ideas such as hatred, despair, death or rebirth.
Anarchism
Black Rose Books is the name of the Montreal anarchist publisher and small press imprint headed by the libertarian-municipalist and anarchist Dimitrios Roussopoulos. One of the two anarchist bookshops in Sydney is Black Rose Books which has existed in various guises since 1982.
The Black Rose was the title of a respected journal of anarchist ideas published in the Boston area during the 1970s, as well as the name of an anarchist lecture series addressed by notable anarchist and libertarian socialists (including Murray Bookchin and Noam Chomsky) into the 1990s.
Black Rose Labour (organisation) is the name of a factional political organisation associated with the United Kingdom Labour Party, which defines itself as Libertarian Socialist.
Black Rose Anarchist Federation is a political organization that was founded in 2014, with a few local and regional groups in the United States.
See also
Anarchist symbolism
References
Wilkins, Eithne. The rose-garden game; a tradition of beads and flowers, [New York] Herder and Herder, 1969.
Symbolism
Rose
Language of flowers
Anarchist symbols
|
https://en.wikipedia.org/wiki/Neusis%20construction
|
In geometry, the neusis (from Ancient Greek νεῦσις; plural: neuseis) is a geometric construction method that was used in antiquity by Greek mathematicians.
Geometric construction
The neusis construction consists of fitting a line element of given length (a) in between two given lines (l and m), in such a way that the line element, or its extension, passes through a given point P. That is, one end of the line element has to lie on l, the other end on m, while the line element is "inclined" towards P.
Point P is called the pole of the neusis, line l the directrix, or guiding line, and line m the catch line. Length a is called the diastema (διάστημα).
A neusis construction might be performed by means of a marked ruler that is rotatable around the point P (this may be done by putting a pin into the point P and then pressing the ruler against the pin). In the figure one end of the ruler is marked with a yellow eye with crosshairs: this is the origin of the scale division on the ruler. A second marking on the ruler (the blue eye) indicates the distance a from the origin. The yellow eye is moved along line l, until the blue eye coincides with line m. The position of the line element thus found is shown in the figure as a dark blue bar.
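As a numerical illustration (not part of the article), the marked-ruler procedure can be mimicked by scanning the angle of a line through the pole P and looking for the angle at which the segment cut off between the directrix and the catch line has the prescribed length a; the geometry helpers and the brute-force scan below are only a sketch, and the example lines are arbitrary.

    import math

    def intersect(p, d, q, e):
        """Intersection of the lines p + t*d and q + s*e (2-D); raises if parallel."""
        det = d[0] * (-e[1]) - d[1] * (-e[0])
        if abs(det) < 1e-15:
            raise ZeroDivisionError("parallel")
        t = ((q[0] - p[0]) * (-e[1]) - (q[1] - p[1]) * (-e[0])) / det
        return (p[0] + t * d[0], p[1] + t * d[1])

    def neusis_angle(P, l, m, a, steps=200000):
        """Angle of the ruler through P whose intersections with line l (directrix)
        and line m (catch line) are separated by (approximately) length a."""
        best = (float("inf"), None)
        for k in range(1, steps):
            theta = math.pi * k / steps
            d = (math.cos(theta), math.sin(theta))
            try:
                A = intersect(P, d, l[0], l[1])   # point on the directrix
                B = intersect(P, d, m[0], m[1])   # point on the catch line
            except ZeroDivisionError:
                continue
            gap = math.hypot(B[0] - A[0], B[1] - A[1])
            best = min(best, (abs(gap - a), theta))
        return best[1]

    # Directrix: the x-axis; catch line: the vertical line x = 3; pole P = (1, 1).
    l = ((0.0, 0.0), (1.0, 0.0))
    m = ((3.0, 0.0), (0.0, 1.0))
    print(neusis_angle((1.0, 1.0), l, m, a=4.0))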
Use of the neusis
Neuseis have been important because they sometimes provide a means to solve geometric problems that are not solvable by means of compass and straightedge alone. Examples are the trisection of any angle in three equal parts, and the doubling of the cube. Mathematicians such as Archimedes of Syracuse (287–212 BC) and Pappus of Alexandria (290–350 AD) freely used neuseis; Sir Isaac Newton (1642–1726) followed their line of thought, and also used neusis constructions. Nevertheless, gradually the technique dropped out of use.
Regular polygons
In 2002, A. Baragar showed that every point constructible with marked ruler and compass lies in a tower of fields over ℚ, ℚ = K0 ⊂ K1 ⊂ ⋯ ⊂ Kn, such that the degree of the extension at each step is no higher than 6. Of all prime-power polygons below the 12
|
https://en.wikipedia.org/wiki/MLX%20%28software%29
|
MLX is a series of machine language entry utilities published by the magazines COMPUTE! and COMPUTE!'s Gazette, as well as books from COMPUTE! Publications. These programs were designed to allow relatively easy entry of the type-in machine language listings that were often included in these publications. Versions were available for the Commodore 64, VIC-20, Atari 8-bit family, and Apple II. MLX listings were reserved for relatively long machine language programs such as SpeedScript.
First version
MLX was introduced in the December 1983 issue of COMPUTE! for the Commodore 64 and Atari 8-bit family. This was followed in the January 1984 issue of COMPUTE!'s Gazette by a version for the VIC-20 with 8K expansion, and in the March 1984 issue by Tiny MLX, a version for the unexpanded VIC-20. These use a format consisting of six data bytes in decimal format, and a seventh as a checksum. The program auto-increments the address and prints the comma delimiters every three characters. Invalid keystrokes are ignored.
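The excerpt does not give MLX's actual checksum formula, so the following Python sketch only illustrates the general idea of a line-entry utility that rejects mistyped lines; the checksum rule used here (address plus data bytes, modulo 256) is a hypothetical stand-in, not MLX's algorithm.

    def accept_line(address, data_bytes, checksum):
        """Accept a typed-in line of six decimal data bytes only if its checksum
        matches. The rule below (address + data, mod 256) is a hypothetical
        stand-in for illustration; it is NOT the checksum MLX actually used."""
        if len(data_bytes) != 6 or not all(0 <= b <= 255 for b in data_bytes):
            return False
        return (address + sum(data_bytes)) % 256 == checksum

    # A user would type an address, six decimal bytes, and the checksum value.
    data = [169, 0, 141, 32, 208, 96]
    print(accept_line(49152, data, (49152 + sum(data)) % 256))   # True
    print(accept_line(49152, data, 0))                           # rejected as a typo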
In the Commodore version, beginning in the May 1984 issue of COMPUTE!, several keyboard keys are redefined to create a makeshift numeric keypad.
Improved version
A new version of MLX was introduced for the Apple II in the June 1985 issue. This version uses an 8-byte-per-line hexadecimal format. A more sophisticated algorithm was implemented to catch errors overlooked by the original.
The improved features were then backported to the Commodore 64. The new version, known on the title screen as "MLX II", but otherwise simply as "the new MLX", appeared in the December 1985 issue of COMPUTE!. It was printed in COMPUTE!'s Gazette the following month. This version of MLX was used until COMPUTE!'s Gazette switched to a disk-only format in December 1993.
See also
The Automatic Proofreader – COMPUTE!'s checksum utility for BASIC programs
References
External links
Machine Language Editor for Atari and Commodore
Apple II software
VIC-20 software
Atari 8-bit fami
|
https://en.wikipedia.org/wiki/Anonym.OS
|
Anonym.OS was a Live CD operating system based on OpenBSD 3.8 with strong encryption and anonymization tools. The goal of the project was to provide secure, anonymous web browsing access to everyday users. The base operating system was OpenBSD 3.8, although many packages were added to facilitate its goal. It used Fluxbox as its window manager.
The project was discontinued after the release of Beta 4 (2006).
Distributed
Designed, created and distributed by kaos.theory/security.research.
Legacy
Although this specific project is no longer updated, its successors are Incognito OS (discontinued in 2008) and FreeSBIE.
See also
Comparison of BSD operating systems
Security-focused operating system
References
External links
Anonym.OS's SourceForge website
Anonym.OS at DistroWatch
OpenBSD
Operating system distributions bootable from read-only media
Privacy software
Tor (anonymity network)
|
https://en.wikipedia.org/wiki/Supervisory%20control%20theory
|
The supervisory control theory (SCT), also known as the Ramadge–Wonham framework (RW framework), is a method for automatically synthesizing supervisors that restrict the behavior of a plant such that as much as possible of the given specifications is fulfilled. The plant is assumed to spontaneously generate events. The events fall into one of two categories: controllable or uncontrollable. The supervisor observes the string of events generated by the plant and might prevent the plant from generating a subset of the controllable events. However, the supervisor has no means of forcing the plant to generate an event.
In its original formulation the SCT considered the plant and the specification to be modeled by formal languages, not necessarily regular languages generated by finite automata as was done in most subsequent work.
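A toy illustration (not from the article, and far simpler than the formal framework): the plant below is a small automaton, the specification forbids one state, and the supervisor may only disable controllable events; the uncontrollable breakdown event shows the limit of supervisory control.

    # Plant modeled as a finite automaton: state -> {event: next_state}.
    PLANT = {
        "idle":       {"start": "working"},
        "working":    {"finish": "idle", "turbo": "overheated", "breakdown": "down"},
        "down":       {"repair": "idle"},
        "overheated": {},
    }
    CONTROLLABLE = {"start", "turbo", "repair"}   # events the supervisor may disable
    FORBIDDEN = {"overheated"}                    # illustrative specification

    def enabled_events(state):
        """One-step look-ahead supervisor: uncontrollable events are always enabled
        (they cannot be prevented); a controllable event is enabled only if it does
        not lead directly into a forbidden state."""
        return {event for event, nxt in PLANT[state].items()
                if event not in CONTROLLABLE or nxt not in FORBIDDEN}

    for s in PLANT:
        print(s, sorted(enabled_events(s)))
    # "turbo" is disabled in state "working"; "breakdown" stays enabled because it
    # is uncontrollable, which is exactly the limitation described above.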
See also
Discrete event dynamic system (DEDS)
Boolean differential calculus (BDC)
References
Control theory
|
https://en.wikipedia.org/wiki/Isopropyl%20acetate
|
Isopropyl acetate is an ester, an organic compound which is the product of esterification of acetic acid and isopropanol. It is a clear, colorless liquid with a characteristic fruity odor.
Isopropyl acetate is a solvent with a wide variety of manufacturing uses that is miscible with most other organic solvents, and slightly soluble in water (although less so than ethyl acetate). It is used as a solvent for cellulose, plastics, oil and fats. It is a component of some printing inks and perfumes.
Isopropyl acetate decomposes slowly on contact with steel in the presence of air, producing acetic acid and isopropanol. It reacts violently with oxidizing materials and it attacks many plastics.
Isopropyl acetate is quite flammable in both its liquid and vapor forms, and it may be harmful if swallowed or inhaled.
The Occupational Safety and Health Administration has set a permissible exposure limit (PEL) of 250 ppm (950 mg/m3) over an eight-hour time-weighted average for workers handling isopropyl acetate.
References
Flavors
Ester solvents
Acetate esters
Isopropyl esters
Sweet-smelling chemicals
|
https://en.wikipedia.org/wiki/Tagged%20pointer
|
In computer science, a tagged pointer is a pointer (concretely a memory address) with additional data associated with it, such as an indirection bit or reference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags", though strictly speaking "tag" refers to data specifying a type, not other data; however, the usage "tagged pointer" is ubiquitous.
Folding tags into the pointer
There are various techniques for folding tags into a pointer.
Most architectures are byte-addressable (the smallest addressable unit is a byte), but certain types of data will often be aligned to the size of the data, often a word or multiple thereof. This discrepancy leaves a few of the least significant bits of the pointer unused, which can be used for tags – most often as a bit field (each bit a separate tag) – as long as code that uses the pointer masks out these bits before accessing memory. E.g., on a 32-bit architecture (for both addresses and word size), a word is 32 bits = 4 bytes, so word-aligned addresses are always a multiple of 4, hence end in 00, leaving the last 2 bits available; while on a 64-bit architecture, a word is 64 bits = 8 bytes, so word-aligned addresses end in 000, leaving the last 3 bits available. In cases where data is aligned at a multiple of word size, further bits are available. In case of word-addressable architectures, word-aligned data does not leave any bits available, as there is no discrepancy between alignment and addressing, but data aligned at a multiple of word size does.
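A minimal sketch of the low-bit technique using Python integer arithmetic (illustrative only; real implementations do this on raw machine addresses, typically in C or in a language runtime), assuming 8-byte alignment so the low three bits are free:

    ALIGNMENT = 8             # assume 8-byte-aligned data, as for a 64-bit word
    TAG_MASK = ALIGNMENT - 1  # the low 3 bits of any aligned address are always 0

    def pack(addr, tag):
        """Fold a small tag into the unused low bits of an aligned address."""
        assert addr % ALIGNMENT == 0, "address must be word-aligned"
        assert 0 <= tag <= TAG_MASK, "tag must fit in the spare bits"
        return addr | tag

    def unpack(tagged):
        """Recover the real address (mask the tag bits off) and the tag itself."""
        return tagged & ~TAG_MASK, tagged & TAG_MASK

    addr = 0x7F3A9C40                      # hypothetical 8-byte-aligned address
    tagged = pack(addr, 0b101)
    print(hex(tagged), unpack(tagged))     # tagged value ends in ...101; tag is 5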
Conversely, in some operating systems, virtual addresses are narrower than the overall architecture width, which leaves the most significant bits available for tags; this
|
https://en.wikipedia.org/wiki/SU-8%20photoresist
|
SU-8 is a commonly used epoxy-based negative photoresist. Negative refers to a photoresist whereby the parts exposed to UV become cross-linked, while the remainder of the film remains soluble and can be washed away during development.
As shown in the structural diagram, SU-8 derives its name from the presence of 8 epoxy groups. This is a statistical average per moiety. It is these epoxies that cross-link to give the final structure.
It can be made into a viscous polymer that can be spun or spread over a thickness ranging from below 1 micrometer up to above 300 micrometers, or Thick Film Dry Sheets (TFDS) for lamination up to above 1 millimetre thick. Up to 500 µm, the resist can be processed with standard contact lithography. Above 500 µm, absorption leads to increasing sidewall undercuts and poor curing at the substrate interface. It can be used to pattern high aspect ratio structures. An aspect ratio of (> 20) has been achieved with the solution formulation and (> 40) has been demonstrated from the dry resist. Its maximum absorption is for ultraviolet light with a wavelength of the i-line: 365 nm (it is not practical to expose SU-8 with g-line ultraviolet light). When exposed, SU-8's long molecular chains cross-link, causing the polymerisation of the material. SU-8 series photoresists use gamma-butyrolactone or cyclopentanone as the primary solvent.
SU-8 was originally developed as a photoresist for the microelectronics industry, to provide a high-resolution mask for fabrication of semiconductor devices.
It is now mainly used in the fabrication of microfluidics (mainly via soft lithography, but also with other imprinting techniques such as nanoimprint lithography) and microelectromechanical systems parts. It is also one of the most biocompatible materials known and is often used in bio-MEMS for life science applications.
Composition and processing
SU-8 is composed of Bisphenol A Novolac epoxy that is dissolved in an organic solvent (gamma-butyrolactone GBL
|
https://en.wikipedia.org/wiki/Imre%20Leader
|
Imre Bennett Leader (born 30 October 1963) is a British mathematician, a professor in DPMMS at the University of Cambridge working in the field of combinatorics. He is also known as an Othello player.
Life
He is the son of the physicist Elliot Leader and his first wife Ninon Neményi, previously married to the poet Endre Kövesi; Darian Leader is his brother. Imre Lakatos was a family friend and his godfather.
Leader was educated at St Paul's School in London, from 1976 to 1980. He won a silver medal on the British team at the 1981 International Mathematical Olympiad (IMO) for pre-undergraduates. He later acted as the official leader of the British IMO team from 1999, when he took over from Adam McBride, until 2001. He was the IMO's Chief Coordinator and Problems Group Chairman in 2002.
Leader went on to Trinity College, Cambridge, where he graduated B.A. in 1984, M.A. in 1989, and Ph.D. in 1989. His Ph.D., in mathematics, was for work on combinatorics supervised by Béla Bollobás. From 1989 to 1996 he was a Fellow at Peterhouse, Cambridge, then was Reader at University College London from 1996 to 2000. He was a lecturer at Cambridge from 2000 to 2002, and Reader there from 2002 to 2005. In 2000 he became a Fellow of Trinity College.
Awards and honours
In 1999 Leader was awarded a Junior Whitehead Prize for his contributions to combinatorics. Cited results included the proof, with Reinhard Diestel, of the bounded graph conjecture of Rudolf Halin.
Othello
Leader in an interview in 2016 stated that he began to play Othello in 1981, with his friend Jeremy Rickard. Between 1983 and 2019 he was 15 times the British Othello champion. In 1983 he came second in the world individual championship, and in 1988 he played on the British team that won the world team championship. In 2019 he won the European championship, beating Matthias Berg in the final in Berlin.
References
External links
Some publications from DBLP
Living people
Reversi players
20th-century British mathematician
|
https://en.wikipedia.org/wiki/Transitive%20reduction
|
In the mathematical field of graph theory, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w, a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.
More technically, the reduction is a directed graph that has the same reachability relation as D. Equivalently, D and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of D should have as few edges as possible among all graphs with that property.
The transitive reduction of a finite directed acyclic graph (a directed graph without directed cycles) is unique and is a subgraph of the given graph. However, uniqueness fails for graphs with (directed) cycles, and for infinite graphs not even existence is guaranteed.
The closely related concept of a minimum equivalent graph is a subgraph of D that has the same reachability relation and as few edges as possible. The difference is that a transitive reduction does not have to be a subgraph of D. For finite directed acyclic graphs, the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can be constructed in polynomial time.
Transitive reduction can be defined for an abstract binary relation on a set, by interpreting the pairs of the relation as arcs in a directed graph.
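For finite DAGs the reduction can be computed directly from the definition: drop every edge whose endpoints remain connected by some other path. A minimal Python sketch (illustrative, not optimized; correct only for acyclic inputs):

    from collections import defaultdict

    def transitive_reduction(edges):
        """Transitive reduction of a finite DAG given as a list of (u, v) edges.
        An edge (u, v) is kept only if no alternative path from u to v exists."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)

        def reachable(src, dst, skip_edge):
            stack, seen = [src], {src}
            while stack:
                node = stack.pop()
                for nxt in adj[node]:
                    if (node, nxt) == skip_edge:
                        continue
                    if nxt == dst:
                        return True
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return False

        return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]

    edges = [("a", "b"), ("b", "c"), ("a", "c")]
    print(transitive_reduction(edges))   # [('a', 'b'), ('b', 'c')]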
Classes of graphs
In directed acyclic graphs
The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the transitive reduction of G, and
|
https://en.wikipedia.org/wiki/CDC%201604
|
The CDC 1604 was a 48-bit computer designed and manufactured by Seymour Cray and his team at the Control Data Corporation (CDC). The 1604 is known as one of the first commercially successful transistorized computers. (The IBM 7090 was delivered earlier, in November 1959.) Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-UNIVAC 1103.
A cut-down 24-bit version, designated the CDC 924, was shortly thereafter produced, and delivered to NASA.
The first 1604 was delivered to the U.S. Navy Post Graduate School in January 1960 for JOVIAL applications supporting major Fleet Operations Control Centers primarily for weather prediction in Hawaii, London, and Norfolk, Virginia. By 1964, over 50 systems were built. The CDC 3600, which added five op codes, succeeded the 1604, and "was largely compatible" with it.
One of the 1604s was shipped to the Pentagon to DASA (Defense Atomic Support Agency) and used during the Cuban Missile Crisis to predict possible strikes by the Soviet Union against the United States.
A 12-bit minicomputer, called the CDC 160, was often used as an I/O processor in 1604 systems. A stand-alone version of the 160 called the CDC 160-A was arguably the first minicomputer.
Architecture
Memory in the CDC 1604 consisted of 32K 48-bit words of magnetic core memory with a cycle time of 6.4 microseconds. It was organized as two banks of 16K words each, with odd addresses in one bank and even addresses in the other. The two banks were phased 3.2 microseconds apart, so average effective memory access time was 4.8 microseconds. The computer executed about 100,000 operations per second.
Each 48-bit word contained two 24-bit instructions. The instruction format was 6-3-15: six bits for the operation code, three bits for a "designator" (index register for memory access instructions, condition for jump (branch) instructions) and fifteen bits for a memory address (or shift
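As an illustration of the 6-3-15 layout (not from the article), a small Python sketch splits a 48-bit word into its two 24-bit instructions and unpacks each into its fields:

    def decode(word48):
        """Split a 48-bit CDC 1604 word into its two 24-bit instructions and
        unpack each using the 6-3-15 layout described above."""
        upper = (word48 >> 24) & 0xFFFFFF
        lower = word48 & 0xFFFFFF
        def fields(instr):
            op         = (instr >> 18) & 0o77     # 6-bit operation code
            designator = (instr >> 15) & 0o7      # 3-bit index register / jump condition
            address    = instr & 0o77777          # 15-bit memory address (or shift count)
            return op, designator, address
        return fields(upper), fields(lower)

    # Compose a hypothetical word from two instructions and decode it back.
    word = ((0o12 << 18 | 0o3 << 15 | 0o45670) << 24) | (0o75 << 18 | 0o1 << 15 | 0o777)
    print(decode(word))   # prints the (op, designator, address) triple for each half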
|
https://en.wikipedia.org/wiki/Nadcap
|
Nadcap (formerly NADCAP, the National Aerospace and Defense Contractors Accreditation Program) is a global cooperative accreditation program for aerospace engineering, defense and related industries.
History of Nadcap
The Nadcap program is administered by the Performance Review Institute (PRI). Nadcap was established in 1990 by SAE International. Nadcap's membership consists of "prime contractors" who coordinate with aerospace accredited suppliers to develop industry-wide audit criteria for special processes and products. Through PRI, Nadcap provides independent certification of manufacturing processes for the industry. PRI has its headquarters in Warrendale, Pennsylvania with branch offices for Nadcap located in London, Beijing, and Nagoya.
Fields of Nadcap activities
The Nadcap program provides accreditation for special processes in the aerospace and defense industry.
These include:
Aerospace Quality Systems (AQS)
Aero Structure Assembly (ASA)
Chemical Processing (CP)
Coatings (CT)
Composites (COMP)
Conventional Machining as a Special Process (CMSP)
Elastomer Seals (SEAL)
Electronics (ETG)
Fluids Distribution (FLU)
Heat Treating (HT)
Materials Testing Laboratories (MTL)
Measurement & Inspection (M&I)
Metallic Materials Manufacturing (MMM)
Nonconventional Machining and Surface Enhancement (NMSE)
Nondestructive Testing (NDT)
Non Metallic Materials Manufacturing (NMMM)
Non Metallic Materials Testing (NMMT)
Sealants (SLT)
Welding (WLD)
The Nadcap program and industry
PRI schedules an audit and assigns an industry approved auditor who will conduct the audit using an industry agreed checklist. At the end of the audit, any non-conformity issues will be raised through a non-conformance report. PRI will administer and close out the non-conformance reports with the Supplier. Upon completion PRI will present the audit pack to a 'special process Task Group’ made up of members from industry who will review it and vote on its acceptability for approval.
|
https://en.wikipedia.org/wiki/Narcissistic%20number
|
In number theory, a narcissistic number (also known as a pluperfect digital invariant (PPDI), an Armstrong number (after Michael F. Armstrong) or a plus perfect number) in a given number base is a number that is the sum of its own digits each raised to the power of the number of digits.
Definition
Let n be a natural number. We define the narcissistic function for base b > 1, Fb : ℕ → ℕ, to be the following:
Fb(n) = Σ (i = 0 to k−1) di^k,
where k = ⌊logb n⌋ + 1 is the number of digits in the number in base b, and
di = (n mod b^(i+1) − n mod b^i) / b^i
is the value of each digit of the number. A natural number n is a narcissistic number if it is a fixed point for Fb, which occurs if Fb(n) = n. The natural numbers 0 ≤ n < b are trivial narcissistic numbers for all b, all other narcissistic numbers are nontrivial narcissistic numbers.
For example, the number 153 in base b = 10 is a narcissistic number, because k = 3 and 153 = 1^3 + 5^3 + 3^3.
A natural number n is a sociable narcissistic number if it is a periodic point for Fb, where Fb^p(n) = n for a positive integer p (here Fb^p is the p-th iterate of Fb), and n forms a cycle of period p. A narcissistic number is a sociable narcissistic number with p = 1, and an amicable narcissistic number is a sociable narcissistic number with p = 2.
All natural numbers n are preperiodic points for Fb, regardless of the base. This is because for any given digit count k, the minimum possible value of n is b^(k−1), the maximum possible value of n is b^k − 1, and the maximum possible narcissistic function value is k(b − 1)^k. Thus, any narcissistic number must satisfy the inequality b^(k−1) ≤ k(b − 1)^k. Since b^(k−1)/(b − 1)^k grows exponentially in k while k grows only linearly, there is a maximum value kmax beyond which the inequality fails and Fb(n) < n always. Thus, there are a finite number of narcissistic numbers, and any natural number is guaranteed to reach a periodic point or a fixed point less than b^kmax, making it a preperiodic point. Setting b equal to 10 shows that the largest narcissistic number in base 10 must be less than 10^60.
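A minimal Python check of the fixed-point definition (illustrative; base 10 by default):

    def is_narcissistic(n, base=10):
        """True if n equals the sum of its base-`base` digits, each raised to the
        power of the digit count (the fixed-point condition F_b(n) = n)."""
        digits = []
        m = n
        while m:
            digits.append(m % base)
            m //= base
        k = len(digits) or 1
        return n == sum(d ** k for d in digits)

    print([n for n in range(1, 10000) if is_narcissistic(n)])
    # 1..9 are the trivial cases; 153, 370, 371, 407, 1634, 8208 and 9474 follow.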
The number of iterations needed for to reach a fixed point is the narcissistic function's persistence o
|
https://en.wikipedia.org/wiki/Privacy%20software
|
Privacy software is software built to protect the privacy of its users. The software typically works in conjunction with Internet usage to control or limit the amount of information made available to third parties. The software can apply encryption or filtering of various kinds.
Types of protection
Privacy software can refer to two different types of protection. The first type is protecting a user's Internet privacy from the World Wide Web. There are software products that will mask or hide a user's IP address from the outside world to protect the user from identity theft. The second type of protection is hiding or deleting the user's Internet traces that are left on their PC after they have been surfing the Internet. There is software that will erase all the user's Internet traces, and there is software that will hide and encrypt a user's traces so that others using their PC will not know where they have been surfing.
Whitelisting and blacklisting
One solution to enhance privacy software is whitelisting. Whitelisting is a process in which a company identifies the software that it will allow and does not try to recognize malware. Whitelisting permits acceptable software to run and either prevents anything else from running or lets new software run in a quarantined environment until its validity can be verified. Whereas whitelisting allows nothing to run unless it is on the whitelist, blacklisting allows everything to run unless it is on the blacklist. A blacklist then includes certain types of software that are not allowed to run in the company environment. For example, a company might blacklist peer-to-peer file sharing on its systems. In addition to software, people, devices, and websites can also be whitelisted or blacklisted.
Intrusion detection systems
Intrusion detection systems are designed to detect all types of malicious network traffic and computer usage that cannot be detected by a firewall. These systems capture all network traffic flows and examine
|
https://en.wikipedia.org/wiki/System%20deployment
|
The deployment of a mechanical device, electrical system, computer program, etc., is its assembly or transformation from a packaged form to an operational working state.
Deployment implies moving a product from a temporary or development state to a permanent or desired state.
See also
IT infrastructure deployment
Development
Innovation
Product life-cycle theory
Software deployment
References
Systems engineering
|
https://en.wikipedia.org/wiki/Kronecker%27s%20lemma
|
In mathematics, Kronecker's lemma is a result about the relationship between convergence of infinite sums and convergence of sequences. The lemma is often used in the proofs of theorems concerning sums of independent random variables such as the strong law of large numbers. The lemma is named after the German mathematician Leopold Kronecker.
The lemma
If (xn) is an infinite sequence of real numbers such that
s = Σ (m = 1 to ∞) xm
exists and is finite, then we have for all sequences 0 < b1 ≤ b2 ≤ b3 ≤ … with bn → ∞ that
lim (n → ∞) (1/bn) Σ (k = 1 to n) bk xk = 0.
Proof
Let denote the partial sums of the x'''s. Using summation by parts,
Pick any ε > 0. Now choose N so that is ε-close to s for k > N. This can be done as the sequence converges to s. Then the right hand side is:
Now, let n go to infinity. The first term goes to s, which cancels with the third term. The second term goes to zero (as the sum is a fixed value). Since the b sequence is increasing, the last term is bounded by ε (b_n − b_N)/b_n ≤ ε.
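A quick numerical illustration in Python (not part of the article): with x_n = (−1)^(n+1)/n, whose series converges, and weights b_n = n, the weighted averages (1/b_n) Σ b_k x_k tend to 0 as the lemma asserts.

```python
def kronecker_average(N):
    x = [(-1) ** (n + 1) / n for n in range(1, N + 1)]  # terms of a convergent series
    b = list(range(1, N + 1))                            # increasing, unbounded weights
    return sum(bk * xk for bk, xk in zip(b, x)) / b[-1]

for N in (10, 100, 1000, 10000):
    print(N, kronecker_average(N))   # values shrink toward 0
```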
References
Mathematical series
Lemmas
|
https://en.wikipedia.org/wiki/Permanent%20signal
|
Permanent signal (PS) in American telephony terminology, or permanent loop in British usage, is a condition in which a POTS line is off-hook without connection for an extended period of time. This is indicated in modern switches by the silent termination after the off-hook tone times out and the telephone exchange computer puts the telephone line on its High & Wet list or Wetlist. In older switches, however, a Permanent Signal Holding Trunk (PSHT) would play either an off-hook tone (howler tone) or a 480/500 Hz high tone (which would subsequently bleed into adjacent lines via crosstalk). Off-hook tone is a tone of increasing intensity that is intended to alert telephone users to the fact that the receiver has been left off the hook without being connected in a call. On some systems, before the off-hook tone is played, an intercept message may be announced. The most common message reads as follows: "If you'd like to make a call, please hang up and try again. If you need help, hang up and then dial your operator."
Permanent signal can also describe the state of a trunk that is seized but has not been dialed upon, if it remains in a busy condition (sometimes alerting with reorder).
In most mid-20th-century switching equipment, a permanent signal would tie up a junctor circuit, diminishing the ability of the switch to handle outgoing calls. When flooded cables or other conditions made this a real problem, switch staff would open the cable, or paper the off-normal contacts of the crossbar switch, or block the line relay from operating. These methods had the disadvantage of blocking all outgoing calls from that line until it was manually cleared. Manufacturers also sold devices that monitored the talk wires and held up the cutoff relay or crossbar hold magnet until the condition cleared. Some crossbar line circuit designs had a park condition allowing the line circuit itself to monitor the line.
Stored program control exchanges finally solved the problem, by sett
|
https://en.wikipedia.org/wiki/Mutation%20frequency
|
Mutation frequency and mutation rates are highly correlated with each other. Mutation frequency tests are cost-effective in laboratories; together, these two concepts provide vital information for accounting for the emergence of mutations in any given germ line.
There are several tests used to measure the chances of mutation frequency and rates occurring in a particular gene pool. Some of these tests are as follows:
Avida Digital Evolution Platform
Fluctuation Analysis
Mutation frequency and rates provide vital information about how often a mutation may be expressed in a particular genetic group or sex. Yoon et al. (2009) suggested that as sperm donors' ages increased, sperm mutation frequencies increased. This positive correlation suggests that males are more likely to contribute to genetic disorders carried on X-linked recessive chromosomes.
There are additional factors affecting mutation frequency and rates, including evolutionary influences. Since organisms may pass mutations to their offspring, incorporating and analyzing the mutation frequency and rates of a particular species may provide a means to adequately comprehend its longevity.
Aging
The time course of spontaneous mutation frequency from middle to late adulthood was measured in four different tissues of the mouse. Mutation frequencies in the cerebellum (90% neurons) and male germ cells were lower than in liver and adipose tissue. Furthermore, the mutation frequencies increased with age in liver and adipose tissue, whereas in the cerebellum and male germ cells the mutation frequency remained constant.
Dietary restricted rodents live longer and are generally healthier than their ad libitum fed counterparts. No changes were observed in the spontaneous chromosomal mutation frequency of dietary restricted mice (aged 6 and 12 months) compared to ad libitum fed control mice. Thus dietary restriction appears to have no appreciable effect on spontaneous mutation in chromosomal
|
https://en.wikipedia.org/wiki/Warp%20%26%20Warp
|
Warp & Warp is a multidirectional shooter arcade video game developed and published by Namco in 1981. It was released by Rock-Ola in North America as Warp Warp. The game was ported to the Sord M5 and MSX. A sequel, Warpman, was released in 1985 for the Family Computer with additional enemy types, power-ups, and improved graphics and sound.
Gameplay
The player must take control of a "Monster Fighter", who must shoot tongue-sticking aliens named "Berobero" (a Japanese onomatopoeic word for licking) in the "Space World" without letting them touch him. If he kills three Berobero of the same colour in a row, a mysterious alien will appear that can be killed for extra points. When the Warp Zone in the centre of the screen flashes (with the Katakana text in the Japanese version or the English text "WARP" in the US versions), it is possible for the Monster Fighter to warp to the "Maze World", where the Berobero must be killed with time-delay bombs. The delay is controlled by how long the player holds the button down - but every time he kills one, his bombs will get stronger, making it easier for the Monster Fighter to blow himself up with his own bombs until he returns to Space World.
Reception
In a retrospective review, AllGame compared its gameplay to Wizard of Wor and Bomberman, describing it as "an obscure but endearing maze shooter".
References
External links
1981 video games
Arcade video games
Namco arcade games
Bandai Namco Entertainment franchises
Multidirectional shooters
MSX games
Nintendo Entertainment System games
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Linux%20Security%20Modules
|
Linux Security Modules (LSM) is a framework allowing the Linux kernel to support without bias a variety of computer security models. LSM is licensed under the terms of the GNU General Public License and is a standard part of the Linux kernel since Linux 2.6. AppArmor, SELinux, Smack, and TOMOYO Linux are the currently approved security modules in the official kernel.
Design
LSM was designed in order to answer all the requirements for successfully implementing a mandatory access control module, while imposing the fewest possible changes to the Linux kernel. LSM avoids the approach of system call interposition used by Systrace because it does not scale to multiprocessor kernels and is subject to TOCTTOU (race) attacks. Instead, LSM inserts "hooks" (upcalls to the module) at every point in the kernel where a user-level system call is about to result in access to an important internal kernel object such as an inode or task control block.
LSM is narrowly scoped to solve the problem of access control, while not imposing a large and complex change-patch on the mainstream kernel. It is not intended to be a general "hook" or "upcall" mechanism, nor does it support operating-system-level virtualization.
LSM's access-control goal is very closely related to the problem of system auditing, but is subtly different. Auditing requires that every attempt at access be recorded. LSM cannot deliver this, because it would require a great many more hooks, in order to detect cases where the kernel "short circuits" failing system-calls and returns an error code before getting near significant objects.
The LSM design is described in the paper Linux Security Modules: General Security Support for the Linux Kernel presented at USENIX Security 2002. At the same conference was the paper Using CQUAL for Static Analysis of Authorization Hook Placement which studied automatic static analysis of the kernel code to verify that all of the necessary hooks have actually been inserted into the Linu
|
https://en.wikipedia.org/wiki/Semantic%20mapper
|
A semantic mapper is a tool or service that aids in the transformation of data elements from one namespace into another namespace. A semantic mapper is an essential component of a semantic broker and one tool that is enabled by Semantic Web technologies.
Essentially the problems arising in semantic mapping are the same as in data mapping for data integration purposes, with the difference that here the semantic relationships are made explicit through the use of semantic nets or ontologies which play the role of data dictionaries in data mapping.
Structure
A semantic mapper must have access to three data sets:
List of data elements in source namespace
List of data elements in destination namespace
List of semantic equivalent statements between source and destination (e.g. owl:equivalentClass, owl:equivalentProperty or owl:sameAs in OWL).
A semantic mapper operates on a list of data elements in the source namespace. It successively translates the data elements from the source namespace to the destination namespace. The mapping does not necessarily need to be one-to-one; some data elements may map to several data elements in the destination.
Some semantic mappers are static in that they perform a one-time data transform. Others will generate an executable program to repeatedly perform this transform. The output of this program may be any transformation system such as XSLT, a Java program, or a program in some other procedural language.
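A minimal Python sketch of a static semantic mapper (all names are illustrative); the equivalence statements between the two namespaces are modelled here as a plain dictionary standing in for owl:equivalentProperty or owl:sameAs assertions.

```python
EQUIVALENCES = {
    "src:familyName": ["dst:surname"],
    "src:givenName":  ["dst:firstName"],
    "src:birthDate":  ["dst:dateOfBirth"],
}

def map_record(record: dict) -> dict:
    """Translate the keys of a source-namespace record into the destination namespace.
    A source element may map to zero, one, or several destination elements."""
    out = {}
    for key, value in record.items():
        for target in EQUIVALENCES.get(key, []):
            out[target] = value
    return out

print(map_record({"src:familyName": "Curie", "src:givenName": "Marie"}))
# {'dst:surname': 'Curie', 'dst:firstName': 'Marie'}
```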
See also
Data model
Data wrangling
Enterprise application integration
Mediation
Ontology matching
Semantic heterogeneity
Semantic integration
Semantic translation
Semantic unification
References
Ontologies & the Semantic Web for Semantic Interoperability Leo Obrst
Semantic Web
Data mapping
|
https://en.wikipedia.org/wiki/King%20%26%20Balloon
|
King & Balloon is a fixed shooter arcade video game released by Namco in 1980 and licensed to GamePlan for U.S. manufacture and distribution. It runs on the Namco Galaxian hardware, based on the Z80 microprocessor, with an extra Zilog Z80 microprocessor to drive a DAC for speech; it was one of the first games to have speech synthesis. An MSX port was released in Japan in 1984.
Gameplay
The player controls two green men with an orange cannon, stationed on the parapet of a castle, that fires at a fleet of hot-air balloons. Below the cannon, the King moves slowly back and forth on the ground as the balloons return fire and dive toward him. If a balloon reaches the ground, it will sit there until the King walks into it, at which time it lifts off with him. The player must then shoot the balloon to free the King, who will parachute safely to the ground. At times, two or more diving balloons can combine to form a single larger one, which awards extra points and splits apart when hit.
The cannon is destroyed by collision with balloons or their shots, but is replaced after a brief delay with no effect on the number of remaining lives. One life is lost whenever a balloon carries the King off the top of the screen; the game ends when all lives are lost.
As in Galaxian, the round number stops increasing at round 48.
Voice
The King speaks when he is captured ("HELP!"), when he is rescued ("THANK YOU"), and when he is carried away ("BYE BYE!"). The balloons make the same droning sound as the aliens from Galaxian, released in the previous year, and the cannon's shots also make the same sound as those of the player's ship (the "Galaxip") from the very same game.
In the original Japanese version of the game, the King speaks English with a heavy Japanese accent, saying "herupu" ("help!"), "sankyū" ("thank you"), and "baibai" ("bye bye!"). The U.S. version of the game features a different voice for the King without the Japanese accent.
Legacy
King & Balloon was later featured in Namco Muse
|
https://en.wikipedia.org/wiki/Intussusceptive%20angiogenesis
|
Intussusceptive angiogenesis, also known as splitting angiogenesis, is a type of angiogenesis, the process whereby a new blood vessel is created. By intussusception a new blood vessel is created by splitting an existing blood vessel in two. Intussusception occurs in normal development as well as in pathologic conditions involving wound healing, tissue regeneration, inflammation such as colitis or myocarditis, lung fibrosis, and tumors, amongst others.
Intussusception was first observed in neonatal rats. In this type of vessel formation, the capillary wall extends into the lumen to split a single vessel in two. There are four phases of intussusceptive angiogenesis. First, the two opposing capillary walls establish a zone of contact. Second, the endothelial cell junctions are reorganized and the vessel bilayer is perforated to allow growth factors and cells to penetrate into the lumen. Third, a core is formed between the two new vessels at the zone of contact that is filled with pericytes and myofibroblasts. These cells begin laying collagen fibers into the core to provide an extracellular matrix for growth of the vessel lumen. Finally, the core is fleshed out with no alterations to the basic structure. Intussusception is important because it is a reorganization of existing cells. It allows a vast increase in the number of capillaries without a corresponding increase in the number of endothelial cells. This is especially important in embryonic development as there are not enough resources to create a rich microvasculature with new cells every time a new vessel develops.
A process called coalescent angiogenesis is considered the opposite of intussusceptive angiogenesis. During coalescent angiogenesis capillaries fuse and form larger vessels to increase blood flow and circulation. Several other modes of angiogenesis have been described, such as sprouting angiogenesis, vessel co-option and vessel elongation.
Research
In a small study comparing the lungs of patients wh
|
https://en.wikipedia.org/wiki/Personal%20Internet%20Communicator
|
The Personal Internet Communicator (PIC) is a consumer device released by AMD in 2004 to allow people in emerging countries access to the internet. Originally part of AMD's 50x15 Initiative, the PIC has been deployed by Internet service providers (ISPs) in several developing countries. It is based on an AMD Geode CPU and uses Microsoft Windows CE and Microsoft Internet Explorer 6.
The fanless PC runs the low-power Geode x86 processor and is equipped with 128 MB DDR memory and a 10 GB 3.5-inch hard drive. The device's price is $185 with a keyboard, mouse, and preinstalled software for basic personal computing and internet/email access and $249 with a monitor included.
Transfer to Data Evolution Corporation
AMD stopped work on the Personal Internet Communicator project after nearly two years of planning and development. In December 2006 AMD transferred PIC manufacturing assets to Data Evolution Corporation and the device was then marketed as the decTOP.
Local service providers, such as telecommunications firms and government-sponsored communications initiatives, brand, promote, and sell the PIC. AMD owns the design of the PIC, but the device is currently manufactured by Solectron, a high volume contract manufacturing specialist. Other companies integrated into the development include Seagate and Samsung. The ISPs include the Tata Group (India), CRC (México), Telmex (México), and Cable and Wireless (in the Caribbean).
Technology
Hardware
The device is designed for minimal cost like a consumer audio/video appliance, is not internally expandable, and comes equipped with a minimum set of interfaces which include a VGA graphics display interface, four USB 1.1 ports, a built-in 56 kbit/s ITU v.92 Fax/Modem and an AC'97 audio interface providing sound capabilities. There is also an IDE-connector and the case can house a normal 3.5" HDD.
Hardware specifications
AMD Geode GX 500@1.0W processor, 366 MHz clock rate
128 MBytes DDR RAM (Up to 512 MBytes)
VGA display inte
|
https://en.wikipedia.org/wiki/PelB%20leader%20sequence
|
The pelB leader sequence is a sequence of amino acids which, when attached to a protein, directs the protein to the bacterial periplasm, where the sequence is removed by a signal peptidase. Specifically, pelB refers to pectate lyase B of Erwinia carotovora CE. The leader sequence consists of the 22 N-terminal amino acid residues. This leader sequence can be attached to any other protein (on the DNA level) resulting in a transfer of such a fused protein to the periplasmic space of Gram-negative bacteria, such as Escherichia coli, often used in genetic engineering.
Protein secretion can increase the stability of cloned gene products. For instance, it was shown that the half-life of recombinant proinsulin is increased 10-fold when the protein is secreted to the periplasmic space (Vijji Narne, R. S. Ramya).
One of pelB's possible applications is to direct coat protein-antigen fusions to the cell surface for the construction of engineered bacteriophages for the purpose of phage display.
The Pectobacterium carotovorum pelB leader sequence commonly used in molecular biology has the sequence MKYLLPTAAAGLLLLAAQPAMA (UniProt Q04085).
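An illustrative Python snippet (not from the article; the passenger sequence is hypothetical) showing how the 22-residue leader is fused, at the sequence level, to the N-terminus of a protein of interest:

```python
PELB_LEADER = "MKYLLPTAAAGLLLLAAQPAMA"  # UniProt Q04085, 22 residues

def pelb_fusion(passenger: str) -> str:
    """Return the precursor sequence; in vivo, signal peptidase removes the
    leader after translocation to the periplasm."""
    return PELB_LEADER + passenger

precursor = pelb_fusion("MAEAKT")  # hypothetical passenger fragment
print(len(PELB_LEADER), precursor)
```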
References
SP Lei et al., Characterization of the Erwinia carotovora pelB gene and its product pectate lyase. J. Bacteriol. 169(9): 4379–4383 (1987).
Amino acids
Bacteria
Genetic engineering
|
https://en.wikipedia.org/wiki/Mongoose-V
|
The Mongoose-V 32-bit microprocessor for spacecraft onboard computer applications is a radiation-hardened and expanded 10–15 MHz version of the MIPS R3000 CPU. Mongoose-V was developed by Synova of Melbourne, Florida, USA, with support from the NASA Goddard Space Flight Center.
The Mongoose-V processor first flew on NASA's Earth Observing-1 (EO-1) satellite launched in November 2000 where it functioned as the main flight computer. A second Mongoose-V controlled the satellite's solid-state data recorder.
The Mongoose-V requires 5 volts and is packaged into a 256-pin ceramic quad flatpack (CQFP).
Examples of spacecraft that use the Mongoose-V include:
Earth Observing-1 (EO-1)
NASA's Microwave Anisotropy Probe (MAP), launched in June 2001, carried a Mongoose-V flight computer similar to that on EO-1.
NASA's Space Technology 5 series of microsatellites
CONTOUR
TIMED
Pluto probe New Horizons
See also
RAD750 Power PC
LEON
ERC32
Radiation hardening
Communications survivability
Faraday cage
Institute for Space and Defense Electronics, Vanderbilt University
Mars Reconnaissance Orbiter
MESSENGER Mercury probe
Mars rovers
TEMPEST
References
External links
Mongoose-V product page at Synova's website
Avionics computers
MIPS implementations
Radiation-hardened microprocessors
New Horizons
|
https://en.wikipedia.org/wiki/Saynoto0870.com
|
saynoto0870.com is a UK website with a directory of non-geographic telephone numbers and their geographic alternatives.
The website, which primarily started as a directory of cheaper alternatives to 0870 numbers (hence the name), also lists geographic-rate (01, 02 or 03) or free-to-caller (080) alternatives for 0843, 0844, 0845, 0871, 0872, and 0873 revenue-share numbers.
The vast majority of telephone numbers are submitted by visitors to the website, but the discussion board also offers a place for visitors to request alternative telephone numbers if they are not included in the database. Some companies that advertise a non-geographic number will also offer a number for calling from abroad – usually starting +44 1 or +44 2 – this number can be used within the UK (removing the +44 and replacing it with 0) to avoid the cost of calling non-geographic telephone numbers. Some companies will also offer a geographic alternative if asked.
Motivations behind the website
Organisations using 084, 087 and 09 non-geographic telephone numbers (except for 0870 numbers from 1 August 2009 to 1 July 2015) automatically impose a Service Charge on all callers. Calls to 01, 02 and 03 numbers, standard 07 mobile numbers and 080 numbers do not attract such an additional charge.
The revenue derived from the Service Charge is used by the call recipient to cover the call handling and call routing costs incurred at their end of the call. Any excess is paid out under a "revenue share" agreement or may be used as a contribution towards meeting other expenses such as a lease for switchboard equipment.
Non-geographic telephone numbers beginning 084 and 087 have increasingly been employed for inappropriate uses such as bookings, renewals, refunds, cancellations, customer services and complaints, and for contacting public services including essential health services.
Most landline providers offer 'inclusive' calls of up to one hour to standard telephone numbers (those beginning with 01, 02
|
https://en.wikipedia.org/wiki/FROSTBURG
|
FROSTBURG was a Connection Machine 5 (CM-5) massively parallel supercomputer used by the US National Security Agency (NSA) to perform higher-level mathematical calculations. The CM-5 was built by the Thinking Machines Corporation, based in Cambridge, Massachusetts, at a cost of US$25 million. The system was installed at NSA in 1991, and operated until 1997. It was the first massively parallel processing computer bought by NSA, originally containing 256 processing nodes. The system was upgraded in 1993 with another 256 nodes, for a total of 512 nodes. The system had a total of 500 billion 32-bit words (≈ 2 terabytes) of storage, 2.5 billion words (≈ 10 gigabytes) of memory, and could perform at a theoretical maximum 65.5 gigaFLOPS. The operating system CMost was based on Unix, but optimized for parallel processing.
FROSTBURG is now on display at the National Cryptologic Museum.
See also
HARVEST
Cryptanalytic computer
References
External links
Another photograph of FROSTBURG by Declan McCullagh
Cryptanalytic devices
Supercomputers
|
https://en.wikipedia.org/wiki/Salicylaldehyde
|
Salicylaldehyde (2-hydroxybenzaldehyde) is the organic compound with the formula C7H6O2, also written 2-(HO)C6H4CHO. Along with 3-hydroxybenzaldehyde and 4-hydroxybenzaldehyde, it is one of the three isomers of hydroxybenzaldehyde. This colorless oily liquid has a bitter almond odor at higher concentration. Salicylaldehyde is a key precursor to a variety of chelating agents, some of which are commercially important.
Production
Salicylaldehyde is prepared from phenol and chloroform by heating with sodium hydroxide or potassium hydroxide in a Reimer–Tiemann reaction.
Alternatively, it is produced by condensation of phenol or its derivatives with formaldehyde to give hydroxybenzyl alcohol, which is oxidized to the aldehyde.
Salicylaldehydes in general may be prepared by other ortho-selective formylation reactions from the corresponding phenol, for instance by the Duff reaction, Reimer–Tiemann reaction, or by treatment with paraformaldehyde in the presence of magnesium chloride and a base.
Natural occurrences
Salicylaldehyde was identified as a characteristic aroma component of buckwheat.
It is also one of the components of castoreum, the exudate from the castor sacs of the mature North American beaver (Castor canadensis) and the European beaver (Castor fiber), used in perfumery.
Furthermore, salicylaldehyde occurs in the larval defensive secretions of several leaf beetle species that belong to the subtribe Chrysomelina. An example of a leaf beetle species that produces salicylaldehyde is the red poplar leaf beetle Chrysomela populi.
Reactions and applications
Salicylaldehyde is used to make the following:
Oxidation with hydrogen peroxide gives catechol (1,2-dihydroxybenzene) (Dakin reaction).
Etherification with chloroacetic acid followed by cyclisation gives the heterocycle benzofuran (coumarone). The first step in this reaction to the substituted benzofuran is called the Rap–Stoermer condensation after E. Rap (1895) and R. Stoermer (1900).
Salicylaldehyde is co
|
https://en.wikipedia.org/wiki/Xbox%20Games%20Store
|
Xbox Games Store (formerly Xbox Live Marketplace) is a digital distribution platform used by Microsoft's Xbox 360 video game console. The service allows users to download or purchase video games (including both Xbox Live Arcade games and full Xbox 360 titles), add-ons for existing games, game demos along with other miscellaneous content such as gamer pictures and Dashboard themes.
The Xbox One also used the Xbox Games Store at its launch in 2013, but after five years it was replaced by the Microsoft Store as the standard digital storefront for all Windows 10 devices. The subsequent Xbox Series X/S consoles also use the Microsoft Store.
The service also previously offered sections for downloading video content, such as films and television episodes; as of late 2012, this functionality was superseded by Xbox Music and Xbox Video (now known as Groove Music and Microsoft Movies & TV respectively).
In August 2023, Microsoft announced that the Xbox 360 Store will be shutting down on July 29, 2024; however, backwards-compatible Xbox 360 titles will remain available for purchase on the Microsoft Store for Xbox One and Xbox Series X/S after the Xbox 360 Store's sunset date.
Services
Xbox Live Arcade
The Xbox Live Arcade (XBLA) branding encompasses digital-only games that can be purchased from Xbox Games Store on the Xbox 360, including ports of classic games and new original titles.
Games on Demand
The Games on Demand section of Xbox Games Store allows users to purchase downloadable versions of Xbox 360 titles, along with games released for the original Xbox. In addition, some delisted downloadable content for the respective Xbox 360 games is included with these versions.
Xbox Live Indie Games
As part of the "New Xbox Experience" update launched on November 19, 2008, Microsoft launched Xbox Live Community Games, and later renamed to "Xbox Live Indie Games", a service similar to Xbox Live Arcade (XBLA), with smaller and less expensive games created by independ
|
https://en.wikipedia.org/wiki/168%20%28number%29
|
168 (one hundred [and] sixty-eight) is the natural number following 167 and preceding 169.
In mathematics
168 is an even number, a composite number, an abundant number, and an idoneal number.
There are 168 primes less than 1000. 168 is the product of the first two perfect numbers.
168 is the order of the group PSL(2,7), the second smallest nonabelian simple group.
From Hurwitz's automorphisms theorem, 168 is the maximum possible number of automorphisms of a genus 3 Riemann surface, this maximum being achieved by the Klein quartic, whose symmetry group is PSL(2,7). The Fano plane has 168 symmetries.
168 is the largest known n such that 2^n does not contain all decimal digits.
168 is the fourth Dedekind number.
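A few of the statements above can be checked directly; a short Python sketch (not from the article):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(sum(is_prime(n) for n in range(1000)))  # 168 primes below 1000
print(6 * 28)                                 # 168, product of the first two perfect numbers
print(sorted(set(str(2 ** 168))))             # the digit '2' is absent from 2**168
```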
In astronomy
168P/Hergenrother is a periodic comet in the Solar System
168 Sibylla is a dark Main belt asteroid
In the military
was an Imperial Japanese Navy during World War II
is a United States Navy fleet tugboat
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War I
was a United States Navy during World War II
was a United States Navy steamer during World War II
was a La Salle-class transport during World War II
In movies
The 168 Film Project in Los Angeles
In transportation
New York City Subway stations
168th Street (New York City Subway); a subway station complex at 168th Street and Broadway consisting of:
168th Street (IRT Broadway – Seventh Avenue Line); serving the train
168th Street (IND Eighth Avenue Line); serving the trains
168th Street (BMT Jamaica Line); was the former terminal of the BMT Jamaica Line in Queens
British Rail Class 168
VASP Flight 168 from Rio de Janeiro, Brazil to Fortaleza; crashed on June 8, 1982
In other fields
168 is also:
The year AD 168 or 168 BC
168 AH is a year in the Islamic calendar that corresponds to 784 – 785 CE
The number of hours in a week, or 7 x
|
https://en.wikipedia.org/wiki/Code%20signing
|
Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity. Code signing was invented in 1995 by Michael Doyle, as part of the Eolas WebWish browser plug-in, which enabled the use of public-key cryptography to sign downloadable Web app program code using a secret key, so the plug-in code interpreter could then use the corresponding public key to authenticate the code before allowing it access to the code interpreter’s APIs.
Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.
The efficacy of code signing as an authentication mechanism for software depends on the security of underpinning signing keys. As with other public key infrastructure (PKI) technologies, the integrity of the system relies on publishers securing their private keys against unauthorized access. Keys stored in software on general-purpose computers are susceptible to compromise. Therefore, it is more secure, and best practice, to store keys in secure, tamper-proof, cryptographic hardware devices known as hardware security modules or HSMs.
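A minimal sketch of the sign-and-verify cycle that underlies code signing, using the third-party Python `cryptography` package (assumed to be installed); real code-signing systems additionally wrap the key pair in certificates, timestamps, and trust chains.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Publisher side: generate a key pair and sign the code's hash with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"print('hello world')"  # the script or executable being shipped
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# Consumer side: verify() recomputes the hash and checks it against the signature;
# it raises InvalidSignature if the code was altered after signing.
public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
print("signature valid")
```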
Providing security
Many code signing implementations will provide a way to sign the code using a system involving a pair of keys, one public and one private, similar to the process employed by TLS or SSH. For example, in the case of .NET, the developer uses a priv
|
https://en.wikipedia.org/wiki/SnRNP
|
snRNPs (pronounced "snurps"), or small nuclear ribonucleoproteins, are RNA-protein complexes that combine with unmodified pre-mRNA and various other proteins to form a spliceosome, a large RNA-protein molecular complex upon which splicing of pre-mRNA occurs. The action of snRNPs is essential to the removal of introns from pre-mRNA, a critical aspect of post-transcriptional modification of RNA, occurring only in the nucleus of eukaryotic cells.
Additionally, U7 snRNP is not involved in splicing at all, as U7 snRNP is responsible for processing the 3′ stem-loop of histone pre-mRNA.
The two essential components of snRNPs are protein molecules and RNA. The RNA found within each snRNP particle is known as small nuclear RNA, or snRNA, and is usually about 150 nucleotides in length. The snRNA component of the snRNP gives specificity to individual introns by "recognizing" the sequences of critical splicing signals at the 5' and 3' ends and branch site of introns. The snRNA in snRNPs is similar to ribosomal RNA in that it directly incorporates both an enzymatic and a structural role.
SnRNPs were discovered by Michael R. Lerner and Joan A. Steitz.
Thomas R. Cech and Sidney Altman also played a role in the discovery, winning the Nobel Prize for Chemistry in 1989 for their independent discoveries that RNA can act as a catalyst in cell development.
Types
At least five different kinds of snRNPs join the spliceosome to participate in splicing. They can be visualized by gel electrophoresis and are known individually as: U1, U2, U4, U5, and U6. Their snRNA components are known, respectively, as: U1 snRNA, U2 snRNA, U4 snRNA, U5 snRNA, and U6 snRNA.
In the mid-1990s, it was discovered that a variant class of snRNPs exists to help in the splicing of a class of introns found only in metazoans, with highly conserved 5' splice sites and branch sites. This variant class of snRNPs includes: U11 snRNA, U12 snRNA, U4atac snRNA, and U6atac snRNA. While different, they perform the same fu
|
https://en.wikipedia.org/wiki/Cancelbot
|
A cancelbot is an automated or semi-automated process for sending out third-party cancel messages over Usenet, commonly as a stopgap measure to combat spam.
History
One of the earliest uses of a cancelbot was by microbiology professor Richard DePew, to remove anonymous postings in science newsgroups. Perhaps the most well known early cancelbot was used in June 1994 by Arnt Gulbrandsen within minutes of the first post of Canter & Siegel's second spam wave, as it was created in response to their "Green Card spam" in April 1994. Usenet spammers have alleged that cancelbots are a tool of the mythical Usenet cabal.
Rationale
Cancelbots must follow community consensus to serve a useful purpose. Historically, technical criteria have been the only acceptable criteria for determining whether messages are cancelable, and only a few active cancellers have ever obtained the broad community support needed to be effective.
Pseudosites are referenced in cancel headers by legitimate cancelbots to identify the criteria on which a message is being canceled, allowing administrators of Usenet sites to determine, via standard "aliasing" mechanisms, which criteria they will accept third-party cancels for.
Currently, the generally accepted criteria (and associated pseudosites) are:
By general convention, special values are given in X-Canceled-By, Message-Id and Path headers when performing third-party cancels. This allows administrators to decide which reasons for third-party cancellation are acceptable for their site:
The $alz convention states that the Message-Id: header used for a third-party cancel should always be the original Message-Id: with "cancel." prepended.
The X-Canceled-By: convention states that the operator of a cancelbot should provide a consistent, valid, and actively monitored contact email address for their cancelbot in the X-Canceled-By: header, both to identify the canceler, and to provide a point of contact in case something goes wrong or questions
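A rough Python sketch (header values are hypothetical) of how a cancelbot following the conventions above might assemble the headers of a third-party cancel message:

```python
def build_cancel_headers(original_message_id: str, canceler_email: str) -> dict:
    # $alz convention: the cancel's Message-ID is the original with "cancel." prepended
    cancel_id = "<cancel." + original_message_id.lstrip("<")
    return {
        "Control": "cancel " + original_message_id,   # standard Usenet cancel control header
        "Message-ID": cancel_id,
        "X-Canceled-By": canceler_email,              # monitored contact address for the cancelbot
        # a pseudosite naming the cancellation criterion would also be appended to the Path header
    }

print(build_cancel_headers("<abc123@news.example.net>", "cancels@example.org"))
```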
|
https://en.wikipedia.org/wiki/Flame%20polishing
|
Flame polishing, also known as fire polishing, is a method of polishing a material, usually glass or thermoplastics, by exposing it to a flame or heat. When the surface of the material briefly melts, surface tension smooths the surface. Operator skill is critical with this method. When done properly, flame polishing of plastic produces the clearest finish, especially when polishing acrylic. This method is most applicable to flat external surfaces. Flame polishing is frequently used in acrylic plastic fabrication because of its high speed compared to abrasive methods. In this application, an oxyhydrogen torch is typically used, one reason being that the flame chemistry is unlikely to contaminate the plastic.
Flame polishing is essential to creation of the glass pipettes used for the patch clamp technique of voltage clamping.
Equipment
Various machines and torches/gas burners are used in the flame polishing process. Depending on the heating requirements for an intended application, different kinds of gases are used including but not limited to: natural gas, propane and oxygen, oxygen and hydrogen. A specially designed machine called the hydro flame is commonly used in flame polishing. The hydro flame is a gas-powered generator that uses distilled water and electricity to create oxygen and hydrogen for its flame. The size, shape, and chemistry of the flames used in fire polishing can vary widely based on the type and shape of the material being polished.
See also
Fire hardening, also known as "fire polishing", a primitive process for hardening wood
References
Glass production
Plastics industry
|
https://en.wikipedia.org/wiki/WinFiles
|
WinFiles, formerly Windows95.com, was an Internet download directory website. It was founded by Steve Jenkins in 1994.
CNET buyout
On February 24, 1999, CNET agreed to pay WinFiles owner Jenesys LLC US$5.75 million and made an additional $5.75 million payment 18 months after the closing of the deal - totaling $11.5 million.
References
External links
WinFiles.com
Classic WinFiles.com
WinFiles.com.ru
Internet properties established in 1994
Former CBS Interactive websites
File hosting
Download websites
|
https://en.wikipedia.org/wiki/%CE%91-Pinene
|
α-Pinene is an organic compound of the terpene class. It is one of the two isomers of pinene, the other being β-pinene. An alkene, it contains a reactive four-membered ring. It is found in the oils of many species of coniferous trees, notably the pine. It is also found in the essential oil of rosemary (Rosmarinus officinalis) and Satureja myrtifolia (also known as Zoufa in some regions). Both enantiomers are known in nature; (1S,5S)- or (−)-α-pinene is more common in European pines, whereas the (1R,5R)- or (+)-α-isomer is more common in North America. The enantiomers' racemic mixture is present in some oils such as eucalyptus oil and orange peel oil.
Reactivity
Commercially important derivatives of α-pinene are linalool, geraniol, nerol, α-terpineol, and camphene.
α-Pinene 1 is reactive owing to the presence of the four-membered ring adjacent to the alkene. The compound is prone to skeletal rearrangements such as the Wagner–Meerwein rearrangement. Acids typically lead to rearranged products. With concentrated sulfuric acid and ethanol the major products are terpineol 2 and its ethyl ether 3, while glacial acetic acid gives the corresponding acetate 4. With dilute acids, terpin hydrate 5 becomes the major product.
With one molar equivalent of anhydrous HCl, the simple addition product 6a can be formed at low temperature in the presence of diethyl ether, but it is very unstable. At normal temperatures, or if no ether is present, the major product is bornyl chloride 6b, along with a small amount of fenchyl chloride 6c. For many years 6b (also called "artificial camphor") was referred to as "pinene hydrochloride", until it was confirmed as identical with bornyl chloride made from camphene. If more HCl is used, achiral 7 (dipentene hydrochloride) is the major product along with some 6b. Nitrosyl chloride followed by base leads to the oxime 8 which can be reduced to "pinylamine" 9. Both 8 and 9 are stable compounds containing an intact four-membered ring, a
|
https://en.wikipedia.org/wiki/DELTA%20%28taxonomy%29
|
DELTA (DEscription Language for TAxonomy) is a data format used in taxonomy for recording descriptions of living things. It is designed for computer processing, allowing the generation of identification keys, diagnosis, etc.
It is widely accepted as a standard and many programs using this format are available for various taxonomic tasks.
It was devised at the CSIRO Division of Entomology, Australia, between 1971 and 2000, with a notable part taken by Dr. Michael J. Dallwitz. More recently, the Atlas of Living Australia (ALA) rewrote the DELTA software in Java so it can run in a Java environment and across multiple operating systems. The software package can now be found at and downloaded from the ALA site.
DELTA System
The DELTA System is a group of integrated programs that are built on the DELTA format. The main program is the DELTA Editor, which provides an interface for creating a matrix of characters for any number of taxa. A whole suite of programs can be found and run from within the DELTA Editor, which allow for the output of an interactive identification key, called Intkey. Other powerful features include the output of natural language descriptions, full diagnoses, and differences among taxa.
References
External links
DELTA for beginners. An introduction into the taxonomy software package DELTA
Taxonomy (biology)
|
https://en.wikipedia.org/wiki/The%20World%20%28Internet%20service%20provider%29
|
The World is an Internet service provider originally headquartered in Brookline, Massachusetts. It was the first commercial ISP in the world that provided a direct connection to the internet, with its first customer logging on in November 1989.
Controversy
Many government and university installations blocked, threatened to block, or attempted to shut down The World's Internet connection until Software Tool & Die was eventually granted permission by the National Science Foundation to provide public Internet access on "an experimental basis."
Domain name history
The World is operated by Software Tool & Die. The site and services were initially hosted solely under the domain name world.std.com which continues to function to this day.
Sometime in or before 1994, the domain name world.com had been purchased by Software Tool & Die and used as The World's primary domain name. In 2000, STD gave up ownership of world.com and is no longer associated with it.
In 1999, STD obtained the domain name theworld.com, promoting the PascalCase version TheWorld.com as the primary domain name of The World.
Services
The World still offers text-based dial-up and PPP dial-up, with over 9000 telephone access numbers throughout Canada, the United States, and Puerto Rico. Other features include shell access, with many historically standard shell features and utilities still offered. Additional user services include Usenet feed, personal web space, mailing lists, and email aliases. As of 2012, there were approximately 1750 active users.
More recent features include domain name hosting and complete website hosting.
Community
The World offers a community Usenet hierarchy, wstd.*, which is accessible only to users of The World. There are over 60 newsgroups in this hierarchy. The World users may send each other Memos (password protected messaging) and access a list of all personal customer websites.
Much of The World's website and associated functionality was designed and built by James
|
https://en.wikipedia.org/wiki/Software%20design%20description
|
A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design’s stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outline all parts of the software and how they will work.
Composition
The SDD usually contains the following information:
The data design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
The architecture design uses information flowing characteristics, and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing and output along three separate modules.
The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
The procedural design describes structured programming concepts using graphical, tabular and textual notations.
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
IEEE 1016
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an
|
https://en.wikipedia.org/wiki/Stacking-fault%20energy
|
The stacking-fault energy (SFE) is a materials property on a very small scale. It is noted as γSFE in units of energy per area.
A stacking fault is an interruption of the normal stacking sequence of atomic planes in a close-packed crystal structure. These interruptions carry a certain stacking-fault energy. The width of a stacking fault is a consequence of the balance between the repulsive force between two partial dislocations on one hand and the attractive force due to the surface tension of the stacking fault on the other hand. The equilibrium width is thus partially determined by the stacking-fault energy. When the SFE is high the dissociation of a full dislocation into two partials is energetically unfavorable, and the material can deform either by dislocation glide or cross-slip. Lower SFE materials display wider stacking faults and have greater difficulty cross-slipping.
The SFE modifies the ability of a dislocation in a crystal to glide onto an intersecting slip plane. When the SFE is low, the mobility of dislocations in a material decreases.
Stacking faults and stacking fault energy
A stacking fault is an irregularity in the planar stacking sequence of atoms in a crystal – in FCC metals the normal stacking sequence is ABCABC etc., but if a stacking fault is introduced it may introduce an irregularity such as ABCBCABC into the normal stacking sequence. These irregularities carry a certain energy which is called stacking-fault energy.
Influences on stacking fault energy
Stacking fault energy is heavily influenced by a few major factors, specifically base metal, alloying metals, percent of alloy metals, and valence-electron to atom ratio.
Alloying elements effects on SFE
It has long been established that the addition of alloying elements significantly lowers the SFE of most metals. Which element and how much is added dramatically affects the SFE of a material. The figures on the right show how the SFE of copper lowers with the addition of two different all
|
https://en.wikipedia.org/wiki/Type%20enforcement
|
The concept of type enforcement (TE), in the field of information technology, is an access control mechanism for regulating access in computer systems. Implementing TE gives priority to mandatory access control (MAC) over discretionary access control (DAC). Access clearance is first given to a subject (e.g. a process) accessing objects (e.g. files, records, messages) based on rules defined in an attached security context. A security context in a domain is defined by a domain security policy. In the Linux Security Modules (LSM) framework used by SELinux, the security context is an extended attribute. Type enforcement implementation is a prerequisite for MAC, and a first step before multilevel security (MLS) or its replacement, multi categories security (MCS). It is a complement of role-based access control (RBAC).
Control
Type enforcement implies fine-grained control over the operating system, not only to control process execution, but also domain transitions and the authorization scheme. This is why it is best implemented as a kernel module, as is the case with SELinux. Using type enforcement is a way to implement the FLASK architecture.
Access
Using type enforcement, users may (as in Microsoft Active Directory) or may not (as in SELinux) be associated with a Kerberos realm, although the original type enforcement model implies so. It is always necessary to define a TE access matrix containing rules about clearance granted to a given security context, or subject's rights over objects according to an authorization scheme.
Security
Practically, type enforcement evaluates a set of rules from the source security context of a subject, against a set of rules from the target security context of the object. A clearance decision occurs depending on the TE access description (matrix). Then, DAC or other access control mechanisms (MLS / MCS, ...) apply.
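A toy Python model (context and permission names are purely illustrative) of evaluating a TE access matrix: clearance is granted only when an explicit rule allows the source security context to perform the requested operation on the target context and object class, so the default is denial.

```python
ACCESS_MATRIX = {
    ("web_server_t", "web_content_t", "file"): {"read", "getattr"},
    ("web_server_t", "web_log_t", "file"): {"append"},
}

def te_allows(source_ctx: str, target_ctx: str, obj_class: str, perm: str) -> bool:
    return perm in ACCESS_MATRIX.get((source_ctx, target_ctx, obj_class), set())

print(te_allows("web_server_t", "web_content_t", "file", "read"))  # True: rule matches
print(te_allows("web_server_t", "shadow_t", "file", "read"))       # False: no rule, denied
```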
History
Type enforcement was introduced in the Secure Ada Target architecture in the late 1980s with a full implementation de
|
https://en.wikipedia.org/wiki/Regular%20grid
|
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks).
Its opposite is irregular grid.
Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences, structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods.
Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates (i·dx, j·dy) in 2D or (i·dx, j·dy, k·dz) in 3D for some real numbers dx, dy, and dz representing the grid spacing.
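For instance, a minimal Python helper (illustrative only) that maps grid indices to vertex coordinates for a 3D regular grid:

```python
def vertex_coordinates(i, j, k, dx, dy, dz):
    """Coordinates of the vertex at integer index (i, j, k), origin at (0, 0, 0)."""
    return (i * dx, j * dy, k * dz)

print(vertex_coordinates(2, 3, 1, dx=0.5, dy=0.5, dz=1.0))  # (1.0, 1.5, 1.0)
```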
Related grids
A Cartesian grid is a special case where the elements are unit squares or unit cubes, and the vertices are points on the integer lattice.
A rectilinear grid is a tessellation by rectangles or rectangular cuboids (also known as rectangular parallelepipeds) that are not, in general, all congruent to each other. The cells may still be indexed by integers as above, but the mapping from indexes to vertex coordinates is less uniform than in a regular grid. An example of a rectilinear grid that is not regular appears on logarithmic scale graph paper.
A skewed grid is a tessellation of parallelograms or parallelepipeds. (If the unit lengths are all equal, it is a tessellation of rhombi or rhombohedra.)
A curvilinear grid or structured grid is a grid with the same combinatorial structure as a regular grid, in which the cells are quadrilaterals or [general] cuboids, rather than rectangles or rectangular cuboids.
See also
References
Tessellation
Lattice points
Mesh generation
|
https://en.wikipedia.org/wiki/Unstructured%20grid
|
An unstructured grid or irregular grid is a tessellation of a part of the Euclidean plane or Euclidean space by simple shapes, such as triangles or tetrahedra, in an irregular pattern. Grids of this type may be used in finite element analysis when the input to be analyzed has an irregular shape.
Unlike structured grids, unstructured grids require a list of the connectivity which specifies the way a given set of vertices make up individual elements (see graph (data structure)).
Ruppert's algorithm is often used to convert an irregularly shaped polygon into an unstructured grid of triangles.
In addition to triangles and tetrahedra, other commonly used elements in finite element simulation include quadrilateral (4-noded) and hexahedral (8-noded) elements in 2D and 3D, respectively. One of the most commonly used algorithms to generate unstructured quadrilateral grids is "Paving". However, there is no comparably common algorithm for generating unstructured hexahedral grids on a general 3D solid model. "Plastering" is a 3D version of Paving, but it has difficulty forming hexahedral elements at the interior of a solid.
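A minimal Python example (coordinates and indices are illustrative) of the vertex list plus connectivity list that an unstructured triangular grid requires:

```python
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.4, 0.6)]
# connectivity: each element names the vertices (by index) that make it up
triangles = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]

for tri in triangles:
    print(tri, [vertices[v] for v in tri])
```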
See also
External links
Mesh generation
|
https://en.wikipedia.org/wiki/BLEU
|
BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. Invented at IBM in 2001, BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.
Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Intelligibility or grammatical correctness are not taken into account.
BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.
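A simplified, self-contained Python sketch of the idea (unigram and bigram precision with one reference, no smoothing); the standard metric uses clipped n-gram precisions up to n = 4, multiple references, and the brevity penalty shown here.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())  # clipped counts
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity_penalty = math.exp(min(0.0, 1 - len(ref) / len(cand)))  # penalize short candidates
    return brevity_penalty * geo_mean

print(bleu("the cat sat on the mat", "the cat is on the mat"))  # about 0.71
```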
Mathematical definition
Basic setup
A basic, first attempt at defining the BLEU score would take two arguments: a candidate string ŷ and a list of reference strings (y^(1), …, y^(N)). The idea is that BLEU(ŷ; y^(1), …, y^(N)) should be close to 1 when ŷ is similar to the references, and close to 0 if not.
As an analogy, the BLEU score is like a language teacher trying to score the quality of a student translation ŷ by checking how closely it follows the reference answers y^(1), …, y^(N).
Since in natural language processing, one should evaluate a large set of candidate strings, one must generalize the BLEU score to the case where one has
|
https://en.wikipedia.org/wiki/Rate%20function
|
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. Such functions are used to formulate the large deviation principle. A large deviation principle quantifies the asymptotic probability of rare events for a sequence of probability measures.
A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.
Definitions
Rate function. An extended real-valued function I : X → [0, +∞] defined on a Hausdorff topological space X is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets
{ x ∈ X | I(x) ≤ c }, for c ≥ 0,
are closed in X.
If, furthermore, they are compact, then I is said to be a good rate function.
A family of probability measures (μδ)δ > 0 on X is said to satisfy the large deviation principle with rate function I : X → [0, +∞) (and rate 1 ⁄ δ) if, for every closed set F ⊆ X and every open set G ⊆ X,
limsup_{δ↓0} δ log μδ(F) ≤ −inf_{x ∈ F} I(x),   (U)
liminf_{δ↓0} δ log μδ(G) ≥ −inf_{x ∈ G} I(x).   (L)
If the upper bound (U) holds only for compact (instead of closed) sets F, then (μδ)δ>0 is said to satisfy the weak large deviations principle (with rate 1 ⁄ δ and weak rate function I).
Remarks
The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that (μδ)δ > 0 is said to converge weakly to μ if, for every closed set F ⊆ X and every open set G ⊆ X,
limsup_{δ↓0} μδ(F) ≤ μ(F) and liminf_{δ↓0} μδ(G) ≥ μ(G).
There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function" where this article uses "rate function". Regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak.
Properties
Uniqueness
A natural question to ask, given the somewhat abstract setting of the general framework above, i
|
https://en.wikipedia.org/wiki/Large%20deviations%20theory
|
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures.
Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events.
Introductory examples
An elementary example
Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by where we encode head as 1 and tail as 0. Now let denote the mean value after trials, namely
Then lies between 0 and 1. From the law of large numbers it follows that as N grows, the distribution of converges to (the expected value of a single coin toss).
Moreover, by the central limit theorem, it follows that is approximately normally distributed for large The central limit theorem can provide more detailed information about the behavior of than the law of large numbers. For example, we can approximately find a tail probability of that is greater than for a fixed value of However, the approximation by the central limit theorem may not be accurate if is far from unless is sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as However, the large deviation theory can provide answers for such problems.
Let us make this statement more precise. For a given value 1/2 < x < 1, let us compute the tail probability P(M_N > x). Define
I(x) = x ln x + (1 − x) ln(1 − x) + ln 2.
Note that the function I(x) is a convex, nonnegative function that is zero at x = 1/2 and increases as x approaches 0 or 1.
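The following numerical sketch (an illustration, not part of the article; the function names are placeholders) compares the exact binomial tail probability P(M_N > x) for the fair-coin example with the large-deviation decay exp(−N I(x)), showing that −log P(M_N > x) / N approaches I(x) as N grows:

```python
import math

def rate_function(x):
    # I(x) = x ln x + (1 - x) ln(1 - x) + ln 2 for a fair coin, 0 < x < 1
    return x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2)

def exact_tail(N, x):
    # P(M_N > x) = P(S_N > N x), where S_N ~ Binomial(N, 1/2)
    k_min = math.floor(N * x) + 1
    return sum(math.comb(N, k) for k in range(k_min, N + 1)) / 2 ** N

x = 0.6
for N in (10, 100, 1000):
    tail = exact_tail(N, x)
    print(N, tail, math.exp(-N * rate_function(x)), -math.log(tail) / N)
```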
|
https://en.wikipedia.org/wiki/EMBO%20Reports
|
EMBO Reports is a peer-reviewed scientific journal covering research related to biology at a molecular level. It publishes primary research papers, reviews, essays, and opinion pieces. It also features commentaries on the social impact of advances in the life sciences and the converse influence of society on science. A sister journal to The EMBO Journal, EMBO Reports was established in 2000 and was published on behalf of the European Molecular Biology Organization by Nature Publishing Group from 2003. It is now published by EMBO Press.
External links
Molecular biology
Molecular and cellular biology journals
Monthly journals
English-language journals
Academic journals established in 2000
European Molecular Biology Organization academic journals
|
https://en.wikipedia.org/wiki/Normalized%20difference%20vegetation%20index
|
The normalized difference vegetation index (NDVI) is a widely-used metric for quantifying the health and density of vegetation using sensor data. It is calculated from spectrometric data at two specific bands: red and near-infrared. The spectrometric data is usually sourced from remote sensors, such as satellites.
The metric is popular because it correlates strongly with the true state of vegetation on the ground and is easy to interpret: NDVI is a value between -1 and 1. An area with nothing growing in it will have an NDVI of zero. NDVI will increase in proportion to vegetation growth. An area with dense, healthy vegetation will have an NDVI of one. NDVI values less than 0 suggest a lack of dry land; an ocean will yield an NDVI of -1.
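A short computational sketch (illustrative only; the array values are made up) of how NDVI is typically derived from red and near-infrared reflectances using the standard definition NDVI = (NIR − Red) / (NIR + Red):

```python
import numpy as np

def ndvi(red, nir):
    """Compute NDVI = (NIR - Red) / (NIR + Red) for each pixel."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    # Avoid division by zero where both bands are zero.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Toy 2x2 scene: bare soil, dense vegetation, open water, sparse vegetation
red = np.array([[0.20, 0.05], [0.10, 0.15]])
nir = np.array([[0.22, 0.60], [0.02, 0.25]])
print(ndvi(red, nir))
```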
Brief history
The exploration of outer space started in earnest with the launch of Sputnik 1 by the Soviet Union on 4 October 1957. This was the first man-made satellite orbiting the Earth. Subsequent successful launches, both in the Soviet Union (e.g., the Sputnik and Cosmos programs), and in the U.S. (e.g., the Explorer program), quickly led to the design and operation of dedicated meteorological satellites. These are orbiting platforms embarking instruments specially designed to observe the Earth's atmosphere and surface with a view to improve weather forecasting. Starting in 1960, the TIROS series of satellites embarked television cameras and radiometers. This was later (1964 onwards) followed by the Nimbus satellites and the family of Advanced Very High Resolution Radiometer instruments on board the National Oceanic and Atmospheric Administration (NOAA) platforms. The latter measures the reflectance of the planet in red and near-infrared bands, as well as in the thermal infrared. In parallel, NASA developed the Earth Resources Technology Satellite (ERTS), which became the precursor to the Landsat program. These early sensors had minimal spectral resolution, but tended to include ba
|
https://en.wikipedia.org/wiki/Bracket%20polynomial
|
In the mathematical field of knot theory, the bracket polynomial (also known as the Kauffman bracket) is a polynomial invariant of framed links. Although it is not an invariant of knots or links (as it is not invariant under type I Reidemeister moves), a suitably "normalized" version yields the famous knot invariant called the Jones polynomial. The bracket polynomial plays an important role in unifying the Jones polynomial with other quantum invariants. In particular, Kauffman's interpretation of the Jones polynomial allows generalization to invariants of 3-manifolds.
The bracket polynomial was discovered by Louis Kauffman in 1987.
Definition
The bracket polynomial of any (unoriented) link diagram D, denoted ⟨D⟩, is a polynomial in the variable A, characterized by the three rules:
⟨O⟩ = 1, where O is the standard diagram of the unknot
The pictures in the second rule represent brackets of the link diagrams which differ inside a disc as shown but are identical outside. The third rule means that adding a circle disjoint from the rest of the diagram multiplies the bracket of the remaining diagram by −A² − A⁻².
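The second and third rules are given by pictures in the article; in conventional algebraic notation (a standard rendering, with the placeholder labels L×, L0 and L∞ for a crossing and its two smoothings, which are assumptions rather than the article's own symbols) the three rules read:

```latex
\langle \bigcirc \rangle = 1, \qquad
\langle L_{\times} \rangle = A \,\langle L_{0} \rangle + A^{-1} \,\langle L_{\infty} \rangle, \qquad
\langle D \sqcup \bigcirc \rangle = \left(-A^{2} - A^{-2}\right) \langle D \rangle .
```

Here L× denotes a diagram containing a chosen crossing, L0 and L∞ the two diagrams obtained by smoothing that crossing, and D ⊔ ◯ a diagram D together with a disjoint circle.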
Further reading
Louis H. Kauffman, State models and the Jones polynomial. Topology 26 (1987), no. 3, 395--407. (introduces the bracket polynomial)
External links
Knot theory
Polynomials
|
https://en.wikipedia.org/wiki/Decimal%20degrees
|
Decimal degrees (DD) is a notation for expressing latitude and longitude geographic coordinates as decimal fractions of a degree. DD are used in many geographic information systems (GIS), web mapping applications such as OpenStreetMap, and GPS devices. Decimal degrees are an alternative to using sexagesimal degrees (degrees, minutes, and seconds - DMS notation). The values are bounded by ±90° for latitude and ±180° for longitude.
Positive latitudes are north of the equator, negative latitudes are south of the equator. Positive longitudes are east of the Prime Meridian; negative longitudes are west of the Prime Meridian. Latitude and longitude are usually expressed in that sequence, latitude before longitude. The abbreviation dLL has been used in the scientific literature, with locations in texts being identified as a tuple within square brackets, for example [54.5798,-3.5820]. The appropriate decimal places are used;<ref>W. B. Whalley, 2021. "Mapping small glaciers, rock glaciers and related features in an age of retreating glaciers: using decimal latitude-longitude locations and geomorphic information tensors", Geografia Fisica e Dinamica Quaternaria 44: 55-67. DOI 10.4461/GFDQ.2021.44.4</ref> negative values are given as hyphen-minus, Unicode 002D.
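A small conversion sketch (illustrative only; the function names are assumptions) for turning degrees-minutes-seconds into decimal degrees, with southern latitudes and western longitudes carrying a negative sign; the printed values reproduce the [54.5798,-3.5820] tuple quoted above:

```python
def dms_to_dd(degrees, minutes, seconds, direction):
    """Convert degrees, minutes, seconds and a compass direction to decimal degrees."""
    dd = degrees + minutes / 60 + seconds / 3600
    return -dd if direction in ("S", "W") else dd

def dd_to_dms(dd):
    """Convert decimal degrees back to (degrees, minutes, seconds); sign kept on degrees."""
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    d = int(dd)
    m = int((dd - d) * 60)
    s = (dd - d - m / 60) * 3600
    return sign * d, m, s

# 54° 34' 47.3" N, 3° 34' 55.2" W  ->  approximately [54.5798, -3.5820]
print(round(dms_to_dd(54, 34, 47.3, "N"), 4), round(dms_to_dd(3, 34, 55.2, "W"), 4))
```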
Precision
The radius of the semi-major axis of the Earth at the equator is 6,378,137.0 m, resulting in a circumference of 40,075.017 km. The equator is divided into 360 degrees of longitude, so each degree at the equator represents about 111.32 km. As one moves away from the equator towards a pole, however, one degree of longitude is multiplied by the cosine of the latitude, decreasing the distance, approaching zero at the pole. The number of decimal places required for a particular precision at the equator is:
A value in decimal degrees to a precision of 4 decimal places is precise to about 11.1 m at the equator. A value in decimal degrees to 5 decimal places is precise to about 1.11 m at the equator. Elevation also introduces a small error: at elevation, the
|
https://en.wikipedia.org/wiki/Segmental%20analysis%20%28biology%29
|
Segmental analysis is a method of anatomical analysis for describing the connective morphology of the human body. Instead of describing anatomy in terms of spatial relativity, as in the anatomical position method, segmental analysis describes anatomy in terms of which organs, tissues, etc. connect to each other, and the characteristics of those connections.
Literature
Anderson RH, Becker AE, Freedom RM, et al. Sequential segmental analysis of congenital heart disease. Pediatric Cardiology 1984;5(4):281-7.
Anatomy
|
https://en.wikipedia.org/wiki/FRISK%20Software%20International
|
FRISK Software International (established in 1993) was an Icelandic software company that developed the F-Prot antivirus software and the F-Prot AVES antivirus and anti-spam service. It was acquired by Cyren in 2012.
History
The company was founded in 1993. Its name is derived from the initial letters of the personal name and patronymic of Friðrik Skúlason, its founder. Dr. Vesselin Vladimirov Bontchev, a computer expert from Bulgaria, best known for his research on the Dark Avenger virus, worked for the company as an anti-virus researcher.
F-Prot Antivirus was first released in 1989, making it one of the longest-lived anti-virus brands on the market. It was the world's first anti-virus product with a heuristic engine. It is sold in both home and corporate packages, of which there are editions for Windows and Linux. There are corporate versions for Microsoft Exchange, Solaris, and certain IBM eServers. The Linux version is available to home users free of charge, with virus definition updates. Free 30-day trial versions for other platforms can be downloaded. F-Prot AVES is specifically targeted towards corporate users.
The company has also produced a genealogy program called Espólín and Púki, a spellchecker with additional features.
In the summer of 2012, FRISK was acquired by Cyren, an Israeli-American provider of security products.
F-Prot Antivirus
F-Prot Antivirus (stylized F-PROT) is an antivirus product developed by FRISK Software International. It is available in related versions for several platforms. It is available for Microsoft Windows, Microsoft Exchange Server, Linux, Solaris, AIX and IBM eServers.
FRISK Software International allows others to develop applications using their scanning engine, through the use of a SDK. Many software vendors use the F-Prot Antivirus engine, including SUSE.
F-Prot Antivirus reached end-of-life on July 31, 2021, and is no longer sold.
Friðrik Skúlason
Friðrik Skúlason, also sometimes known as "Frisk", is the founder of FR
|
https://en.wikipedia.org/wiki/Basic%20Interoperable%20Scrambling%20System
|
Basic Interoperable Scrambling System, usually known as BISS, is a satellite signal scrambling system developed by the European Broadcasting Union and a consortium of hardware manufacturers.
Prior to its development, "ad hoc" or "occasional use" satellite news feeds were transmitted either using proprietary encryption methods (e.g. RAS, or PowerVu), or without any encryption. Unencrypted satellite feeds allowed anyone with the correct equipment to view the program material.
Proprietary encryption methods were determined by encoder manufacturers, and placed major compatibility limitations on the type of satellite receiver (IRD) that could be used for each feed. BISS was an attempt to create an "open platform" encryption system, which could be used across a range of manufacturers' equipment.
There are mainly two different types of BISS encryption used:
BISS-1 transmissions are protected by a 12-digit hexadecimal "session key" that is agreed by the transmitting and receiving parties prior to transmission. The key is entered into both the encoder and the decoder; it then forms part of the encryption of the digital TV signal, and any BISS-capable receiver with the correct key will decrypt the signal.
BISS-E (E for encrypted) is a variation in which the decoder stores one secret BISS key entered by, for example, a rights holder. This key is unknown to the user of the decoder. The user is then sent a 16-digit hexadecimal code, which is entered as a "session key". This session key is then mathematically combined internally with the stored key to calculate a BISS-1 key that can decrypt the signal.
Only a decoder with the correct secret BISS-key will be able to decrypt a BISS-E feed. This gives the rights holder control as to exactly which decoder can be used to decrypt/decode a specific feed. Any BISS-E encrypted feed will have a corresponding BISS-1 key that will unlock it.
BISS-E is amongst others used by EBU to protect UEFA Champions League, NBC in the United States for NBC O&O and Af
|
https://en.wikipedia.org/wiki/Factorization%20of%20polynomials
|
In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems.
The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems:
When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982)
Nowadays, modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.
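As an illustration of the base-field dependence discussed below and of factoring over a finite field (a sketch using the SymPy computer algebra library; not a method prescribed by the article):

```python
from sympy import symbols, factor

x = symbols('x')

# Over the rationals, x^4 - 1 splits into two linear factors and an irreducible quadratic.
print(factor(x**4 - 1))             # (x - 1)*(x + 1)*(x**2 + 1)

# Over the finite field GF(5), x^2 + 1 factors further, since -1 is a square modulo 5,
# so x^4 - 1 splits into four linear factors.
print(factor(x**4 - 1, modulus=5))
```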
Formulation of the question
Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants.
Factorization depends on the base field. For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots,
|
https://en.wikipedia.org/wiki/Apolysis
|
Apolysis (from Ancient Greek ἀπόλυσις, "discharge", literally "absolution") is the separation of the cuticle from the epidermis in arthropods and related groups (Ecdysozoa). Since the cuticle of these animals is also the skeletal support of the body and is inelastic, it is shed during growth and a new covering of larger dimensions is formed. During this process, an arthropod becomes dormant for a period of time. Enzymes are secreted to digest the inner layers of the existing cuticle, detaching the animal from the outer cuticle. This allows the new cuticle to develop without being exposed to the environmental elements.
After apolysis, ecdysis occurs. Ecdysis is the actual emergence of the arthropod into the environment and always occurs directly after apolysis. The newly emerged animal then hardens and continues its life.
References
Developmental biology
|
https://en.wikipedia.org/wiki/Mario%20Livio
|
Mario Livio (born June 19, 1945) is an Israeli-American astrophysicist and an author of works that popularize science and mathematics. For 24 years (1991–2015) he was an astrophysicist at the Space Telescope Science Institute, which operates the Hubble Space Telescope. He has published more than 400 scientific articles on topics including cosmology, supernova explosions, black holes, extrasolar planets, and the emergence of life in the universe. His book on the irrational number phi, The Golden Ratio: The Story of Phi, the World's Most Astonishing Number (2002), won the Peano Prize and the International Pythagoras Prize for popular books on mathematics.
Scientific career
Livio earned a Bachelor of Science degree in physics and mathematics at the Hebrew University of Jerusalem, a Master of Science degree in theoretical particle physics at the Weizmann Institute, and a Ph.D. in theoretical astrophysics at Tel Aviv University. He was a professor of physics at the Technion – Israel Institute of Technology from 1981 to 1991, before moving to the Space Telescope Science Institute.
Livio has focused much of his research on supernova explosions and their use in determining the rate of expansion of the universe. He has also studied so-called dark energy, black holes, and the formation of planetary systems around young stars. He has contributed to hundreds of papers in peer-reviewed journals on astrophysics. Among his prominent contributions, he has authored and co-authored important papers on topics related to accretion onto compact objects (white dwarfs, neutron stars, and black holes). In 1980, he published one of the very first multi-dimensional numerical simulations of the collapse of a massive star and a supernova explosion. He was one of the pioneers in the study of common envelope evolution of binary stars, and he applied the results to the shaping of planetary nebulae as well as to the progenitors of Type Ia supernovae. Together with D. Eichler, T. Piran, and D. S
|
https://en.wikipedia.org/wiki/Fleet%20Street%20Publisher
|
Fleet Street Publisher was an Atari ST desktop publishing program produced by Mirrorsoft in the United Kingdom and released in November 1986. An IBM PC-compatible version produced by Rowan Software was planned for 1987 but never released.
Running under GEM, the program offered features such as multi-column text, the ability to design flow charts and graphics, and multiple document sizes (flyers, menus, cards, etc.). Possible font sizes ranged from 4 to 216 points, with support for accented characters. Character and line spacing were fully controllable by the user. The software came with a 150-image clipart gallery.
The software was superseded by Timeworks Publisher (Publish-It in the United States), which the market regarded as a much better product. This new version was produced by GST Software Products, and upgrades for the PC versions were available into the late 2000s.
Versions
Fleet Street Publisher (1986, published by Spectrum Holobyte and France Image Logiciel)
Fleet Street Publisher 1.1 (1987, published by Mirrorsoft)
Fleet Street Publisher 2.0 (1989, published by MichTron)
Fleet Street Publisher 3.0 (1989, published by MichTron)
References
External links
Icwhen.com
Atari ST software
Desktop publishing software
|
https://en.wikipedia.org/wiki/Critical%20technical%20practice
|
Critical technical practice is a critical-theory-based approach to technological design proposed by Phil Agre, in which critical and cultural theories are brought to bear on the work of designers and engineers. One of the goals of critical technical practice is to increase awareness and critical reflection on the hidden assumptions, ideologies and values underlying technology design. It was developed by Agre in response to a perceived lack in Artificial Intelligence research in the late 20th century, and continued to influence critical AI research as of 2021.
References
See also
Critical design
Critical making
Critical theory
|
https://en.wikipedia.org/wiki/Disc%20mill
|
A disc mill is a type of crusher that can be used to grind, cut, shear, shred, fiberize, pulverize, granulate, crack, rub, curl, fluff, twist, hull, blend, or refine. It works in a similar manner to the ancient Buhrstone mill in that the feedstock is fed between opposing discs or plates. The discs may be grooved, serrated, or spiked.
Applications
Typical applications for a single-disc mill are all three stages of the wet milling of field corn, manufacture of peanut butter, processing nut shells, ammonium nitrate, urea, producing chemical slurries and recycled paper slurries, and grinding chromium metal.
Double-disc mills are typically used for alloy powders, aluminum chips, bark, barley, borax, brake lining scrap, brass chips, sodium hydroxide, chemical salts, coconut shells, copper powder, cork, cottonseed hulls, pharmaceuticals, feathers, hops, leather, oilseed cakes, phosphates, rice, rosin, sawdust, and seeds.
Disc mills are relatively expensive to run and maintain, consume much more power than other shredding machines, and are not used where ball mills or hammermills produce the desired results at a lower cost.
Mechanism
Substances are crushed between the edge of a thick, spinning disk and something else. Some mills cover the edge of the disk in blades to chop up incoming matter rather than crush it.
Industrial equipment
Grinding mills
|
https://en.wikipedia.org/wiki/PC-based%20IBM%20mainframe-compatible%20systems
|
Since the rise of the personal computer in the 1980s, IBM and other vendors have created PC-based IBM-compatible mainframes which are compatible with the larger IBM mainframe computers. For a period of time PC-based mainframe-compatible systems had a lower price and did not require as much electricity or floor space. However, they sacrificed performance and were not as dependable as mainframe-class hardware. These products have been popular with mainframe developers, in education and training settings, for very small companies with non-critical processing, and in certain disaster relief roles (such as field insurance adjustment systems for hurricane relief).
Background
Up until the mid-1990s, mainframes were very large machines that often occupied entire rooms. The rooms were often air conditioned and had special power arrangements to accommodate the three-phase electric power required by the machines. Modern mainframes are now physically comparatively small and require little or no special building arrangements.
System/370
IBM had demonstrated use of a mainframe instruction set in their first desktop computer—the IBM 5100, released in 1975. This product used microcode to execute many of the System/370's processor instructions, so that it could run a slightly modified version of IBM's APL mainframe program interpreter.
In 1980 rumors spread of a new IBM personal computer, perhaps a miniaturized version of the 370. In 1981 the IBM Personal Computer appeared, but it was not based on the System/370 architecture. However, IBM did use their new PC platform to create some exotic combinations with additional hardware that could execute S/370 instructions locally.
Personal Computer XT/370
In October 1983, IBM announced the IBM Personal Computer XT/370. This was essentially a three-in-one product. It could run PC DOS locally, it could also act as 3270 terminal, and finally—its most important distinguishing feature relative to an IBM 3270 PC—was that it could execute S/37
|
https://en.wikipedia.org/wiki/Hartley%20function
|
The Hartley function is a measure of uncertainty, introduced by Ralph Hartley in 1928. If a sample from a finite set A uniformly at random is picked, the information revealed after the outcome is known is given by the Hartley function
H0(A) := log_b(|A|),
where |A| denotes the cardinality of A and b is the base of the logarithm.
If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit). If it is the natural logarithm, then the unit is the nat. Hartley used a base-ten logarithm, and with this base, the unit of information is called the hartley (aka ban or dit) in his honor. It is also known as the Hartley entropy or max-entropy.
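A tiny illustrative sketch (not from the article) of the Hartley function for a uniform choice among n outcomes, in the three units determined by the base of the logarithm:

```python
import math

def hartley(n, base=2):
    """Hartley measure of a uniform choice among n equally likely outcomes: log_base(n)."""
    return math.log(n) / math.log(base)

n = 26  # e.g. one letter of the alphabet picked uniformly at random
print(hartley(n, 2))       # shannons (bits)
print(hartley(n, math.e))  # nats
print(hartley(n, 10))      # hartleys (bans)
```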
Hartley function, Shannon entropy, and Rényi entropy
The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy since it is the Rényi entropy of order zero:
H0(A) = (1/(1 − 0)) log Σ_{i=1}^{|A|} p_i^0 = log |A|.
But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423).
Characterization of the Hartley function
The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function mapping natural numbers to real numbers that satisfies
H(mn) = H(m) + H(n)   (additivity)
H(m) ≤ H(m + 1)   (monotonicity)
H(2) = 1   (normalization)
Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of uncertainties of A and B. Condition 2 says that a larger set has larger uncertainty.
Derivation of the Hartley function
We want to show that the Hartley function, log2(n), is the only function mapping natural numbers to real numbers that satisfies
f(mn) = f(m) + f(n)   (additivity)
f(m) ≤ f(m + 1)   (monotonicity)
f(2) = 1   (normalization)
Let f be a function on positive integers that satisfies the above three properties. From the additive proper
|
https://en.wikipedia.org/wiki/Digital%20rhetoric
|
Digital rhetoric can be generally defined as communication that exists in the digital sphere. As such, digital rhetoric can be expressed in many different forms, including text, images, videos, and software. Due to the increasingly mediated nature of our contemporary society, there are no longer clear distinctions between digital and non-digital environments. This has expanded the scope of digital rhetoric to account for the increased fluidity with which humans interact with technology.
The field of digital rhetoric has not yet become well-established. Digital rhetoric largely draws its theory and practices from the tradition of rhetoric as both an analytical tool and a production guide. As a whole, it can be structured as a type of meta-discipline.
Due to evolving study, digital rhetoric has held various meanings to different scholars over time. Similarly, digital rhetoric can take on a variety of meanings based on what is being analyzed—which depends on the concept, forms or objects of study, or rhetorical approach. Digital rhetoric can also be analyzed through the lenses of different social movements. This approach allows the reach of digital rhetoric to expand our understanding of its influence.
The term “digital rhetoric” is distinguished from the term “rhetoric”, whose meaning has long been debated among scholars. Only a few scholars, such as Elizabeth Losh and Ian Bogost, have taken the time to formulate an explicit definition of digital rhetoric. One of the most straightforward definitions of “digital rhetoric” is that it is the application of rhetorical theory (Eyman, 13).
Definition
Evolving definition of 'digital rhetoric'
The following subsections detail the evolving definition of 'digital rhetoric' as a term since its creation in 1989.
Early definitions (1989–2015)
The term was coined by rhetorician Richard A. Lanham in a lecture he delivered in 1989 and first formally put into words in his 1993 essay collection, The Electronic
|
https://en.wikipedia.org/wiki/Multi%20categories%20security
|
Multi categories security (MCS) is an access control method in Security-Enhanced Linux that uses categories attached to objects (files) and granted to subjects (processes, ...) at the operating system level. The implementation in Fedora Core 5 is advisory because there is nothing stopping a process from increasing its access. The eventual aim is to make MCS a hierarchical mandatory access control system. Currently, MCS controls access to files and to the ability to ptrace or kill processes. It has not yet been decided what level of control MCS should have over access to directories and other file system objects. It is still evolving.
MCS access controls are applied after the Domain-Type access controls and after regular DAC (Unix permissions). In the default policy of Fedora Core 5, it is possible to manage up to 256 categories (c0 to c255). It is possible to recompile the policy with a much larger number of categories if required.
As part of the Multi-Level Security (MLS) development work, applications such as the CUPS print server will understand the MLS sensitivity labels; CUPS will use them to control printing and to label the printed pages according to their sensitivity level. The MCS data is stored and manipulated in the same way as MLS data, therefore any program which is modified for MCS support will also be expected to support MLS. This will increase the number of applications supporting MLS and therefore make it easier to run MLS (which is one of the reasons for developing MCS).
Note that MCS is not a sub-set of MLS, the Bell–LaPadula model is not applied. If a process has a clearance that dominates the classification of a file then it gets both read and write access. For example in a commercial environment you might use categories to map to data from different departments. So you could have c0 for HR data and c1 for Financial data. If a user is running with categories c0 and c1 then they can read HR data and write it to a file labeled for Financial data. In a
|
https://en.wikipedia.org/wiki/Herbert%20Wilf
|
Herbert Saul Wilf (June 13, 1931 – January 7, 2012) was an American mathematician, specializing in combinatorics and graph theory. He was the Thomas A. Scott Professor of Mathematics in Combinatorial Analysis and Computing at the University of Pennsylvania. He wrote numerous books and research papers. Together with Neil Calkin he founded The Electronic Journal of Combinatorics in 1994 and was its editor-in-chief until 2001.
Biography
Wilf was the author of numerous papers and books, and was adviser and mentor to many students and colleagues. His collaborators include Doron Zeilberger and Donald Knuth. One of Wilf's former students is Richard Garfield, the creator of the collectible card game Magic: The Gathering. He also served as a thesis advisor for E. Roy Weintraub in the late 1960s.
Wilf died of a progressive neuromuscular disease in 2012.
Awards
In 1998, Wilf and Zeilberger received the Leroy P. Steele Prize for Seminal Contribution to Research for their joint paper, "Rational functions certify combinatorial identities" (Journal of the American Mathematical Society, 3 (1990) 147–158). The prize citation reads: "New mathematical ideas can have an impact on experts in a field, on people outside the field, and on how the field develops after the idea has been introduced. The remarkably simple idea of the work of Wilf and Zeilberger has already changed a part of mathematics for the experts, for the high-level users outside the area, and the area itself." Their work has been translated into computer packages that have simplified hypergeometric summation.
In 2002, Wilf was awarded the Euler Medal by the Institute of Combinatorics and its Applications.
Selected publications
1971: (editor with Frank Harary) Mathematical Aspects of Electrical Networks Analysis, SIAM-AMS Proceedings, Volume 3,American Mathematical Society
1998: (with N. J. Calkin) "The Number of Independent Sets in a Grid Graph", SIAM Journal on Discrete Mathematics
Books
A=B (
|
https://en.wikipedia.org/wiki/Luca%20Turin
|
Luca Turin (born 20 November 1953) is a biophysicist and writer with a long-standing interest in bioelectronics, the sense of smell, perfumery, and the fragrance industry.
Early life and education
Turin was born in Beirut, Lebanon on 20 November 1953 into an Italian-Argentinian family, and raised in France, Italy and Switzerland. His father, Duccio Turin, was a UN diplomat and chief architect of the Palestinian refugee camps, and his mother, Adela Turin (born Mandelli), is an art historian, designer, and award-winning children's author. Turin studied Physiology and Biophysics at University College London and earned his PhD in 1978. He worked at the CNRS from 1982-1992, and served as lecturer in Biophysics at University College London from 1992-2000.
Career
After leaving the CNRS, Turin first held a visiting research position at the National Institutes of Health in North Carolina before moving back to London, where he became a lecturer in biophysics at University College London. In 2001 Turin was hired as CTO of start-up company Flexitral, based in Chantilly, Virginia, to pursue rational odorant design based on his theories. In April 2010 he described this role in the past tense, and the company's domain name appears to have been surrendered.
In 2010, Turin was based at MIT, working on a project to develop an electronic nose using natural receptors, financed by DARPA. In 2014 he moved to the Institute of Theoretical Physics at the University of Ulm where he was a Visiting Professor. He is a Stavros Niarchos Researcher in the neurobiology division at the Biomedical Sciences Research Center Alexander Fleming in Greece. In 2021 he moved to the University of Buckingham, UK as Professor of Physiology in the Medical School.
Vibration theory of olfaction
A major prediction of Turin's vibration theory of olfaction is the isotope effect: that the normal and deuterated versions of a compound should smell different due to unique vibration frequencies, despite having the
|
https://en.wikipedia.org/wiki/Virus%20latency
|
Virus latency (or viral latency) is the ability of a pathogenic virus to lie dormant (latent) within a cell, denoted as the lysogenic part of the viral life cycle. A latent viral infection is a type of persistent viral infection which is distinguished from a chronic viral infection. Latency is the phase in certain viruses' life cycles in which, after initial infection, proliferation of virus particles ceases. However, the viral genome is not eradicated. The virus can reactivate and begin producing large amounts of viral progeny (the lytic part of the viral life cycle) without the host becoming reinfected by new outside virus, and stays within the host indefinitely.
Virus latency is not to be confused with clinical latency during the incubation period when a virus is not dormant.
Mechanisms
Episomal latency
Episomal latency refers to the use of genetic episomes during latency. In this latency type, viral genes are stabilized, floating in the cytoplasm or nucleus as distinct objects, either as linear or lariat structures. Episomal latency is more vulnerable to ribozymes or host foreign gene degradation than proviral latency (see below).
Herpesviridae
One example is the herpes virus family, Herpesviridae, all of which establish latent infection. Herpes viruses include the chickenpox virus and the herpes simplex viruses (HSV-1, HSV-2), all of which establish episomal latency in neurons and leave linear genetic material floating in the cytoplasm.
Epstein-Barr virus
The Gammaherpesvirinae subfamily is associated with episomal latency established in cells of the immune system, such as B-cells in the case of Epstein–Barr virus. Epstein–Barr virus lytic reactivation (which can be due to chemotherapy or radiation) can result in genome instability and cancer.
Herpes simplex virus
In the case of herpes simplex virus (HSV), the virus has been shown to fuse with DNA in neurons, such as those of nerve ganglia, and HSV reactivates upon even minor chromatin loosening with stress, a
|
https://en.wikipedia.org/wiki/Massbus
|
The Massbus is a high-performance computer input/output bus designed in the 1970s by Digital Equipment Corporation (DEC). The architecture development was sponsored by Gordon Bell and John Levy was the principal architect.
The bus was used by Digital to interconnect its highest-performance computers with magnetic disk and magnetic tape storage equipment. The use of a common bus was intended to allow a single controller design to handle multiple peripheral models, and allowed the PDP-10, PDP-11, and VAX computer families to share a common set of peripherals. At the time there were multiple operating systems for each of the 16-bit, 32-bit, and 36-bit computer lines. The 18-bit PDP-15/40 connected to Massbus peripherals via a PDP-11 front end. An engineering goal was to reduce the need for a new driver per peripheral per operating system per computer family. Also, a major technical goal was to place any magnetic technology changes (data separators) into the storage device rather than in the CPU-attached controller. Thus the CPU I/O or memory bus to Massbus adapter needed no changes for multiple generations of storage technology.
A business objective was to provide a subsystem entry price well below that of IBM storage subsystems which used large and expensive controllers unique to each storage technology and processor architecture and were optimized for connecting large numbers of storage devices.
The first Massbus device was the RP04, based on Sperry Univac Information Storage Systems's (ISS) clone of the IBM 3330. Subsequently, DEC offered the RP05 and RP06, based on Memorex's 3330 clone. Memorex modified the IBM compatible interface to DEC requirements and moved the data separator electronics into the drive. DEC designed the rest which was mounted in the "bustle" attached to the drive. This set the pattern for future improvements of disk technology to double density 3330, CDC SMD drives, and then "Winchester" technology. Drives were supplied by ISS/Univa
|
https://en.wikipedia.org/wiki/Future%20interests%20%28actuarial%20science%29
|
Future interests is the subset of actuarial mathematics that divides the enjoyment of property -- usually the right to an income stream either from an annuity, a trust, royalties, or rents -- based usually on the future survival of one or more persons (natural humans, not juridical persons such as corporations).
Actuarial science
|
https://en.wikipedia.org/wiki/Synchronous%20Backplane%20Interconnect
|
The Synchronous Backplane Interconnect (SBI) was the internal processor-memory bus used by early VAX computers manufactured by the Digital Equipment Corporation of Maynard, Massachusetts.
The bus was implemented using Schottky TTL logic levels and allowed multiprocessor operation.
Computer buses
DEC hardware
|
https://en.wikipedia.org/wiki/Bond%20graph
|
A bond graph is a graphical representation of a physical dynamic system. It allows the conversion of the system into a state-space representation. It is similar to a block diagram or signal-flow graph, with the major difference that the arcs in bond graphs represent bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Bond graphs are multi-energy domain (e.g. mechanical, electrical, hydraulic, etc.) and domain neutral. This means a bond graph can incorporate multiple domains seamlessly.
The bond graph is composed of the "bonds" which link together "single-port", "double-port" and "multi-port" elements (see below for details). Each bond represents the instantaneous flow of energy (dE/dt) or power. The flow in each bond is denoted by a pair of variables called power variables, akin to conjugate variables, whose product is the instantaneous power of the bond. The power variables are broken into two parts: flow and effort. For example, for the bond of an electrical system, the flow is the current, while the effort is the voltage. By multiplying current and voltage in this example you can get the instantaneous power of the bond.
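A minimal sketch (illustrative; the class and attribute names are assumptions, not notation from the article) of the idea that each bond carries a conjugate effort/flow pair whose product is the instantaneous power, independent of the energy domain:

```python
from dataclasses import dataclass

@dataclass
class Bond:
    """A power bond: effort and flow are the conjugate power variables."""
    effort: float  # e.g. voltage [V], force [N], pressure [Pa]
    flow: float    # e.g. current [A], velocity [m/s], volumetric flow [m^3/s]

    def power(self) -> float:
        # Instantaneous power transmitted over the bond.
        return self.effort * self.flow

electrical = Bond(effort=12.0, flow=0.5)  # 12 V at 0.5 A
mechanical = Bond(effort=3.0, flow=2.0)   # 3 N at 2 m/s
print(electrical.power(), mechanical.power())  # 6.0 W in both cases
```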
A bond has two other features described briefly here, and discussed in more detail below. One is the "half-arrow" sign convention. This defines the assumed direction of positive energy flow. As with electrical circuit diagrams and free-body diagrams, the choice of positive direction is arbitrary, with the caveat that the analyst must be consistent throughout with the chosen definition. The other feature is the "causality". This is a vertical bar placed on only one end of the bond. It is not arbitrary. As described below, there are rules for assigning the proper causality to a given port, and rules for the precedence among ports. Causality explains the mathematical relationship between effort and flow. The positions of the causalities show which of the power va
|
https://en.wikipedia.org/wiki/Portability%20testing
|
Portability testing is the process of determining the degree of ease or difficulty to which a software component or application can be effectively and efficiently transferred from one hardware, software or other operational or usage environment to another. The test results, defined by the individual needs of the system, are a measurement of how easy the component or application will be to integrate into the environment, and these results are then compared to the software system's non-functional portability requirement for correctness. The levels of correctness are usually measured by the cost to adapt the software to the new environment compared to the cost of redevelopment.
Use cases
When multiple subsystems share components of a larger system, portability testing can be used to help prevent propagation of errors throughout the system. Changing or upgrading to a newer system, adapting to a new interface or interfacing a new system in an existing environment are all problems that software systems with longevity will face sooner or later and properly testing the environment for portability can save on overall cost throughout the life of the system. A general guideline for portability testing is that it should be done if the software system is designed to move from one hardware platform, operating system, or web browser to another.
Examples
Software designed to run on Macintosh OS X and Microsoft Windows operating systems.
Applications developed to be compatible with Google Android and Apple iOS phones.
Video games or other graphics-intensive software intended to work with the OpenGL and DirectX APIs.
Software that should be compatible with Google Chrome and Mozilla Firefox browsers.
Attributes
There are four testing attributes included in portability testing. The ISO 9126 (1991) standard breaks down portability testing attributes as Installability, Compatibility, Adaptability and Replaceability. The ISO 29119 (2013) standard describes Portability with t
|
https://en.wikipedia.org/wiki/Ternary%20relation
|
In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place.
Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product of three sets A, B and C.
An example of a ternary relation in elementary geometry can be given on triples of points, where a triple is in the relation if the three points are collinear. Another geometric example can be obtained by considering triples consisting of two points and a line, where a triple is in the ternary relation if the two points determine (are incident with) the line.
Examples
Binary functions
A function in two variables, mapping two values from sets A and B, respectively, to a value in C, associates to every pair (a, b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a, b), f(a, b)). Such pairs in which the first element is itself a pair are often identified with triples. This makes the graph of f a ternary relation between A, B and C, consisting of all triples (a, b, c) satisfying a ∈ A, b ∈ B, and c = f(a, b).
Cyclic orders
Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A3 = A × A × A, by stipulating that R(a, b, c) holds if and only if the elements a, b and c are pairwise different and when going from a to c in a clockwise direction one passes through b. For example, if A = {1, 2, ..., 12} represents the hours on a clock face, then R(8, 12, 4) holds and R(12, 8, 4) does not hold.
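A small sketch (illustrative only; the function name and the treatment of the hours as plain integers are assumptions) of this clock-face cyclic order as a ternary relation:

```python
def cyclic(a, b, c, n=12):
    """True if a, b, c are pairwise distinct and b is passed when going clockwise from a to c."""
    if len({a, b, c}) < 3:
        return False
    # Compare clockwise distances from a, measured modulo the circle size n.
    return (b - a) % n < (c - a) % n

print(cyclic(8, 12, 4))  # True: going clockwise from 8 to 4 passes through 12
print(cyclic(12, 8, 4))  # False: going clockwise from 12 to 4 does not pass through 8
```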
Betweenness relations
Ternary equivalence relation
Congruence relation
The ordinary congruence of arithmetic,
a ≡ b (mod m),
which holds for three integers a, b, and m if and only if m divides a − b, formally may be considered as a ternary relation. However, usually, this instead is considered as a family of binary relations between the a and
|
https://en.wikipedia.org/wiki/Log%20probability
|
In probability theory and computer science, a log probability is simply a logarithm of a probability. The use of log probabilities means representing probabilities on a logarithmic scale (−∞, 0], instead of the standard unit interval [0, 1].
Since the probabilities of independent events multiply, and logarithms convert multiplication to addition, log probabilities of independent events add. Log probabilities are thus practical for computations, and have an intuitive interpretation in terms of information theory: the negative of the average log probability is the information entropy of an event. Similarly, likelihoods are often transformed to the log scale, and the corresponding log-likelihood can be interpreted as the degree to which an event supports a statistical model. The log probability is widely used in implementations of computations with probability, and is studied as a concept in its own right in some applications of information theory, such as natural language processing.
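A brief sketch (illustrative only) of the two standard operations: multiplying probabilities by adding their logarithms, and adding probabilities stably without leaving log space via numpy.logaddexp:

```python
import numpy as np

# Multiplication of independent probabilities becomes addition of log probabilities.
log_probs = np.log([1e-150, 1e-160, 1e-170])
log_product = log_probs.sum()  # finite, even though the direct product underflows to 0.0
print(log_product)             # approximately -1105.2

# Addition of probabilities is done stably with log-sum-exp.
log_sum = np.logaddexp.reduce(log_probs)
print(log_sum)                 # log(p1 + p2 + p3), computed in log space
```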
Motivation
Representing probabilities in this way has several practical advantages:
Speed. Since multiplication is more expensive than addition, taking the product of a high number of probabilities is often faster if they are represented in log form. (The conversion to log form is expensive, but is only incurred once.) Multiplication arises from calculating the probability that multiple independent events occur: the probability that all independent events of interest occur is the product of all these events' probabilities.
Accuracy. The use of log probabilities improves numerical stability, when the probabilities are very small, because of the way in which computers approximate real numbers.
Simplicity. Many probability distributions have an exponential form. Taking the log of these distributions eliminates the exponential function, unwrapping the exponent. For example, the log probability of the normal distribution's probability density function is −(x − μ)²/(2σ²) − log(σ√(2π)) instead of (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)). Log probabilities make some mat
|
https://en.wikipedia.org/wiki/National%20Information%20Assurance%20Partnership
|
The National Information Assurance Partnership (NIAP) is a United States government initiative to meet the security testing needs of both information technology consumers and producers that is operated by the National Security Agency (NSA), and was originally a joint effort between NSA and the National Institute of Standards and Technology (NIST).
Purpose
The long-term goal of NIAP is to help increase the level of trust consumers have in their information systems and networks through the use of cost-effective security testing, evaluation, and validation programs. In meeting this goal, NIAP seeks to:
Promote the development and use of evaluated IT products and systems
Champion the development and use of national and international standards for IT security
Common Criteria
Foster research and development in IT security requirements definition, test methods, tools, techniques, and assurance metrics
Support a framework for international recognition and acceptance of IT security testing and evaluation results
Facilitate the development and growth of a commercial security testing industry within the U.S.
Services
Common Criteria Evaluation and Validation Scheme (CCEVS)
Product / System Configuration Guidance
Product and Protection Profile Evaluation
Consistency Instruction Manuals
NIAP Validation Body
The principal objective of the NIAP Validation Body is to ensure the provision of competent IT security evaluation and validation services for both government and industry. The Validation Body has the ultimate responsibility for the operation of the CCEVS in accordance with its policies and procedures and, where appropriate, interprets and amends those policies and procedures. The NSA is responsible for providing sufficient resources to the Validation Body so that it may carry out its responsibilities.
The Validation Body is led by a Director and Deputy Director selected by NSA management. The Director of the Validation Body reports to the NIAP Director for administrat
|
https://en.wikipedia.org/wiki/CPU%20Wars
|
CPU Wars is an underground comic strip by Charles Andres that circulated around Digital Equipment Corporation and other computer manufacturers starting in 1977. It described a hypothetical invasion of Digital's slightly disguised Maynard, Massachusetts ex-woolen mill headquarters (now located in Barnyard, Mass) by troops from IPM, the Impossible to Program Machine Corporation, in a rather blunt-edged parody of IBM. The humor hinged on the differences in style and culture between the invading forces of IPM and the laid-back employees of the Human Equipment Corporation. For example, even at gunpoint, the employees were unable to lead the invading forces to their leaders because they had no specific leaders, as a result of their corporation's use of matrix management.
The comic was drawn by a DEC employee, initially anonymous and later self-revealed to be Charles Andres. A compendium of the strips was finally published in 1980.
The most notable trace of the comic is the phrase Eat flaming death, supposedly derived from a famously turgid line in a World War II-era anti-Nazi propaganda comic that ran “Eat flaming death, non-Aryan mongrels!” or something of the sort (however, it is also reported that on the Firesign Theatre's 1975 album In The Next World, You're On Your Own a character won the right to scream “Eat flaming death, fascist media pigs” in the middle of Oscar night on a game show; this may have been an influence). Used in humorously overblown expressions of hostility. “Eat flaming death, EBCDIC users!”
References
External links
Digital Equipment Corporation
History of computing hardware
Underground comix
|
https://en.wikipedia.org/wiki/Apotome%20%28mathematics%29
|
In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be "commensurable only in power" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational.
Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another.
This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes.
References
Mathematical terminology
Euclidean geometry
|
https://en.wikipedia.org/wiki/Phillips%20Code
|
The Phillips Code is a brevity code (shorthand) created in 1879 by Walter P. Phillips (then of the Associated Press) for the rapid transmission of press reports by telegraph.
Overview
Created in 1879 by Walter P. Phillips, the code defined hundreds of abbreviations and initialisms for words that news authors and copy desk staff would commonly use. There were subcodes for commodities and stocks called the Market Code, a Baseball Supplement, and single-letter codes for Option Months. The last official edition was published in 1925, but there was also a Market supplement last published in 1909 that was separate.
The code consists of a dictionary of common words or phrases and their associated abbreviations. Extremely common terms are represented by a single letter (C: See; Y: Year); those less frequently used gain successively longer abbreviations (Ab: About; Abb: Abbreviate; Abty: Ability; Acmpd: Accompanied).
Later, The Evans Basic English Code expanded the 1,760 abbreviations in the Phillips Code to 3,848 abbreviations.
Examples of use
Using the Phillips Code, this ten-word telegraphic transmission:
ABBG LG WORDS CAN SAVE XB AMTS MON AVOG FAPIB
expands to this:
Abbreviating long words can save exorbitant amounts of money, avoiding filing a petition in bankruptcy.
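A toy sketch (illustrative only; the mini-dictionary contains just entries inferred from the worked example above, and the split of "amounts of" across one entry is an assumption) showing how a Phillips-style abbreviation dictionary can expand a transmission:

```python
# A few Phillips Code entries inferred from the example transmission above.
PHILLIPS = {
    "ABBG": "Abbreviating",
    "LG": "long",
    "XB": "exorbitant",
    "AMTS": "amounts of",
    "MON": "money",
    "AVOG": "avoiding",
    "FAPIB": "filing a petition in bankruptcy",
}

def expand(message):
    """Expand known abbreviations; pass other words through in lower case."""
    return " ".join(PHILLIPS.get(word, word.lower()) for word in message.split())

print(expand("ABBG LG WORDS CAN SAVE XB AMTS MON AVOG FAPIB"))
```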
In 1910, an article explaining the basic structure and purpose of the Phillips Code appeared in various US newspapers and magazines. One example given is:
T tri o HKT ft mu o SW on Ms roof garden, nw in pg, etc.
which the article translates as:
The trial of Harry K. Thaw for the murder of Stanford White on Madison Square Roof Garden, now in progress, etc.
Notable codes
The terms POTUS and SCOTUS originated in the code. SCOTUS appeared in the very first edition of 1879 and POTUS was in use by 1895, and was officially included in the 1923 edition. These abbreviations entered common parlance when news gathering services, in particular, the Associated Press, adopted the terminology.
|
https://en.wikipedia.org/wiki/Image%20sensor
|
An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.
The two main types of electronic image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on metal–oxide–semiconductor (MOS) technology, with CCDs based on MOS capacitors and CMOS sensors based on MOSFET (MOS field-effect transistor) amplifiers. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat-panel detectors.
CCD vs. CMOS sensors
The two main types of digital image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor), fabricated in complementary MOS (CMOS) or N-type MOS (NMOS or Live MOS) technologies. Both CCD and CMOS sensors are based on the MOS technology, with MOS capacitors being the building blocks of a CCD, and MOSFET amplifiers being the building blocks of a CMOS sensor.
Cameras integrated in small consumer products generally use CMOS sensors, which are usually cheaper and have lower power consumption in battery powered devices than CCDs. CCD sensors are used for high end broadcast quality video cameras, and CMOS sensors dominate in still photography and consumer goods where overall cost is a major concern. Both types of sensor accomplish the same task of capturing light and c
|
https://en.wikipedia.org/wiki/WormBook
|
WormBook is an open access, comprehensive collection of original, peer-reviewed chapters covering topics related to the biology of the nematode worm Caenorhabditis elegans (C. elegans). WormBook also includes WormMethods, an up-to-date collection of methods and protocols for C. elegans researchers.
WormBook is the online text companion to WormBase, the C. elegans model organism database. Capitalizing on the World Wide Web, WormBook links in-text references (e.g. genes, alleles, proteins, literature citations) with primary biological databases such as WormBase and PubMed. C. elegans was the first multicellular organism to have its genome sequenced and is a model organism for studying developmental genetics and neurobiology.
Contents
The content of WormBook is categorized into the sections listed below, each filled with a variety of relevant chapters. These sections include:
Genetics and genomics
Molecular biology
Biochemistry
Cell Biology
Signal transduction
Developmental biology
Post-embryonic development
Sex-determination systems
The germ line
Neurobiology and behavior
Evolution and ecology
Disease models and drug discovery
WormMethods
References
Bioinformatics
Biology books
Cell biology
Caenorhabditis elegans
Proteins
Animal developmental biology
|
https://en.wikipedia.org/wiki/Plaintext-aware%20encryption
|
Plaintext-awareness is a notion of security for public-key encryption. A cryptosystem is plaintext-aware if it is difficult for any efficient algorithm to come up with a valid ciphertext without being aware of the corresponding plaintext.
From a lay point of view, this is a strange property. Normally, a ciphertext is computed by encrypting a plaintext. If a ciphertext is created this way, its creator would be aware, in some sense, of the plaintext. However, many cryptosystems are not plaintext-aware. As an example, consider the RSA cryptosystem without padding. In the RSA cryptosystem, plaintexts and ciphertexts are both values modulo N (the modulus). Therefore, RSA is not plaintext-aware: one way of generating a ciphertext without knowing the plaintext is to simply choose a random number modulo N.
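A toy sketch (with deliberately tiny, insecure parameters; purely illustrative) of the point just made: with unpadded RSA, anyone can produce a valid ciphertext by picking a random residue modulo N, without ever knowing the plaintext it corresponds to:

```python
import random

# Toy RSA parameters (far too small for real use).
p, q = 61, 53
N = p * q                          # 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# An adversary "creates" a ciphertext with no idea what plaintext lies behind it.
c = random.randrange(2, N)

# The key holder can still decrypt it to some plaintext m with m^e = c (mod N).
m = pow(c, d, N)
assert pow(m, e, N) == c
print(c, m)
```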
In fact, plaintext-awareness is a very strong property. Any cryptosystem that is semantically secure and is plaintext-aware is actually secure against a chosen-ciphertext attack, since any adversary that chooses ciphertexts would already know the plaintexts associated with them.
History
The concept of plaintext-aware encryption was developed by Mihir Bellare and Phillip Rogaway in their paper on optimal asymmetric encryption, as a method to prove that a cryptosystem is chosen-ciphertext secure.
Further research
Limited research on plaintext-aware encryption has been done since Bellare and Rogaway's paper. Although several papers have applied the plaintext-aware technique in proving encryption schemes are chosen-ciphertext secure, only three papers revisit the concept of plaintext-aware encryption itself, both focussed on the definition given by Bellare and Rogaway that inherently require random oracles. Plaintext-aware encryption is known to exist when a public-key infrastructure is assumed.
Also, it has been shown that weaker forms of plaintext-awareness exist under the knowledge of exponent assumption, a non-standard assumption about Diffie-Hellman trip
|
https://en.wikipedia.org/wiki/Stygofauna
|
Stygofauna are any fauna that live in groundwater systems or aquifers, such as caves, fissures and vugs. Stygofauna and troglofauna are the two types of subterranean fauna, distinguished by their life history. Both are associated with subterranean environments – stygofauna with water, and troglofauna with caves and spaces above the water table. Stygofauna can live within freshwater aquifers and within the pore spaces of limestone, calcrete or laterite, whilst larger animals can be found in cave waters and wells. Stygofaunal animals, like troglofauna, are divided into three groups based on their life history: stygophiles, stygoxenes, and stygobites.
Stygophiles inhabit both surface and subterranean aquatic environments, but are not necessarily restricted to either.
Stygoxenes are similar to stygophiles, except that their presence in subterranean waters is only accidental or occasional. Stygophiles and stygoxenes may live for part of their lives in caves, but do not complete their life cycle in them.
Stygobites are obligate, or strictly subterranean, aquatic animals that complete their entire life cycle in this environment.
Extensive research of stygofauna has been undertaken in countries with ready access to caves and wells such as France, Slovenia, the US and, more recently, Australia. Many species of stygofauna, particularly obligate stygobites, are endemic to specific regions or even individual caves. This makes them an important focus for the conservation of groundwater systems.
Diet and lifecycle
Stygofauna have adapted to the limited food supply and are extremely energy efficient. Stygofauna feed on plankton, bacteria, and plants found in streams.
To survive in an environment where food is scarce and oxygen levels are low, stygofauna often have very low metabolic rates. As a result, stygofauna may live longer than comparable surface-dwelling species. For example, the crayfish Orconectes australis from Shelta Cave in Alabama has been estimated to reproduce at 100 years and live
|
https://en.wikipedia.org/wiki/Subvocal%20recognition
|
Subvocal recognition (SVR) is the process of taking subvocalization and converting the detected results to a digital output, aural or text-based.
Concept
A set of electrodes is attached to the skin of the throat and, without the speaker opening the mouth or uttering a sound, the words are recognized by a computer.
Subvocal speech recognition deals with electromyograms that differ from speaker to speaker, so consistency can be thrown off just by the positioning of an electrode. To improve accuracy, researchers in this field rely on statistical models that become better at pattern-matching the more times a subject "speaks" through the electrodes, but even then there are lapses. At Carnegie Mellon University, researchers found that a "speaker" with an accuracy rate of 94% one day could see that rate drop to 48% a day later; between two different speakers it drops even further.
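A minimal sketch of the kind of statistical pattern-matching described above, assuming per-word feature vectors have already been extracted from the electromyogram; the feature dimensions, class labels, and data here are synthetic and purely illustrative, not any published system.

```python
# Illustrative sketch only: a simple statistical classifier over hypothetical
# per-word EMG feature vectors. Real systems use richer features and
# speaker-specific calibration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each subvocalized word yields a 32-dimensional feature vector
# (e.g. per-channel signal energies); the data below are synthetic.
n_words, n_samples_per_word, n_features = 5, 40, 32
X = np.vstack([rng.normal(loc=w, scale=1.0, size=(n_samples_per_word, n_features))
               for w in range(n_words)])
y = np.repeat(np.arange(n_words), n_samples_per_word)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Accuracy generally improves as more repetitions per word are collected,
# mirroring the "more times a subject speaks" effect described above.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```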
Relevant applications include settings where audible speech is impossible: astronauts, underwater Navy SEALs, fighter pilots, and emergency workers charging into loud, harsh environments. At Worcester Polytechnic Institute in Massachusetts, research is underway to use subvocal information as a control source for computer music instruments.
Research and patents
With a grant from the U.S. Army, research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura.
NASA's Ames Research Laboratory in Mountain View, California, under the supervision of Charles Jorgensen is conducting subvocalization research.
The Brain Computer Interface R&D program at the Wadsworth Center under the New York State Department of Health has confirmed the ability to decipher consonants and vowels from imagined speech, which allows for brain-based communication.
US Patents on silent communication technologies include: US Patent 6587729 "Apparatus for audibly communicating speech using the radio frequency
|
https://en.wikipedia.org/wiki/American%20Cryptogram%20Association
|
The American Cryptogram Association (ACA) is an American non-profit organization devoted to the hobby of cryptography, with an emphasis on types of codes, ciphers, and cryptograms that can be solved either with pencil and paper or with computers, but not computer-only systems.
History
The ACA was formed on September 1, 1930. Initially the primary interest was in monoalphabetic substitution ciphers (also known as "single alphabet" or "Aristocrat" puzzles), but this has since extended to dozens of different systems, such as Playfair, autokey, transposition, and Vigenère ciphers.
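As a small illustration of one of the cipher types named above, the following sketch implements the Vigenère cipher; the key and message are arbitrary examples, not taken from ACA material.

```python
# Minimal Vigenère cipher sketch, shown only to illustrate one of the
# pencil-and-paper cipher types mentioned above.
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    result = []
    shifts = [ord(k) - ord('A') for k in key.upper() if k.isalpha()]
    i = 0
    for ch in text.upper():
        if not ch.isalpha():
            result.append(ch)              # leave spaces/punctuation untouched
            continue
        s = -shifts[i % len(shifts)] if decrypt else shifts[i % len(shifts)]
        result.append(chr((ord(ch) - ord('A') + s) % 26 + ord('A')))
        i += 1
    return "".join(result)

ciphertext = vigenere("ATTACK AT DAWN", "LEMON")
print(ciphertext)                                    # -> LXFOPV EF RNHR
print(vigenere(ciphertext, "LEMON", decrypt=True))   # -> ATTACK AT DAWN
```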
Since some of its members had belonged to the National Puzzlers' League, some of the NPL terminology ("nom", "Krewe", etc.) is also used in the ACA.
Publications and activities
The association has a collection of books and articles on cryptography and related subjects in the library at Kent State University.
An annual convention takes place in late August or early September. Recent conventions have been held at Bletchley Park and in Fort Lauderdale, Florida.
There is also a regular journal, The Cryptogram, which first appeared in February 1932 and has grown into a 28-page bimonthly periodical that includes articles and challenge ciphers.
Notable members
H. O. Yardley, who used the nom BOZO, first Vice President in 1933.
Helen Fouché Gaines, member since 1933, who used the nom PICCOLA, editor of the 1939 book Elementary Cryptanalysis.
Rosario Candela, who used the nom ISKANDER, member since June 1934.
David Kahn, who used the noms DAKON, ISHCABIBEL, and more recently Kahn D.
Will Shortz, New York Times Puzzle Editor, who uses the nom WILLz.
David Shulman, lexicographer and cryptographer, member since 1933, who used the nom Ab Struse.
James Gillogly, who uses the nom SCRYER.
Gelett Burgess, American artist, art critic, poet, author and humorist, used the nom TWO O'CLOCK.
References
sci.crypt FAQ, part 10
Official ACA website
"Decoding Nazi Secrets", Nova Online, PBS