https://en.wikipedia.org/wiki/Hisense
Hisense Group is a Chinese multinational major appliance and electronics manufacturer headquartered in Qingdao, Shandong Province, China. Televisions are the main products of Hisense, and it has been the largest TV manufacturer in China by market share since 2004. Hisense is also an OEM, so some of its products are sold to other companies and carry brand names not related to Hisense. Two major subsidiaries of Hisense Group are listed companies: Hisense Visual Technology and Hisense H.A. Both had state ownership of over 30% via the Hisense holding company before the end of 2020. Hisense Group has over 80,000 employees worldwide, as well as 14 industrial parks, some of which are located in Qingdao, Shunde, Huzhou, the Czech Republic, South Africa and Mexico. There are also 18 R&D centers located in Qingdao, Shenzhen, the United States, Germany, Slovenia, Israel, and other countries. History Qingdao No.2 Radio Factory, the predecessor of Hisense Group, was established in September 1969; this is the year its existence was first officially recognized. The small factory's first product was a radio sold under the brand name Red Lantern, but the company later gained the know-how to make TVs through trial production of black-and-white televisions ordered by the Shandong National Defense Office. This involved the technical training of three employees at another Chinese factory, Tianjin 712, and resulted in the production of 82 televisions by 1971 and the development of transistor TVs by 1975. Their first TV model, CJD18, was produced in 1978. Television production in China was limited until 1979, when a meeting of the Ministry of Electronics in Beijing concluded with calls for greater development of the civil-use electronics industry. Qingdao No.2 Radio Factory was then quickly merged with other local electronics makers and manufactured televisions under the name Qingdao General Television Factory in Shandong province. Color televisions were manufactured through the purch
https://en.wikipedia.org/wiki/Alfred%20Y.%20Cho
Alfred Yi Cho (born July 10, 1937) is a Chinese-American electrical engineer, inventor, and optical engineer. He is the Adjunct Vice President of Semiconductor Research at Alcatel-Lucent's Bell Labs. He is known as the "father of molecular beam epitaxy", a technique he developed at that facility in the late 1960s. He is also the co-inventor, with Federico Capasso, of quantum cascade lasers at Bell Labs in 1994. Cho was elected a member of the National Academy of Engineering in 1985 for his pioneering development of a molecular beam epitaxy technique, leading to unique semiconductor layer device structures. Biography Cho was born in Beiping. He went to Hong Kong in 1949 and had his secondary education in Pui Ching Middle School there. Cho holds B.S., M.S. and Ph.D. degrees in electrical engineering from the University of Illinois. He joined Bell Labs in 1968. He is a member of the National Academy of Sciences and the National Academy of Engineering, as well as a Fellow of the American Physical Society, the Institute of Electrical and Electronics Engineers, the American Philosophical Society, and the American Academy of Arts and Sciences. In June 2007 he was honoured with the U.S. National Medal of Technology, the highest honor awarded by the President of the United States for technological innovation. Cho received the award for his contributions to the invention of molecular beam epitaxy (MBE) and his work to commercialize the process. He has received many other awards, including: the American Physical Society's International Prize for New Materials in 1982, the Solid State Science and Technology Medal of the Electrochemical Society in 1987, the World Materials Congress Award of ASM International in 1988, the Gaede-Langmuir Award of the American Vacuum Society in 1988, the IRI Achievement Award of the Industrial Research Institute in 1988, the New Jersey Governor's Thomas Alva Edison Science Award in 1990, the International Crystal Growth Award of the
https://en.wikipedia.org/wiki/Cyclic%20stress
Cyclic stress is the distribution of forces (aka stresses) that change over time in a repetitive fashion. As an example, consider one of the large wheels used to drive an aerial lift such as a ski lift. The wire cable wrapped around the wheel exerts a downward force on the wheel and the drive shaft supporting the wheel. Although the shaft, wheel, and cable move, the force remains nearly vertical relative to the ground. Thus a point on the surface of the drive shaft will undergo tension when it is pointing towards the ground and compression when it is pointing to the sky. Types of cyclic stress Cyclic stress is frequently encountered in rotating machinery where a bending moment is applied to a rotating part. This is called a cyclic bending stress and the aerial lift above is a good example. However, cyclic axial stresses and cyclic torsional stresses also exist. An example of cyclic axial stress would be a bungee cord (see bungee jumping), which must support the mass of people as they jump off structures such as bridges. When a person reaches the end of a cord, the cord deflects elastically and stops the person's descent. This creates a large axial stress in the cord. A fraction of the elastic potential energy stored in the cord is typically transferred back to the person, throwing the person upwards some fraction of the distance he or she fell. The person then falls on the cord again, inducing stress in the cord. This happens multiple times per jump. The same cord is used for several jumps, creating cyclical stresses in the cord that could eventually cause failure if not replaced. Cyclic stress and material failure When cyclic stresses are applied to a material, even though the stresses do not cause plastic deformation, the material may fail due to fatigue. Fatigue failure is typically modeled by decomposing cyclic stresses into mean and alternating components. Mean stress is the time average of the principal stress. The definition of alternating stress va
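The mean/alternating decomposition described above can be illustrated with a short Python sketch for a constant-amplitude load cycle; the peak and valley stress values used here are hypothetical:

```python
# Decompose a constant-amplitude stress cycle into mean and alternating
# components, as used in fatigue models. Stress values are hypothetical.
sigma_max = 250.0   # peak stress, MPa (assumed)
sigma_min = -50.0   # valley stress, MPa (assumed)

sigma_mean = (sigma_max + sigma_min) / 2.0   # mean (time-average) stress
sigma_alt = (sigma_max - sigma_min) / 2.0    # alternating stress amplitude
stress_ratio = sigma_min / sigma_max         # R ratio, often quoted with fatigue data

print(f"mean = {sigma_mean} MPa, alternating = {sigma_alt} MPa, R = {stress_ratio:.2f}")
```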
https://en.wikipedia.org/wiki/Dynamic%20Multipoint%20Virtual%20Private%20Network
Dynamic Multipoint Virtual Private Network (DMVPN) is a dynamic tunneling form of a virtual private network (VPN) supported on Cisco IOS-based routers, Huawei AR G3 routers, and Unix-like operating systems. Benefits DMVPN provides the capability for creating a dynamic-mesh VPN network without having to pre-configure (static) all possible tunnel end-point peers, including IPsec (Internet Protocol Security) and ISAKMP (Internet Security Association and Key Management Protocol) peers. DMVPN is initially configured to build out a hub-and-spoke network by statically configuring the hubs (VPN headends) on the spokes; no change in the configuration on the hub is required to accept new spokes. Using this initial hub-and-spoke network, tunnels between spokes can be dynamically built on demand (dynamic-mesh) without additional configuration on the hubs or spokes. This dynamic-mesh capability alleviates the need for any load on the hub to route data between the spoke networks. Technologies Next Hop Resolution Protocol; Generic Routing Encapsulation (GRE), or multipoint GRE if spoke-to-spoke tunnels are desired; an IP-based routing protocol, EIGRP, OSPF, RIPv2, BGP or ODR (DMVPN hub-and-spoke only); IPsec (Internet Protocol Security) using an IPsec profile, which is associated with a virtual tunnel interface in IOS software. All traffic sent via the tunnel is encrypted per the policy configured (IPsec transform set). Internal routing Routing protocols such as OSPF, EIGRP, or BGP are generally run between the hub and spoke to allow for growth and scalability. Both EIGRP and BGP allow a higher number of supported spokes per hub. Encryption As with GRE tunnels, DMVPN allows for several encryption schemes (including none) for the encryption of data traversing the tunnels. For security reasons, Cisco recommends that customers use AES. Phases DMVPN has three phases that route data differently. Phase 1: All traffic flows from spokes to and through the hub.
https://en.wikipedia.org/wiki/List%20of%20ERP%20software%20packages
This is a list of notable enterprise resource planning (ERP) software. The first section is devoted to free and open-source software, and the second is for proprietary software. Free and open-source ERP software Proprietary ERP vendors and software 1C Company – 1C:Enterprise 24SevenOffice – 24SevenOffice Start, Premium, Professional and Custom 3i Infotech – Orion ERP, Orion 11j, Orion 11s abas Software AG – abas ERP Acumatica – Acumatica Cloud ERP BatchMaster Software – BatchMaster ERP Consona Corporation – AXIS ERP, Intuitive ERP, Made2Manage ERP CGI Group – CGI Advantage CGram Software – CGram Enterprise Consona Corporation – Cimnet Systems, Compiere professional edition, Encompix ERP Ciright Systems – Ciright ERP Comarch - from the smallest to the biggest system: Comarch ERP XT, Comarch Optima, Comarch ERP Standard (Altum), Comarch ERP Enterprise (Semiramis) Deacom – DEACOM ERP Epicor - Epicor iScala, Epicor Eagle, Prophet 21 Erply – Retail ERP Exact Software – Globe Next, Exact Online FinancialForce – FinancialForce ERP Fishbowl – Fishbowl Inventory Fujitsu Glovia Inc. – GLOVIA G2 Greentree International – Greentree Business Software IFS Inductive Automation – Ignition MES, OEE Module Industrial and Financial Systems – IFS Applications Infor Global Solutions – Infor CloudSuite Financials, Infor LN, Infor M3, Infor CloudSuite Industrial (SyteLine), Infor VISUAL, Infor Distribution SX.e IQMS – EnterpriseIQ Jeeves Information Systems AB – Jeeves Microsoft – Microsoft Dynamics (a product line of ERP and CRM applications), NAV-X Open Systems Accounting Software – OSAS, TRAVERSE Oracle – Oracle Fusion Cloud, Oracle ERP Cloud, Oracle NetSuite, Oracle E-Business Suite, JD Edwards EnterpriseOne, JD Edwards World, PeopleSoft, Oracle Retail Panaya – Panaya CloudQuality Suite Pegasus Software – Opera (I, II and 3) Planet Soho – SohoOS Plex Systems – Plex Online Pronto Software – Pronto Software QAD Inc – QAD Enterprise Applications (fo
https://en.wikipedia.org/wiki/Maximum%20power%20principle
The maximum power principle or Lotka's principle has been proposed as the fourth principle of energetics in open system thermodynamics, where an example of an open system is a biological cell. According to Howard T. Odum, "The maximum power principle can be stated: During self-organization, system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency." History Chen (2006) has located the origin of the statement of maximum power as a formal principle in a tentative proposal by Alfred J. Lotka (1922a, b). Lotka's statement sought to explain the Darwinian notion of evolution with reference to a physical principle. Lotka's work was subsequently developed by the systems ecologist Howard T. Odum in collaboration with the chemical engineer Richard C. Pinkerton, and later advanced by the engineer Myron Tribus. While Lotka's work may have been a first attempt to formalise evolutionary thought in mathematical terms, it followed similar observations made by Leibniz and Volterra and Ludwig Boltzmann, for example, throughout the sometimes controversial history of natural philosophy. In contemporary literature it is most commonly associated with the work of Howard T. Odum. The significance of Odum's approach was given greater support during the 1970s, amid times of oil crisis, where, as Gilliland (1978, pp. 100) observed, there was an emerging need for a new method of analysing the importance and value of energy resources to economic and environmental production. A field known as energy analysis, itself associated with net energy and EROEI, arose to fulfill this analytic need. However, in energy analysis intractable theoretical and practical difficulties arose when using the energy unit to understand, a) the conversion among concentrated fuel types (or energy types), b) the contribution of labour, and c) the contribution of the environment. Philosophy and theory Lotka said (1922b: 151): Gilli
https://en.wikipedia.org/wiki/Schoof%27s%20algorithm
Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve. The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time. This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm. Introduction Let E be an elliptic curve defined over the finite field F_q, where q = p^n for a prime p and an integer n ≥ 1. Over a field of characteristic ≠ 2, 3 an elliptic curve can be given by a (short) Weierstrass equation y^2 = x^3 + Ax + B with A, B ∈ F_q. The set of points defined over F_q consists of the solutions (x, y) ∈ F_q^2 satisfying the curve equation and a point at infinity O. Using the group law on elliptic curves restricted to this set one can see that this set E(F_q) forms an abelian group, with O acting as the zero element. In order to count points on an elliptic curve, we compute the cardinality of E(F_q). Schoof's approach to computing the cardinality makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials. Hasse's theorem Hasse's theorem states that if E is an elliptic curve over the finite field F_q, then #E(F_q) satisfies |#E(F_q) − q − 1| ≤ 2√q. This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(F_q) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(F_q), and making use of this result, we now have that computing the value of t modulo N, where N > 4√q, is sufficient for determining t, and thus #E(F_q). While there is no efficient way to c
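The Hasse bound can be checked directly for small curves by brute force; the sketch below counts points naively over a prime field F_p (curve parameters are hypothetical) and verifies that the trace t = p + 1 − #E(F_p) satisfies t^2 ≤ 4p. Schoof's algorithm exists precisely to avoid this exhaustive search, computing t modulo many small primes instead:

```python
# Brute-force point counting on y^2 = x^3 + A*x + B over F_p, checked against
# Hasse's bound. Illustration only: Schoof's algorithm never enumerates points.
def count_points(A, B, p):
    # number of square roots of each residue v modulo p
    roots = {}
    for y in range(p):
        roots[y * y % p] = roots.get(y * y % p, 0) + 1
    total = 1  # the point at infinity O
    for x in range(p):
        v = (x * x * x + A * x + B) % p
        total += roots.get(v, 0)
    return total

p, A, B = 97, 2, 3                 # small hypothetical example
n_points = count_points(A, B, p)
t = p + 1 - n_points               # trace of Frobenius
assert t * t <= 4 * p              # Hasse: |t| <= 2*sqrt(p)
print(n_points, t)
```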
https://en.wikipedia.org/wiki/Minimum%20total%20potential%20energy%20principle
The minimum total potential energy principle is a fundamental concept used in physics and engineering. It dictates that at low temperatures a structure or body shall deform or displace to a position that (locally) minimizes the total potential energy, with the lost potential energy being converted into kinetic energy (specifically heat). Some examples A free proton and free electron will tend to combine to form the lowest energy state (the ground state) of a hydrogen atom, the most stable configuration. This is because that state's energy is 13.6 electron volts (eV) lower than when the two particles are separated by an infinite distance. The dissipation in this system takes the form of spontaneous emission of electromagnetic radiation, which increases the entropy of the surroundings. A rolling ball will end up stationary at the bottom of a hill, the point of minimum potential energy. The reason is that as it rolls downward under the influence of gravity, friction produced by its motion transfers energy in the form of heat to the surroundings with an attendant increase in entropy. A protein folds into the state of lowest potential energy. In this case, the dissipation takes the form of vibration of atoms within or adjacent to the protein. Structural mechanics The total potential energy, Π, is the sum of the elastic strain energy, U, stored in the deformed body and the potential energy, V, associated with the applied forces: Π = U + V. This energy is at a stationary position when an infinitesimal variation from such position involves no change in energy: δΠ = δ(U + V) = 0. The principle of minimum total potential energy may be derived as a special case of the virtual work principle for elastic systems subject to conservative forces. The equality between external and internal virtual work (due to virtual displacements) is: ∫_{S_t} δu^T T dS + ∫_V δu^T f dV = ∫_V δε^T σ dV, where u = vector of displacements, T = vector of distributed forces acting on the part S_t of the surface, f = vector of body forces. In the special case of elastic bodies, the right
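For the simplest structural case, a linear spring carrying a dead load F, the total potential energy is Π(u) = (1/2)k·u^2 − F·u, and its minimum recovers the familiar equilibrium u = F/k. A minimal numerical sketch with assumed stiffness and load values:

```python
# Total potential energy of a linear spring under a dead load F:
#   Pi(u) = 0.5*k*u**2 - F*u
# Minimizing Pi over the displacement u recovers the equilibrium u = F/k.
import numpy as np

k, F = 1000.0, 50.0                   # stiffness [N/m] and load [N], assumed values
u = np.linspace(0.0, 0.1, 1001)       # candidate displacements [m]
Pi = 0.5 * k * u**2 - F * u           # total potential energy for each candidate
u_min = u[np.argmin(Pi)]              # numerical minimizer

print(u_min, F / k)                   # both approximately 0.05 m
```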
https://en.wikipedia.org/wiki/Paul%20Horn%20%28computer%20scientist%29
Paul M. Horn (born August 16, 1946) is an American computer scientist and solid state physicist who has made contributions to pervasive computing, pioneered the use of copper and self-assembly in chip manufacturing, and he helped manage the development of deep computing, an important tool that provides business decision makers with the ability to analyze and develop solutions to very complex and difficult problems. Horn was elected a member of the National Academy of Engineering in 2007 for leadership in the development of information technology products, ranging from microelectronics to supercomputing. Early life and education Horn was born on August 16, 1946, and graduated from Clarkson University in 1968 with a Bachelor of Science degree. He obtained his PhD from the University of Rochester in physics in 1973. Career Horn has, at various times, been Senior Vice President of the IBM Corporation and executive director of Research. While at IBM, he initiated the project to develop Watson, the computer that competed successfully in the quiz show Jeopardy!. He is currently a New York University (NYU) Distinguished Scientist in Residence and NYU Stern Executive in Residence. He is also a professor at NYU Tandon School of Engineering. In 2009, he was appointed as the Senior Vice Provost for Research at NYU. Awards Industrial Research Institute (IRI) Medal in honor of his contributions to technology leadership, 2005 American Physical Society, George E. Pake Prize, 2002 Hutchison Medal from the University of Rochester, 2002 Distinguished Leadership award from the New York Hall of Science, 2000 Bertram Eugene Warren Award from the American Crystallographic Association, 1988 References External links IBM Bio 1946 births Living people Clarkson University alumni New York University faculty American computer scientists Computer hardware researchers Computer systems researchers IBM employees Members of the United States National Academy of Engineering Polytechnic
https://en.wikipedia.org/wiki/Resel
In image analysis, a resel (from resolution element) represents the actual spatial resolution in an image or a volumetric dataset. The number of resels in the image may be lower or equal to the number of pixel/voxels in the image. In an actual image the resels can vary across the image and indeed the local resolution can be expressed as "resels per pixel" (or "resels per voxel"). In functional neuroimaging analysis, an estimate of the number of resels together with random field theory is used in statistical inference. Keith Worsley has proposed an estimate for the number of resels/roughness. The word "resel" is related to the words "pixel", "texel", and "voxel", and Waldo R. Tobler is probably among the first to use the word. See also Kell factor References Bibliography Keith J. Worsley, An unbiased estimator for the roughness of a multivariate Gaussian random field, Technical report, 2000 July. Image processing Computer graphics
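As a rough illustration of "resels per voxel", the sketch below uses a common rule of thumb from random field theory in which the resel count is the search volume divided by the product of the smoothness FWHMs; all numbers, and the applicability of this simple formula to any given dataset, are assumptions rather than something stated in the excerpt:

```python
# Rule-of-thumb resel estimate: search volume divided by the product of the
# estimated smoothness FWHMs (all values below are hypothetical).
voxel_size_mm = (2.0, 2.0, 2.0)   # assumed voxel dimensions in mm
fwhm_mm = (8.0, 8.0, 8.0)         # assumed smoothness (FWHM) in mm along each axis
n_voxels = 100_000                # assumed number of voxels in the analysis mask

volume_mm3 = n_voxels * voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
resels = volume_mm3 / (fwhm_mm[0] * fwhm_mm[1] * fwhm_mm[2])
resels_per_voxel = resels / n_voxels

print(f"~{resels:.0f} resels, i.e. {resels_per_voxel:.4f} resels per voxel")
```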
https://en.wikipedia.org/wiki/Ethernet%20Powerlink
Ethernet Powerlink is a real-time protocol for standard Ethernet. It is an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG). It was introduced by Austrian automation company B&R in 2001. This protocol has nothing to do with power distribution via Ethernet cabling or power over Ethernet (PoE), power line communication, or Bang & Olufsen's PowerLink cable. Overview Ethernet Powerlink expands Ethernet with a mixed polling and timeslicing mechanism. This provides: guaranteed transfer of time-critical data in very short isochronous cycles with configurable response time; time-synchronisation of all nodes in the network with very high, sub-microsecond precision; and transmission of less time-critical data in a reserved asynchronous channel. Modern implementations reach cycle-times of under 200 µs and a time-precision (jitter) of less than 1 µs. Standardization Powerlink was standardized by the Ethernet Powerlink Standardization Group (EPSG), which was founded in June 2003 as an independent association. Working groups focus on tasks like safety, technology, marketing, certification and end users. The EPSG cooperates with standardization bodies and associations, like the CAN in Automation (CiA) Group and the IEC. Physical layer The original physical layer specified was 100BASE-TX Fast Ethernet. Since the end of 2006, Ethernet Powerlink has also supported Gigabit Ethernet, a transmission rate ten times higher (1,000 Mbit/s). Using repeating hubs instead of switches within the real-time domain is recommended to minimise delay and jitter. Ethernet Powerlink uses IAONA's Industrial Ethernet Planning and Installation Guide for clean cabling of industrial networks, and both industrial Ethernet connectors, 8P8C (commonly known as RJ45) and M12, are accepted. Data link layer The standard Ethernet data link layer is extended by an additional bus scheduling mechanism, which ensures that only one node is accessing the network at a time. The schedule is divided in
https://en.wikipedia.org/wiki/Deus%20Ex%20Machina%20%28video%20game%29
Deus Ex Machina is a video game designed and created by Mel Croucher and published by Automata UK for the ZX Spectrum in October 1984 and later converted to MSX and Commodore 64. The game was the first to be accompanied by a fully synchronised soundtrack which featured narration, celebrity artists and music. The cast included Ian Dury, Jon Pertwee, Donna Bailey, Frankie Howerd, E.P. Thompson, and Croucher (who also composed the music). Andrew Stagg coded the original Spectrum version, and Colin Jones (later known as author/publisher Colin Bradshaw-Jones) was the programmer of the Commodore 64 version. The game charts the life of a "defect" which has formed in "the machine", from conception, through growth, evolution and eventually death. The progression is loosely based on "The Seven Ages of Man" from the Shakespeare play, As You Like It and includes many quotations and parodies of this. The original game was later rereleased alongside a sequel/remake in 2015. Gameplay Players take control of a defective machine which has taken the form of a human body, experiencing the different stages of life, all the way from a single cell to a senile old being. The game is considered a work of audiovisual entertainment even though the game program itself does not produce sound: the soundtrack is supplied on a separate audio cassette that is played alongside the game. The audio cassette runs 46 minutes, which is also the length of the game itself. Although the game can be played without the audio cassette, the soundtrack makes it easier to understand. The soundtrack includes songs, musical compositions, and the voices of famous actors. As the game comes with a full transcript of the speech, it can at times be played without audio. Reception Despite critical acclaim at the time, the game did not conform to conventions of packaging and pricing required by distributors and retailers and the game was sold mail-
https://en.wikipedia.org/wiki/Extreme%20programming%20practices
Extreme programming (XP) is an agile software development methodology used to implement software projects. This article details the practices used in this methodology. Extreme programming has 12 practices, grouped into four areas, derived from the best practices of software engineering. Fine scale feedback Pair programming Pair programming means that all code is produced by two people programming on one task at one workstation. One programmer has control over the workstation and is thinking mostly about the coding in detail. The other programmer is more focused on the big picture, and is continually reviewing the code that is being produced by the first programmer. Programmers trade roles at intervals ranging from minutes to an hour. The pairs are not fixed; programmers switch partners frequently, so that everyone knows what everyone is doing, and everybody remains familiar with the whole system, even the parts outside their skill set. In this way, pair programming can also enhance team-wide communication. (This also goes hand-in-hand with the concept of Collective Ownership). Planning game The main planning process within extreme programming is called the Planning Game. The game is a meeting that occurs once per iteration, typically once a week. The planning process is divided into two parts: Release Planning: This is focused on determining what requirements are included in which near-term releases, and when they should be delivered. The customers and developers are both part of this. Release Planning consists of three phases: Exploration Phase: In this phase the customer will provide a shortlist of high-value requirements for the system. These will be written down on user story cards. Commitment Phase: Within the commitment phase business and developers will commit themselves to the functionality that will be included and the date of the next release. Steering Phase: In the steering phase the plan can be adjusted, new requirements can be added and/or existing req
https://en.wikipedia.org/wiki/Shot%20transition%20detection
Shot transition detection (or simply shot detection), also called cut detection, is a field of research in video processing. Its subject is the automated detection of transitions between shots in digital video with the purpose of temporal segmentation of videos. Use Shot transition detection is used to split up a film into basic temporal units called shots; a shot is a series of interrelated consecutive pictures taken contiguously by a single camera and representing a continuous action in time and space. This operation is of great use in software for post-production of videos. It is also a fundamental step of automated indexing and content-based video retrieval or summarization applications which provide an efficient access to huge video archives, e.g. an application may choose a representative picture from each scene to create a visual overview of the whole film and, by processing such indexes, a search engine can process search items like "show me all films where there's a scene with a lion in it." Cut detection can do nothing that a human editor couldn't do manually; however, it is advantageous because it saves time. Furthermore, due to the increase in the use of digital video and, consequently, in the importance of the aforementioned indexing applications, automatic cut detection is very important nowadays. Basic technical terms In simple terms, cut detection is about finding the positions in a video at which one scene is replaced by another with different visual content. Technically speaking, the following terms are used: A digital video consists of frames that are presented to the viewer's eye in rapid succession to create the impression of movement. "Digital" in this context means both that a single frame consists of pixels and the data is present as binary data, such that it can be processed with a computer. Each frame within a digital video can be uniquely identified by its frame index, a serial number. A shot is a sequence of frames shot uninterrupt
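A very common baseline for hard-cut detection is to compare a simple per-frame feature, such as a grayscale histogram, between consecutive frames and flag a cut when the difference exceeds a threshold. The OpenCV sketch below is only an illustration of that idea; the file name and threshold are placeholders, and real systems use considerably more robust features and decision rules:

```python
# Toy hard-cut detector: histogram difference between consecutive frames.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")   # hypothetical input file
prev_hist = None
frame_index = 0
THRESHOLD = 0.5                       # tuning parameter, assumed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    hist = cv2.normalize(hist, None).flatten()
    if prev_hist is not None:
        # Bhattacharyya distance: near 0 for similar frames, larger across cuts
        d = cv2.compareHist(prev_hist.astype(np.float32),
                            hist.astype(np.float32),
                            cv2.HISTCMP_BHATTACHARYYA)
        if d > THRESHOLD:
            print(f"possible cut at frame {frame_index}")
    prev_hist = hist
    frame_index += 1
cap.release()
```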
https://en.wikipedia.org/wiki/Dynamic%20linker
In computing, a dynamic linker is the part of an operating system that loads and links the shared libraries needed by an executable when it is executed (at "run time"), by copying the content of libraries from persistent storage to RAM, filling jump tables and relocating pointers. The specific operating system and executable format determine how the dynamic linker functions and how it is implemented. Linking is often referred to as a process that is performed when the executable is compiled, while a dynamic linker is a special part of an operating system that loads external shared libraries into a running process and then binds those shared libraries dynamically to the running process. This approach is also called dynamic linking or late linking. Implementations Microsoft Windows Dynamic-link library, or DLL, is Microsoft's implementation of the shared library concept in the Microsoft Windows and OS/2 operating systems. These libraries usually have the file extension DLL, OCX (for libraries containing ActiveX controls), or DRV (for legacy system drivers). The file formats for DLLs are the same as for Windows EXE files that is, Portable Executable (PE) for 32-bit and 64-bit Windows, and New Executable (NE) for 16-bit Windows. As with EXEs, DLLs can contain code, data, and resources, in any combination. Data files with the same file format as a DLL, but with different file extensions and possibly containing only resource sections, can be called resource DLLs. Examples of such DLLs include multi-language user interface libraries with extension MUI, icon libraries, sometimes having the extension ICL, and font files, having the extensions FON and FOT. Unix-like systems using ELF, and Darwin-based systems In most Unix-like systems, most of the machine code that makes up the dynamic linker is actually an external executable that the operating system kernel loads and executes first in a process address space newly constructed as a result of calling exec or posix_spa
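As a small demonstration of what the dynamic linker does at run time, the Python sketch below uses ctypes to ask the operating system's dynamic linker/loader to map a shared library into the current process and resolve a symbol from it. The library name assumes a typical Unix-like system; on Windows one would load a DLL instead:

```python
# Run-time dynamic linking via ctypes (dlopen/dlsym under the hood on Unix-like systems).
import ctypes
import ctypes.util

# find_library locates the math library; "libm.so.6" is a typical Linux fallback (assumed)
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)          # the dynamic linker maps the library into this process

libm.cos.restype = ctypes.c_double     # declare the C signature of cos()
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                   # 1.0, resolved and called at run time
```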
https://en.wikipedia.org/wiki/Volume%20fraction
In chemistry and fluid mechanics, the volume fraction φi is defined as the volume of a constituent Vi divided by the volume of all constituents of the mixture V prior to mixing: Being dimensionless, its unit is 1; it is expressed as a number, e.g., 0.18. It is the same concept as volume percent (vol%) except that the latter is expressed with a denominator of 100, e.g., 18%. The volume fraction coincides with the volume concentration in ideal solutions where the volumes of the constituents are additive (the volume of the solution is equal to the sum of the volumes of its ingredients). The sum of all volume fractions of a mixture is equal to 1: The volume fraction (percentage by volume, vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others. Volume concentration and volume percent Volume percent is the concentration of a certain solute, measured by volume, in a solution. It has as a denominator the volume of the mixture itself, as usual for expressions of concentration, rather than the total of all the individual components’ volumes prior to mixing: Volume percent is usually used when the solution is made by mixing two fluids, such as liquids or gases. However, percentages are only additive for ideal gases. The percentage by volume (vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others. In the case of a mixture of ethanol and water, which are miscible in all proportions, the designation of solvent and solute is arbitrary. The volume of such a mixture is slightly less than the sum of the volumes of the components. Thus, by the above definition, the term "40% alcohol by volume" refers to a mixture of 40 volume units of ethanol with enough water to make a final volume of 100 units, rather than a
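A minimal numerical sketch of the definition above, using hypothetical component volumes measured before mixing; because the fractions are defined on the unmixed volumes, they sum to exactly 1 even for mixtures such as ethanol and water whose mixed volume contracts:

```python
# Volume fractions from the component volumes *prior to mixing* (values assumed).
v_ethanol = 40.0   # volume units before mixing
v_water = 60.0     # volume units before mixing

total = v_ethanol + v_water
phi_ethanol = v_ethanol / total
phi_water = v_water / total

print(phi_ethanol, phi_water, phi_ethanol + phi_water)   # 0.4 0.6 1.0
```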
https://en.wikipedia.org/wiki/Edgeworth%20price%20cycle
An Edgeworth price cycle is a cyclical pattern in prices characterized by an initial jump, which is then followed by a slower decline back towards the initial level. The term was introduced by Maskin and Tirole (1988) in a theoretical setting featuring two firms bidding sequentially and where the winner captures the full market. Phases of a price cycle A price cycle has the following phases: War of attrition: When the price is at marginal cost, the firms are engaged in a war of attrition where each firm hopes that the competitor will raise her price first ("relent"). Jump: When one firm relents, the other firm will then in the next period undercut, which is when the market price jumps. This first period is the most valuable to be the low-price firm, which is what causes firms to want to stay in the war of attrition to force the competitor to jump first. Undercutting: then follows a sequence where the firms take turns at undercutting each other until the market arrives back in the war of attrition at the low price. Discussion It can be debated whether Edgeworth cycles should be thought of as tacit collusion because the cycle is a Markov perfect equilibrium, but Maskin and Tirole write: "Thus our model can be viewed as a theory of tacit collusion." (p. 592). Edgeworth cycles have been reported in gasoline markets in many countries. Because the cycles tend to occur frequently, weekly average prices found in government reports will generally mask the cycling. Wang (2012) emphasizes the role of price commitment in facilitating price cycles: without price commitment, the dynamic game becomes one of simultaneous moves, and here the cycles are no longer a Markov perfect equilibrium but rely on, e.g., supergame arguments. Edgeworth cycles are distinguished from both sticky pricing and cost-based pricing. Sticky prices are typically found in markets with less aggressive price competition, so there are fewer or no cycles. Purely cost-based pricing occurs when retailers m
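The jump/undercut/war-of-attrition phases can be mimicked with a toy simulation; everything below (price grid, relenting probability, the alternating-move shortcut) is an assumption chosen for illustration and is not taken from Maskin and Tirole's model:

```python
# Toy Edgeworth-style price path: undercut while above marginal cost,
# occasionally relent and jump back up once the price reaches cost.
import random

MC, TOP, STEP = 1.00, 2.00, 0.05   # marginal cost, jump level, undercut size (assumed)
RELENT_PROB = 0.3                  # per-period chance that a firm relents at the bottom (assumed)

price = TOP
history = []
for period in range(60):
    if price > MC:
        price = max(MC, price - STEP)   # undercutting phase
    elif random.random() < RELENT_PROB:
        price = TOP                     # one firm relents; the rival undercuts next period
    history.append(round(price, 2))

print(history)   # sawtooth: slow declines punctuated by jumps back to TOP
```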
https://en.wikipedia.org/wiki/SIMNET
SIMNET was a wide area network with vehicle simulators and displays for real-time distributed combat simulation: tanks, helicopters and airplanes in a virtual battlefield. SIMNET was developed for and used by the United States military. SIMNET development began in the mid-1980s, was fielded starting in 1987, and was used for training until successor programs came online well into the 1990s. SIMNET was perhaps the world's first fully operational virtual reality system and was the first real time, networked simulator. It was not unlike our massive multiplayer games today. It supported a variety of air and ground vehicles, some human-directed and others autonomous. Origins and purpose Jack Thorpe of the Defense Advanced Research Projects Agency (DARPA) saw the need for networked multi-user simulation. Interactive simulation equipment was very expensive, and reproducing training facilities was likewise expensive and time consuming. In the early 1980s, DARPA decided to create a prototype research system to investigate the feasibility of creating a real-time distributed simulator for combat simulation. SIMNET, the resulting application, was to prove both the feasibility and effectiveness of such a project. Training using actual equipment was extremely expensive and dangerous. Being able to simulate certain combat scenarios, and to have participants remotely located rather than all in one place, hugely reduced the cost of training and the risk of personal injury. Long-haul networking for SIMNET was run originally across multiple 56 kbit/s dial-up lines, using parallel processors to compress packets over the data links. This traffic contained not only the vehicle data but also compressed voice. Developers SIMNET was developed by three companies: Delta Graphics, Inc.; Perceptronics, Inc.; and Bolt, Beranek and Newman (BBN), Inc. There was no prime contractor on SIMNET; independent contracts were made directly with each of these three companies. BBN developed
https://en.wikipedia.org/wiki/T-cell%20vaccination
T-cell vaccination is immunization with inactivated autoreactive T cells. The concept of T-cell vaccination is, at least partially, analogous to classical vaccination against infectious disease. However, the agents to be eliminated or neutralized are not foreign microbial agents but a pathogenic autoreactive T-cell population. Research on T-cell vaccination so far has focused mostly on multiple sclerosis and to a lesser extent on rheumatoid arthritis, Crohn's disease and AIDS. References Immunology
https://en.wikipedia.org/wiki/Nielsen%20theory
Nielsen theory is a branch of mathematical research with its origins in topological fixed-point theory. Its central ideas were developed by Danish mathematician Jakob Nielsen, and bear his name. The theory developed in the study of the so-called minimal number of a map f from a compact space to itself, denoted MF[f]. This is defined as: MF[f] = min {#Fix(g) : g ~ f}, where ~ indicates homotopy of mappings, and #Fix(g) indicates the number of fixed points of g. The minimal number was very difficult to compute in Nielsen's time, and remains so today. Nielsen's approach is to group the fixed-point set into classes, which are judged "essential" or "nonessential" according to whether or not they can be "removed" by a homotopy. Nielsen's original formulation is equivalent to the following: We define an equivalence relation on the set of fixed points of a self-map f on a space X. We say that x is equivalent to y if and only if there exists a path c from x to y with f(c) homotopic to c as paths. The equivalence classes with respect to this relation are called the Nielsen classes of f, and the Nielsen number N(f) is defined as the number of Nielsen classes having non-zero fixed-point index sum. Nielsen proved that N(f) ≤ MF[f] and that N(f) is invariant under homotopy, making his invariant a good tool for estimating the much more difficult MF[f]. This leads immediately to what is now known as the Nielsen fixed-point theorem: Any map f has at least N(f) fixed points. Because of its definition in terms of the fixed-point index, the Nielsen number is closely related to the Lefschetz number. Indeed, shortly after Nielsen's initial work, the two invariants were combined into a single "generalized Lefschetz number" (more recently called the Reidemeister trace) by Wecken and Reidemeister. Bibliography External links Survey article on Nielsen theory by Robert F. Brown at Topology Atlas Fixed-point theorems Fixed points (mathematics) Topology
https://en.wikipedia.org/wiki/Herzog%E2%80%93Sch%C3%B6nheim%20conjecture
In mathematics, the Herzog–Schönheim conjecture is a combinatorial problem in the area of group theory, posed by Marcel Herzog and Jochanan Schönheim in 1974. Let G be a group, and let A = {a_1G_1, …, a_kG_k} be a finite system of left cosets of subgroups G_1, …, G_k of G. Herzog and Schönheim conjectured that if A forms a partition of G with k > 1, then the (finite) indices [G : G_1], …, [G : G_k] cannot be distinct. In contrast, if repeated indices are allowed, then partitioning a group into cosets is easy: if H is any subgroup of G with index k then G can be partitioned into k left cosets of H. Subnormal subgroups In 2004, Zhi-Wei Sun proved an extended version of the Herzog–Schönheim conjecture in the case where G_1, …, G_k are subnormal in G. A basic lemma in Sun's proof states that if G_1, …, G_k are subnormal and of finite index in G, then the index [G : G_1 ∩ … ∩ G_k] divides the product [G : G_1] ⋯ [G : G_k], and hence P([G : G_1 ∩ … ∩ G_k]) ⊆ P([G : G_1]) ∪ … ∪ P([G : G_k]), where P(n) denotes the set of prime divisors of n. Mirsky–Newman theorem When G is the additive group of integers, the cosets of its subgroups are the arithmetic progressions. In this case, the Herzog–Schönheim conjecture states that every covering system, a family of arithmetic progressions that together cover all the integers, must either cover some integers more than once or include at least one pair of progressions that have the same difference as each other. This result was conjectured in 1950 by Paul Erdős and proved soon thereafter by Leon Mirsky and Donald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently by Harold Davenport and Richard Rado. In 1970, a geometric coloring problem equivalent to the Mirsky–Newman theorem was given in the Soviet mathematical olympiad: suppose that the vertices of a regular polygon are colored in such a way that every color class itself forms the vertices of a regular polygon. Then, there exist two color classes that form congruent polygons. References Combinatorial group theory Conjectures Unsolved problems in mathematics
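The Mirsky–Newman statement can be checked concretely on a classical covering system; in the sketch below the five progressions have distinct moduli, so, as the theorem predicts, some integers are necessarily covered more than once (the particular system is a standard textbook example, chosen here only for illustration):

```python
# Verify a classical covering system and show that, with distinct moduli,
# some residues are covered more than once (consistent with Mirsky-Newman).
from math import lcm
from functools import reduce

system = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]    # (residue, modulus) pairs
L = reduce(lcm, (m for _, m in system))               # 12; one full period suffices

coverage = {n: sum(1 for a, m in system if n % m == a) for n in range(L)}
assert all(c >= 1 for c in coverage.values())         # every integer is covered
print({n: c for n, c in coverage.items() if c > 1})   # residues covered more than once
```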
https://en.wikipedia.org/wiki/Tire%20code
Automotive tires are described by an alphanumeric tire code (in North American English) or tyre code (in Commonwealth English), which is generally molded into the sidewall of the tire. This code specifies the dimensions of the tire, and some of its key limitations, such as load-bearing ability and maximum speed. Sometimes the inner sidewall contains information not included on the outer sidewall, and vice versa. The code has grown in complexity over the years, as is evident from the mix of SI and USC units, and ad-hoc extensions to lettering and numbering schemes. New automotive tires frequently have ratings for traction, treadwear, and temperature resistance, all collectively known as the Uniform Tire Quality Grading. Most tire sizes are given using the ISO metric sizing system. However, some pickup trucks and SUVs use the Light Truck Numeric or Light Truck High Flotation system. National technical standards regulations DOT code The DOT code is an alphanumeric character sequence molded into the sidewall of the tire and allows the identification of the tire and its age. The code is mandated by the U.S. Department of Transportation but is used worldwide. The DOT code is also useful in identifying tires subject to product recall or at end of life due to age. Since February 2021, UK vehicle regulations do not permit the use of tyres over 10 years old on the front steered axle(s) of heavy goods vehicles, buses and coaches. The ban also applies to all tyres in single configuration on minibuses. In addition, it is a requirement for all tyres on these vehicles to display a legible date code. ETRTO and TRA The European Tyre and Rim Technical Organisation (ETRTO) and the Tire and Rim Association (TRA) are two organizations that influence national tire standards. The objectives of the ETRTO include aligning national tire and rim standards in Europe. The Tire and Rim Association, formerly known as The Tire and Rim Association of America, Inc., is an American trade
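For the ISO metric sizing mentioned above, a code such as P215/65R15 95H encodes the section width in millimetres, the aspect ratio as a percentage of the width, the construction type, the rim diameter in inches, and load/speed ratings. The rough parser below is only a sketch; the sample string is hypothetical and many optional sidewall fields are ignored:

```python
# Rough parser for an ISO-metric tire code (illustrative only).
import re

code = "P215/65R15 95H"   # hypothetical example
pattern = (r"(?P<cls>[A-Z]*)(?P<width>\d{3})/(?P<aspect>\d{2,3})"
           r"(?P<construction>[RDB])(?P<rim>\d{2})\s*"
           r"(?P<load>\d{2,3})?(?P<speed>[A-Z]\d?)?")
m = re.match(pattern, code)
if m:
    width_mm = int(m.group("width"))        # section width in mm
    aspect = int(m.group("aspect"))         # sidewall height as % of width
    rim_in = int(m.group("rim"))            # rim diameter in inches
    sidewall_mm = width_mm * aspect / 100.0
    overall_diameter_mm = rim_in * 25.4 + 2 * sidewall_mm
    print(f"overall diameter is about {overall_diameter_mm:.0f} mm")
```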
https://en.wikipedia.org/wiki/Vertex%20configuration
In geometry, a vertex configuration is a shorthand notation for representing the vertex figure of a polyhedron or tiling as the sequence of faces around a vertex. For uniform polyhedra there is only one vertex type and therefore the vertex configuration fully defines the polyhedron. (Chiral polyhedra exist in mirror-image pairs with the same vertex configuration.) A vertex configuration is given as a sequence of numbers representing the number of sides of the faces going around the vertex. The notation "a.b.c" describes a vertex that has 3 faces around it, faces with a, b, and c sides. For example, "3.5.3.5" indicates a vertex belonging to 4 faces, alternating triangles and pentagons. This vertex configuration defines the vertex-transitive icosidodecahedron. The notation is cyclic and therefore is equivalent with different starting points, so 3.5.3.5 is the same as 5.3.5.3. The order is important, so 3.3.5.5 is different from 3.5.3.5 (the first has two triangles followed by two pentagons). Repeated elements can be collected as exponents so this example is also represented as (3.5)2. It has variously been called a vertex description, vertex type, vertex symbol, vertex arrangement, vertex pattern, face-vector. It is also called a Cundy and Rollett symbol for its usage for the Archimedean solids in their 1952 book Mathematical Models. Vertex figures A vertex configuration can also be represented as a polygonal vertex figure showing the faces around the vertex. This vertex figure has a 3-dimensional structure since the faces are not in the same plane for polyhedra, but for vertex-uniform polyhedra all the neighboring vertices are in the same plane and so this plane projection can be used to visually represent the vertex configuration. Variations and uses Different notations are used, sometimes with a comma (,) and sometimes a period (.) separator. The period operator is useful because it looks like a product and an exponent notation can be used. For example, 3.5.3.5 is sometimes written as (3.5)2. The notatio
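A quick way to sanity-check a vertex configuration is to add up the interior angles of the regular faces it lists (180(n − 2)/n degrees for an n-gon): a planar tiling vertex must sum to exactly 360 degrees, while a convex polyhedron vertex sums to less than 360. A short sketch:

```python
# Sum of regular-polygon interior angles for a vertex configuration.
def angle_sum(config):
    return sum(180.0 * (n - 2) / n for n in config)

print(angle_sum([3, 6, 3, 6]))   # 360.0 -> trihexagonal tiling vertex (planar)
print(angle_sum([3, 5, 3, 5]))   # 336.0 -> icosidodecahedron vertex (convex polyhedron)
print(angle_sum([4, 4, 4, 4]))   # 360.0 -> square tiling vertex (planar)
```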
https://en.wikipedia.org/wiki/XML%20for%20Analysis
XML for Analysis (XMLA) is an industry standard for data access in analytical systems, such as online analytical processing (OLAP) and data mining. XMLA is based on other industry standards such as XML, SOAP and HTTP. XMLA is maintained by the XMLA Council, with Microsoft, Hyperion and SAS Institute being the XMLA Council founder members. History The XMLA specification was first proposed by Microsoft as a successor for OLE DB for OLAP in April 2000. By January 2001 it was joined by Hyperion endorsing XMLA. The 1.0 version of the standard was released in April 2001, and in September 2001 the XMLA Council was formed. In April 2002 SAS joined Microsoft and Hyperion as a founding member of the XMLA Council. With time, more than 25 companies joined with their support for the standard. API XMLA consists of only two SOAP methods: Execute and Discover. It was designed in such a way as to preserve simplicity. Execute The Execute method has two parameters: Command - command to be executed. It can be MDX, DMX or SQL. Properties - XML list of command properties such as Timeout, Catalog name, etc. The result of an Execute command can be a Multidimensional Dataset or a Tabular Rowset. Discover The Discover method was designed to model all the discovery methods possible in OLE DB, including various schema rowsets, properties, keywords, etc. The Discover method allows users to specify both what needs to be discovered and the possible restrictions or properties. The result of a Discover call is a rowset. Query language XMLA specifies MDXML as the query language. In the XMLA 1.1 version, the only construct in MDXML is an MDX statement enclosed in the <Statement> tag. Example Below is an example of an XMLA Execute request with an MDX query in the command. <soap:Envelope> <soap:Body> <Execute xmlns="urn:schemas-microsoft-com:xml-analysis"> <Command> <Statement>SELECT Measures.MEMBERS ON COLUMNS FROM Sales</Statement> </Command> <Properties> <PropertyList> <DataSourceInfo/> <Catalog>FoodM
https://en.wikipedia.org/wiki/FASMI
Fast Analysis of Shared Multidimensional Information (FASMI) is an alternative term for OLAP. The term was coined by Nigel Pendse of The OLAP Report (now known as The BI Verdict), because he felt that the 12 rules that Ted Codd used to define OLAP were too controversial and biased (the rules were sponsored by Arbor Software, the company which developed Essbase). Also, Pendse considered that the list of 12 rules was too long, and that the OLAP concept could be defined in only five rules. References Pendse, Nigel (2005), "What is OLAP?", in The BI Verdict, Business Application Research Center, 2009. Exposition of "Fast Analysis of Shared Multidimensional Information" (FASMI). Online analytical processing
https://en.wikipedia.org/wiki/Cecotrope
Cecotropes, also called caecotrophs, caecal pellets, or night feces, are the product of the cecum, a part of the digestive system in mammals of the order Lagomorpha, which includes two families: Leporidae (hares and rabbits), and Ochotonidae (pikas). Cecotropes are passed through the intestines and subsequently reingested for added nutrients in a process known as "cecotrophy", "cecophagy", "pseudorumination", "refection", coprophagia or "coprophagy". Reingestion is also practiced by a few species of rodent (such as the beaver, capybara, and guinea pig), some marsupials (the common ringtail possum (Pseudocheirus peregrinus) and possibly the coppery ringtail possum (Pseudochirops cupreus)) and one species of primate (the sportive lemur (Lepilemur mustelinus)). Production The process by which cecotropes are produced is called "hindgut fermentation". Food passes through the esophagus, stomach, small intestine, where nutrients are initially absorbed ineffectively, and then into the colon. Through reverse peristalsis, the food is forced back into the cecum where it is broken down into simple sugars (i.e. monosaccharides) by bacterial fermentation. The cecotrope then passes through the colon, the anus, and is eliminated by the animal and then reingested. The process occurs 4 to 8 hours after eating. This type of reingestion to obtain more nutrients is similar to the chewing of cud in cattle. Disorder The process of cecotrophy can have irregularities. An animal can fail to assimilate nutrients (vitamins, amino acids) synthesized by microorganisms in the cecotrope. See also Coprophagia Rabbit eating habits References Lagomorphs
https://en.wikipedia.org/wiki/Percy%20John%20Heawood
Percy John Heawood (8 September 1861 – 24 January 1955) was a British mathematician, who concentrated on graph colouring. Life He was the son of the Rev. John Richard Heawood of Newport, Shropshire, and his wife Emily Heath, daughter of the Rev. Joseph Heath of Wigmore, Herefordshire; and a first cousin of Oliver Lodge, whose mother Grace was also a daughter of Joseph Heath. He was educated at Queen Elizabeth's School, Ipswich, and matriculated at Exeter College, Oxford in 1880, graduating B.A. in 1883 and M.A. in 1887. Heawood spent his academic career at Durham University, where he was appointed Lecturer in 1885. He was, successively, Censor of St Cuthbert's Society between 1897 and 1901 succeeding Frank Byron Jevons in the role, Senior Proctor of the university from 1901, Professor in 1910 and Vice-Chancellor between 1926 and 1928. He was awarded an OBE, as Honorary Secretary of the Preservation Fund, for his part in raising £120,000 to prevent Durham Castle from collapsing into the River Wear. Heawood was fond of country pursuits, and one of his interests was Hebrew. His nickname was "Pussy". Durham University awards an annual Heawood Prize to a student graduating in Mathematics whose performance is outstanding in the final year. Works Heawood devoted himself to the four colour theorem and related questions. In 1890 he exposed a flaw in Alfred Kempe's proof, that had been considered as valid for 11 years. The four colour theorem being an open question again, he established the weaker five colour theorem. The four colour theorem itself was finally established by a computer-based proof in 1976. Heawood also studied colouring of maps on higher surfaces and established the upper bound on the chromatic number of such a graph in terms of the connectivity (genus, or number of handles) of the surface. This upper bound was proved only in 1968 to be the actual maximum. Writing in the Journal of the London Mathematical Society, G. A. Dirac wrote: Family Heawood m
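The upper bound mentioned above is usually stated, for a surface of genus g ≥ 1, as H(g) = floor((7 + sqrt(1 + 48g)) / 2); the formula itself is not quoted in the excerpt, so treat it as background, but the short sketch below evaluates it for small genus:

```python
# Heawood number H(g) = floor((7 + sqrt(1 + 48*g)) / 2) for genus g >= 1,
# evaluated exactly with integer arithmetic.
from math import isqrt

def heawood_number(genus):
    return (7 + isqrt(1 + 48 * genus)) // 2

for g in range(1, 6):
    print(g, heawood_number(g))   # 1 -> 7 (torus), 2 -> 8, 3 -> 9, 4 -> 10, 5 -> 11
```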
https://en.wikipedia.org/wiki/Amaurote
Amaurote is a British video game for 8-bit computer systems that was released in 1987 by Mastertronic on their Mastertronic Added Dimension label. The music for the game was written by David Whittaker. Plot From the game's instructions: The city of Amaurote has been invaded by huge, aggressive insects who have built colonies in each of the city's 25 sectors. As the only uninjured army officer left after the invasion (that'll teach you for hiding!) the job falls to you to destroy all the insect colonies. Gameplay The player controls an "Arachnus 4", an armoured fighting-machine that moves on four legs. The player must first select a sector to play in via a map screen and then control the Arachnus as it wanders an isometric (top-down in the Commodore 64 version) view of the cityscape attacking marauding insects and searching for the insect queen using a scanner. The Arachnus attacks by launching bouncing bombs. It can only launch one at a time so if a bomb misses its intended target the player will have to wait until it hits the scenery or bounces against the fence of the play area before firing again. Once the queen has been located, the player can radio-in a "supa-bomb" which can be used to destroy the queen. The player can also radio-in other supplies such as additional bombs and even ask to be pulled out of the combat zone. Extra weaponry costs the player "dosh", the in-game currency. Reception The game was favourably reviewed by Crash magazine who said it was graphically impressive, well designed and fun to play. It was given a 92% overall rating. Zzap!64 were less impressed by the Commodore 64 version which was criticised for dull gameplay and programming bugs. It was rated 39% overall. References External links Amaurote at Atari Mania 1987 video games Action games Amstrad CPC games Atari 8-bit family games Binary Design games Commodore 64 games Fictional populated places Mastertronic games MSX games Single-player video games Video games about insects
https://en.wikipedia.org/wiki/Kruskal%27s%20tree%20theorem
In mathematics, Kruskal's tree theorem states that the set of finite trees over a well-quasi-ordered set of labels is itself well-quasi-ordered under homeomorphic embedding. History The theorem was conjectured by Andrew Vázsonyi and proved by Joseph Kruskal (1960); a short proof was given by Crispin Nash-Williams (1963). It has since become a prominent example in reverse mathematics as a statement that cannot be proved in ATR0 (a second-order arithmetic theory with a form of arithmetical transfinite recursion). In 2004, the result was generalized from trees to graphs as the Robertson–Seymour theorem, a result that has also proved important in reverse mathematics and leads to the even-faster-growing SSCG function which dwarfs TREE(3). A finitary application of the theorem gives the existence of the fast-growing TREE function. Statement The version given here is that proven by Nash-Williams; Kruskal's formulation is somewhat stronger. All trees we consider are finite. Given a tree T with a root, and given vertices v, w, call w a successor of v if the unique path from the root to w contains v, and call w an immediate successor of v if additionally the path from v to w contains no other vertex. Take X to be a partially ordered set. If T1, T2 are rooted trees with vertices labeled in X, we say that T1 is inf-embeddable in T2 and write T1 ≤ T2 if there is an injective map F from the vertices of T1 to the vertices of T2 such that: for all vertices v of T1, the label of v precedes the label of F(v); if w is any successor of v in T1, then F(w) is a successor of F(v); and if w1, w2 are any two distinct immediate successors of v, then the path from F(w1) to F(w2) in T2 contains F(v). Kruskal's tree theorem then states: If X is well-quasi-ordered, then the set of rooted trees with labels in X is well-quasi-ordered under the inf-embeddable order defined above. (That is to say, given any infinite sequence T1, T2, … of rooted trees labeled in X, there is some i < j so that Ti ≤ Tj.) Friedman's work For a countable label set X, Kruskal's tree theorem can be expressed and proven using second-order arithmetic. However, l
https://en.wikipedia.org/wiki/Xerox%20NoteTaker
The Xerox NoteTaker is a portable computer developed at Xerox PARC in Palo Alto, California, in 1978. Although it did not enter production, and only around ten prototypes were built, it strongly influenced the design of the later Osborne 1 and Compaq Portable computers. Development The NoteTaker was developed by a team that included Adele Goldberg, Douglas Fairbairn, and Larry Tesler. It drew heavily on earlier research by Alan Kay, who had previously developed the Dynabook project. While the Dynabook was a concept for a transportable computer that was impossible to implement with available technology, the NoteTaker was intended to show what could be done. Description The computer employed what was then highly advanced technology, including a built-in monochrome display monitor, a floppy disk drive and a mouse. It had 256 KB of RAM, then a very large amount, and used a 5 MHz Intel 8086 CPU. It used a version of the Smalltalk operating system that was originally written for the Xerox Alto computer, which pioneered the graphical user interface. The NoteTaker fitted into a case similar in form to that of a portable sewing machine; the keyboard folded out from the bottom to reveal the monitor and floppy drive. The form factor was later used on the highly successful "luggable" computers, including the Osborne 1 and Compaq Portable. However, these later models were about half as heavy as the NoteTaker, which weighed . See also IBM 5100 Osborne 1 Kaypro Compaq Portable Portable Computers References External links Firmware - memos - schematics for NoteTaker Early laptops Mobile computers Portable computers NoteTaker Prototypes
https://en.wikipedia.org/wiki/Fano%27s%20inequality
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook. It is used to find a lower bound on the error probability of any decoder as well as the lower bounds for minimax risks in density estimation. Let the random variables X and Y represent input and output messages with a joint probability P(x, y). Let e represent an occurrence of error; i.e., that X ≠ X', with X' (a function of Y) being an approximate version of X. Fano's inequality is H(X|Y) ≤ H_b(P_e) + P_e log(|supp(X)| − 1), where supp(X) denotes the support of X, H(X|Y) is the conditional entropy, P_e = P(X ≠ X') is the probability of the communication error, and H_b is the corresponding binary entropy. Proof Define an indicator random variable E, that indicates the event that our estimate X' is in error: E = 1 if X' ≠ X and E = 0 if X' = X. Consider H(E, X | X'). We can use the chain rule for entropies to expand this in two different ways: H(E, X | X') = H(X | X') + H(E | X, X') and H(E, X | X') = H(E | X') + H(X | E, X'). Equating the two: H(X | X') + H(E | X, X') = H(E | X') + H(X | E, X'). Expanding the right most term, H(X | E, X') = P(E = 0) H(X | X', E = 0) + P(E = 1) H(X | X', E = 1). Since E = 0 means X = X'; being given the value of X' allows us to know the value of X with certainty. This makes the term H(X | X', E = 0) = 0. On the other hand, E = 1 means that X ≠ X', hence given the value of X', we can narrow X down to one of |supp(X)| − 1 different values, allowing us to upper bound the conditional entropy: H(X | X', E = 1) ≤ log(|supp(X)| − 1). Hence H(X | E, X') ≤ P_e log(|supp(X)| − 1). The other term satisfies H(E | X') ≤ H(E), because conditioning reduces entropy. Because of the way E is defined, H(E) = H_b(P_e), meaning that H(E | X') ≤ H_b(P_e). Putting it all together (and using H(E | X, X') ≥ 0), H(X | X') ≤ H_b(P_e) + P_e log(|supp(X)| − 1). Because X → Y → X' is a Markov chain, we have I(X; X') ≤ I(X; Y) by the data processing inequality, and hence H(X | X') ≥ H(X | Y), giving us H(X | Y) ≤ H_b(P_e) + P_e log(|supp(X)| − 1). Intuition Fano's inequality can be interpreted as a way of dividing the uncertainty of a conditional distribution into two questions given an arbitrary predictor. The first question, corresponding to the term H_b(P_e), relates to the uncertainty of the predictor. If the prediction is correct, there is no more uncertainty remaining. If the prediction is incorrect, the uncertainty of any discrete distrib
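Numerically, the inequality is easy to evaluate once an alphabet size and error probability are fixed; the sketch below (with assumed values and base-2 logarithms, so the result is in bits) computes the right-hand side that upper-bounds H(X|Y):

```python
# Evaluate the right-hand side of Fano's inequality for assumed values.
from math import log2

def binary_entropy(p):
    # H_b(p) in bits; 0 by convention at the endpoints
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

alphabet_size = 8    # |supp(X)|, assumed
p_error = 0.1        # probability of a decoding error, assumed

fano_rhs = binary_entropy(p_error) + p_error * log2(alphabet_size - 1)
print(f"H(X|Y) can be at most {fano_rhs:.3f} bits when P_e = {p_error}")
```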
https://en.wikipedia.org/wiki/Atomic%20layer%20deposition
Atomic layer deposition (ALD) is a thin-film deposition technique based on the sequential use of a gas-phase chemical process; it is a subclass of chemical vapour deposition. The majority of ALD reactions use two chemicals called precursors (also called "reactants"). These precursors react with the surface of a material one at a time in a sequential, self-limiting, manner. A thin film is slowly deposited through repeated exposure to separate precursors. ALD is a key process in fabricating semiconductor devices, and part of the set of tools for synthesising nanomaterials. Introduction During atomic layer deposition, a film is grown on a substrate by exposing its surface to alternate gaseous species (typically referred to as precursors or reactants). In contrast to chemical vapor deposition, the precursors are never present simultaneously in the reactor, but they are inserted as a series of sequential, non-overlapping pulses. In each of these pulses the precursor molecules react with the surface in a self-limiting way, so that the reaction terminates once all the available sites on the surface are consumed. Consequently, the maximum amount of material deposited on the surface after a single exposure to all of the precursors (a so-called ALD cycle) is determined by the nature of the precursor-surface interaction. By varying the number of cycles it is possible to grow materials uniformly and with high precision on arbitrarily complex and large substrates. ALD is a deposition method with great potential for producing very thin, conformal films with control of the thickness and composition of the films possible at the atomic level. A major driving force for the recent interest is the prospective seen for ALD in scaling down microelectronic devices according to Moore's law. ALD is an active field of research, with hundreds of different processes published in the scientific literature, though some of them exhibit behaviors that depart from that of an ideal ALD process. Cu
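Because each ALD cycle deposits a fixed, self-limited amount of material, film thickness is planned simply as cycles × growth-per-cycle. The sketch below is an editor's illustration of that bookkeeping only, not a process recipe; the growth-per-cycle figure of roughly 0.1 nm per cycle for Al2O3 is a commonly quoted ballpark and should be treated as an assumption.

```python
import math

def ald_cycles_for_thickness(target_nm: float, growth_per_cycle_nm: float) -> int:
    """Number of ALD cycles needed to reach a target thickness.

    Each cycle is self-limiting, so the amount deposited per cycle is
    (to first order) a material/process constant rather than a function
    of exposure time.
    """
    return math.ceil(target_nm / growth_per_cycle_nm)

# Example: a nominal 10 nm film with an assumed growth of ~0.1 nm per cycle
print(ald_cycles_for_thickness(10.0, 0.1))  # -> 100 cycles
```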
https://en.wikipedia.org/wiki/Xavier%20Guichard
Xavier Guichard (1870–1947) was a French Director of Police, archaeologist and writer. His 1936 book Eleusis Alesia: Enquête sur les origines de la civilisation européenne is an early example of speculative thinking concerning Earth mysteries, based on his observations of apparent alignments between Alesia-like place names on a map of France. His theories are analogous to those of his near-contemporary in the United Kingdom, Alfred Watkins, concerning Ley lines. Xavier Guichard appears as a character in the novels of Georges Simenon, where he is the superior of the fictional detective Jules Maigret. See also 366 geometry References 1870 births 1947 deaths 20th-century French archaeologists French male non-fiction writers Sacred geometry
https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW%20primality%20test
The Baillie–PSW primality test is a probabilistic or possibly deterministic primality testing algorithm that determines whether a number is composite or is a probable prime. It is named after Robert Baillie, Carl Pomerance, John Selfridge, and Samuel Wagstaff. The Baillie–PSW test is a combination of a strong Fermat probable prime test to base 2 and a standard or strong Lucas probable prime test. The Fermat and Lucas tests each have their own list of pseudoprimes, that is, composite numbers that pass the test. For example, the first ten strong pseudoprimes to base 2 are 2047, 3277, 4033, 4681, 8321, 15841, 29341, 42799, 49141, and 52633. The first ten strong Lucas pseudoprimes (with Lucas parameters (P, Q) defined by Selfridge's Method A) are 5459, 5777, 10877, 16109, 18971, 22499, 24569, 25199, 40309, and 58519. There is no known overlap between these lists, and there is even evidence that the numbers in them tend to be of different kinds; in fact, even with the standard rather than the strong Lucas test there is no known overlap. For example, Fermat pseudoprimes to base 2 tend to fall into the residue class 1 (mod m) for many small m, whereas Lucas pseudoprimes tend to fall into the residue class −1 (mod m). As a result, a number that passes both a strong Fermat base 2 test and a strong Lucas test is very likely to be prime. With other choices of base or Lucas parameters, however, a composite number can pass both kinds of test; for example, n = 5777 is a strong pseudoprime to base 76 and is also a strong Lucas pseudoprime. No composite number below 2^64 (approximately 1.845·10^19) passes the strong or standard Baillie–PSW test, a result that was also separately verified by Charles Greathouse in June 2011. Consequently, this test is a deterministic primality test on numbers below that bound. There are also no known composite numbers above that bound that pass the test; in other words, there are no known Baillie–PSW pseudoprimes. In 1980, the authors Pomerance, Selfridge, and Wagstaff offered $30 for
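A self-contained sketch of the test as described above, assembled by the editor from the standard published descriptions: trial division by a few small primes, a strong Fermat (Miller–Rabin) test to base 2, Selfridge's Method A to pick the Lucas parameters, and a standard Lucas probable prime test (U_{n+1} ≡ 0 mod n). It is an illustration, not a reference implementation.

```python
import math

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def strong_prp_base2(n: int) -> bool:
    """Strong Fermat probable prime test to base 2 (odd n > 2)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def lucas_prp(n: int) -> bool:
    """Standard Lucas probable prime test, parameters by Selfridge's Method A
    (assumes n is odd, not a perfect square, and has no tiny prime factors)."""
    D = 5
    while True:
        j = jacobi(D, n)
        if j == -1:
            break
        if j == 0:
            # gcd(|D|, n) > 1; with small primes already removed, n is composite
            return False
        D = -D - 2 if D > 0 else -D + 2        # 5, -7, 9, -11, 13, ...
    P, Q = 1, (1 - D) // 4

    # Compute U_{n+1} (mod n) by binary index-doubling on the bits of n + 1.
    U, V, Qk = 1, P, Q % n                     # start at index 1: U_1, V_1, Q^1
    for bit in bin(n + 1)[3:]:                 # remaining bits after the leading 1
        U, V, Qk = U * V % n, (V * V - 2 * Qk) % n, Qk * Qk % n   # k -> 2k
        if bit == '1':                         # 2k -> 2k + 1
            U, V = (P * U + V) % n, (D * U + P * V) % n
            U = (U + n) // 2 if U % 2 else U // 2      # halve modulo odd n
            V = (V + n) // 2 if V % 2 else V // 2
            Qk = Qk * Q % n
    return U == 0                              # Lucas PRP iff U_{n+1} = 0 (mod n)

def baillie_psw(n: int) -> bool:
    """Baillie–PSW probable prime test (no composite below 2^64 is known to pass)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    if not strong_prp_base2(n):
        return False
    if math.isqrt(n) ** 2 == n:                # a square has no D with (D/n) = -1
        return False
    return lucas_prp(n)

if __name__ == "__main__":
    # quick sanity check against naive trial division
    def naive(n): return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))
    assert all(baillie_psw(k) == naive(k) for k in range(2, 10_000))
```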
https://en.wikipedia.org/wiki/Chip%20art
Chip art, also known as silicon art, chip graffiti or silicon doodling, refers to microscopic artwork built into integrated circuits, also called chips or ICs. Since ICs are printed by photolithography, not constructed a component at a time, there is no additional cost to include features in otherwise unused space on the chip. Designers have used this freedom to put all sorts of artwork on the chips themselves, from designers' simple initials to rather complex drawings. Given the small size of chips, these figures cannot be seen without a microscope. Chip graffiti is sometimes called the hardware version of software easter eggs. Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived. A 1984 revision of the US copyright law (the Semiconductor Chip Protection Act of 1984) made all chip masks automatically copyrighted, with exclusive rights to the creator, and similar rules apply in most other countries that manufacture ICs. Since an exact copy is now automatically a copyright violation, the doodles serve no useful purpose. Creating chip art Integrated Circuits are constructed from multiple layers of material, typically silicon, silicon dioxide (glass), and aluminum. The composition and thickness of these layers give them their distinctive color and appearance. These elements created an irresistible palette for IC design and layout engineers. The creative process involved in the design of these chips, a strong sense of pride in their work, and an artistic temperament combined compels people to want to mark their work as their own. It is very common to find initials, or groups of initials on chips. This is the design engineer's way of "signing" his or her work. Often this creative artist's instinct extends to the inclusion of small pictures or icons
https://en.wikipedia.org/wiki/Rise%20over%20thermal
In wireless communication systems, the rise over thermal (ROT) is the ratio between the total interference power received at a base station and the thermal noise. The ROT is a measure of congestion in a cellular telephone network. The acceptable level of ROT is often used to define the capacity of systems using CDMA (code-division multiple access). References Wireless networking
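In symbols, an editor's restatement of the definition above (I_total and N_0 are illustrative names for the total received interference power and the thermal noise power; the ratio is commonly quoted in decibels):

```latex
\mathrm{RoT} \;=\; \frac{I_{\text{total}}}{N_{0}},
\qquad
\mathrm{RoT}_{\mathrm{dB}} \;=\; 10 \log_{10}\!\left(\frac{I_{\text{total}}}{N_{0}}\right).
```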
https://en.wikipedia.org/wiki/Invader%20potential
Ecologically, invader potential is the qualitative and quantitative measures of a given invasive species probability to invade a given ecosystem. This is often seen through climate matching. There are many reasons why a species may invade a new area. The term invader potential may also be interchangeable with invasiveness. Invader potential is a large threat to global biodiversity. It has been shown that there is an ecosystem function loss due to the introduction of species in areas they are not native to. Invaders are species that, through biomass, abundance, and strong interactions with natives, have significantly altered the structure and composition of the established community. This differs greatly from the term "introduced", which merely refers to species that have been introduced to an environment, disregarding whether or not they have created a successful establishment.1 They are simply organisms that have been accidentally, or deliberately, placed into an unfamiliar area .2 Many times, in fact, species do not have a strong impact on the introduced habitat. This can be for a variety of reasons; either the newcomers are not abundant or because they are small and unobtrusive.1 Understanding the mechanisms of invader potential is important to understanding why species relocate and to predict future invasions. There are three predicted reasons as to why species invade an area. They are as follows: adaptation to physical environment, resource competition and/or utilization, and enemy release. Some of these reasons as to why species move seem relatively simple to understand. For example, species may adapt to the new physical environment through having great phenotypic plasticity and environmental tolerance. Species with high rates of these find it easier to adapt to new environments. In terms of resources, those with low resource requirements thrive in unknown areas more than those with complex resource needs. This is shown directly through Tilman's R* rule. Tho
https://en.wikipedia.org/wiki/Hierarchy%20%28mathematics%29
In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements. Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where whenever we can find some other number so that . That is, is bigger than only because we can get to from using . This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation by writing . A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory. Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies. Related terminology Individual elements of a hierarchy are often called levels and a hierarchy is said to be infinite if it has infinitely many distinct levels but said to collapse if it has only finitely many distinct levels. Examp
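Editor's note: the natural-number example in the paragraph above lost its symbols in extraction; the intended preorder is the usual additive one, shown here in LaTeX with variable names chosen for illustration.

```latex
% Natural numbers: a \le b exactly when some k gets you from a to b by addition.
a \le b \;\iff\; \exists\, k \in \mathbb{N}:\; a + k = b.
% The same recipe works in any commutative monoid (M, +):
x \le y \;\iff\; \exists\, z \in M:\; x + z = y.
% In \mathbb{Z} this ordering trivializes, since z = y - x always solves x + z = y,
% which is why the integers need a more sophisticated hierarchical structure.
```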
https://en.wikipedia.org/wiki/IBM%203705%20Communications%20Controller
The IBM 3705 Communications Controller is a simple computer which attaches to an IBM System/360 or System/370. Its purpose is to connect communication lines to the mainframe channel. It was a first communications controller of the popular IBM 37xx series. It was announced in March 1972. Designed for semiconductor memory which was not ready at the time of announcement, the 3705-I had to use 1.2 microsecond core storage; the later 3705-II uses 1.0 microsecond SRAM. Solid Logic Technology components, similar to those in S/370, were used. The 3705 normally occupies a single frame two feet wide and three feet deep. Up to three expansion frames can be attached for a theoretical capacity of 352 half-duplex lines and two independent channel adapters. The 3704 is an entry level version of the 3705 with limited features. Purpose IBM intended it to be used in three ways: Emulation of the older IBM 2703 Communications Controller and its predecessors. The relevant software is the Emulation Program or EP. Connection of Systems Network Architecture (SNA) devices to a mainframe. The relevant software is Network Control Program (NCP). When used in this fashion, the 3705 is considered an SNA PU4. Combining the two methods above in a configuration is called a Partitioned Emulation Program or PEP. Architecture The storage word length is 16 bits. The registers have the same width as the address bus. Their length varies between 16, 18 and 20 bits depending on the amount of storage installed. A particular interrupt level has eight registers. Register zero is the program counter which gave the address of the next instruction to be executed; the other seven are accumulators. The four odd-numbered accumulators can be addressed as eight single-byte accumulators. Instructions are fairly simple. Most are register-to-register or register-immediate instructions which execute in a single memory cycle. There are eight storage reference instructions which require two or three storage cycle
https://en.wikipedia.org/wiki/List%20of%20computer%20worms
See also Timeline of notable computer viruses and worms Comparison of computer viruses List of trojan horses References Computer worms Worms
https://en.wikipedia.org/wiki/MSN%20Games
MSN Games (also known as Zone.com - formerly known as The Village, Internet Gaming Zone, MSN Gaming Zone, and MSN Games by Zone.com) is a casual gaming web site, with single player, multiplayer, PC download, and social casino video games. Games are available in free online, trial, and full feature pay-to-play versions. MSN Games is a part of Xbox Game Studios, associated with the MSN portal, and is owned by Microsoft, headquartered in Redmond, Washington. History The first version of the site, which was then called "The Village", was founded by Kevin Binkley, Ted Griggs, and Hoon Im. In 1996, Steve Murch, an employee of Microsoft, convinced Bill Gates and Steve Ballmer to acquire the small online game site, then owned by Electric Gravity. The site was rebranded to "Internet Gaming Zone" and launched in 1996. It started with a handful of card and board games like Hearts, Spades, Checkers, Backgammon, and Bridge. For the following 5 years, the Internet Gaming Zone would be renamed several times and would increase in popularity with the introduction of popular retail- and MMORPG-games, such as MechWarrior, Rainbow Six, UltraCorps, Age of Empires, Asheron's Call and Fighter Ace. The website also featured a community forum which was set-up in 2006. This lasted until the closure of MSN Groups in 2009. Microsoft announced in July 2019 that it would be shutting down the Internet series of games built into Windows operating systems. Windows XP and ME games were shut down on July 31, 2019, while the remaining games on Windows 7 were shut down on January 22, 2020 (a little over a week after Microsoft ceased support for Windows 7). CD-ROM matchmaking MSN Games announced in early 2006 the retiring of support for CD-ROM games, chat lobbies, the ZoneFriends client and the Member Plus program, scheduled for June 19, 2006: "...as of June 19, 2006, we will be retiring our CD-ROM matchmaking service, along with the original versions of several classic card and board games: Cl
https://en.wikipedia.org/wiki/AMD%20FireMV
AMD FireMV, formerly ATI FireMV, is a brand name for graphics cards marketed as multi-display 2D video cards, with 3D capabilities the same as the low-end Radeon graphics products. It competes directly with Matrox professional video cards. FireMV cards aim at corporate environments that require several displays attached to a single computer. FireMV cards are offered either with a dual GPU, for a total of four display outputs via a VHDCI connector, or with a single GPU, for a total of two display outputs via a DMS-59 connector. FireMV cards are available for PCI and PCI Express interfaces. Although these are marketed by ATI as mainly 2D cards, the FireMV 2250 cards support OpenGL 2.0, since they are based on the RV516 GPU found in the Radeon X1000 Series released in 2005. The FireMV 2260 is the first video card to carry dual DisplayPort outputs in the workstation 2D graphics market, sporting DirectX 10.1 support. Chipset table See also AMD Eyefinity – introduced with Radeon HD 5000 Series in September 2009 List of AMD graphics processing units References External links FireMV series page at ATI https://www.amd.com/Documents/ATI_FireMV_2260_Data_Sheet.pdf ATI Technologies products Graphics cards Multi-monitor
https://en.wikipedia.org/wiki/Tube%20lemma
In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact. Statement The lemma uses the following terminology: If and are topological spaces and is the product space, endowed with the product topology, a slice in is a set of the form for . A tube in is a subset of the form where is an open subset of . It contains all the slices for . Using the concept of closed maps, this can be rephrased concisely as follows: if is any topological space and a compact space, then the projection map is closed. Examples and properties 1. Consider in the product topology, that is the Euclidean plane, and the open set The open set contains but contains no tube, so in this case the tube lemma fails. Indeed, if is a tube containing and contained in must be a subset of for all which means contradicting the fact that is open in (because is a tube). This shows that the compactness assumption is essential. 2. The tube lemma can be used to prove that if and are compact spaces, then is compact as follows: Let be an open cover of . For each , cover the slice by finitely many elements of (this is possible since is compact, being homeomorphic to ). Call the union of these finitely many elements By the tube lemma, there is an open set of the form containing and contained in The collection of all for is an open cover of and hence has a finite subcover . Thus the finite collection covers . Using the fact that each is contained in and each is the finite union of elements of , one gets a finite subcollection of that covers . 3. By part 2 and induction, one can show that the finite product of compact spaces is compact. 4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products. Proof The tube lemma follows from the generalized tube lemma by taking and It therefore
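Editor's note: the statement and the counterexample above lost their formulas in extraction. In the standard textbook form (the set N below is the editor's name for the open set of Example 1):

```latex
% Tube lemma. Let X be a topological space and Y a compact space. If N is an
% open subset of X \times Y containing the slice \{x\} \times Y, then there is
% an open set U \subseteq X with
x \in U \quad\text{and}\quad U \times Y \subseteq N,
% i.e. the tube U \times Y around the slice still fits inside N.

% Example 1 (compactness is needed): in \mathbb{R} \times \mathbb{R} the open set
N = \{\, (x, y) : \lvert xy \rvert < 1 \,\}
% contains the slice \{0\} \times \mathbb{R} but contains no tube
% U \times \mathbb{R} around it.
```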
https://en.wikipedia.org/wiki/Norpak
Norpak Corporation was a company headquartered in Kanata, Ontario, Canada, that specialized in the development of systems for television-based data transmission. In 2010, it was acquired by Ross Video Ltd. of Iroquois and Ottawa, Ontario. Norpak developed the NABTS (North American Broadcast Teletext Standard) protocol for teletext in the 1980s, as an improved version to the then-incumbent World System Teletext, or WST, protocol. NABTS was designed to improve graphics capability over WST, but required a much more complex and expensive decoder, making NABTS somewhat of a market failure for teletext. However, NABTS still thrives as a data protocol for embedding almost any form of digital data within the VBI of an analog video signal. Norpak's products, now part of and complementary to the Ross Video line, include equipment for embedding data in a television or video signal such as for closed captioning, XDS, V-chip data, non-teletext NABTS data for closed-circuit data transmission, and other data protocols for VBI transmission. References Electronics companies of Canada Television technology Defunct manufacturing companies of Canada Manufacturing companies based in Ontario Defunct electronics companies Manufacturing companies disestablished in 2010 2010 disestablishments in Canada
https://en.wikipedia.org/wiki/HLH%20Orion
The Orion was a series of 32-bit super-minicomputers designed and produced in the 1980s by High Level Hardware Limited (HLH), a company based in Oxford, UK. The company produced four versions of the machine: The original Orion, sometimes referred to as the "Microcodeable Orion". The Orion 1/05, in which the microcodeable CPU was replaced with the much faster Fairchild Clipper RISC C-100 processor providing approximately 5.5 MIPS of integer performance and 1 Mflop of double precision floating point performance. The Orion 1/07 which offered approximately 33% greater performance over the 1/05 (7.3 MIPS and 1.33 Mflops). The Orion 1/10 based on a later generation C-300 Clipper from the Advanced Processor Division at Intergraph Corporation that required extensive cooling. The Orion 1/10 offered a further 30% improvement for integer and single precision floating point operations and over 150% improvement for double precision floating point (10 MIPS and 3 Mflops). All four machines employed the same I/O sub-system. Background High Level Hardware was an independent British company formed in early 1982 by David G. Small and Timothy B. Robinson. David Small was previously a founder shareholder and director of Oxford-based Research Machines Limited. Both partners were previously senior members of Research Machine's Special Projects Group. In 1984, as a result of that research, High Level Hardware launched the Orion, a high performance, microcodeable, UNIX superminicomputer targeted particularly at scientific applications such as mathematical modeling, artificial intelligence and symbolic algebra. In April 1987 High Level Hardware introduced a series of Orions based upon the Fairchild Clipper processor but abandoned the hardware market in late 1989 to concentrate on high-end Apple Macintosh sales. Microcodeable Orion The original Orion employed a processor architecture based on Am2900-series devices. This CPU was novel in that its microcode was writable; in other w
https://en.wikipedia.org/wiki/Diagrid
A diagrid (a portmanteau of diagonal grid) is a framework of diagonally intersecting metal, concrete, or wooden beams that is used in the construction of buildings and roofs. It requires less structural steel than a conventional steel frame. Hearst Tower in New York City, designed by Norman Foster, uses 21 percent less steel than a standard design. The diagrid obviates the need for columns and can be used to make large column-free expanses of roofing. Another iconic building designed by Foster, 30 St Mary Axe, in London, UK, known as "The Gherkin", also uses the diagrid system. British architect Ian Ritchie wrote in 2012: Buildings utilizing diagrid Shukhov Tower in Polibino, Polibino, Russia (1896) Shukhov Rotunda at the All-Russia exhibition, Nizhny Novgorod, Russia (1896) Shukhov Tower, Moscow, Russia Hearst Tower, New York, USA 30 St Mary Axe, London, England 1 The Avenue, Manchester, England CCTV Headquarters, Beijing, China The Bow, Calgary, Canada Seattle Central Library, Seattle, USA Capital Gate, Abu Dhabi, United Arab Emirates Aldar headquarters, Abu Dhabi, United Arab Emirates Guangzhou International Finance Center, Guangzhou, China Queen Elizabeth II Great Court at the British Museum, London, England Nagoya Dome, Nagoya, Japan Westhafen Tower, Frankfurt, Germany Merdeka 118, Kuala Lumpur, Malaysia MyZeil, Frankfurt, Germany The Crystal, Copenhagen, Denmark United Steelworkers Building, Pittsburgh, USA Tornado Tower, Doha, Qatar Newfoundland Quay, London, England Lotte World Tower, Seoul, Republic of Korea Atrio Towers, Bogotá, Colombia See also References Bibliography Design and construction of steel diagrid structures by K. Moon, School of Architecture, Yale University The diagrid system of Hearst Tower by the Steel Institute of New York Building Construction Building engineering Roofs Structural system
https://en.wikipedia.org/wiki/Algorithmic%20state%20machine
The algorithmic state machine (ASM) is a method for designing finite state machines (FSMs) originally developed by Thomas E. Osborne at the University of California, Berkeley (UCB) since 1960, introduced to and implemented at Hewlett-Packard in 1968, formalized and expanded since 1967 and written about by Christopher R. Clare since 1970. It is used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but more structured and, thus, easier to understand. An ASM chart is a method of describing the sequential operations of a digital system. ASM method The ASM method is composed of the following steps: 1. Create an algorithm, using pseudocode, to describe the desired operation of the device. 2. Convert the pseudocode into an ASM chart. 3. Design the datapath based on the ASM chart. 4. Create a detailed ASM chart based on the datapath. 5. Design the control logic based on the detailed ASM chart. ASM chart An ASM chart consists of an interconnection of four types of basic elements: state name, state box, decision box, and conditional outputs box. An ASM state, represented as a rectangle, corresponds to one state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box. State Name: The name of the state is indicated inside the circle and the circle is placed in the top left corner or the name is placed without the circle. State Box: The output of the state is indicated inside the rectangle box Decision Box: A diamond indicates that the stated condition/expression is to be tested and the exit path is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (Finite State Machine). An ASM condition check, indicated by a diamond with one input and two outputs (for true and false), is used to conditionally transfer between two State Boxes, to another Decision Box, or to a Conditional Output Box. The decision box contains the stated condition expressio
https://en.wikipedia.org/wiki/Radio%20access%20network
A radio access network (RAN) is part of a mobile telecommunication system implementing a radio access technology (RAT). Conceptually, it resides between a device such as a mobile phone, a computer, or any remotely controlled machine and provides connection with its core network (CN). Depending on the standard, mobile phones and other wireless connected devices are varyingly known as user equipment (UE), terminal equipment, mobile station (MS), etc. RAN functionality is typically provided by a silicon chip residing in both the core network as well as the user equipment. See the following diagram: CN / ⧵ / ⧵ RAN RAN / ⧵ / ⧵ UE UE UE UE Examples of RAN types are: GRAN: GSM GERAN: essentially the same as GRAN but specifying the inclusion of EDGE packet radio services UTRAN: UMTS E-UTRAN: The Long Term Evolution (LTE) high speed and low latency It is also possible for a single handset/phone to be simultaneously connected to multiple RANs. Handsets capable of this are sometimes called dual-mode handsets. For instance it is common for handsets to support both GSM and UMTS (a.k.a. "3G") RATs. Such devices seamlessly transfer an ongoing call between different radio access networks without the user noticing any disruption in service. See also AirHop Communications IP connectivity access network C-RAN Access network References Radio technology
https://en.wikipedia.org/wiki/Work%20%28thermodynamics%29
Thermodynamic work is one of the principal processes by which a thermodynamic system can interact with its surroundings and exchange energy. This exchange results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work, to lift a weight, for example, or cause changes in electromagnetic, or gravitational variables. The surroundings also can perform work on a thermodynamic system, which is measured by an opposite sign convention. For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume or magnetic flux density and magnetization. In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power, measured in joules per second, and denoted with the unit watt (W). History 1824 Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power for work. Specifically, according to Carnot: We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised. 1845 In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water. In this experiment, the motion of the paddle wheel, through agitation and friction, heated the body of water, s
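As a concrete instance of the conjugate-pair bookkeeping described above, the most familiar case is pressure–volume work (an editor's addition; sign conventions differ between texts, and the one below counts work done by the system as positive):

```latex
% Quasi-static pressure--volume work done by the system on its surroundings:
\delta W = P \, dV, \qquad W = \int_{V_1}^{V_2} P \, dV .
% Other conjugate pairs (e.g. magnetic field and magnetization) contribute
% analogous terms to the total thermodynamic work, with conventions that
% vary from text to text.
```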
https://en.wikipedia.org/wiki/Gonioreflectometer
A gonioreflectometer is a device for measuring a bidirectional reflectance distribution function (BRDF). The device consists of a light source illuminating the material to be measured and a sensor that captures light reflected from that material. The light source should be able to illuminate and the sensor should be able to capture data from a hemisphere around the target. The hemispherical rotation dimensions of the sensor and light source are the four dimensions of the BRDF. The 'gonio' part of the word refers to the device's ability to measure at different angles. Several similar devices have been built and used to capture data for similar functions. Most of these devices use a camera instead of the light intensity-measuring sensor to capture a two-dimensional sample of the target. Examples include: a spatial gonioreflectometer for capturing the SBRDF (McAllister, 2002). a camera gantry for capturing the light field (Levoy and Hanrahan, 1996). an unnamed device for capturing the bidirectional texture function (Dana et al., 1999). References Dana, Kristin et al. 1999. Reflectance and Texture of Real-World Surfaces. in ACM Transactions on Graphics. Volume 18, Issue 1 (January, 1999). New York, NY, USA: ACM Press. Pages 1-34. Foo, Sing Choong. 1997. A Gonioreflectometer for measuring the bidirectional reflectance of materials for use in illumination computations. Masters thesis. Cornell University. Ithaca, New York, USA. Levoy, Marc & Hanrahan, Pat. 1996. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. McAllister, David. 2002. A Generalized Surface Appearance Representation for Computer Graphics. PhD dissertation. University of North Carolina at Chapel Hill, Department of Computer Science. Chapel Hill, USA. 118p. Computer graphics Photometry Measuring instruments
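For reference, the quantity the instrument samples over its two hemispherical rotations is the BRDF, conventionally defined as follows (standard definition, added by the editor for context):

```latex
f_r(\omega_i, \omega_o) \;=\;
\frac{dL_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i \, d\omega_i}
% ratio of the reflected radiance leaving in direction \omega_o to the
% incident irradiance arriving from direction \omega_i; the two directions
% (two angles each) are the four dimensions mentioned above.
```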
https://en.wikipedia.org/wiki/X86%20memory%20models
In computing, the x86 memory models are a set of six different memory models of the x86 CPU operating in real mode which control how the segment registers are used and the default size of pointers. Memory segmentation Four registers are used to refer to four segments on the 16-bit x86 segmented memory architecture. DS (data segment), CS (code segment), SS (stack segment), and ES (extra segment). Another 16-bit register can act as an offset into a given segment, and so a logical address on this platform is written segment:offset, typically in hexadecimal notation. In real mode, in order to calculate the physical address of a byte of memory, the hardware shifts the contents of the appropriate segment register 4 bits left (effectively multiplying by 16), and then adds the offset. For example, the logical address 7522:F139 yields the 20-bit physical address: Note that this process leads to aliasing of memory, such that any given physical address has up to 4096 corresponding logical addresses. This complicates the comparison of pointers to different segments. Pointer sizes Pointer formats are known as near, far, or huge. Near pointers are 16-bit offsets within the reference segment, i.e. DS for data and CS for code. They are the fastest pointers, but are limited to point to 64 KB of memory (to the associated segment of the data type). Near pointers can be held in registers (typically SI and DI). mov bx, word [reg] mov ax, word [bx] mov dx, word [bx+2] Far pointers are 32-bit pointers containing a segment and an offset. To use them the segment register ES is used by using the instruction les [reg]|[mem],dword [mem]|[reg]. They may reference up to 1024 KiB of memory. Note that pointer arithmetic (addition and subtraction) does not modify the segment portion of the pointer, only its offset. Operations which exceed the bounds of zero or 65535 (0xFFFF) will undergo modulo 64K operation just as any normal 16-bit operation. For example, if the segment regi
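The worked example above lost its result in extraction; the arithmetic is segment × 16 + offset, so 7522:F139 maps to physical address 84359h. A small sketch of the calculation (editor's illustration, not production code):

```python
def real_mode_physical(segment: int, offset: int) -> int:
    """20-bit real-mode physical address: (segment << 4) + offset."""
    # Mask to 20 bits: on the 8086 the address wraps at 1 MiB.
    return ((segment << 4) + offset) & 0xFFFFF

# The article's example: logical address 7522:F139
assert real_mode_physical(0x7522, 0xF139) == 0x84359
print(hex(real_mode_physical(0x7522, 0xF139)))   # 0x84359
```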
https://en.wikipedia.org/wiki/Motorola%206847
The MC6847 is a Video Display Generator (VDG) first introduced by Motorola in 1978 and used in the TRS-80 Color Computer, Dragon 32/64, Laser 200, TRS-80 MC-10/Matra Alice, NEC PC-6000 series, Acorn Atom, and the APF Imagination Machine, among others. It is a relatively simple display generator compared to other display chips of the time. It is capable of displaying alphanumeric text, semigraphics and raster graphics contained within a roughly square display matrix 256 pixels wide by 192 lines high. The ROM includes a 5 x 7 pixel font, compatible with 6-bit ASCII. Effects such as inverse video or colored text (green on dark green; orange on dark orange) are possible. The hardware palette is composed of twelve colors: black, green, yellow, blue, red, buff (almost-but-not-quite white), cyan, magenta, and orange (two extra colors, dark green and dark orange, are the ink colours for all alphanumeric text mode characters, and a light orange color is available as an alternative to green as the background color). According to the MC6847 datasheet, the colors are formed by the combination of three signals: with 6 possible levels, (or with 3 possible levels) and (or with 3 possible levels), based on the YPbPr colorspace, and then converted for output into a NTSC analog signal. The low display resolution is a necessity of using television sets as display monitors. Making the display wider risked cutting off characters due to overscan. Compressing more dots into the display window would easily exceed the resolution of the television and be useless. Variants According to the datasheets, there are non-interlaced (6847) and interlaced (6847Y) variants, plus the 6847T1 (non-interlaced only). The chips can be found with ceramic (L suffix), plastic (P suffix) or CERDIP (S suffix) packages. Die pictures Signal levels and color palette The chip outputs a NTSC-compatible progressive scan signal composed of one field of 262 lines 60 times per second. According to the MC6847
https://en.wikipedia.org/wiki/Phase-shift%20oscillator
A phase-shift oscillator is a linear electronic oscillator circuit that produces a sine wave output. It consists of an inverting amplifier element such as a transistor or op amp with its output fed back to its input through a phase-shift network consisting of resistors and capacitors in a ladder network. The feedback network 'shifts' the phase of the amplifier output by 180 degrees at the oscillation frequency to give positive feedback. Phase-shift oscillators are often used at audio frequency as audio oscillators. The filter produces a phase shift that increases with frequency. It must have a maximum phase shift of more than 180 degrees at high frequencies so the phase shift at the desired oscillation frequency can be 180 degrees. The most common phase-shift network cascades three identical resistor-capacitor stages that produce a phase shift of zero at low frequencies and 270° at high frequencies. The first integrated circuit was a phase shift oscillator invented by Jack Kilby in 1958. Implementations Bipolar implementation This schematic drawing shows the oscillator using a common-emitter connected bipolar transistor as an amplifier. The two resistors R and three capacitors C form the RC phase-shift network which provides feedback from collector to base of the transistor. Resistor Rb provides base bias current. Resistor Rc is the collector load resistor for the collector current. Resistor Rs isolates the circuit from the external load. FET implementation This circuit implements the oscillator with a FET. R1, R2, Rs, and Cs provide bias for the transistor. Note that the topology used for positive feedback is voltage series feedback. Op-amp implementation The implementation of the phase-shift oscillator shown in the diagram uses an operational amplifier (op-amp), three capacitors and four resistors. The circuit's modeling equations for the oscillation frequency and oscillation criterion are complicated because each RC stage loads the preceding ones. Assum
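As one fully specified case (an editor's sketch, not the article's circuits): take three identical series-R / shunt-C sections driven by an ideal source and left unloaded at the output. The exact ladder transfer function is 1/(1 + 6x + 5x² + x³) with x = jωRC, which crosses 180° at f = √6/(2πRC) with attenuation 1/29, so the inverting amplifier needs a gain of at least 29. The component values below are illustrative; the bipolar and op-amp circuits described above load the network differently and shift these numbers.

```python
import math

def rc_ladder_gain(f: float, R: float, C: float) -> complex:
    """Voltage transfer of three identical series-R / shunt-C sections,
    driven by an ideal source and unloaded at the output (loading between
    the RC sections is included; amplifier loading is not)."""
    x = 2j * math.pi * f * R * C
    return 1 / (1 + 6 * x + 5 * x ** 2 + x ** 3)

# Illustrative values (editor's choice): R = 10 kΩ, C = 10 nF
R, C = 10e3, 10e-9
f_osc = math.sqrt(6) / (2 * math.pi * R * C)      # frequency of 180° phase shift
beta = rc_ladder_gain(f_osc, R, C)

print(f"f_osc ≈ {f_osc:.1f} Hz")                                  # ≈ 3898 Hz here
print(f"feedback fraction ≈ {beta.real:+.4f} (imag ≈ {abs(beta.imag):.1e})")  # ≈ -1/29
# The amplifier must supply |1/beta| ≈ 29 of inverting gain for oscillation.
```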
https://en.wikipedia.org/wiki/Minimal%20instruction%20set%20computer
Minimal instruction set computer (MISC) is a central processing unit (CPU) architecture, usually in the form of a microprocessor, with a very small number of basic operations and corresponding opcodes, together forming an instruction set. Such sets are commonly stack-based rather than register-based to reduce the size of operand specifiers. Such a stack machine architecture is inherently simpler since all instructions operate on the top-most stack entries. One result of the stack architecture is an overall smaller instruction set, allowing a smaller and faster instruction decode unit with overall faster operation of individual instructions. Characteristics and design philosophy Separate from the stack definition of a MISC architecture, is the MISC architecture being defined by the number of instructions supported. Typically a minimal instruction set computer is viewed as having 32 or fewer instructions, where NOP, RESET, and CPUID type instructions are usually not counted by consensus due to their fundamental nature. 32 instructions is viewed as the highest allowable number of instructions for a MISC, though 16 or 8 instructions are closer to what is meant by "Minimal Instructions". A MISC CPU cannot have zero instructions as that is a zero instruction set computer. A MISC CPU cannot have one instruction as that is a one instruction set computer. The implemented CPU instructions should by default not support a wide set of inputs, so this typically means an 8-bit or 16-bit CPU. If a CPU has an NX bit, it is more likely to be viewed as being a complex instruction set computer (CISC) or reduced instruction set computer (RISC). MISC chips typically lack hardware memory protection of any kind, unless there is an application specific reason to have the feature. If a CPU has a microcode subsystem, that excludes it from being a MISC. The only addressing mode considered acceptable for a MISC CPU to have is load/store, the same as for reduced instruction s
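To make the stack-machine point concrete, here is a toy interpreter for a made-up eight-instruction, zero-operand stack ISA. Every opcode and name here is the editor's invention, purely to illustrate why so few instructions suffice when everything operates on the top of the stack; it is not modelled on any real MISC chip.

```python
def run(program, input_stack=()):
    """Interpret a toy 8-instruction stack machine.

    Only PUSH and JZ carry an operand; every other instruction works on
    the top one or two stack entries, so no register fields are needed.
    """
    stack = list(input_stack)
    pc = 0
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if op == "PUSH":   stack.append(arg[0])
        elif op == "DROP": stack.pop()
        elif op == "DUP":  stack.append(stack[-1])
        elif op == "SWAP": stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "ADD":  stack.append(stack.pop() + stack.pop())
        elif op == "NAND": b, a = stack.pop(), stack.pop(); stack.append(~(a & b))
        elif op == "LOAD": stack.append(memory[stack.pop()])
        elif op == "JZ":   pc = arg[0] if stack.pop() == 0 else pc
        else: raise ValueError(f"unknown opcode {op}")
    return stack

memory = [7, 11, 13]                        # toy data memory for LOAD
# Compute 3 + 4, then load memory[1]: leaves [7, 11] on the stack.
print(run([("PUSH", 3), ("PUSH", 4), ("ADD",), ("PUSH", 1), ("LOAD",)]))
```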
https://en.wikipedia.org/wiki/Compaq%20Deskpro
The Compaq Deskpro is a line of business-oriented desktop computers manufactured by Compaq, then discontinued after the merger with Hewlett-Packard. Models were produced containing microprocessors from the 8086 up to the x86-based Intel Pentium 4. History Deskpro (8086) and Deskpro 286 The original Compaq Deskpro (released in 1984), available in several disk configurations, is an XT-class PC equipped with an 8 MHz 8086 CPU and Compaq's unique display hardware that combined Color Graphics Adapter graphics with high resolution Monochrome Display Adapter text. As a result, it was considerably faster than the original IBM PC, the XT and the AT, and had a much better quality text display compared to IBM PCs which were equipped with either the IBM Monochrome Display Adapter or Color Graphics Adapter cards. Its hardware and BIOS were claimed to be 100% compatible with the IBM PC, like the earlier Compaq Portable. This compatibility had given Compaq the lead over companies like Columbia Data Products, Dynalogic, Eagle Computer and Corona Data Systems. The latter two companies were threatened by IBM for BIOS copyright infringement, and settled out of court, agreeing to re-implement their BIOS. Compaq used a clean room design reverse-engineered BIOS, avoiding legal jeopardy. In 1985, Compaq released the Deskpro 286, which looks quite similar to the IBM PC/AT. Deskpro 386 In September 1986, the Deskpro 386 was announced after Intel released its 80386 microprocessor, beating IBM by seven months on their comparable 386 computer, thus making a name for themselves. The IBM-made 386DX machine, the IBM PS/2 Model 80, reached the market almost a year later, PC Tech Journal honored the Deskpro 386 with its 1986 Product of the Year award. The Deskpro 386/25 was released August, 1988 and cost $10,299. Other The form factor for the Compaq Deskpro is mostly the desktop model which lies upon a desk, with a monitor placed on top of it. Compaq has produced many tower upright models t
https://en.wikipedia.org/wiki/Neon%20Museum
The Neon Museum in Las Vegas, Nevada, United States, features signs from old casinos and other businesses displayed outdoors on . The museum features a restored lobby shell from the defunct La Concha Motel as its visitors' center, which officially opened on October 27, 2012. For many years, the Young Electric Sign Company (YESCO) stored many of these old signs in their "boneyard." The signs were slowly being destroyed by exposure to the elements. The signs are considered by Las Vegas locals, business owners and government organizations to be not only artistically, but also historically, significant to the culture of the city. Each of the restored signs in the collection holds a story about who created it and why it is important. History The Neon Museum was founded in 1996 as a partnership between the Allied Arts Council of Southern Nevada and the City of Las Vegas. Today, it is an independent 501(c)3 non-profit. Located on Las Vegas Boulevard North, the Neon Museum includes the Neon Boneyard and the North Gallery. The impetus behind the collecting of signs was the loss of the iconic sign from The Sands; after it was replaced with a new sign in the 1980s. There was no place to store the massive sign, and it was scrapped. After nearly 10 years of collecting signs, the Allied Arts Council of Southern Nevada and the city of Las Vegas worked together to create an institution to house and care for the saved signs. To mark its official opening in November 1996, the Neon Museum restored and installed the Hacienda Horse & Rider sign at the intersection of Las Vegas Boulevard and Fremont Street. However, access to the collection was provided by appointment only. Annual attendance was approximately 12–20,000 during this time. In 2005, the historic La Concha lobby was donated to the museum by owners of the La Concha Motel, the Doumani family. Although it cost nearly $3 million to move and restore the La Concha, the plans to open a museum became concrete after the donation
https://en.wikipedia.org/wiki/Altera%20Hardware%20Description%20Language
Altera Hardware Description Language (AHDL) is a proprietary hardware description language (HDL) developed by Altera Corporation. AHDL is used for digital logic design entry for Altera's complex programmable logic devices (CPLDs) and field-programmable gate arrays (FPGAs). It is supported by Altera's MAX-PLUS and Quartus series of design software. AHDL has an Ada-like syntax and its feature set is comparable to the synthesizable portions of the Verilog and VHDL hardware description languages. In contrast to HDLs such as Verilog and VHDL, AHDL is a design-entry language only; all of its language constructs are synthesizable. By default, Altera software expects AHDL source files to have a .tdf extension (Text Design Files). Example % a simple AHDL up counter, released to public domain 13 November 2006 % % [block quotations achieved with percent sign] % % like c, ahdl functions must be prototyped % % PROTOTYPE: FUNCTION COUNTER (CLK) RETURNS (CNTOUT[7..0]); % % function declaration, where inputs, outputs, and bidirectional pins are declared % % also like c, square brackets indicate an array % SUBDESIGN COUNTER ( CLK :INPUT; CNTOUT[7..0] :OUTPUT; ) % variables can be anything from flip-flops (as in this case), tri-state buffers, state machines, to user defined functions % VARIABLE TIMER[7..0]: DFF; % as with all hardware description languages, think of this less as an algorithm and more as wiring nodes together % BEGIN DEFAULTS TIMER[].prn = VCC; % this takes care of d-ff resets % TIMER[].clrn = VCC; END DEFAULTS; TIMER[].d = TIMER[].q + H"1"; END; References Scarpino, Frank A., VHDL and AHDL Digital System Implementation. Prentice Hall PTR, 1998. Hardware description languages
https://en.wikipedia.org/wiki/UNIVAC%201050
The UNIVAC 1050 was a variable word-length (one to 16 characters) decimal and binary computer. It was initially announced in May 1962 as an off-line input-output processor for larger UNIVAC systems. Instructions were fixed length (30 bits – five characters), consisting of a five-bit "op code", a three-bit index register specifier, one reserved bit, a 15-bit address, and a six-bit "detail field" whose function varies with each instruction. The memory was up to 32K of six-bit characters. Like the IBM 1401, the 1050 was commonly used as an off-line peripheral controller in many installations of both large "scientific computers and large "business computers". In these installations the big computer (e.g., a UNIVAC III) did all of its input-output on magnetic tapes and the 1050 was used to format input data from other peripherals (e.g., punched card readers) on the tapes and transfer output data from the tapes to other peripherals (e.g., punched card punches or the line printer). A version used by the U.S. Air Force, the U1050-II real-time system, had some extra peripherals. The most significant of these was the FASTRAND 1 Drum Storage Unit. This physically large device had two contra-rotating drums mounted horizontally, one above the other in a pressurized cabinet. Read-write heads were mounted on a horizontally moving beam between the drums, driven by a voice coil servo external to the pressurized cabinet. This high-speed access subsystem allowed the real-time operation. Another feature was the communications subsystem with modem links to remote sites. A Uniservo VI-C tape drive provided an audit trail for the transactions. Other peripherals were the card reader and punch, and printer. The operator's console had the 'stop and go' buttons and a Teletype Model 33 teleprinter for communication and control. The initial Air Force order in November 1963 was for 152 systems. Subsequently, UNIVAC released the 1050 Model III (1050-III) and 1050 Model IV (1050-IV) f
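A small sketch of unpacking the 30-bit instruction word described above (5-bit op code, 3-bit index register, 1 reserved bit, 15-bit address, 6-bit detail field). The packing order used here is simply the order the article lists the fields in, assumed by the editor for illustration; the real machine's bit numbering may differ.

```python
from typing import NamedTuple

class Univac1050Instruction(NamedTuple):
    opcode: int    # 5 bits
    index: int     # 3 bits
    reserved: int  # 1 bit
    address: int   # 15 bits
    detail: int    # 6 bits

def decode(word: int) -> Univac1050Instruction:
    """Split a 30-bit word into five fields, most significant field first
    (an assumed layout matching the order given in the text)."""
    assert 0 <= word < 1 << 30
    return Univac1050Instruction(
        opcode=(word >> 25) & 0x1F,
        index=(word >> 22) & 0x7,
        reserved=(word >> 21) & 0x1,
        address=(word >> 6) & 0x7FFF,
        detail=word & 0x3F,
    )

# Field widths check: 5 + 3 + 1 + 15 + 6 = 30 bits
print(decode((0b10101 << 25) | (0b011 << 22) | (0x1234 << 6) | 0b000111))
```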
https://en.wikipedia.org/wiki/Actuarial%20present%20value
The actuarial present value (APV) is the expected value of the present value of a contingent cash flow stream (i.e. a series of payments which may or may not be made). Actuarial present values are typically calculated for the benefit-payment or series of payments associated with life insurance and life annuities. The probability of a future payment is based on assumptions about the person's future mortality which is typically estimated using a life table. Life insurance Whole life insurance pays a pre-determined benefit either at or soon after the insured's death. The symbol (x) is used to denote "a life aged x" where x is a non-random parameter that is assumed to be greater than zero. The actuarial present value of one unit of whole life insurance issued to (x) is denoted by the symbol or in actuarial notation. Let G>0 (the "age at death") be the random variable that models the age at which an individual, such as (x), will die. And let T (the future lifetime random variable) be the time elapsed between age-x and whatever age (x) is at the time the benefit is paid (even though (x) is most likely dead at that time). Since T is a function of G and x we will write T=T(G,x). Finally, let Z be the present value random variable of a whole life insurance benefit of 1 payable at time T. Then: where i is the effective annual interest rate and δ is the equivalent force of interest. To determine the actuarial present value of the benefit we need to calculate the expected value of this random variable Z. Suppose the death benefit is payable at the end of year of death. Then T(G, x) := ceiling(G - x) is the number of "whole years" (rounded upwards) lived by (x) beyond age x, so that the actuarial present value of one unit of insurance is given by: where is the probability that (x) survives to age x+t, and is the probability that (x+t) dies within one year. If the benefit is payable at the moment of death, then T(G,x): = G - x and the actuarial present value of one un
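Editor's note: the two formulas referenced above (benefit paid at the end of the year of death, and at the moment of death) lost their symbols in extraction. In standard actuarial notation they read:

```latex
% Discrete case: benefit of 1 paid at the end of the year of death
A_x \;=\; \sum_{k=0}^{\infty} v^{\,k+1}\; {}_{k}p_x \; q_{x+k},
\qquad v = \frac{1}{1+i},
% where {}_k p_x is the probability that (x) survives k years and
% q_{x+k} is the probability that (x+k) dies within the following year.

% Continuous case: benefit of 1 paid at the moment of death, T = G - x
\bar{A}_x \;=\; \operatorname{E}\!\left[v^{T}\right]
        \;=\; \int_{0}^{\infty} e^{-\delta t}\; {}_{t}p_x\; \mu_{x+t}\, dt .
```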
https://en.wikipedia.org/wiki/Two-dimensional%20electron%20gas
A two-dimensional electron gas (2DEG) is a scientific model in solid-state physics. It is an electron gas that is free to move in two dimensions, but tightly confined in the third. This tight confinement leads to quantized energy levels for motion in the third direction, which can then be ignored for most problems. Thus the electrons appear to be a 2D sheet embedded in a 3D world. The analogous construct of holes is called a two-dimensional hole gas (2DHG), and such systems have many useful and interesting properties. Realizations Most 2DEGs are found in transistor-like structures made from semiconductors. The most commonly encountered 2DEG is the layer of electrons found in MOSFETs (metal–oxide–semiconductor field-effect transistors). When the transistor is in inversion mode, the electrons underneath the gate oxide are confined to the semiconductor-oxide interface, and thus occupy well defined energy levels. For thin-enough potential wells and temperatures not too high, only the lowest level is occupied (see the figure caption), and so the motion of the electrons perpendicular to the interface can be ignored. However, the electron is free to move parallel to the interface, and so is quasi-two-dimensional. Other methods for engineering 2DEGs are high-electron-mobility-transistors (HEMTs) and rectangular quantum wells. HEMTs are field-effect transistors that utilize the heterojunction between two semiconducting materials to confine electrons to a triangular quantum well. Electrons confined to the heterojunction of HEMTs exhibit higher mobilities than those in MOSFETs, since the former device utilizes an intentionally undoped channel thereby mitigating the deleterious effect of ionized impurity scattering. Two closely spaced heterojunction interfaces may be used to confine electrons to a rectangular quantum well. Careful choice of the materials and alloy compositions allow control of the carrier densities within the 2DEG. Electrons may also be confined to the surf
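For orientation, the "quantized in one direction, free in two" picture described above corresponds to single-particle energies of the form below (standard effective-mass expression, added by the editor):

```latex
E_{n}(k_x, k_y) \;=\; E_n \;+\; \frac{\hbar^{2}\left(k_x^{2}+k_y^{2}\right)}{2m^{*}},
\qquad n = 1, 2, \dots
% E_n are the discrete subband energies from confinement along z; for thin
% enough wells and low temperatures only the n = 1 subband is occupied,
% so the motion is effectively two-dimensional.
```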
https://en.wikipedia.org/wiki/Exception%20handling%20syntax
Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it. Most commonly, error handling uses a try...[catch...][finally...] block, and errors are created via a throw statement, but there is significant variation in naming and syntax. Catalogue of exception handling syntaxes Ada Exception declarations Some_Error : exception; Raising exceptions raise Some_Error; raise Some_Error with "Out of memory"; -- specific diagnostic message Exception handling and propagation with Ada.Exceptions, Ada.Text_IO; procedure Foo is Some_Error : exception; begin Do_Something_Interesting; exception -- Start of exception handlers when Constraint_Error => ... -- Handle constraint error when Storage_Error => -- Propagate Storage_Error as a different exception with a useful message raise Some_Error with "Out of memory"; when Error : others => -- Handle all others Ada.Text_IO.Put("Exception: "); Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Name(Error)); Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Message(Error)); end Foo; Assembly language Most assembly languages will have a macro instruction or an interrupt address available for the particular system to intercept events such as illegal op codes, program check, data errors, overflow, divide by zero, and other such. IBM and Univac mainframes had the STXIT macro. Digital Equipment Corporation RT11 systems had trap vectors for program errors, i/o interrupts, and such. DOS h
https://en.wikipedia.org/wiki/Micro%20ISV
A micro ISV (abbr. mISV or μISV), a term coined by Eric Sink, is an independent software vendor with fewer than 10 or even just one software developer. In such an environment the company owner develops software, manages sales and does public relations. The term has come to include more than just a "one-man shop," but any ISV with more than 10 employees is generally not considered a micro ISV. Small venture capital-funded software shops are also generally not considered micro ISVs. Micro ISVs sell their software through a number of marketing models. The shareware marketing model (where potential customers can try the software before they buy it), along with the freeware marketing model, have become the dominant methods of marketing packaged software with even the largest ISVs offering their enterprise solutions as trials via free download, e.g. Oracle's Oracle database. Microsoft and other micro ISV outreach efforts Microsoft has a dedicated MicroISV/Shareware Evangelist, Michael Lehman. Part of the Microsoft micro ISV technical evangelism program includes Project Glidepath, which is a kind of framework to assist micro ISVs in bringing a product from concept through development and on to market. Although not specifically targeted at micro ISVs, the Microsoft Empower Program for ISVs is used by many micro ISVs. Microsoft Empower Program members are required to release at least one software title for the Windows family of operating systems within 18 months of joining the program. The Microsoft Action Pack Subscription is similar to the Empower Program in some ways. Alternatively, rapid application development PaaS platforms like Wolf Frameworks have partner programs specifically targeted towards Micro ISVs enabling them with software, hardware and even free development services. Industry shows The Software Industry Conference is an annual event in the United States attended by many micro ISVs. The European Software Conference is also attended by many micro ISV
https://en.wikipedia.org/wiki/Jack%20Edmonds
Jack R. Edmonds (born April 5, 1934) is an American-born and educated computer scientist and mathematician who lived and worked in Canada for much of his life. He has made fundamental contributions to the fields of combinatorial optimization, polyhedral combinatorics, discrete mathematics and the theory of computing. He was the recipient of the 1985 John von Neumann Theory Prize. Early career Edmonds attended Duke University before completing his undergraduate degree at George Washington University in 1957. He thereafter received a master's degree in 1960 at the University of Maryland under Bruce L. Reinhart with a thesis on the problem of embedding graphs into surfaces. From 1959 to 1969 he worked at the National Institute of Standards and Technology (then the National Bureau of Standards), and was a founding member of Alan Goldman’s newly created Operations Research Section in 1961. Goldman proved to be a crucial influence by enabling Edmonds to work in a RAND Corporation-sponsored workshop in Santa Monica, California. It is here that Edmonds first presented his findings on defining a class of algorithms that could run more efficiently. Most combinatorics scholars, during this time, were not focused on algorithms. However Edmonds was drawn to them and these initial investigations were key developments for his later work between matroids and optimization. He spent the years from 1961 to 1965 on the subject of NP versus P and in 1966 originated the conjectures NP ≠ P and NP ∩ coNP = P. Research Edmonds's 1965 paper “Paths, Trees and Flowers” was a preeminent paper in initially suggesting the possibility of establishing a mathematical theory of efficient combinatorial algorithms. One of his earliest and notable contributions is the blossom algorithm for constructing maximum matchings on graphs, discovered in 1961 and published in 1965. This was the first polynomial-time algorithm for maximum matching in graphs. Its generalization to weighted graphs was a conceptua
https://en.wikipedia.org/wiki/Electrical%20resistivity%20tomography
Electrical resistivity tomography (ERT) or electrical resistivity imaging (ERI) is a geophysical technique for imaging sub-surface structures from electrical resistivity measurements made at the surface, or by electrodes in one or more boreholes. If the electrodes are suspended in the boreholes, deeper sections can be investigated. It is closely related to the medical imaging technique electrical impedance tomography (EIT), and mathematically is the same inverse problem. In contrast to medical EIT, however, ERT is essentially a direct current method. A related geophysical method, induced polarization (or spectral induced polarization), measures the transient response and aims to determine the subsurface chargeability properties. Electrical resistivity measurements can be used for identification and quantification of depth of groundwater, detection of clays, and measurement of groundwater conductivity. History The technique evolved from techniques of electrical prospecting that predate digital computers, where layers or anomalies were sought rather than images. Early work on the mathematical problem in the 1930s assumed a layered medium (see for example Langer, Slichter). Andrey Nikolayevich Tikhonov, who is best known for his work on regularization of inverse problems, also worked on this problem. He explained in detail how to solve the ERT problem in the simple case of a 2-layered medium. During the 1940s, he collaborated with geophysicists and, without the aid of computers, they discovered large deposits of copper. As a result, they were awarded a State Prize of the Soviet Union. When adequate computers became widely available, the inverse problem of ERT could be solved numerically. The work of Loke and Barker at Birmingham University was among the first such solutions, and their approach is still widely used. As ERT has advanced from 1D to 2D and, more recently, 3D surveys, its range of applications has grown. The applications of ERT incl
https://en.wikipedia.org/wiki/Claw-free%20permutation
In the mathematical and computer science field of cryptography, a group of three numbers (x,y,z) is said to be a claw of two permutations f0 and f1 if f0(x) = f1(y) = z. A pair of permutations f0 and f1 are said to be claw-free if there is no efficient algorithm for computing a claw. The terminology claw free was introduced by Goldwasser, Micali, and Rivest in their 1984 paper, "A Paradoxical Solution to the Signature Problem" (and later in a more complete journal paper), where they showed that the existence of claw-free pairs of trapdoor permutations implies the existence of digital signature schemes secure against adaptive chosen-message attack. This construction was later superseded by the construction of digital signatures from any one-way trapdoor permutation. The existence of trapdoor permutations does not by itself imply claw-free permutations exist; however, it has been shown that claw-free permutations do exist if factoring is hard. The general notion of claw-free permutation (not necessarily trapdoor) was further studied by Ivan Damgård in his PhD thesis The Application of Claw Free Functions in Cryptography (Aarhus University, 1988), where he showed how to construct Collision Resistant Hash Functions from claw-free permutations. The notion of claw-freeness is closely related to that of collision resistance in hash functions. The distinction is that claw-free permutations are pairs of functions in which it is hard to create a collision between them, while a collision-resistant hash function is a single function in which it's hard to find a collision, i.e. a function H is collision resistant if it's hard to find a pair of distinct values x,y such that H(x) = H(y). In the hash function literature, this is commonly termed a hash collision. A hash function where collisions are difficult to find is said to have collision resistance. Bit commitment Given a pair of claw-free permutations f0 and f1 it is straightforward to create a commitment scheme.
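A toy sketch of the bit-commitment idea above: to commit to a bit b, pick a random x and publish c = f_b(x); to open, reveal (b, x). Binding rests on claw-freeness (opening the same c both ways would exhibit a claw), while hiding follows if both permutations act on the same domain. The two permutations below are illustrative placeholders only, not cryptographically secure claw-free permutations.

# Toy commitment scheme from a pair of (stand-in) permutations of Z_N.
import secrets

N = 2**61 - 1  # prime modulus, so the affine maps below are permutations of Z_N

def f0(x):
    return (3 * x + 1) % N   # placeholder for the permutation f0

def f1(x):
    return (5 * x + 7) % N   # placeholder for the permutation f1

def commit(bit):
    x = secrets.randbelow(N)
    c = f0(x) if bit == 0 else f1(x)
    return c, (bit, x)       # publish c, keep (bit, x) as the opening

def verify(c, bit, x):
    return c == (f0(x) if bit == 0 else f1(x))

c, opening = commit(1)
print(verify(c, *opening))   # True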
https://en.wikipedia.org/wiki/Tondo%20%28art%29
A tondo (plural "tondi" or "tondos") is a Renaissance term for a circular work of art, either a painting or a sculpture. The word derives from the Italian rotondo, "round." The term is usually not used in English for small round paintings, but only those over about 60 cm (two feet) in diameter, thus excluding many round portrait miniatures – for sculpture the threshold is rather lower. A circular or oval relief sculpture is also called a roundel. The infrequently-encountered synonym rondo usually refers to the musical form. History Artists have created tondi since Greek antiquity. The circular paintings in the centre of painted vases of that period are known as tondi, and the inside of the broad low winecup called a kylix also lent itself to circular enframed compositions. Although the earliest true Renaissance, or late Gothic, painted tondo is Burgundian, from Champmol (a Pietà by Jean Malouel of 1400–1415, now in the Louvre), the tondo became fashionable in 15th-century Florence, revived as a classical form especially in architecture. It may also have developed from the smaller desco da parto or birthing tray. The Desco da parto by Masaccio from around 1423 may be one of the first to use linear perspective, another feature of the Renaissance. Also using linear perspective was Donatello for the stucco tondi created around 1435–1440 for the Sagrestia Vecchia at the Basilica of San Lorenzo designed by Brunelleschi, one of the most prominent buildings of the Early Renaissance. For Brunelleschi's Hospital of the Innocents (1421–24), Andrea della Robbia provided glazed terracotta babes in swaddling clothes in tondos with plain blue backgrounds to be set in the spandrels of the arches. Andrea and Luca della Robbia created glazed terracotta tondi that were often framed in a wreath of fruit and leaves, which were intended for immuring in a stuccoed wall. Filippo Lippi's Bartolini Tondo (1452–1453) was one of the earliest examples of such paintings. In paint
https://en.wikipedia.org/wiki/List%20of%20heaviest%20people
This is a list of the heaviest people who have been weighed and verified, living and dead. The list is organised by the peak weight reached by an individual and is limited to those who are over . Heaviest people ever recorded See also Big Pun (1971–2000), American rapper whose weight at death was . Edward Bright (1721–1750) and Daniel Lambert (1770–1809), men from England who were famous in their time for their obesity. Happy Humphrey, the heaviest professional wrestler, weighing in at at his peak. Israel Kamakawiwoʻole (1959–1997), Hawaiian singer whose weight peaked at . Paul Kimelman (born 1947), holder of Guinness World Record for the greatest weight-loss in the shortest amount of time, 1982 Billy and Benny McCrary, holders of Guinness World Records's World's Heaviest Twins. Alayna Morgan (1948–2009), heavy woman from Santa Rosa, California. Ricky Naputi (1973–2012), heaviest man from Guam. Carl Thompson (1982–2015), heaviest man in the United Kingdom whose weight at death was . Renee Williams (1977–2007), woman from Austin, Texas. Yokozuna, the heaviest WWE wrestler, weighing between and at his peak. Barry Austin and Jack Taylor, two obese British men documented in the comedy-drama The Fattest Man in Britain. Yamamotoyama Ryūta, heaviest Japanese-born sumo wrestler; is also thought to be the heaviest Japanese person ever at . References Heaviest people Biological records Lists of people-related superlatives Obesity People
https://en.wikipedia.org/wiki/Hopf%20bifurcation
In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where, as a parameter changes, a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability, as a pair of complex conjugate eigenvalues—of the linearization around the fixed point—crosses the imaginary axis of the complex plane as a parameter crosses a threshold value. Under reasonably generic assumptions about the dynamical system, the fixed point becomes a small-amplitude limit cycle as the parameter changes. A Hopf bifurcation is also known as a Poincaré–Andronov–Hopf bifurcation, named after Henri Poincaré, Aleksandr Andronov and Eberhard Hopf. Overview Supercritical and subcritical Hopf bifurcations The limit cycle is orbitally stable if a specific quantity called the first Lyapunov coefficient is negative, and the bifurcation is supercritical. Otherwise it is unstable and the bifurcation is subcritical. The normal form of a Hopf bifurcation is the following time-dependent differential equation: dz/dt = z((λ + i) + b|z|²), where z and b are both complex and λ is a real parameter. Write b = α + iβ. The number α is called the first Lyapunov coefficient. If α is negative then there is a stable limit cycle for λ > 0: z(t) = r e^(iωt), where r = √(−λ/α) and ω = 1 + βr². The bifurcation is then called supercritical. If α is positive then there is an unstable limit cycle for λ < 0. The bifurcation is called subcritical. Intuition The normal form of the supercritical Hopf bifurcation can be expressed intuitively in polar coordinates as dr/dt = (λ − r²)r and dθ/dt = ω, where r is the instantaneous amplitude of the oscillation and θ is its instantaneous angular position. The angular velocity ω is fixed. When λ > 0, the differential equation for r has an unstable fixed point at r = 0 and a stable fixed point at r = √λ. The system thus describes a stable circular limit cycle with radius √λ and angular velocity ω. When λ < 0 then r = 0 is the only fixed point and it is stable. In that case, the system describes a spiral that con
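A brief numerical check of the supercritical case in the polar form above, dr/dt = (λ − r²)r: starting from a small amplitude, an Euler integration should settle near the limit-cycle radius √λ. The step size, duration and value of λ are arbitrary choices for illustration.

# Euler integration of dr/dt = (lam - r^2) * r; amplitude should approach sqrt(lam).
import math

lam = 0.25   # bifurcation parameter, positive so a stable limit cycle exists
r = 1e-3     # small initial amplitude near the unstable fixed point r = 0
dt = 0.01
for _ in range(200_000):
    r += dt * (lam - r * r) * r

print(r, math.sqrt(lam))   # both approximately 0.5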
https://en.wikipedia.org/wiki/Salinometer
A salinometer is a device designed to measure the salinity, or dissolved salt content, of a solution. Since the salinity affects both the electrical conductivity and the specific gravity of a solution, a salinometer often consists of an EC (electrical conductivity) meter or a hydrometer and some means of converting those readings to a salinity reading. A salinometer may be calibrated either in micromhos, a unit of electrical conductivity (usually 0–22), or directly in grains of salt per gallon (0–0.5). A typical reading on-board ship would be 2 micromhos or 0.05 grains per gallon. A reading of twice this may trigger a warning light or alarm. Applications Fresh water generators (evaporators) use salinometers on the distillate discharge in order to gauge the quality of the water. Water from the evaporator can be destined for potable water supplies, so salty water is not desirable for human consumption. In some ships, extremely high quality distillate is required for use in water-tube boilers, where salt water would be disastrous. In these ships, a salinometer is also installed on the feed system, where it would alert the engineer to any salt contamination. The salinometer may switch the evaporator's output from fresh-water to feed-water tanks automatically, depending on the water quality. The higher quality (lower salinity) is required for the boiler feedwater, not for drinking. See also TDS meter – used for checking the total dissolved solids of a liquid. Saline (medicine) – A saline solution isotonic with human blood is 0.9% w/v, c. 300 mOsm/L (the notion that this is the same as the salinity of the ocean is a common myth; ocean water on average has a much higher concentration of sodium chloride as well as many other salts, which is why it is not suitable for drinking). Saline refractometer References External links History of the development of salinometers Modern oceanographic salinometers Measuring instruments
https://en.wikipedia.org/wiki/Scalar%E2%80%93tensor%20theory
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction. Tensor fields and field theory Modern physics tries to derive all physical theories from as few principles as possible. In this way, Newtonian mechanics as well as quantum mechanics are derived from Hamilton's principle of least action. In this approach, the behavior of a system is not described via forces, but by functions which describe the energy of the system. Most important are the energetic quantities known as the Hamiltonian function and the Lagrangian function. Their counterparts per unit volume are known as the Hamiltonian density and the Lagrangian density. Formulating a theory in terms of these quantities leads to field theories. Modern physics uses field theories to explain reality. These fields can be scalar, vectorial or tensorial. An example of a scalar field is the temperature field. An example of a vector field is the wind velocity field. An example of a tensor field is the stress tensor field in a stressed body, used in continuum mechanics. Gravity as field theory In physics, forces (as vectorial quantities) are given as the derivative (gradient) of scalar quantities named potentials. In classical physics before Einstein, gravitation was treated in the same way, as the consequence of a gravitational force (vectorial), derived from a scalar potential field that depends on the mass of the particles. Thus, Newtonian gravity is called a scalar theory. The gravitational force depends on the distance r between the massive objects (more exactly, between their centres of mass). Mass is a parameter, and space and time are unchangeable. Einstein's theory of gravity, general relativity (GR), is of a different nature. It unifies space and time in a 4-dimensional manifold called space-time. In GR there i
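For concreteness, the Brans–Dicke theory mentioned at the start is commonly written (in one standard convention; the action itself is not quoted in the text above) with the action

S = \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\left(\phi R - \frac{\omega}{\phi}\, g^{ab}\,\partial_a\phi\,\partial_b\phi\right) + S_m ,

where the scalar field \phi plays the role of an inverse gravitational constant, R is the Ricci scalar built from the tensor (metric) field g_{ab}, \omega is the dimensionless Brans–Dicke coupling, and S_m is the action of the matter fields.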
https://en.wikipedia.org/wiki/Wet-folding
Wet-folding is an origami technique developed by Akira Yoshizawa that employs water to dampen the paper so that it can be manipulated more easily. This process adds an element of sculpture to origami, which is otherwise purely geometric. Wet-folding is used very often by professional folders for non-geometric origami, such as animals. Wet-folders usually employ thicker paper than what would usually be used for normal origami, to ensure that the paper does not tear. One of the most prominent users of the wet-folding technique is Éric Joisel, who specialized in origami animals, humans, and legendary creatures. He also created origami masks. Other folders who practice this technique are Robert J. Lang and John Montroll. The process of wet-folding allows a folder to preserve a curved shape more easily. It also reduces the number of wrinkles substantially. Wet-folding allows for increased rigidity and structure due to a process called sizing. Sizing is a water-soluble adhesive, usually methylcellulose or methyl acetate, that may be added during the manufacture of the paper. As the paper dries, the chemical bonds of the fibers of the paper tighten together which results in a crisper and stronger sheet. In order to moisten the paper, an artist typically wipes the sheet with a dampened cloth. The amount of moisture added to the paper is crucial because too little will cause the paper to dry quickly and spring back into its original position before the folding is complete, while too much will either fray the edges of the paper or will cause the paper to split at high-stress points. Notes and references See also Papier-mâché External links Mini-documentary about Joisel at YouTube An illustrated introduction to wet-folding Origami Mathematics and art
https://en.wikipedia.org/wiki/Vortex%20lift
Vortex lift is that portion of lift due to the action of leading edge vortices. It is generated by wings with highly swept-back, sharp leading edges (beyond 50 degrees of sweep) or highly-swept wing-root extensions added to a wing of moderate sweep. It is sometimes known as non-linear lift, due to its rapid increase with angle of attack, and as controlled separation lift, to distinguish it from conventional lift, which occurs with attached flow. How it works Vortex lift works by capturing vortices generated from the sharply swept leading edge of the wing. The vortex, formed roughly parallel to the leading edge of the wing, is trapped by the airflow and remains fixed to the upper surface of the wing. As the air flows around the leading edge, it flows over the trapped vortex and is pulled in and down to generate the lift. A straight, or moderately swept, wing may experience, depending on its airfoil section, a leading-edge stall and loss of lift, as a result of flow separation at the leading edge and a non-lifting wake over the top of the wing. However, on a highly-swept wing leading-edge separation still occurs but instead creates a vortex sheet that rolls up above the wing producing spanwise flow beneath. Flow not entrained by the vortex passes over the top of the vortex and reattaches to the wing surface. The vortex generates a high negative pressure field on the top of the wing. Vortex lift increases with angle of attack (AOA) as seen on lift~AOA plots, which show the vortex, or unattached flow, adding to the normal attached lift as an extra non-linear component of the overall lift. Vortex lift has a limiting AOA at which the vortex bursts or breaks down. Applications Four basic configurations which have used vortex lift are, in chronological order, the 60-degree delta wing; the ogive delta wing with its sharply-swept leading edge at the root; the moderately-swept wing with a leading-edge extension, which is known as a hybrid wing; and the sharp-edge forebody, or vor
https://en.wikipedia.org/wiki/Stormbringer%20%28video%20game%29
Stormbringer is a computer game written by David Jones and released in 1987 by Mastertronic on the Mastertronic Added Dimension label. It was originally released on the ZX Spectrum, Commodore 64, Amstrad CPC and MSX. A version for the Atari ST was published in 1988. It is the fourth and final game in the Magic Knight series. The in-game music is by David Whittaker. Plot Magic Knight returns home, having obtained a second-hand time machine from the Tyme Guardians at the end of Knight Tyme. However, there has been an accident whilst travelling back and there are now two Magic Knights - the other being "Off-White Knight", the dreaded Stormbringer (so called because of his storm cloud which he plans to use to destroy Magic Knight). Magic Knight cannot kill Off-White Knight without destroying himself in the process. His only option is to find Off-White Knight and merge with him. Gameplay Gameplay takes the form of a graphic adventure, with commands being inputted via the "Windimation" menu-driven interface, in the style of the previous two games, Spellbound and Knight Tyme (1986). Magic Knight again has a limited amount of strength which is consumed by performing actions and moving from screen to screen as well as being sapped by various enemies such as the Stormbringer's storm cloud and spinning axes and balls that bounce around some rooms and should be avoided. The need for the player to monitor Magic Knight's strength and avoid enemies means that Stormbringers gameplay is closer to the arcade adventure feel of Spellbound rather than the much more pure graphic adventure feel of Knight Tyme''. As with the previous two Magic Knight games, there are characters with whom Magic Knight can interact and have help him. Magic Knight's spellcasting abilities are also important for solving the game's puzzles including the "merge" spell to be used when he finds Off-White Knight. External links Information about the Atari ST version 1987 video games Amstrad CPC games Atari Ja
https://en.wikipedia.org/wiki/Vacuum%20furnace
A vacuum furnace is a type of furnace in which the product in the furnace is surrounded by a vacuum during processing. The absence of air or other gases prevents oxidation and heat loss from the product through convection, and removes a source of contamination. This enables the furnace to heat materials (typically metals and ceramics) to temperatures as high as with select materials. Maximum furnace temperatures and vacuum levels depend on melting points and vapor pressures of heated materials. Vacuum furnaces are used to carry out processes such as annealing, brazing, sintering and heat treatment with high consistency and low contamination. Characteristics of a vacuum furnace are: Uniform temperatures in the range. Commercially available vacuum pumping systems can reach vacuum levels as low as Temperature can be controlled within a heated zone, typically surrounded by heat shielding or insulation. Low contamination of the product by carbon, oxygen and other gases. Vacuum pumping systems remove low temperature by-products from the process materials during heating, resulting in a higher purity end product. Quick cooling (quenching) of product can be used to shorten process cycle times. The process can be computer controlled to ensure repeatability. Heating metals to high temperatures while open to the atmosphere normally causes rapid oxidation, which is undesirable. A vacuum furnace removes the oxygen and prevents this from happening. An inert gas, such as argon, is often used to quickly cool the treated metals back to non-metallurgical levels (below ) after the desired process in the furnace. This inert gas can be pressurized to two atmospheres or more, then circulated through the hot zone area to pick up heat before passing through a heat exchanger to remove heat. This process continues until the desired temperature is reached. Common uses Vacuum furnaces are used in a wide range of applications in both production industries and research laboratories. F
https://en.wikipedia.org/wiki/Spectre%20GCR
The Spectre GCR is a hardware and software package for the Atari ST computers. The hardware consists of a cartridge that plugs into the Atari ST's cartridge port and a cable that connects between the cartridge and one of the floppy ports on the ST. Designed by David Small and sold through his company Gadgets by Small, it allows the Atari ST to run most Macintosh software. It is Small's third Macintosh emulator for the ST, replacing his previous Magic Sac and Spectre 128. The Spectre GCR requires the owner to provide official Apple Macintosh 128K ROMs and Macintosh Operating System 6.0.8 disks. This avoids any legal issues of copying Apple's software. The emulator runs best with a high-resolution monochrome monitor, such as Atari's own SM124, but will run on color displays by either displaying a user-selectable half of the Macintosh screen, or missing out alternate lines to fit the lower resolution color display. The Spectre GCR plugs into the cartridge slot and floppy port, and modifies the frequency of the data to/from the single-speed floppy drive of the Atari ST, allowing it to read Macintosh GCR format discs which require a multi-speed floppy drive. The manual claims the speed to be 20% faster than an actual Mac Plus with a 30% larger screen area and resolution. Although Spectre GCR runs in 1MB of memory, 2MB or more is recommended. References External links Official Spectre Webpage and On-line Resource Atari ST software Macintosh platform emulators
https://en.wikipedia.org/wiki/Natural%20competence
In microbiology, genetics, cell biology, and molecular biology, competence is the ability of a cell to alter its genetics by taking up extracellular ("naked") DNA from its environment in the process called transformation. Competence may be differentiated between natural competence, a genetically specified ability of bacteria which is thought to occur under natural conditions as well as in the laboratory, and induced or artificial competence, which arises when cells in laboratory cultures are treated to make them transiently permeable to DNA. Competence allows for rapid adaptation and DNA repair of the cell. This article primarily deals with natural competence in bacteria, although information about artificial competence is also provided. History Natural competence was discovered by Frederick Griffith in 1928, when he showed that a preparation of killed cells of a pathogenic bacterium contained something that could transform related non-pathogenic cells into the pathogenic type. In 1944 Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrated that this 'transforming factor' was pure DNA . This was the first compelling evidence that DNA carries the genetic information of the cell. Since then, natural competence has been studied in a number of different bacteria, particularly Bacillus subtilis, Streptococcus pneumoniae (Griffith's "pneumococcus"), Neisseria gonorrhoeae, Haemophilus influenzae and members of the Acinetobacter genus. Areas of active research include the mechanisms of DNA transport, the regulation of competence in different bacteria, and the evolutionary function of competence. Mechanisms of DNA uptake In the laboratory, DNA is provided by the researcher, often as a genetically engineered fragment or plasmid. During uptake, DNA is transported across the cell membrane(s), and the cell wall if one is present. Once the DNA is inside the cell it may be degraded to nucleotides, which are reused for DNA replication and other metabolic functions.
https://en.wikipedia.org/wiki/TigerSHARC
TigerSHARC refers to a family of microprocessors currently manufactured by Analog Devices Inc (ADI). See also SHARC Blackfin External links TigerSHARC processor website Digital signal processors VLIW microprocessors
https://en.wikipedia.org/wiki/Extract
An extract (essence) is a substance made by extracting a part of a raw material, often by using a solvent such as ethanol, oil or water. Extracts may be sold as tinctures, absolutes or in powder form. The aromatic principles of many spices, nuts, herbs, fruits, etc., and some flowers, are marketed as extracts, among the best known of true extracts being almond, cinnamon, cloves, ginger, lemon, nutmeg, orange, peppermint, pistachio, rose, spearmint, vanilla, violet, rum, and wintergreen. Extraction techniques Most natural essences are obtained by extracting the essential oil from the feedstock, such as blossoms, fruit, and roots, or from intact plants, through multiple techniques and methods: Expression (juicing, pressing), the physical extraction of material from the feedstock, used when the oil is plentiful and easily obtained, as from citrus peels, olives, and grapes. Absorption (steeping, decoction), in which extraction is done by soaking the material in a solvent, as used for vanilla beans or tea leaves. Maceration, used to soften and degrade material without heat, normally using oils, as for peppermint extract and winemaking. Distillation or separation process, creating a higher concentration of the extract by heating material to a specific boiling point, then collecting and condensing the vapour, leaving the unwanted material behind, as used for lavender extract. The distinctive flavors of nearly all fruits are desirable adjuncts to many food preparations, but only a few are practical sources of sufficiently concentrated flavor extract, such as from lemons, oranges, and vanilla beans. Artificial extracts The majority of concentrated fruit flavors, such as banana, cherry, peach, pineapple, raspberry, and strawberry, are produced by combining a variety of esters with special oils. Suitable coloring is generally obtained by the use of dyes. Among the esters most generally employed are ethyl acetate and ethyl butyrate. The chief factors
https://en.wikipedia.org/wiki/Source%20transformation
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively. Process Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit. Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources. Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source in parallel with an impedance , applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has value equal to the ideal current source's value times the impedance, according to Ohm's Law . In the same way, an ideal voltage source in series with an impedance can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has
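A small sketch of the two transformation rules just described, using complex impedances so the frequency-domain case is covered as well; the function names are invented for this example.

# Source transformation via Ohm's law (V = I * Z).
# Voltage source V in series with Z  <->  current source I = V / Z in parallel with Z.

def voltage_to_current_source(v, z):
    """Real voltage source (v in series with z) -> equivalent real current source."""
    return v / z, z

def current_to_voltage_source(i, z):
    """Real current source (i in parallel with z) -> equivalent real voltage source."""
    return i * z, z

# Example: 10 V in series with (3 + 4j) ohms becomes (1.2 - 1.6j) A in parallel with (3 + 4j) ohms.
i_eq, z_eq = voltage_to_current_source(10, 3 + 4j)
print(i_eq, z_eq)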
https://en.wikipedia.org/wiki/Meiotic%20drive
Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome will affect a manipulation of the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of its phenotypic expression. More simply, meiotic drive is when one copy of a gene is passed on to offspring more than the expected 50% of the time. According to Buckler et al., "Meiotic drive is the subversion of meiosis so that particular genes are preferentially transmitted to the progeny. Meiotic drive generally causes the preferential segregation of small regions of the genome". Meiotic drive in plants The first report of meiotic drive came from Marcus Rhoades who in 1942 observed a violation of Mendelian segregation ratios for the R locus - a gene controlling the production of the purple pigment anthocyanin in maize kernels - in a maize line carrying abnormal chromosome 10 (Ab10). Ab10 differs from the normal chromosome 10 by the presence of a 150-base pair heterochromatic region called 'knob', which functions as a centromere during division (hence called 'neocentromere') and moves to the spindle poles faster than the centromeres during meiosis I and II. The mechanism for this was later found to involve the activity of a kinesin-14 gene called Kinesin driver (Kindr). Kindr protein is a functional minus-end directed motor, displaying quicker minus-end directed motility than an endogenous kinesin-14, such as Kin11. As a result Kindr outperforms the endogenous kinesins, pulling the 150 bp knobs to the poles faster than the centromeres and causing Ab10 to be preferentially inherited during meiosis Meiotic drive in animals The unequal inheritance of gametes has been observed since the 1950s, in contrast to Gregor Mendel's First and Second Laws (the law of segregation and the law of independent assortment), which dictate that there is a random chance of each allele being passed on to offspring. Examples of selfish drive genes in ani
https://en.wikipedia.org/wiki/Structural%20load
A structural load or structural action is a force, deformation, or acceleration applied to structural elements. A load causes stress, deformation, and displacement in a structure. Structural analysis, a discipline in engineering, analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure, so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft, satellites, rockets, space stations, ships, and submarines—are subject to their own particular structural loads and actions. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications. Accepted technical standards are used for acceptance testing and inspection. Types Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression. The term can refer to a laboratory test method or to the normal usage of a material or structure. Live loads are usually variable or moving loads. These can have a significant dynamic element and may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, etc. An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material. Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration. Loads on architectural and civil engineering structures Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials. Structural loads are split into categories by
https://en.wikipedia.org/wiki/Curve%20orientation
In mathematics, an orientation of a curve is the choice of one of the two possible directions for travelling on the curve. For example, for Cartesian coordinates, the -axis is traditionally oriented toward the right, and the -axis is upward oriented. In the case of a planar simple closed curve (that is, a curve in the plane whose starting point is also the end point and which has no other self-intersections), the curve is said to be positively oriented or counterclockwise oriented, if one always has the curve interior to the left (and consequently, the curve exterior to the right), when traveling on it. Otherwise, that is if left and right are exchanged, the curve is negatively oriented or clockwise oriented. This definition relies on the fact that every simple closed curve admits a well-defined interior, which follows from the Jordan curve theorem. The inner loop of a beltway road in a country where people drive on the right side of the road is an example of a negatively oriented (clockwise) curve. In trigonometry, the unit circle is traditionally oriented counterclockwise. The concept of orientation of a curve is just a particular case of the notion of orientation of a manifold (that is, besides orientation of a curve one may also speak of orientation of a surface, hypersurface, etc.). Orientation of a curve is associated with parametrization of its points by a real variable. A curve may have equivalent parametrizations when there is a continuous increasing monotonic function relating the parameter of one curve to the parameter of the other. When there is a decreasing continuous function relating the parameters, then the parametric representations are opposite and the orientation of the curve is reversed. Orientation of a simple polygon In two dimensions, given an ordered set of three or more connected vertices (points) (such as in connect-the-dots) which forms a simple polygon, the orientation of the resulting polygon is directly related to the sign of
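A minimal sketch of the usual signed-area (shoelace) test for the orientation of a simple polygon given as an ordered vertex list: a positive signed area means counterclockwise (positive) orientation, a negative one clockwise. The helper name is made up for this example.

# Orientation of a simple polygon from the sign of its signed (shoelace) area.

def polygon_orientation(vertices):
    """vertices: ordered list of (x, y) tuples of a simple polygon."""
    twice_area = 0.0
    n = len(vertices)
    for k in range(n):
        x1, y1 = vertices[k]
        x2, y2 = vertices[(k + 1) % n]
        twice_area += x1 * y2 - x2 * y1
    return "counterclockwise" if twice_area > 0 else "clockwise"

# A unit square traversed counterclockwise:
print(polygon_orientation([(0, 0), (1, 0), (1, 1), (0, 1)]))  # counterclockwise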
https://en.wikipedia.org/wiki/Concepts%2C%20Techniques%2C%20and%20Models%20of%20Computer%20Programming
Concepts, Techniques, and Models of Computer Programming is a textbook published in 2004 about general computer programming concepts from MIT Press written by Université catholique de Louvain professor Peter Van Roy and Royal Institute of Technology, Sweden professor Seif Haridi. Using a carefully selected progression of subsets of the Oz programming language, the book explains the most important programming concepts, techniques, and models (paradigms). Translations of this book have been published in French (by Dunod Éditeur, 2007), Japanese (by Shoeisha, 2007) and Polish (by Helion, 2005). External links Official CTM site, with supplementary material Yves Deville et al. review CTM wiki 2004 non-fiction books Computer_programming_books Computer science books MIT Press books
https://en.wikipedia.org/wiki/Sony%20Reader
The was a line of e-book readers manufactured by Sony. The first model was the PRS-500 released in September 2006 and was related to the earlier Sony Librie, the first commercial E Ink e-reader in 2004 using an electronic paper display developed by E Ink Corporation. The last model was the PRS-T3, after which Sony announced it would no longer release a new consumer e-reader. Sony sold e-books for the Reader from the Sony eBook Library in the US, UK, Japan, Germany, Austria, Canada, France, Italy, and Spain. The Reader also could display Adobe PDFs, ePub format, RSS newsfeeds, JPEGs, and Sony's proprietary BBeB ("BroadBand eBook") format. Some Readers could play MP3 and unencrypted AAC audio files. Compatibility with Adobe digital rights management (DRM) protected PDF and ePub files allowed Sony Reader owners to borrow ebooks from lending libraries in many countries. The DRM rules of the Reader allowed any purchased e-book to be read on up to six devices, at least one of which must be a personal computer running Windows or Mac OS X. Although the owner could not share purchased eBooks on others' devices and accounts, the ability to register five Readers to a single account and share books accordingly was a possible workaround. Models and availability Ten models were produced. The PRS-500 (PRS standing for Portable Reader System) was made available in the United States in September 2006. On 1 November 2006, Readers went on display and for sale at Borders bookstores throughout the US. Borders had an exclusive contract for the Reader until the end of 2006. From April 2007, Sony Reader has been sold in the US by multiple merchants, including Fry's Electronics, Costco, Borders and Best Buy. The eBook Store from Sony is only available to US or Canadian residents or to customers who purchased a US-model reader with bundled eBook Store credit. On July 24, 2007, Sony announced that the PRS-505 Reader would be available in the UK with a launch date of September 3, 2008. Wa
https://en.wikipedia.org/wiki/Sarvega
Sarvega, Inc., was an Intel-owned company that provided XML appliances. The Intel purchase was announced on August 17, 2005, and the company was brought into Intel's Software and Services Group (SSG). Global 1000 organizations using Sarvega XPE Switches included Fujitsu, health care supplier Mt. Sinai Hospital Systems, Reactivity and Westbridge Technology. Sarvega also sought to establish a security appliance product for developing and maintaining security policy and settings. See also XML appliance References External links Intel Press Release Intel Software & Services Group Web services Enterprise application integration XML organizations Intel acquisitions Defunct software companies of the United States
https://en.wikipedia.org/wiki/Bouncy%20Castle%20%28cryptography%29
Bouncy Castle is a collection of APIs used in cryptography. It includes APIs for both the Java and the C# programming languages. The APIs are supported by a registered Australian charitable organization: Legion of the Bouncy Castle Inc. Bouncy Castle is Australian in origin and therefore American restrictions on the export of cryptography from the United States do not apply to it. History Bouncy Castle started when two colleagues were tired of having to re-invent a set of cryptography libraries each time they changed jobs working in server-side Java SE. One of the developers was active in Java ME (J2ME at that time) development as a hobby and a design consideration was to include the greatest range of Java VMs for the library, including those on J2ME. This design consideration led to the architecture that exists in Bouncy Castle. The project, founded in May 2000, was originally written in Java only, but added a C# API in 2004. The original Java API consisted of approximately 27,000 lines of code, including test code and provided support for J2ME, a JCE/JCA provider, and basic X.509 certificate generation. In comparison, the 1.53 release consists of 390,640 lines of code, including test code. It supports the same functionality as the original release with a larger number of algorithms, plus PKCS#10, PKCS#12, CMS, S/MIME, OpenPGP, DTLS, TLS, OCSP, TSP, CMP, CRMF, DVCS, DANE, EST and Attribute Certificates. The C# API is around 145,000 lines of code and supports most of what the Java API does. Some key properties of the project are: Strong emphasis on standards compliance and adaptability. Public support facilities include an issue tracker, dev mailing list and a wiki all available at the website. Commercial support provided under resources for the relevant API listed on the Bouncy Castle website On 18 October 2013, a not-for-profit association, the Legion of the Bouncy Castle Inc. was established in the state of Victoria, Australia, by the core developers and
https://en.wikipedia.org/wiki/Rain%20garden
Rain gardens, also called bioretention facilities, are one of a variety of practices designed to increase rain runoff reabsorption by the soil. They can also be used to treat polluted stormwater runoff. Rain gardens are designed landscape sites that reduce the flow rate, total quantity, and pollutant load of runoff from impervious urban areas like roofs, driveways, walkways, parking lots, and compacted lawn areas. Rain gardens rely on plants and natural or engineered soil medium to retain stormwater and increase the lag time of infiltration, while remediating and filtering pollutants carried by urban runoff. Rain gardens provide a method to reuse and optimize any rain that falls, reducing or avoiding the need for additional irrigation. A benefit of planting rain gardens is the consequential decrease in ambient air and water temperature, a mitigation that is especially effective in urban areas containing an abundance of impervious surfaces that absorb heat in a phenomenon known as the heat-island effect. Rain garden plantings commonly include wetland edge vegetation, such as wildflowers, sedges, rushes, ferns, shrubs and small trees. These plants take up nutrients and water that flow into the rain garden, and they release water vapor back to the atmosphere through the process of transpiration. Deep plant roots also create additional channels for stormwater to filter into the ground. Root systems enhance infiltration, maintain or even augment soil permeability, provide moisture redistribution, and sustain diverse microbial populations involved in biofiltration. Microbes help to break down organic compounds (including some pollutants) and remove nitrogen. Rain gardens are beneficial for many reasons; they improve water quality by filtering runoff, provide localized flood control, create aesthetic landscaping sites, and provide diverse planting opportunities. They also encourage wildlife and biodiversity, tie together buildings and their surrounding environments in i
https://en.wikipedia.org/wiki/Kenneth%20Binmore
Kenneth George "Ken" Binmore, (born 27 September 1940) is an English mathematician, economist, and game theorist, a Professor Emeritus of Economics at University College London (UCL) and a Visiting Emeritus Professor of Economics at the University of Bristol. As a founder of modern economic theory of bargaining (with Nash and Rubinstein), he made important contributions to the foundations of game theory, experimental economics, evolutionary game theory and analytical philosophy. He took up economics after holding the Chair of Mathematics at the London School of Economics. The switch has put him at the forefront of developments in game theory. His other interests include political and moral philosophy, decision theory, and statistics. He has written over 100 scholarly papers and 14 books. Education Binmore studied mathematics at Imperial College London, where he was awarded a 1st class-honours BSc with a Governor's Prize, and later a PhD in mathematical analysis. Research Binmore's major research contributions are to the theory of bargaining and its testing in the laboratory. He is a pioneer of experimental economics. He began his experimental work in the 1980s when most economists thought that game theory would not work in the laboratory. Binmore and his collaborators established that game theory can often predict the behaviour of experienced players very well in laboratory settings, even in the case of human bargaining behaviour, a particularly challenging case for game theory. This has brought him into conflict with some proponents of behavioural economics who emphasise the importance of other-regarding or social preferences, and argue that their findings threaten traditional game theory. Binmore's work in political and moral philosophy began in the 1980s when he first applied bargaining theory to John Rawls' original position. His search for the philosophical foundations of the original position took him first to Kant's works, and then to Hume. Hume inspired
https://en.wikipedia.org/wiki/Integration%20appliance
An integration appliance is a computer system specifically designed to lower the cost of integrating computer systems. Most integration appliances send or receive electronic messages from other computers that are exchanging electronic documents. Integration appliances that support XML messaging standards such as SOAP and Web services are frequently referred to as XML appliances and perform functions that can be grouped together as XML-enabled networking. Vendors providing integration appliances DataPower XI50 and IBM MQ Appliance — IBM Intel SOA Products Division Premier, Inc. References Networking hardware Computer systems
https://en.wikipedia.org/wiki/XML%20firewall
An XML firewall is a specialized device used to protect applications exposed through XML based interfaces like WSDL and REST and scan XML traffic coming into and going out from an organization. Typically deployed in a DMZ environment, an XML firewall is often used to validate XML traffic, control access to XML based resources, filter XML content and rate limit requests to back-end applications exposed through XML based interfaces. XML firewalls are commonly deployed as hardware but can also be found as software or as virtual appliances for VMware, Xen or Amazon EC2. A number of brands of XML firewall exist, and they often differ on parameters like performance (with or without hardware acceleration, 32- vs 64-bit), scalability (how they cluster and perform under load), security certification (Common Criteria and FIPS being the most common), identity support (for SAML, OAuth, enterprise SSO solutions) and extensibility (support for different transport protocols like IBM MQ, Tibco EMS, etc.). XML firewalling functionality is typically embedded inside XML appliances and SOA gateways. See also XML appliance Web Services WS-Security Representational State Transfer Firewall software
https://en.wikipedia.org/wiki/Toad%20%28software%29
Toad is a database management toolset from Quest Software for managing relational and non-relational databases using SQL, aimed at database developers, database administrators, and data analysts. The Toad toolset runs against Oracle, SQL Server, IBM DB2 (LUW & z/OS), SAP and MySQL. A Toad product for data preparation supports many data platforms. History A practicing Oracle DBA, Jim McDaniel, designed Toad for his own use in the mid-1990s. He called it Tool for Oracle Application Developers, shortened to "TOAD". McDaniel initially distributed the tool as shareware and later online as freeware. Quest Software acquired TOAD in October 1998. Quest Software itself was acquired by Dell in 2012 to form Dell Software. In June 2016, Dell announced the sale of their software division, including the Quest business, to Francisco Partners and Elliott Management Corporation. On October 31, 2016, the sale was finalized. On November 1, 2016, the sale of Dell Software to Francisco Partners and Elliott Management was completed, and the company re-launched as Quest Software. Features Connection Manager - Allows users to connect natively to the vendor’s database, whether on-premises or DBaaS. Browser - Allows users to browse the different database/schema objects and their properties for effective management. Editor - A way to create and maintain scripts and database code, with debugging and integration with source control. Unit Testing (Oracle) - Ensures code is functionally tested before it is released into production. Static code review (Oracle) - Ensures code meets the required quality level using a rules-based system. SQL Optimization - Provides developers with a way to tune and optimize SQL statements and database code without relying on a DBA. Advanced optimization enables DBAs to tune SQL effectively in production. Scalability testing and database workload replay - Ensures that database code and SQL will scale properly before being released into production. Books Toad Po
https://en.wikipedia.org/wiki/Dynamization
In computer science, dynamization is the process of transforming a static data structure into a dynamic one. Although static data structures may provide very good functionality and fast queries, their utility is limited because of their inability to grow/shrink quickly, thus making them inapplicable for the solution of dynamic problems, where the input data changes. Dynamization techniques provide uniform ways of creating dynamic data structures. Decomposable search problems We define the problem of searching for matches to a predicate pred in the set S as P(pred, S). The problem is decomposable if the set S can be decomposed into subsets S1, ..., Sk and there exists an efficiently computable operation of result unification ⊔ such that P(pred, S) = ⊔(P(pred, S1), ..., P(pred, Sk)). Decomposition Decomposition is a term used in computer science to break static data structures into smaller units of unequal size. The basic principle is the idea that any decimal number can be translated into a representation in any other base. For more details about the topic see Decomposition (computer science). For simplicity, the binary system will be used in this article, but any other base (as well as other possibilities such as Fibonacci numbers) can also be utilized. If using the binary system, a set of n elements is broken down into subsets of sizes 2^i, where the i-th subset contains 2^i elements if the i-th bit of n in binary is 1, and is empty otherwise. This means that if n has its i-th bit equal to 0, the corresponding set does not contain any elements. Each of the subsets has the same properties as the original static data structure. Operations performed on the new dynamic data structure may involve traversing the sets formed by decomposition. As a result, this will add a logarithmic factor to the cost of the static data structure operations, but will allow insert/delete operations to be added. Kurt Mehlhorn proved several equations for the time complexity of operations on data structures dynamized according to this idea. Some of these equalities are listed. If is the time to build the static data structure is the time to query the static data s
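A rough sketch of the binary decomposition just described (often called the logarithmic method), applied to a decomposable membership-search problem where the unification operation is a logical OR; insertions merge subsets like carries in a binary counter. The class and its details are illustrative, not taken from the text.

# Logarithmic-method dynamization: static sorted arrays of sizes 2^i,
# merged on insertion like binary addition with carries.
from bisect import bisect_left

class DynamizedSet:
    def __init__(self):
        self.levels = []              # levels[i] is None or a sorted list of 2^i elements

    def insert(self, item):
        carry = [item]
        i = 0
        while True:
            if i == len(self.levels):
                self.levels.append(None)
            if self.levels[i] is None:
                self.levels[i] = sorted(carry)    # rebuild the static structure at this level
                return
            carry = carry + self.levels[i]        # merge and carry to the next level
            self.levels[i] = None
            i += 1

    def contains(self, item):
        # Query every non-empty subset; unify the answers with OR (adds a log factor).
        for level in self.levels:
            if level is not None:
                j = bisect_left(level, item)
                if j < len(level) and level[j] == item:
                    return True
        return False

d = DynamizedSet()
for x in [5, 3, 9, 1, 7]:
    d.insert(x)
print(d.contains(9), d.contains(4))   # True False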
https://en.wikipedia.org/wiki/AdStar
AdStar (an acronym for Advanced Storage and Retrieval) was a division of IBM that encompassed all the company's storage products including disk, tape and optical storage systems and storage software. History In 1992 IBM combined their Storage Products businesses comprising eleven sites in eight countries into this division. On its creation, AdStar became the largest information storage business in the world. It had a revenue of $6.11 billion, of which $500 million were sales to other manufacturers (OEM sales), and generated a gross profit of about $440 million (before taxes and restructuring). To provide additional autonomy—thereby further encouraging OEM sales—IBM established AdStar as a wholly owned subsidiary in April 1993, with outsider Ed Zschau as Chairman and CEO. To some observers this appeared to be an admission by IBM that the storage subsidiary no longer provided a strategic advantage by providing proprietary devices for its mainframe products, and that it was being positioned to be sold off as a part of then the IBM chairman John Akers' business strategy. The replacement of Akers by Lou Gerstner in April 1993 changed the strategy from spinout to turnaround, but the disk drive business under Zschau continued to be troubled, declining to $3 billion in 1995. Zschau left AdStar in October 1995, replaced by IBM insider Jim Vanderslice. The AdStar division was dismembered thereafter; the AdStar Distributed Storage Manager (ADSM) was renamed Tivoli Storage Manager in 1999, and the disk drive business component was sold off to Hitachi in 2003. References 1992 establishments in New York (state) 1995 disestablishments in New York (state) American companies established in 1992 American companies disestablished in 1995 Computer companies established in 1992 Computer companies disestablished in 1995 Computer storage companies Defunct computer companies of the United States Former IBM subsidiaries
https://en.wikipedia.org/wiki/Cytoarchitecture
Cytoarchitecture (Greek κύτος= "cell" + ἀρχιτεκτονική= "architecture"), also known as cytoarchitectonics, is the study of the cellular composition of the central nervous system's tissues under the microscope. Cytoarchitectonics is one of the ways to parse the brain, by obtaining sections of the brain using a microtome and staining them with chemical agents which reveal where different neurons are located. The study of the parcellation of nerve fibers (primarily axons) into layers forms the subject of myeloarchitectonics (<Gk. μυελός=marrow + ἀρχιτεκτονική=architecture), an approach complementary to cytoarchitectonics. History of the cerebral cytoarchitecture Defining cerebral cytoarchitecture began with the advent of histology—the science of slicing and staining brain slices for examination. It is credited to the Viennese psychiatrist Theodor Meynert (1833–1892), who in 1867 noticed regional variations in the histological structure of different parts of the gray matter in the cerebral hemispheres. Paul Flechsig was the first to present the cytoarchitecture of the human brain into 40 areas. Alfred Walter Campbell then divided it into 14 areas. Sir Grafton Elliot Smith (1871–1937), a New South Wales native working in Cairo, identified 50 areas. Korbinian Brodmann worked on the brains of diverse mammalian species and developed a division of the cerebral cortex into 52 discrete areas (of which 44 in the human, and the remaining 8 in non-human primate brain). Brodmann used numbers to categorize the different architectural areas, now referred to as a Brodmann Area, and he believed that each of these regions served a unique functional purpose. Constantin von Economo and Georg N. Koskinas, two neurologists in Vienna, produced a landmark work in brain research by defining 107 cortical areas on the basis of cytoarchitectonic criteria. They used letters to categorize the architecture, e.g., "F" for areas of the frontal lobe. The Nissl staining technique The Nissl stain
https://en.wikipedia.org/wiki/162%20%28number%29
162 (one hundred [and] sixty-two) is the natural number between 161 and 163. In mathematics Having only 2 and 3 as its prime divisors, 162 is a 3-smooth number. 162 is also an abundant number, since the sum of its proper divisors, 201, is greater than 162. As the product of numbers three units apart from each other (3 × 6 × 9), it is a triple factorial number. There are 162 ways of partitioning seven items into subsets of at least two items per subset. 162^64 + 1 is a prime number. In religion Jared was 162 when he became the father of Enoch. In sports 162 is the total number of baseball games each team plays during a regular season in Major League Baseball. References Integers
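A quick brute-force sketch verifying some of the arithmetic claims above (abundance, 3-smoothness, and the triple-factorial product 3 × 6 × 9):

# Brute-force checks of some properties of 162.
n = 162

proper_divisor_sum = sum(d for d in range(1, n) if n % d == 0)
print(proper_divisor_sum)        # 201 > 162, so 162 is abundant

m = n
while m % 2 == 0:
    m //= 2
while m % 3 == 0:
    m //= 3
print(m == 1)                    # True: only prime factors 2 and 3, i.e. 3-smooth

print(3 * 6 * 9 == n)            # True: 162 is the triple factorial 9!!!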
https://en.wikipedia.org/wiki/Electroless%20deposition
Electroless deposition (ED) or electroless plating is defined as the autocatalytic process through which metals and metal alloys are deposited onto conductive and nonconductive surfaces. These nonconductive surfaces include plastics, ceramics, and glass, which can then become decorative, anti-corrosive, and conductive depending on their final function. Electroplating, unlike electroless deposition, deposits only on conductive or semi-conductive materials, and only when an external current is applied. Electroless deposition deposits metals onto 2D and 3D structures such as screws, nanofibers, and carbon nanotubes, unlike other plating methods such as Physical Vapor Deposition (PVD), Chemical Vapor Deposition (CVD), and electroplating, which are limited to 2D surfaces. Commonly the surface of the substrate is characterized via pXRD, SEM-EDS, and XPS, which relay parameters set according to its final functionality. These parameters are referred to as key performance indicators, crucial for a researcher's or company's purpose. Electroless deposition continues to rise in importance within the microelectronics, oil and gas, and aerospace industries. History Electroless deposition was serendipitously discovered by Charles Wurtz in 1846. Wurtz noticed that a nickel-phosphorus bath left sitting on the benchtop spontaneously decomposed and formed a black powder. 70 years later François Auguste Roux rediscovered the electroless deposition process and patented it in the United States as the ‘Process of producing metallic deposits’. Roux deposited nickel-phosphorus (Ni-P) coatings onto a substrate, but his invention went uncommercialized. In 1946 the process was re-discovered by Abner Brenner and Grace E. Riddell while working at the National Bureau of Standards. They presented their discovery at the 1946 Convention of the American Electroplaters' Society (AES); a year later, at the same conference, they proposed the term "electroless" for the process and described opt
https://en.wikipedia.org/wiki/165%20%28number%29
165 (one hundred [and] sixty-five) is the natural number following 164 and preceding 166. In mathematics 165 is: an odd number, a composite number, and a deficient number. a sphenic number. a tetrahedral number. the sum of the sums of the divisors of the first 14 positive integers. a self number in base 10. a palindromic number in binary (10100101) and in bases 14 (BB), 32 (55) and 54 (33). a unique period in base 2. In astronomy 165 Loreley is a large Main belt asteroid 165P/LINEAR is a periodic comet in the Solar System The planet Neptune takes about 165 years to orbit the Sun. In the military Caproni Ca.165 Italian fighter aircraft developed before World War II was a United States Navy tanker, part of the U.S. Reserve Fleet, Beaumont, Texas was a United States Navy Barracuda-class submarine during World War II was a United States Navy during World War II was a United States Navy during World War II USS Counsel (AM-165) was a United States Navy during World War II was a United States Navy minesweeper during World War II was a United States Navy Oxford-class technical research ship following World War II was a United States Navy during World War I was a United States Navy during World War II was a United States Navy transport and cargo ship during World War II was a United States Navy yacht during World War I The 165 Squadron, Republic of Singapore Air Force Air Defence Operations Command, Republic of Singapore Air Force In transportation British Rail Class 165 The Blériot 165 was a French four-engine biplane airliner of the early 1920s The Cessna 165 single-engine plane of the 1930s LOT Polish Airlines Flight 165, en route from Warsaw to Cracow Balice airport, crashed during a snowstorm on April 2, 1969 In other fields 165 is also: The year AD 165 or 165 BC 165 AH is a year in the Islamic calendar that corresponds to 781 – 782 CE The atomic number of an element temporarily called Unhexpentium G.165 is a Telecommunicat
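As with 162 above, the less obvious numerical claims can be verified in a few lines of Python; the sketch below is illustrative only, and the helpers divisor_sum and digits are ad hoc names.

```python
# Quick checks of several properties claimed for 165 (standard library only).

def divisor_sum(n: int) -> int:
    """Sum of all positive divisors of n, including n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def digits(n: int, base: int) -> list[int]:
    """Digits of n in the given base, most significant first."""
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]

print(3 * 5 * 11)                                  # 165: sphenic (product of 3 distinct primes)
print(9 * 10 * 11 // 6)                            # 165: the 9th tetrahedral number
print(sum(divisor_sum(k) for k in range(1, 15)))   # 165: sum of sigma(1) .. sigma(14)
print(all(m + sum(map(int, str(m))) != 165
          for m in range(1, 165)))                 # True: 165 is a self number in base 10
for b in (2, 14, 32, 54):                          # palindromic digit strings in these bases
    d = digits(165, b)
    print(b, d, d == d[::-1])
```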
https://en.wikipedia.org/wiki/Chemical%20field-effect%20transistor
A ChemFET is a chemically-sensitive field-effect transistor, that is, a field-effect transistor used as a sensor for measuring chemical concentrations in solution. When the target analyte concentration changes, the current through the transistor will change accordingly. Here, the analyte solution separates the source and gate electrodes. A concentration gradient between the solution and the gate electrode arises due to a semi-permeable membrane on the FET surface containing receptor moieties that preferentially bind the target analyte. This concentration gradient of charged analyte ions creates a chemical potential between the source and gate, which is in turn measured by the FET. Construction A ChemFET's source and drain are constructed as for an ISFET, with the gate electrode separated from the source electrode by a solution. The gate electrode's interface with the solution is a semi-permeable membrane containing the receptors, and a gap to allow the substance under test to come in contact with the sensitive receptor moieties. A ChemFET's threshold voltage depends on the concentration gradient between the analyte in solution and the analyte in contact with its receptor-embedded semi-permeable barrier. Often, ionophores are used to facilitate analyte ion mobility through the substrate to the receptor. For example, when targeting anions, quaternary ammonium salts (such as tetraoctylammonium bromide) are used to provide cationic character to the membrane, facilitating anion mobility through the substrate to the receptor moieties. Applications ChemFETs can be utilized in either the liquid or the gas phase to detect a target analyte, which requires reversible binding of the analyte with a receptor located in the gate electrode membrane. There is a wide range of applications of ChemFETs, including most notably anion- or cation-selective sensing. More work has been done with cation-sensing ChemFETs than anion-sensing ChemFETs. Anion-sensing is more complicated than cation-sensing in ChemF
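A common first-order model for the threshold-voltage shift of an ion-selective ChemFET is an ideal Nernstian response of the membrane potential to the analyte activity. The text above does not specify this model, so the Python sketch below is an assumption-laden illustration: nernstian_shift is a made-up helper name, and the relation is the generic Nernst equation rather than anything prescribed by the article.

```python
# Minimal sketch (assumption: ideal Nernstian membrane response, a common
# first-order model for ion-selective ChemFETs; not a detail taken from the
# text above) relating a threshold-voltage shift to analyte activity.
import math

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant

def nernstian_shift(activity: float, reference_activity: float,
                    charge: int, temperature_k: float = 298.15) -> float:
    """Threshold-voltage shift (in volts) for an ideally selective membrane."""
    return (R * temperature_k) / (charge * F) * math.log(activity / reference_activity)

# Example: a tenfold increase of a monovalent cation's activity at 25 C.
print(1000 * nernstian_shift(1e-3, 1e-4, charge=+1))  # ~59.2 mV
```

For a monovalent ion at room temperature this gives the familiar value of roughly 59 mV per decade of activity; real membranes typically show sub-Nernstian slopes and finite selectivity.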
https://en.wikipedia.org/wiki/Sommerfeld%20identity
The Sommerfeld identity is a mathematical identity, due to Arnold Sommerfeld, used in the theory of propagation of waves (the displayed equations are reconstructed below), in which the square root μ = (λ² − k²)^(1/2) is to be taken with positive real part, to ensure the convergence of the integral and its vanishing in the limits z → ±∞ and r → ∞. Here, R is the distance from the origin while r is the distance from the central axis of a cylinder, as in the cylindrical coordinate system. Here the notation for Bessel functions follows the German convention, to be consistent with the original notation used by Sommerfeld. The function I_0 is the zeroth-order Bessel function of the first kind, better known by the notation J_0 in English literature. This identity is known as the Sommerfeld identity. In alternative notation, the Sommerfeld identity can be more easily seen as an expansion of a spherical wave in terms of cylindrically-symmetric waves, where k_z = (k_0² − k_ρ²)^(1/2). The notation used here is different from that above: r is now the distance from the origin and ρ is the radial distance in a cylindrical coordinate system defined as ρ = (x² + y²)^(1/2). The physical interpretation is that a spherical wave can be expanded into a summation of cylindrical waves in the ρ direction, multiplied by a two-sided plane wave in the z direction; see the Jacobi-Anger expansion. The summation has to be taken over all the wavenumbers k_ρ. The Sommerfeld identity is closely related to the two-dimensional Fourier transform with cylindrical symmetry, i.e., the Hankel transform. It is found by transforming the spherical wave along the in-plane coordinates (x, y or ρ, φ) but not transforming along the height coordinate z. Notes References Mathematical identities Wave mechanics
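The displayed equations of the article appear to have been lost in extraction; the LaTeX block below is a reconstruction of the standard statements of the identity, written to match the definitions kept in the text above (R the distance from the origin, r the distance from the cylinder axis, μ taken with positive real part, and I_0 the German notation for J_0).

```latex
% Reconstructed displayed equations (standard forms of the Sommerfeld identity),
% matching the symbol definitions in the surrounding text.
\[
  \frac{e^{ikR}}{R}
  = \int_{0}^{\infty} I_0(\lambda r)\, e^{-\mu |z|}\, \frac{\lambda\, d\lambda}{\mu},
  \qquad
  \mu = \sqrt{\lambda^{2} - k^{2}}, \quad \operatorname{Re}\mu > 0, \quad
  R = \sqrt{r^{2} + z^{2}}.
\]
% Alternative (wavenumber) notation, as an expansion of a spherical wave in
% cylindrically symmetric waves:
\[
  \frac{e^{ik_0 r}}{r}
  = i \int_{0}^{\infty} \frac{k_\rho}{k_z}\, J_0(k_\rho \rho)\, e^{ik_z |z|}\, dk_\rho,
  \qquad
  k_z = \sqrt{k_0^{2} - k_\rho^{2}}, \quad
  \rho = \sqrt{x^{2} + y^{2}}.
\]
```

The second form is the one the "alternative notation" sentence in the text refers to.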
https://en.wikipedia.org/wiki/Probabilistic%20number%20theory
In mathematics, probabilistic number theory is a subfield of number theory which explicitly uses probability to answer questions about the integers and integer-valued functions. One basic idea underlying it is that different prime numbers are, in some serious sense, like independent random variables. This, however, is not an idea that has a unique useful formal expression. The founders of the theory were Paul Erdős, Aurel Wintner and Mark Kac during the 1930s, one of the periods of investigation in analytic number theory. Foundational results include the Erdős–Wintner theorem and the Erdős–Kac theorem on additive functions. See also Number theory Analytic number theory Areas of mathematics List of number theory topics List of probability topics Probabilistic method Probable prime References Further reading Number theory
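As one concrete instance of the "primes behave like independent random variables" heuristic, the Erdős–Kac theorem says that for large n the number of distinct prime factors ω(n) is approximately normally distributed with mean and variance log log n. The short Python sketch below is an illustration written for this note, not taken from any source; it samples random integers and compares the observed mean of ω(n) with log log n.

```python
# Small empirical illustration of the Erdos-Kac heuristic: for random n below 10^7,
# omega(n) -- the number of distinct prime factors -- has mean roughly log log n.
import math
import random

def omega(n: int) -> int:
    """Number of distinct prime factors of n (trial division; fine for small n)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

random.seed(0)
samples = [omega(random.randrange(2, 10**7)) for _ in range(5000)]
mean = sum(samples) / len(samples)
loglog = math.log(math.log(10**7))
print(f"observed mean omega ~ {mean:.2f}, log log 10^7 ~ {loglog:.2f}")
```

The agreement is only rough at this range, since the convergence in the Erdős–Kac theorem is slow, but it illustrates the probabilistic viewpoint the article describes.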