https://en.wikipedia.org/wiki/Getaddrinfo
In C programming, the functions getaddrinfo() and getnameinfo() convert domain names, hostnames, and IP addresses between human-readable text representations and structured binary formats for the operating system's networking API. Both functions are contained in the POSIX standard application programming interface (API). getaddrinfo and getnameinfo are inverse functions of each other. They are network protocol agnostic, and support both IPv4 and IPv6. This pair is the recommended interface for name resolution when building protocol-independent applications and for transitioning legacy IPv4 code to the IPv6 Internet. Internally, the functions may use a variety of resolution methods not limited to the Domain Name System (DNS). The Name Service Switch is commonly used on Unix-like systems and affects most implementations of this pair as it did with their BSD-socket era predecessors. struct addrinfo The C data structure used to represent addresses and hostnames within the networking API is the following: struct addrinfo { int ai_flags; int ai_family; int ai_socktype; int ai_protocol; socklen_t ai_addrlen; struct sockaddr* ai_addr; char* ai_canonname; /* canonical name */ struct addrinfo* ai_next; /* this struct can form a linked list */ }; In some older systems the type of ai_addrlen is size_t instead of socklen_t. Most socket functions, such as accept() and getpeername(), require the parameter to have type socklen_t * and programmers often pass the address of the ai_addrlen element of the addrinfo structure. If the types are incompatible, e.g., on a 64-bit Solaris 9 system where size_t is 8 bytes and socklen_t is 4 bytes, then run-time errors may result. The addrinfo structure contains an ai_family field, and the sockaddr structure it points to has its own sa_family field. These are set to the same value when the structure is created by getaddrinfo in some implementations. getaddrinfo() getaddrinfo() converts human-readabl
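For reference, a minimal sketch of the usual calling pattern (not taken from the article): the host "example.com" and service "80" are placeholder inputs, and error handling is kept to a minimum.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Minimal sketch: resolve a name/service pair with getaddrinfo(), walk the
   resulting linked list, and convert each address back to text with
   getnameinfo(). "example.com" and "80" are placeholder inputs. */
int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* stream (TCP) sockets */

    int err = getaddrinfo("example.com", "80", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char host[256];
        if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof host,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("family %d -> %s\n", p->ai_family, host);
    }
    freeaddrinfo(res);
    return 0;
}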
https://en.wikipedia.org/wiki/Ramanujan%20theta%20function
In mathematics, particularly -analog theory, the Ramanujan theta function generalizes the form of the Jacobi theta functions, while capturing their general properties. In particular, the Jacobi triple product takes on a particularly elegant form when written in terms of the Ramanujan theta. The function is named after mathematician Srinivasa Ramanujan. Definition The Ramanujan theta function is defined as for . The Jacobi triple product identity then takes the form Here, the expression denotes the -Pochhammer symbol. Identities that follow from this include and and This last being the Euler function, which is closely related to the Dedekind eta function. The Jacobi theta function may be written in terms of the Ramanujan theta function as: Integral representations We have the following integral representation for the full two-parameter form of Ramanujan's theta function: The special cases of Ramanujan's theta functions given by and also have the following integral representations: This leads to several special case integrals for constants defined by these functions when (cf. theta function explicit values). In particular, we have that and that Application in string theory The Ramanujan theta function is used to determine the critical dimensions in Bosonic string theory, superstring theory and M-theory. References Q-analogs Elliptic functions Theta functions Srinivasa Ramanujan
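The displayed formulas in this excerpt appear to have been dropped during extraction. As a hedged reconstruction of the standard statements (not a verbatim restoration of the article's displays), the definition and the Jacobi triple product read:

f(a,b) = \sum_{n=-\infty}^{\infty} a^{n(n+1)/2}\, b^{n(n-1)/2}, \qquad |ab| < 1,

f(a,b) = (-a;\, ab)_\infty \,(-b;\, ab)_\infty \,(ab;\, ab)_\infty,

where (a; q)_\infty is the q-Pochhammer symbol. The special cases usually denoted \varphi and \psi, and the Euler function, are

\varphi(q) = f(q, q), \qquad \psi(q) = f(q, q^3), \qquad f(-q) = f(-q, -q^2) = (q; q)_\infty.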
https://en.wikipedia.org/wiki/PyMOL
PyMOL is an open source but proprietary molecular visualization system created by Warren Lyford DeLano. It was commercialized initially by DeLano Scientific LLC, which was a private software company dedicated to creating useful tools that become universally accessible to scientific and educational communities. It is currently commercialized by Schrödinger, Inc. As the original software license was a permissive licence, they were able to remove it; new versions are no longer released under the Python license, but under a custom license (granting broad use, redistribution, and modification rights, but assigning copyright to any version to Schrodinger, LLC.), and some of the source code is no longer released. PyMOL can produce high-quality 3D images of small molecules and biological macromolecules, such as proteins. According to the original author, by 2009, almost a quarter of all published images of 3D protein structures in the scientific literature were made using PyMOL. PyMOL is one of the few mostly open-source model visualization tools available for use in structural biology. The Py part of the software's name refers to the program having been written in the programming language Python. PyMOL uses OpenGL Extension Wrangler Library (GLEW) and FreeGLUT, and can solve Poisson–Boltzmann equations using the Adaptive Poisson Boltzmann Solver. PyMOL used Tk for the GUI widgets and had native Aqua binaries for macOS through Schrödinger, which were replaced with a PyQt user interface on all platforms with the release of version 2.0. History and commercialization Early versions of PyMol were released under the Python License. On 1 August 2006, DeLano Scientific adopted a controlled-access download system for precompiled PyMOL builds (including betas) distributed by the company. Access to these executables is now limited to registered users who are paying customers; educational builds are available free to students and teachers. However, most of the current source code
https://en.wikipedia.org/wiki/A%20Symbolic%20Analysis%20of%20Relay%20and%20Switching%20Circuits
"A Symbolic Analysis of Relay and Switching Circuits" is the title of a master's thesis written by computer science pioneer Claude E. Shannon while attending the Massachusetts Institute of Technology (MIT) in 1937. In his thesis, Shannon, a dual degree graduate of the University of Michigan, proved that Boolean algebra could be used to simplify the arrangement of the relays that were the building blocks of the electromechanical automatic telephone exchanges of the day. Shannon went on to prove that it should also be possible to use arrangements of relays to solve Boolean algebra problems. The utilization of the binary properties of electrical switches to perform logic functions is the basic concept that underlies all electronic digital computer designs. Shannon's thesis became the foundation of practical digital circuit design when it became widely known among the electrical engineering community during and after World War II. At the time, the methods employed to design logic circuits (for example, contemporary Konrad Zuse's Z1) were ad hoc in nature and lacked the theoretical discipline that Shannon's paper supplied to later projects. Psychologist Howard Gardner described Shannon's thesis as "possibly the most important, and also the most famous, master's thesis of the century". A version of the paper was published in the 1938 issue of the Transactions of the American Institute of Electrical Engineers, and in 1940, it earned Shannon the Alfred Noble American Institute of American Engineers Award. References External links Full text at MIT Computer science papers Information theory Applied mathematics 1937 in science 1937 documents Claude Shannon
https://en.wikipedia.org/wiki/Rogers%E2%80%93Ramanujan%20identities
In mathematics, the Rogers–Ramanujan identities are two identities related to basic hypergeometric series and integer partitions. The identities were first discovered and proved by , and were subsequently rediscovered (without a proof) by Srinivasa Ramanujan some time before 1913. Ramanujan had no proof, but rediscovered Rogers's paper in 1917, and they then published a joint new proof . independently rediscovered and proved the identities. Definition The Rogers–Ramanujan identities are and . Here, denotes the q-Pochhammer symbol. Combinatorial interpretation Consider the following: is the generating function for partitions with exactly parts such that adjacent parts have difference at least 2. is the generating function for partitions such that each part is congruent to either 1 or 4 modulo 5. is the generating function for partitions with exactly parts such that adjacent parts have difference at least 2 and such that the smallest part is at least 2. is the generating function for partitions such that each part is congruent to either 2 or 3 modulo 5. The Rogers–Ramanujan identities could be now interpreted in the following way. Let be a non-negative integer. The number of partitions of such that the adjacent parts differ by at least 2 is the same as the number of partitions of such that each part is congruent to either 1 or 4 modulo 5. The number of partitions of such that the adjacent parts differ by at least 2 and such that the smallest part is at least 2 is the same as the number of partitions of such that each part is congruent to either 2 or 3 modulo 5. Alternatively, The number of partitions of such that with parts the smallest part is at least is the same as the number of partitions of such that each part is congruent to either 1 or 4 modulo 5. The number of partitions of such that with parts the smallest part is at least is the same as the number of partitions of such that each part is congruent to either 2 or 3 modu
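The identities themselves seem to have been lost in extraction; in the standard notation (a hedged reconstruction, not the article's exact display) they are:

G(q) = \sum_{n=0}^{\infty} \frac{q^{n^2}}{(q;q)_n} = \frac{1}{(q;q^5)_\infty\, (q^4;q^5)_\infty},

H(q) = \sum_{n=0}^{\infty} \frac{q^{n^2+n}}{(q;q)_n} = \frac{1}{(q^2;q^5)_\infty\, (q^3;q^5)_\infty},

where (a;q)_n denotes the q-Pochhammer symbol.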
https://en.wikipedia.org/wiki/Quadro
Quadro was Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design (CAD), computer-generated imagery (CGI), digital content creation (DCC) applications, scientific calculations and machine learning from 2000 to 2020. Quadro-branded graphics cards differed from the mainstream GeForce lines in that the Quadro cards included the use of ECC memory and enhanced floating point precision. These are desirable properties when the cards are used for calculations which require greater reliability and precision compared to graphics rendering for video games. Nvidia has moved away from the Quadro branding for new products, starting with the launch of the Ampere architecture-based RTX A6000 on October 5, 2020. To indicate the upgrade to the Nvidia Ampere architecture for their graphics cards technology, Nvidia RTX is the product line being produced and developed moving forward for use in professional workstations. The Nvidia Quadro product line directly competed with AMD's Radeon Pro (formerly FirePro/FireGL) line of professional workstation cards. History The Quadro line of GPU cards emerged in an effort towards market segmentation by Nvidia. In introducing Quadro, Nvidia was able to charge a premium for essentially the same graphics hardware in professional markets, and direct resources to properly serve the needs of those markets. To differentiate their offerings, Nvidia used driver software and firmware to selectively enable features vital to segments of the workstation market, such as high-performance anti-aliased lines and two-sided lighting, in the Quadro product. These features were of little value to the gamers that Nvidia's products already sold to, but their lack prevented high-end customers from using the less expensive products. The Quadro line also received improved support through a certified driver program. There are parallels between the market segmentation used to sell the Quadro line of products to wor
https://en.wikipedia.org/wiki/Quantum%20network
Quantum networks form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a small quantum computer being able to perform quantum logic gates on a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems. Basics Quantum networks for computation Networked quantum computing or distributed quantum computing works by linking multiple quantum processors through a quantum network by sending qubits in-between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form a computer cluster in classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently quantum processors are only separated by short distances. Quantum networks for communication In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances. This way, local quantum networks can be intra connected into a quantum internet. A quantum internet supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time. This i
https://en.wikipedia.org/wiki/Bifurcation%20theory
Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family of curves, such as the integral curves of a family of vector fields, and the solutions of a family of differential equations. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden 'qualitative' or topological change in its behavior. Bifurcations occur in both continuous systems (described by ordinary, delay or partial differential equations) and discrete systems (described by maps). The name "bifurcation" was first introduced by Henri Poincaré in 1885 in the first paper in mathematics showing such a behavior. Bifurcation types It is useful to divide bifurcations into two principal classes: Local bifurcations, which can be analysed entirely through changes in the local stability properties of equilibria, periodic orbits or other invariant sets as parameters cross through critical thresholds; and Global bifurcations, which often occur when larger invariant sets of the system 'collide' with each other, or with equilibria of the system. They cannot be detected purely by a stability analysis of the equilibria (fixed points). Local bifurcations A local bifurcation occurs when a parameter change causes the stability of an equilibrium (or fixed point) to change. In continuous systems, this corresponds to the real part of an eigenvalue of an equilibrium passing through zero. In discrete systems (described by maps), this corresponds to a fixed point having a Floquet multiplier with modulus equal to one. In both cases, the equilibrium is non-hyperbolic at the bifurcation point. The topological changes in the phase portrait of the system can be confined to arbitrarily small neighbourhoods of the bifurcating fixed points by moving the bifurcation parameter close to the bifurcation point (hence 'local'). More
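As a small illustration of a local bifurcation (an example added here for concreteness, not taken from the excerpt), consider the one-dimensional saddle-node normal form

\frac{dx}{dt} = r + x^2.

For r < 0 there are two equilibria x = \pm\sqrt{-r}, one stable and one unstable; as the parameter r increases through 0 they collide and disappear. At r = 0 the equilibrium x = 0 has derivative f'(0) = 0, a zero eigenvalue, matching the non-hyperbolicity condition described above.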
https://en.wikipedia.org/wiki/Internet%20Communications%20Engine
The Internet Communications Engine, or Ice, is an open-source RPC framework developed by ZeroC. It provides SDKs for C++, C#, Java, JavaScript, MATLAB, Objective-C, PHP, Python, Ruby and Swift, and can run on various operating systems, including Linux, Windows, macOS, iOS and Android. Ice implements a proprietary application layer communications protocol, called the Ice protocol, that can run over TCP, TLS, UDP, WebSocket and Bluetooth. As its name indicates, Ice can be suitable for applications that communicate over the Internet, and includes functionality for traversing firewalls. History Initially released in February 2003, Ice was influenced by the Common Object Request Broker Architecture (CORBA) in its design, and indeed was created by several influential CORBA developers, including Michi Henning. However, according to ZeroC, it was smaller and less complex than CORBA because it was designed by a small group of experienced developers, instead of suffering from design by committee. In 2004, it was reported that a game called "Wish" by a company named Mutable Realms used Ice. In 2008, it was reported that Big Bear Solar Observatory had used the software since 2005. The source code repository for Ice is on GitHub since May 2015. Components Ice components include object-oriented remote-object-invocation, replication, grid-computing, failover, load-balancing, firewall-traversals and publish-subscribe services. To gain access to those services, applications are linked to a stub library or assembly, which is generated from a language-independent IDL-like syntax called slice. IceStorm is an object-oriented publish-and-subscribe framework that also supports federation and quality-of-service. Unlike other publish-subscribe frameworks such as Tibco Software's Rendezvous or SmartSockets, message content consist of objects of well defined classes rather than of structured text. IceGrid is a suite of frameworks that provide object-oriented load balancing, failover
https://en.wikipedia.org/wiki/Bottle-shock
Bottle-shock or Bottle-sickness is a temporary condition of wine characterized by muted or disjointed fruit flavors. It often occurs immediately after bottling or when wines (usually fragile wines) are given an additional dose of sulfur (in the form of sulfur dioxide or sulfite solution). After a few weeks, the condition usually disappears. References Oenology Wine terminology
https://en.wikipedia.org/wiki/Predicta
The Philco Predicta is a black and white television chassis style, which was made in several cabinet models with 17” or 21” screens by the American company Philco from 1958 to 1960. The Predicta was marketed as the world’s first swivel-screen television. Designed by Catherine Winkler, Severin Jonassen and Richard Whipple, it featured a picture tube (CRT) that separated from the rest of the cabinet. The safety mask on the front of the picture tube was made with a new organic plastic product by Eastman Plastics called “tenite”, which protected the glass and provided implosion protection for the user and produced a greenish tint. The Predicta also had a thinner picture tube than many other televisions at the time, which led it to be marketed as the more futuristic television set. Predicta television sets were constructed with a variety of cabinet configurations, some detachable but all separate from the tube itself and connected by wires. As its manufacturer explained in mid-1959, “The world’s first separate screen receiver, Philco’s ‘Predicta,’ marked a revolution in the design and engineering of television sets. Announced in June 1958, ‘Predicta’ was made possible by the development of a shorter 21-inch picture tube called the “SF” tube (for ‘semi-flat’), and a newly-designed contour chassis....’Slender Seventeener’ portables in the ‘Predicta’ line are the thinnest and most compact portables on the market today.” The Predicta Tandem model had a fully detached picture tube and an umbilical cable, which allowed the controls and speaker for the set to be next to the viewer, with the screen up to 25 feet away. Also unique to this version was a large handle over the top to carry the cathode ray tube portion wherever the viewer wanted it. This version also required more internal circuitry to drive the video signal through the cable. Philco also made Directa, a short-lived remote series in 1959 before the firm was bought by the Ford Motor Company in 1961. This set f
https://en.wikipedia.org/wiki/Quality%2C%20cost%2C%20delivery
Quality, cost, delivery (QCD), sometimes expanded to quality, cost, delivery, morale, safety (QCDMS), is a management approach originally developed by the British automotive industry. QCD assess different components of the production process and provides feedback in the form of facts and figures that help managers make logical decisions. By using the gathered data, it is easier for organizations to prioritize their future goals. QCD helps break down processes to organize and prioritize efforts before they grow overwhelming. QCD is a "three-dimensional" approach. If there is a problem with even one dimension, the others will inevitably suffer as well. One dimension cannot be sacrificed for the sake of the other two. Quality Quality is the ability of a product or service to meet and exceed customer expectations. It is the result of the efficiency of the entire production process formed of people, material, and machinery. Customer requirements determine the quality scope. Quality is a competitive advantage; poor quality often results in bad business. The U.S. business organizations in the 1970s focused more on cost and productivity. That approach led to Japanese businesses capturing a major share of the U.S. market. It was not until the late 1970s and the beginning of the 1980s that the quality factor drastically shifted and became a strategic approach, created by Harvard professor David Garvin. This approach focuses on preventing mistakes and puts a great emphasis on customer satisfaction. Quality basis David A. Garvin lists eight dimensions of quality: Performance is a product's primary operating characteristics. For example, for a vehicle audio system, those characteristics include sound quality, surround sound, and Wi-Fi connectivity. Conformance refers to the degree to which a certain product meets the customer's expectations. Special features or extras are additional features of a product or service. An example of extras could be free meals on an airpla
https://en.wikipedia.org/wiki/Pseudo-Anosov%20map
In mathematics, specifically in topology, a pseudo-Anosov map is a type of a diffeomorphism or homeomorphism of a surface. It is a generalization of a linear Anosov diffeomorphism of the torus. Its definition relies on the notion of a measured foliation introduced by William Thurston, who also coined the term "pseudo-Anosov diffeomorphism" when he proved his classification of diffeomorphisms of a surface. Definition of a measured foliation A measured foliation F on a closed surface S is a geometric structure on S which consists of a singular foliation and a measure in the transverse direction. In some neighborhood of a regular point of F, there is a "flow box" φ: U → R2 which sends the leaves of F to the horizontal lines in R2. If two such neighborhoods Ui and Uj overlap then there is a transition function φij defined on φj(Uj), with the standard property which must have the form for some constant c. This assures that along a simple curve, the variation in y-coordinate, measured locally in every chart, is a geometric quantity (i.e. independent of the chart) and permits the definition of a total variation along a simple closed curve on S. A finite number of singularities of F of the type of "p-pronged saddle", p≥3, are allowed. At such a singular point, the differentiable structure of the surface is modified to make the point into a conical point with the total angle πp. The notion of a diffeomorphism of S is redefined with respect to this modified differentiable structure. With some technical modifications, these definitions extend to the case of a surface with boundary. Definition of a pseudo-Anosov map A homeomorphism of a closed surface S is called pseudo-Anosov if there exists a transverse pair of measured foliations on S, Fs (stable) and Fu (unstable), and a real number λ > 1 such that the foliations are preserved by f and their transverse measures are multiplied by 1/λ and λ. The number λ is called the stretch factor or dilatation of f. Significa
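The form of the transition function referred to above (the display seems to have been dropped in extraction) is, in the usual formulation, measure-preserving in the transverse coordinate. As a hedged reconstruction, with f_ij a placeholder name for the first coordinate of the transition map:

\varphi_{ij}(x, y) = \big( f_{ij}(x, y),\; c \pm y \big)

for some constant c, so that leaves (lines of constant y) are sent to leaves and the transverse variation |dy| is preserved.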
https://en.wikipedia.org/wiki/System.map
In Linux, the file is a symbol table used by the kernel. A symbol table is a look-up between symbol names and their addresses in memory. A symbol name may be the name of a variable or the name of a function. The System.map is required when the address of a symbol name, or the symbol name of an address, is needed. It is especially useful for debugging kernel panics and kernel oopses. The kernel does the address-to-name translation itself when CONFIG_KALLSYMS is enabled so that tools like ksymoops are not required. Internals The following is part of a System.map file: c041bc90 b packet_sklist c041bc94 b packet_sklist_lock c041bc94 b packet_socks_nr c041bc98 A __bss_stop c041bc98 A _end c041c000 A pg0 ffffe400 A __kernel_vsyscall ffffe410 A SYSENTER_RETURN ffffe420 A __kernel_sigreturn ffffe440 A __kernel_rt_sigreturn Because addresses may change from one build to the next, a new System.map is generated for each build of the kernel. Symbol types The character between the address and the symbol (separated by spaces) is the type of a symbol. The nm utility program on Unix systems lists the symbols from object files. The System.map is directly related to it, in that this file is produced by nm on the whole kernel program just like nm lists the symbols and their types for any small object programs. Some of these types are: A for absolute B or b for uninitialized data section (called BSS) D or d for initialized data section G or g for initialized data section for small objects (global) i for sections specific to DLLs N for debugging symbol p for stack unwind section R or r for read only data section S or s for uninitialized data section for small objects T or t for text (code) section U for undefined V or v for weak object W or w for weak objects which have not been tagged so - for stabs symbol in an a.out object file ? for "symbol type unknown" Filesystem location After building the Linux kernel, System.map is located in the root of the sou
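Purely for illustration (this is not a kernel tool), a short C sketch that reads lines of the address/type/name form shown above and prints the text-section symbols; the file path is a placeholder:

#include <stdio.h>

/* Illustrative only: parse System.map-style lines ("address type name")
   and print text-section symbols. The path "System.map" is a placeholder. */
int main(void) {
    FILE *f = fopen("System.map", "r");
    if (!f) { perror("System.map"); return 1; }

    unsigned long addr;
    char type;
    char name[256];
    /* Each line holds a hex address, a one-character symbol type, and a name. */
    while (fscanf(f, "%lx %c %255s", &addr, &type, name) == 3) {
        if (type == 'T' || type == 't')
            printf("%08lx %c %s\n", addr, type, name);
    }
    fclose(f);
    return 0;
}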
https://en.wikipedia.org/wiki/Medium-dependent%20interface
A medium dependent interface (MDI) describes the interface (both physical and electrical/optical) in a computer network from a physical layer implementation to the physical medium used to carry the transmission. Ethernet over twisted pair also defines a medium dependent interface crossover (MDI-X) interface. Auto MDI-X ports on newer network interfaces detect if the connection would require a crossover, and automatically chooses the MDI or MDI-X configuration to properly match the other end of the link. Ethernet The popular Ethernet family defines common medium-dependent interfaces. For 10BASE5, connection to the coaxial cable was made with either a vampire tap or a pair of N connectors. For 10BASE2, the connection to the coaxial cable was typically made with a single BNC connector to which a T-piece was attached. For twisted-pair cabling 8P8C, modular connectors are used (often incorrectly called "RJ45" in this context). For fiber, a variety of connectors are used depending on manufacturer and physical space availability. With 10BASE-T and 100BASE-TX, separate twisted pairs are used for the two directions of communication. Since twisted pair cables are conventionally wired pin to pin (straight-through) there are two different pinouts used for the medium-dependent interface. These are referred to as MDI and MDI-X (medium-dependent interface crossover). When connecting an MDI port to an MDI-X port, a straight-through cable is used, while to connect two MDI ports or two MDI-X ports, a crossover cable must be used. Conventionally, MDI is used on end devices and routers while MDI-X is used on hubs and switches. Some hubs and switches have an MDI uplink port (often switchable) to connect to other hubs or switches without a crossover cable. MDI vs. MDI-X The terminology generally refers to variants of the Ethernet over twisted pair technology that use a female 8P8C port connection on a computer, or other network device. The X refers to the fact that transmit wir
https://en.wikipedia.org/wiki/Architectural%20rendering
Architectural rendering, architectural illustration, or architectural visualization (often abbreviated to archviz or ArchViz) is the art of creating three-dimensional images or animations showing the attributes of a proposed architectural design. Computer generated renderings Images that are generated by a computer using three-dimensional modeling software or other computer software for presentation purposes are commonly termed "Computer Generated Renderings". Rendering techniques vary. Some methods create simple flat images or images with basic shadows. A popular technique uses sophisticated software to approximate accurate lighting and materials. This technique is often referred to as a "Photo Real" rendering. Renderings are usually created for presentation, marketing and design analysis purposes. Still renderings 3D Walkthrough and Flythrough animations (movie) Virtual Tours Live Virtual Reality Floor Plans Photorealistic 3D Rendering Realtime 3D Renderings Panoramic Renderings Light and Shadow (sciography) study renderings Renovation Renderings (photomontage) and others Common types of architectural renderings Architectural renderings are often categorized into 3 sub-types: Exterior Renderings, Interior Renderings, and Aerial Renderings. Exterior renderings are defined as images where the vantage point or viewing angle is located outside of the building, while interior renderings are defined as images where the vantage point or viewing angle is located inside of the building. Aerial renderings are similar to exterior renderings, however, their viewing angle is located outside and above the building, looking down, usually at an angle. Education Traditionally rendering techniques were taught in a "master class" practice (such as the École des Beaux-Arts), where a student works creatively with a mentor in the study of fine arts. Contemporary architects use hand-drawn sketches, pen and ink drawings, and watercolor renderings to represent their design with the
https://en.wikipedia.org/wiki/Nodal%20analysis
In electric circuits analysis, nodal analysis, node-voltage analysis, or the branch current method is a method of determining the voltage (potential difference) between "nodes" (points where elements or branches connect) in an electrical circuit in terms of the branch currents. In analyzing a circuit using Kirchhoff's circuit laws, one can either do nodal analysis using Kirchhoff's current law (KCL) or mesh analysis using Kirchhoff's voltage law (KVL). Nodal analysis writes an equation at each electrical node, requiring that the branch currents incident at a node must sum to zero. The branch currents are written in terms of the circuit node voltages. As a consequence, each branch constitutive relation must give current as a function of voltage; an admittance representation. For instance, for a resistor, Ibranch = Vbranch * G, where G (=1/R) is the admittance (conductance) of the resistor. Nodal analysis is possible when all the circuit elements' branch constitutive relations have an admittance representation. Nodal analysis produces a compact set of equations for the network, which can be solved by hand if small, or can be quickly solved using linear algebra by computer. Because of the compact system of equations, many circuit simulation programs (e.g., SPICE) use nodal analysis as a basis. When elements do not have admittance representations, a more general extension of nodal analysis, modified nodal analysis, can be used. Procedure Note all connected wire segments in the circuit. These are the nodes of nodal analysis. Select one node as the ground reference. The choice does not affect the element voltages (but it does affect the nodal voltages) and is just a matter of convention. Choosing the node with the most connections can simplify the analysis. For a circuit of N nodes the number of nodal equations is N−1. Assign a variable for each node whose voltage is unknown. If the voltage is already known, it is not necessary to assign a variable. For each unk
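As a tiny worked example of the procedure (added here for illustration; the source I_s and resistors R_1, R_2 are hypothetical, not from the excerpt): a single unknown node is driven by a current source I_s and tied to the reference node through R_1 and R_2. KCL at that node, written with admittances G_k = 1/R_k, gives

(G_1 + G_2)\, V_1 = I_s \quad\Longrightarrow\quad V_1 = \frac{I_s}{G_1 + G_2} = \frac{I_s R_1 R_2}{R_1 + R_2},

a one-equation system, consistent with the N − 1 = 1 nodal equation expected for a two-node circuit.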
https://en.wikipedia.org/wiki/Vasculogenesis
Vasculogenesis is the process of blood vessel formation, occurring by a de novo production of endothelial cells. It is sometimes paired with angiogenesis, as the first stage of the formation of the vascular network, closely followed by angiogenesis. Process In the sense distinguished from angiogenesis, vasculogenesis is different in one aspect: whereas angiogenesis is the formation of new blood vessels from pre-existing ones, vasculogenesis is the formation of new blood vessels, in blood islands, when there are no pre-existing ones. For example, if a monolayer of endothelial cells begins sprouting to form capillaries, angiogenesis is occurring. Vasculogenesis, in contrast, is when endothelial precursor cells (angioblasts) migrate and differentiate in response to local cues (such as growth factors and extracellular matrices) to form new blood vessels. These vascular trees are then pruned and extended through angiogenesis. Occurrences Vasculogenesis occurs during embryologic development of the circulatory system. Specifically, around blood islands, which first arise in the mesoderm of the yolk sac at 3 weeks of development. Vasculogenesis can also occur in the adult organism from circulating endothelial progenitor cells (derivatives of stem cells) able to contribute, albeit to varying degrees, to neovascularization. Examples of where vasculogenesis can occur in adults are: Tumor growth (see HP59) Revascularization or neovascularization after trauma, for example, after cardiac ischemia or retinal ischemia Endometriosis - It appears that up to 37% of the microvascular endothelium of the ectopic endometrial tissue originates from endothelial progenitor cells. See also Vascular remodelling in the embryo Vasculogenic mimicry References Angiogenesis Cardiovascular physiology Embryology of cardiovascular system
https://en.wikipedia.org/wiki/Xanthoria%20parietina
Xanthoria parietina is a foliose lichen in the family Teloschistaceae. It has wide distribution, and many common names such as common orange lichen, yellow scale, maritime sunburst lichen and shore lichen. It can be found near the shore on rocks or walls (hence the epithet parietina meaning "on walls"), and also on inland rocks, walls, or tree bark. It was chosen as a model organism for genomic sequencing (planned in 2006) by the US Department of Energy Joint Genome Institute (JGI). Taxonomy The species was first scientifically described by Carl Linnaeus in 1753, as Lichen parietinus. Xanthoria coomae, described from New South Wales in 2007, and Xanthoria polessica, described from Belarus in 2013, were later determined to be synonyms of Xanthoria parietina. Description The vegetative body of the lichen, the thallus, is foliose, and typically less than wide. The lobes of the thallus are 1–4 mm in diameter, and flattened down. The upper surface is some shade of yellow, orange, or greenish yellow, almost green when growing in shady situations, while the lower surface is white, with a cortex, and with sparse pale rhizines or . The vegetative reproductive structures soredia and isidia are absent in this species, however, apothecia are usually present. The outer "skin" of the lichen, the cortex, is composed of closely packed fungal hyphae and serves to protect the thallus from water loss due to evaporation as well as harmful effects of high levels of irradiation. In Xanthoria parietina, the thickness of the thalli is known to vary depending on the habitat in which it grows. Thalli are much thinner in shady locations than in those exposed to full sunshine; this has the effect of protecting the algae that cannot tolerate high light intensities. The lichen pigment parietin gives this species a deep yellow or orange-red color. Xanthoria parietina prefers growing on bark and wood; it is found more rarely on rock. Nutrient enrichment by bird droppings enhances the ability
https://en.wikipedia.org/wiki/Phosphate-buffered%20saline
Phosphate-buffered saline (PBS) is a buffer solution (pH ~ 7.4) commonly used in biological research. It is a water-based salt solution containing disodium hydrogen phosphate, sodium chloride and, in some formulations, potassium chloride and potassium dihydrogen phosphate. The buffer helps to maintain a constant pH. The osmolarity and ion concentrations of the solutions match those of the human body (isotonic). Applications PBS has many uses because it is isotonic and non-toxic to most cells. These uses include substance dilution and cell container rinsing. PBS with EDTA is also used to disengage attached and clumped cells. Divalent metals such as zinc, however, cannot be added as this will result in precipitation. For these types of applications, Good's buffers are recommended. PBS has been shown to be an acceptable alternative to viral transport medium regarding transport and storage of RNA viruses, such as SARS-CoV-2. Preparation There are many different ways to prepare PBS solutions (one of them is Dulbecco's phosphate-buffered saline (DPBS), which has a different composition than that of standard PBS). Some formulations do not contain potassium and magnesium, while other ones contain calcium and/or magnesium (depending on whether or not the buffer is used on live or fixed tissue: the latter does not require CaCl2 or MgCl2 ). Start with 800 mL of distilled water to dissolve all salts. Add distilled water to a total volume of 1 liter. The resultant 1× PBS will have a final concentration of 157 mM Na+, 140mM Cl−, 4.45mM K+, 10.1 mM HPO42−, 1.76 mM H2PO4− and a pH of 7.96. Add 2.84 mM of HCl to shift the buffer to 7.3 mM HPO42− and 4.6 mM H2PO4− for a final pH of 7.4 and a Cl− concentration of 142 mM. The pH of PBS is ~7.4. When making buffer solutions, it is good practice to always measure the pH directly using a pH meter. If necessary, pH can be adjusted using hydrochloric acid or sodium hydroxide. PBS can also be prepared by using commercially made PBS bu
https://en.wikipedia.org/wiki/Smn%20theorem
In computability theory the theorem, (also called the translation lemma, parameter theorem, and the parameterization theorem) is a basic result about programming languages (and, more generally, Gödel numberings of the computable functions) (Soare 1987, Rogers 1967). It was first proved by Stephen Cole Kleene (1943). The name comes from the occurrence of an S with subscript n and superscript m in the original formulation of the theorem (see below). In practical terms, the theorem says that for a given programming language and positive integers m and n, there exists a particular algorithm that accepts as input the source code of a program with free variables, together with m values. This algorithm generates source code that effectively substitutes the values for the first m free variables, leaving the rest of the variables free. Details The basic form of the theorem applies to functions of two arguments (Nies 2009, p. 6). Given a Gödel numbering of recursive functions, there is a primitive recursive function s of two arguments with the following property: for every Gödel number p of a partial computable function f with two arguments, the expressions and are defined for the same combinations of natural numbers x and y, and their values are equal for any such combination. In other words, the following extensional equality of functions holds for every x: More generally, for any m, , there exists a primitive recursive function of arguments that behaves as follows: for every Gödel number p of a partial computable function with arguments, and all values of x1, …, xm: The function s described above can be taken to be . Formal statement Given arities and , for every Turing Machine of arity and for all possible values of inputs , there exists a Turing machine of arity , such that Furthermore, there is a Turing machine that allows to be calculated from and ; it is denoted . Informally, finds the Turing Machine that is the result of hardcodin
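In the usual notation (the displayed formulas appear to have been lost in extraction; this is a hedged reconstruction of the standard statement rather than the article's exact wording), the two-argument case says

\varphi_{s(p,\,x)}(y) \simeq \varphi_p(x, y) \quad \text{for all } x, y,

and, more generally, for each m, n there is a primitive recursive function s^m_n of m + 1 arguments with

\varphi^{(n)}_{s^m_n(p,\, x_1, \ldots, x_m)}(y_1, \ldots, y_n) \simeq \varphi^{(m+n)}_p(x_1, \ldots, x_m, y_1, \ldots, y_n).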
https://en.wikipedia.org/wiki/Molecular%20memory
Molecular memory is a term for data storage technologies that use molecular species as the data storage element, rather than e.g. circuits, magnetics, inorganic materials or physical shapes. The molecular component can be described as a molecular switch, and may perform this function by any of several mechanisms, including charge storage, photochromism, or changes in capacitance. In a perfect molecular memory device, each individual molecule contains a bit of data, leading to massive data capacity. However, practical devices are more likely to use large numbers of molecules for each bit, in the manner of 3D optical data storage (many examples of which can be considered molecular memory devices). The term "molecular memory" is most often used to mean very fast, electronically addressed solid-state data storage, as is the term computer memory. At present, molecular memories are still found only in laboratories. Examples One approach to molecular memories is based on special compounds such as porphyrin-based polymers which are capable of storing electric charge. Once a certain voltage threshold is achieved the material oxidizes, releasing an electric charge. The process is reversible, in effect creating an electric capacitor. The properties of the material allow for a much greater capacitance per unit area than with conventional DRAM memory, thus potentially leading to smaller and cheaper integrated circuits. Several universities and a number of companies (Hewlett-Packard, ZettaCore) have announced work on molecular memories, which some hope will supplant DRAM memory as the lowest cost technology for high-speed computer memory. NASA is also supporting research on non-volatile molecular memories. In 2018, researches from the University of Jyväskylä in Finland, developed a molecular memory which can memorize the direction of a magnetic field for long periods of time after being switched off at extremely low temperatures, which would aid in enhancing the storage capaci
https://en.wikipedia.org/wiki/Critical%20radius
Critical radius is the minimum particle size from which an aggregate is thermodynamically stable. In other words, it is the lowest radius formed by atoms or molecules clustering together (in a gas, liquid or solid matrix) before a new phase inclusion (a bubble, a droplet or a solid particle) is viable and begins to grow. Formation of such stable nuclei is called nucleation. At the beginning of the nucleation process, the system finds itself in an initial phase. Afterwards, the formation of aggregates or clusters from the new phase occurs gradually and randomly at the nanoscale. Subsequently, if the process is feasible, the nucleus is formed. Notice that the formation of aggregates is conceivable under specific conditions. When these conditions are not satisfied, a rapid creation-annihilation of aggregates takes place and the nucleation and posterior crystal growth process does not happen. In precipitation models, nucleation is generally a prelude to models of the crystal growth process. Sometimes precipitation is rate-limited by the nucleation process. An example would be when someone takes a cup of superheated water from a microwave and, when jiggling it with a spoon or against the wall of the cup, heterogeneous nucleation occurs and most of water particles convert into steam. If the change in phase forms a crystalline solid in a liquid matrix, the atoms might then form a dendrite. The crystal growth continues in three dimensions, the atoms attaching themselves in certain preferred directions, usually along the axes of a crystal, forming a characteristic tree-like structure of a dendrite. Mathematical derivation The critical radius of a system can be determined from its Gibbs free energy. It has two components, the volume energy and the surface energy . The first one describes how probable it is to have a phase change and the second one is the amount of energy needed to create an interface. The mathematical expression of , considering spherical particl
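A hedged sketch of the standard classical-nucleation expressions that the truncated derivation leads to, for a spherical particle of radius r with volume free-energy change per unit volume \Delta G_v (negative for a favourable transformation) and surface energy \sigma:

\Delta G(r) = \frac{4}{3}\pi r^3\, \Delta G_v + 4\pi r^2 \sigma, \qquad \left.\frac{d\,\Delta G}{dr}\right|_{r^*} = 0 \;\Longrightarrow\; r^* = -\frac{2\sigma}{\Delta G_v} = \frac{2\sigma}{|\Delta G_v|}.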
https://en.wikipedia.org/wiki/High-%CE%BA%20dielectric
In the semiconductor industry, the term high-κ dielectric refers to a material with a high dielectric constant (κ, kappa), as compared to silicon dioxide. High-κ dielectrics are used in semiconductor manufacturing processes where they are usually used to replace a silicon dioxide gate dielectric or another dielectric layer of a device. The implementation of high-κ gate dielectrics is one of several strategies developed to allow further miniaturization of microelectronic components, colloquially referred to as extending Moore's Law. Sometimes these materials are called "high-k" (pronounced "high kay"), instead of "high-κ" (high kappa). Need for high-κ materials Silicon dioxide () has been used as a gate oxide material for decades. As metal–oxide–semiconductor field-effect transistors (MOSFETs) have decreased in size, the thickness of the silicon dioxide gate dielectric has steadily decreased to increase the gate capacitance (per unit area) and thereby drive current (per device width), raising device performance. As the thickness scales below 2 nm, leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows increased gate capacitance without the associated leakage effects. First principles The gate oxide in a MOSFET can be modeled as a parallel plate capacitor. Ignoring quantum mechanical and depletion effects from the Si substrate and gate, the capacitance of this parallel plate capacitor is given by where is the capacitor area is the relative dielectric constant of the material (3.9 for silicon dioxide) is the permittivity of free space is the thickness of the capacitor oxide insulator Since leakage limitation constrains further reduction of , an alternative method to increase gate capacitance is to alter κ by replacing silicon dioxide with a high-κ material. In such a scenario, a thicker gate oxide layer might
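The capacitance expression the text refers to ("is given by where ...") is the standard parallel-plate formula; reconstructing it in conventional symbols matching the variable list that follows:

C = \frac{\kappa\, \varepsilon_0\, A}{t},

with A the capacitor area, \kappa the relative dielectric constant (3.9 for silicon dioxide), \varepsilon_0 the permittivity of free space, and t the thickness of the oxide insulator.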
https://en.wikipedia.org/wiki/Storage%20Resource%20Manager
The Storage Resource Management (SRM) technology was initiated by the Scientific Data Management Group at Lawrence Berkeley National Laboratory (LBNL) and developed in response to the growing needs of managing large datasets on a variety of storage systems. Dynamic storage management is essential to ensure: prevention of data loss, decrease of error rates of data replication, and decrease of the analysis time by ensuring that analysis tasks have the storage space to run to completion. Storage Resource Managers (SRMs) address issues by coordinating storage allocation, streaming the data between sites, and enforcing secure interfaces to the storage systems (i.e. dealing with special security requirements of each storage system at its home institution.) In a production environment, using SRMs has reduced error rates of large-scale replication from 1% to 0.02% in the STAR project. Furthermore, SRMs can prevent job failures. When running jobs on clusters, some of the local disks get filled before the job finishes resulting in loss of productivity, and therefore a delay in analysis. This occurs because space was not dynamically allocated and previous unneeded files were not removed. While there are tools for dynamically allocating compute and network resources, SRMs are the only tool available for providing dynamic space reservation, guaranteeing secure file availability with lifetime support, and automatic garbage collection that prevents clogging of storage systems. The SRM specification has evolved into an international standard, and many projects have committed to using this technology, especially in the HEP and HENP communities, such as the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG) that supports ATLAS and CMS. The SRM approach is to develop a uniform standard interface that allows multiple implementations by various institutions to interoperate. This approach removes the dependence on a single implementation and permits multiple groups to
https://en.wikipedia.org/wiki/Indicator%20diagram
An indicator diagram is a chart used to measure the thermal, or cylinder, performance of reciprocating steam and internal combustion engines and compressors. An indicator chart records the pressure in the cylinder versus the volume swept by the piston, throughout the two or four strokes of the piston which constitute the engine, or compressor, cycle. The indicator diagram is used to calculate the work done and the power produced in an engine cylinder or used in a compressor cylinder. The indicator diagram was developed by James Watt and his employee John Southern to help understand how to improve the efficiency of steam engines. In 1796, Southern developed the simple, but critical, technique to generate the diagram by fixing a board so as to move with the piston, thereby tracing the "volume" axis, while a pencil, attached to a pressure gauge, moved at right angles to the piston, tracing "pressure". The gauge enabled Watt to calculate the work done by the steam while ensuring that its pressure had dropped to zero by the end of the stroke, thereby ensuring that all useful energy had been extracted. The total work could be calculated from the area between the "volume" axis and the traced line. The latter fact had been realised by Davies Gilbert as early as 1792 and used by Jonathan Hornblower in litigation against Watt over patents on various designs. Daniel Bernoulli had also had the insight about how to calculate work. Watt used the diagram to make radical improvements to steam engine performance and long kept it a trade secret. Though it was made public in a letter to the Quarterly Journal of Science in 1822, it remained somewhat obscure, John Farey, Jr. only learned of it on seeing it used, probably by Watt's men, when he visited Russia in 1826. In 1834, Émile Clapeyron used a diagram of pressure against volume to illustrate and elucidate the Carnot cycle, elevating it to a central position in the study of thermodynamics. Later instruments for steam engine (i
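In modern notation (a gloss added here, not Watt's own formulation), the quantity obtained from the enclosed area of the diagram is the work done per cycle,

W = \oint p \, dV,

the integral of pressure over the swept volume around one complete engine or compressor cycle.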
https://en.wikipedia.org/wiki/SPECfp
SPECfp is a computer benchmark designed to test the floating-point performance of a computer. It is managed by the Standard Performance Evaluation Corporation. SPECfp is the floating-point performance testing component of the SPEC CPU testing suite. The first standard SPECfp was released in 1989 as SPECfp89. Later it was replaced by SPECfp92, then SPECfp95, then SPECfp2000, then SPECfp2006, and finally SPECfp2017. Background SPEC CPU2017 is a suite of benchmark applications designed to test CPU performance. The suite is composed of two sets of tests. The first is CINT (aka SPECint), which evaluates CPU performance on integer operations. The second is CFP (aka SPECfp), which evaluates CPU floating-point performance. The benchmark applications are programs that perform a fixed set of operations simulating real-world workloads, such as physical simulations, 3D graphics, and image processing. These applications are written in different programming languages: C, C++ and Fortran. Many SPECfp benchmark applications are derived from applications that are freely available to the public, and each application is assigned a weight based on its importance. To compute the SPECfp score, the benchmark applications are run on a reference machine and the time each application requires for completion is recorded as the reference time. When evaluating the performance of another machine, each benchmark application is run on that system and its completion time is recorded. The ratio of the reference time to the recorded time is then computed. The geometric mean of all the benchmark suite application ratios is then computed as the SPECfp score. For example, the 126.gcc application takes 1280 seconds to complete on the AlphaStation 200 4/100, while it takes 1700 seconds on the reference machine. So, the ratio is 1700/1280 = 1.328, which implies that the AlphaStation 200 4/100 is 32.8% faster than the reference machine in
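A small C sketch of the scoring arithmetic described above (per-benchmark ratio of reference time to measured time, then the geometric mean of the ratios); the sample numbers are illustrative, not official SPEC results, apart from the 1700 s / 1280 s pair quoted in the text.

#include <stdio.h>
#include <math.h>

/* Illustrative scoring sketch: ratio = reference_time / measured_time,
   overall score = geometric mean of the ratios. Values are made up,
   except the 1700 s / 1280 s pair quoted in the text. */
int main(void) {
    double reference[] = { 1700.0, 2400.0, 900.0 };  /* seconds on reference machine */
    double measured[]  = { 1280.0, 1600.0, 750.0 };  /* seconds on system under test */
    int n = sizeof reference / sizeof reference[0];

    double log_sum = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = reference[i] / measured[i];
        printf("benchmark %d: ratio %.3f\n", i, ratio);
        log_sum += log(ratio);   /* geometric mean via mean of logarithms */
    }
    printf("score (geometric mean): %.3f\n", exp(log_sum / n));
    return 0;
}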
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20Riemann%20zeta%20function
In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted and is named after the mathematician Bernhard Riemann. When the argument is a real number greater than one, the zeta function satisfies the equation It can therefore provide the sum of various convergent infinite series, such as Explicit or numerically efficient formulae exist for at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments. The same equation in above also holds when is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at . The complex derivative exists in this more general region, making the zeta function a meromorphic function. The above equation no longer applies for these extended values of , for which the corresponding summation would diverge. For example, the full zeta function exists at (and is therefore finite there), but the corresponding series would be whose partial sums would grow indefinitely large. The zeta function values listed below include function values at the negative even numbers (, ), for which and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis. The Riemann zeta function at 0 and 1 At zero, one has At 1 there is a pole, so ζ(1) is not finite but the left and right limits are: Since it is a pole of first order, it has a complex residue Positiv
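The missing displays can be restated from standard sources (a hedged reconstruction, not the article's exact rendering): for real part of s greater than one,

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}, \qquad \text{for example } \zeta(2) = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6},

and the divergent series alluded to at s = 0 is 1 + 1 + 1 + \cdots, even though the analytically continued function is finite there (\zeta(0) = -\tfrac{1}{2}).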
https://en.wikipedia.org/wiki/Formwork
Formwork is molds into which concrete or similar materials are either precast or cast-in-place. In the context of concrete construction, the falsework supports the shuttering molds. In specialty applications formwork may be permanently incorporated into the final structure, adding insulation or helping reinforce the finished structure. Types Formwork may be made of wood, metal, plastic, or composite materials: Traditional timber formwork. The formwork is built on site out of timber and plywood or moisture-resistant particleboard. It is easy to produce but time-consuming for larger structures, and the plywood facing has a relatively short lifespan. It is still used extensively where the labour costs are lower than the costs for procuring reusable formwork. It is also the most flexible type of formwork, so even where other systems are in use, complicated sections may use it. Engineered Formwork System. This formwork is built out of prefabricated modules with a metal frame (usually steel or aluminium) and covered on the application (concrete) side with material having the wanted surface structure (steel, aluminum, timber, etc.). The two major advantages of formwork systems, compared to traditional timber formwork, are speed of construction (modular systems pin, clip, or screw together quickly) and lower life-cycle costs (barring major force, the frame is almost indestructible, while the covering if made of wood; may have to be replaced after a few - or a few dozen - uses, but if the covering is made with steel or aluminium the form can achieve up to two thousand uses depending on care and the applications). Metal formwork systems are better protected against rot and fire than traditional timber formwork. Re-usable plastic formwork. These interlocking and modular systems are used to build widely variable, but relatively simple, concrete structures. The panels are lightweight and very robust. They are especially suited for similar structure projects and low-cost, mass
https://en.wikipedia.org/wiki/Oblique%20Mercator%20projection
The oblique Mercator map projection is an adaptation of the standard Mercator projection. The oblique version is sometimes used in national mapping systems. When paired with a suitable geodetic datum, the oblique Mercator delivers high accuracy in zones less than a few degrees in arbitrary directional extent. Standard and oblique aspects The oblique Mercator projection is the oblique aspect of the standard (or Normal) Mercator projection. They share the same underlying mathematical construction and consequently the oblique Mercator inherits many traits from the normal Mercator: Both projections are cylindrical: for the Normal Mercator, the axis of the cylinder coincides with the polar axis and the line of tangency with the equator. For the transverse Mercator, the axis of the cylinder lies in the equatorial plane, and the line of tangency is any chosen meridian, thereby designated the central meridian. Both projections may be modified to secant forms, which means the scale has been reduced so that the cylinder slices through the model globe. Both exist in spherical and ellipsoidal versions. Both projections are conformal, so that the point scale is independent of direction and local shapes are well preserved; Both projections can have constant scale on the line of tangency (the equator for the normal Mercator and the central meridian for the transverse). For the ellipsoidal form, several developments in use do not have constant scale along the line (which is a geodesic) of tangency. Since the standard great circle of the oblique Mercator can be chosen at will, it may be used to construct highly accurate maps (of narrow width) anywhere on the globe. Spherical oblique Mercator In constructing a map on any projection, a sphere is normally chosen to model the Earth when the extent of the mapped region exceeds a few hundred kilometers in length in both dimensions. For maps of smaller regions, an ellipsoidal model must be chosen if greater accuracy is required; s
https://en.wikipedia.org/wiki/What%20Is%20Life%3F
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?" In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches. Background The book, published i
https://en.wikipedia.org/wiki/Ace%20of%20Aces%20%28video%20game%29
Ace of Aces is a combat flight simulation video game developed by Artech Digital Entertainment and published in 1986 by Accolade in North America and U.S. Gold in Europe. It was released for the Amstrad CPC, Atari 8-bit family, Atari 7800, Commodore 64, MSX, MS-DOS, Master System, and ZX Spectrum. Set in World War II, the player flies a RAF Mosquito long range fighter-bomber equipped with rockets, bombs and a cannon. Missions include destroying German fighter planes, bombers, V-1 flying bombs, U-boats, and trains. In 1988 Atari Corporation released a version on cartridge styled for the then-new Atari XEGS. Ace of Aces received mixed reviews but went on to become one of the best-selling Commodore 64 video games published by Accolade. The game sold 100,000 units. Gameplay Upon launching the game a menu screen with options to either practice or partake in a proper mission is shown. If the player decides to do the practice mode, they can choose whether to do dog fight training or a U-boat or train bombing. When playing the practice mode, the enemies are less aggressive. There are five different view options — the cockpit, both left and right wings, the navigational map and the bomb bay — which can be accessed by using the keyboard or by double-tapping the fire button and moving the joystick to the desired direction. When in missions, the player controls a twin-engined balsa RAF Mosquito which is already airborne, mitigating the necessity of takeoff. When starting a mission, the player chooses what supplies they wish to bring, but the more the player brings the lower the maximum speed of the plane. At the end of missions, landing is not required and points are awarded according to how many enemies are shot down, along with the amount of unused fuel, bombs, and missiles. When missions are completed, the player can choose to combine two or more of the other missions to produce a mashup. Reception Commodore 64 In a 1987 Compute! article, Ace of Aces was noted as Acco
https://en.wikipedia.org/wiki/Instituto%20Nacional%20de%20Matem%C3%A1tica%20Pura%20e%20Aplicada
The Instituto Nacional de Matemática Pura e Aplicada (National Institute for Pure and Applied Mathematics) is widely considered to be the foremost research and educational institution of Brazil in the area of mathematics. It is located in the city of Rio de Janeiro, and was formerly known simply as Instituto de Matemática Pura e Aplicada (IMPA), whose abbreviation remains in use. It is a research and education institution qualified as a Social Organization (SO) under the auspices of the Ministry of Science, Technology, Innovations and Communications (MCTIC) and the Ministry of Education (MEC) of Brazil. Currently located in the Jardim Botânico neighborhood (South Zone) of Rio de Janeiro, Brazil, IMPA was founded on October 15, 1952. It was the first research unit of the National Research Council (CNPq), a federal funding agency created a year earlier. Its logo is a stylized Möbius strip, reproducing a large sculpture of a Möbius strip on display within the IMPA headquarters. Founded by Lélio Gama, Leopoldo Nachbin and Maurício Peixoto, IMPA's primary mission is to stimulate scientific research, the training of new researchers and the dissemination and improvement of mathematical culture in Brazil. Mathematical knowledge is fundamental for scientific and technological development, which are indispensable components for economic, social and human progress. Since 2015, IMPA is directed by Marcelo Viana. History At the time of creation, IMPA did not have its own headquarters: it was temporarily housed in a room in the headquarters of the Brazilian Center for Research in Physics (created in 1949), in Praia Vermelha, south zone of Rio de Janeiro. The scientific body was also diminutive, though illustrious: in addition to the director, astronomer Lélio Gama, who also headed the National Observatory, the institute counted only the young mathematicians Leopoldo Nachbin and Maurício Peixoto. Gama's performance at the helm of IMPA, with his experience and wisdom, played a
https://en.wikipedia.org/wiki/Highly%20optimized%20tolerance
In applied mathematics, highly optimized tolerance (HOT) is a method of generating power law behavior in systems by including a global optimization principle. It was developed by Jean M. Carlson and John Doyle in the early 2000s. For some systems that display a characteristic scale, a global optimization term could potentially be added that would then yield power law behavior. It has been used to generate and describe internet-like graphs, forest fire models and may also apply to biological systems. Example The following is taken from Sornette's book. Consider a random variable, X, that takes on values x_i with probability p_i. Furthermore, let’s assume for another parameter r_i that x_i = r_i^(-β) for some fixed β. We then want to minimize E = Σ_i p_i x_i subject to the constraint Σ_i r_i = κ. Using Lagrange multipliers, this gives p_i ∝ x_i^(-(1 + 1/β)), giving us a power law. The global optimization of minimizing the energy E along with the power law dependence between x_i and r_i gives us a power law distribution in probability. See also self-organized criticality
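A small numerical check of the Lagrange-multiplier calculation above, using the same symbols; this is an illustration only, with arbitrary random probabilities, and is not part of Sornette's presentation.

```python
import numpy as np

beta = 1.5
kappa = 1.0                                              # total resource budget: sum_i r_i = kappa
p = np.random.default_rng(0).dirichlet(np.ones(1000))    # event probabilities p_i

# The Lagrange condition for minimizing E = sum_i p_i * r_i**(-beta) subject to
# sum_i r_i = kappa gives r_i proportional to p_i**(1/(beta + 1)).
r = p ** (1.0 / (beta + 1.0))
r *= kappa / r.sum()
x = r ** (-beta)                                         # event sizes x_i = r_i**(-beta)

# The predicted power law is p_i proportional to x_i**(-(1 + 1/beta)); the
# log-log slope below comes out at -(1 + 1/beta) = -5/3 for beta = 1.5.
slope = np.polyfit(np.log(x), np.log(p), 1)[0]
print(round(slope, 3), round(-(1.0 + 1.0 / beta), 3))
```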
https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-third%20problem
Hilbert's twenty-third problem is the last of the Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert's other 22 problems, his 23rd is not so much a specific "problem" as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state-of-the-art (in 1900) of the theory of calculus of variations, with some introductory comments decrying the lack of work that had been done on the theory in the mid to late 19th century. Original statement The problem statement begins with the following paragraph: So far, I have generally mentioned problems as definite and special as possible.... Nevertheless, I should like to close with a general problem, namely with the indication of a branch of mathematics repeatedly mentioned in this lecture-which, in spite of the considerable advancement lately given it by Weierstrass, does not receive the general appreciation which, in my opinion, it is due—I mean the calculus of variations. Calculus of variations Calculus of variations is a field of mathematical analysis that deals with maximizing or minimizing functionals, which are mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value – or stationary functions – those where the rate of change of the functional is zero. Progress Following the problem statement, David Hilbert, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions to the calculus of variations. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an al
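A textbook illustration of the kind of problem the calculus of variations treats (a standard example, not part of Hilbert's statement) is the arc-length functional, whose Euler–Lagrange equation forces the extremals to be straight lines:

```latex
J[y] \;=\; \int_a^b \sqrt{1 + y'(x)^2}\,\mathrm{d}x,
\qquad
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial F}{\partial y'} - \frac{\partial F}{\partial y} = 0
\;\Longrightarrow\;
\frac{\mathrm{d}}{\mathrm{d}x}\!\left(\frac{y'}{\sqrt{1 + y'^2}}\right) = 0
\;\Longrightarrow\; y'' = 0 .
```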
https://en.wikipedia.org/wiki/Frame%20fields%20in%20general%20relativity
A frame field in general relativity (also called a tetrad or vierbein) is a set of four pointwise-orthonormal vector fields, one timelike and three spacelike, defined on a Lorentzian manifold that is physically interpreted as a model of spacetime. The timelike unit vector field is often denoted by e_0 and the three spacelike unit vector fields by e_1, e_2, e_3. All tensorial quantities defined on the manifold can be expressed using the frame field and its dual coframe field. Frame fields were introduced into general relativity by Albert Einstein in 1928 and by Hermann Weyl in 1929. The index notation for tetrads is explained in tetrad (index notation). Physical interpretation Frame fields of a Lorentzian manifold always correspond to a family of ideal observers immersed in the given spacetime; the integral curves of the timelike unit vector field are the worldlines of these observers, and at each event along a given worldline, the three spacelike unit vector fields specify the spatial triad carried by the observer. The triad may be thought of as defining the spatial coordinate axes of a local laboratory frame, which is valid very near the observer's worldline. In general, the worldlines of these observers need not be timelike geodesics. If any of the worldlines bends away from a geodesic path in some region, we can think of the observers as test particles that accelerate by using ideal rocket engines with a thrust equal to the magnitude of their acceleration vector. Alternatively, if our observer is attached to a bit of matter in a ball of fluid in hydrostatic equilibrium, this bit of matter will in general be accelerated outward by the net effect of pressure holding up the fluid ball against the attraction of its own gravity. Other possibilities include an observer attached to a free charged test particle in an electrovacuum solution, which will of course be accelerated by the Lorentz force, or an observer attached to a spinning test particle, which may be accelerated by a sp
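A standard concrete example, not taken from the excerpt above, is the frame field of static observers in the Schwarzschild geometry (Schwarzschild coordinates, G = c = 1, valid for r > 2m); each such observer hovers at fixed r, θ, φ and therefore needs exactly the rocket-style acceleration described above:

```latex
\vec{e}_0 = \frac{1}{\sqrt{1 - 2m/r}}\,\partial_t,\qquad
\vec{e}_1 = \sqrt{1 - 2m/r}\,\partial_r,\qquad
\vec{e}_2 = \frac{1}{r}\,\partial_\theta,\qquad
\vec{e}_3 = \frac{1}{r\sin\theta}\,\partial_\phi .
```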
https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-second%20problem
Hilbert's twenty-second problem is the penultimate entry in the celebrated list of 23 Hilbert problems compiled in 1900 by David Hilbert. It entails the uniformization of analytic relations by means of automorphic functions. Problem statement The entirety of the original problem statement is as follows: As Poincaré was the first to prove, it is always possible to reduce any algebraic relation between two variables to uniformity by the use of automorphic functions of one variable. That is, if any algebraic equation in two variables be given, there can always be found for these variables two such single valued automorphic functions of a single variable that their substitution renders the given algebraic equation an identity. The generalization of this fundamental theorem to any analytic non-algebraic relations whatever between two variables has likewise been attempted with success by Poincaré, though by a way entirely different from that which served him in the special problem first mentioned. From Poincaré's proof of the possibility of reducing to uniformity an arbitrary analytic relation between two variables, however, it does not become apparent whether the resolving functions can be determined to meet certain additional conditions. Namely, it is not shown whether the two single valued functions of the one new variable can be so chosen that, while this variable traverses the regular domain of those functions, the totality of all regular points of the given analytic field are actually reached and represented. On the contrary it seems to be the case, from Poincaré's investigations, that there are beside the branch points certain others, in general infinitely many other discrete exceptional points of the analytic field, that can be reached only by making the new variable approach certain limiting points of the functions. In view of the fundamental importance of Poincaré's formulation of the question it seems to me that an elucidation and resolution of this difficu
https://en.wikipedia.org/wiki/Hilbert%27s%20thirteenth%20problem
Hilbert's thirteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It entails proving whether a solution exists for all 7th-degree equations using algebraic (variant: continuous) functions of two arguments. It was first presented in the context of nomography, and in particular "nomographic construction" — a process whereby a function of several variables is constructed using functions of two variables. The variant for continuous functions was resolved affirmatively in 1957 by Vladimir Arnold when he proved the Kolmogorov–Arnold representation theorem, but the variant for algebraic functions remains unresolved. Introduction Using the methods pioneered by Tschirnhaus (1683), Bring (1786), and Jerrard (1834), William Rowan Hamilton showed in 1836 that every seventh-degree equation can be reduced via radicals to the form x^7 + ax^3 + bx^2 + cx + 1 = 0. Regarding this equation, Hilbert asked whether its solution, x, considered as a function of the three variables a, b and c, can be expressed as the composition of a finite number of two-variable functions. History Hilbert originally posed his problem for algebraic functions (Hilbert 1927, "...Existenz von algebraischen Funktionen...", i.e., "...existence of algebraic functions..."; also see Abhyankar 1997, Vitushkin 2004). However, Hilbert also asked in a later version of this problem whether there is a solution in the class of continuous functions. A generalization of the second ("continuous") variant of the problem is the following question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The affirmative answer to this general question was given in 1957 by Vladimir Arnold, then only nineteen years old and a student of Andrey Kolmogorov. Kolmogorov had shown in the previous year that any function of several variables can be constructed with a finite number of three-variable functions. Arnold th
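For reference, the Kolmogorov–Arnold representation theorem that settled the continuous variant can be stated as follows (one common formulation; the inner functions can be chosen once and for all, while the outer functions depend on f):

```latex
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
% for every continuous f on [0,1]^n, with continuous one-variable functions \Phi_q and \phi_{q,p}.
```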
https://en.wikipedia.org/wiki/Hilbert%27s%20fifteenth%20problem
Hilbert's fifteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. The problem is to put Schubert's enumerative calculus on a rigorous foundation. Introduction Schubert calculus is the intersection theory of the 19th century, together with applications to enumerative geometry. Justifying this calculus was the content of Hilbert's 15th problem, and was also a major topic of 20th-century algebraic geometry. In the course of securing the foundations of intersection theory, Van der Waerden and André Weil related the problem to the determination of the cohomology ring H*(G/P) of a flag manifold G/P, where G is a Lie group and P a parabolic subgroup of G. The additive structure of the ring H*(G/P) is given by the basis theorem of Schubert calculus due to Ehresmann, Chevalley, and Bernstein-Gel'fand-Gel'fand, stating that the classical Schubert classes on G/P form a free basis of the cohomology ring H*(G/P). The remaining problem of expanding products of Schubert classes as linear combinations of basis elements was called the characteristic problem by Schubert, and was regarded by him as "the main theoretic problem of enumerative geometry". While enumerative geometry made no connection with physics during the first century of its development, it has since emerged as a central element of string theory. Problem statement The entirety of the original problem statement is as follows: The problem consists in this: To establish rigorously and with an exact determination of the limits of their validity those geometrical numbers which Schubert especially has determined on the basis of the so-called principle of special position, or conservation of number, by means of the enumerative calculus developed by him. Although the algebra of today guarantees, in principle, the possibility of carrying out the processes of elimination, yet for the proof of the theorems of enumerative geometry decidedly more is requisite, name
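A classical illustration of the characteristic problem (a standard example, not taken from this article) is the Grassmannian Gr(2,4) of lines in projective 3-space, where σ₁ denotes the Schubert class of lines meeting a fixed line:

```latex
\sigma_1^2 = \sigma_2 + \sigma_{1,1}, \qquad
\sigma_1^4 = \sigma_2^2 + 2\,\sigma_2\,\sigma_{1,1} + \sigma_{1,1}^2 = 2\,[\mathrm{pt}],
% recovering Schubert's count that exactly two lines meet four general lines in P^3.
```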
https://en.wikipedia.org/wiki/Hilbert%27s%20seventeenth%20problem
Hilbert's seventeenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It concerns the expression of positive definite rational functions as sums of quotients of squares. The original question may be reformulated as: Given a multivariate polynomial that takes only non-negative values over the reals, can it be represented as a sum of squares of rational functions? Hilbert's question can be restricted to homogeneous polynomials of even degree, since a polynomial of odd degree changes sign, and the homogenization of a polynomial takes only nonnegative values if and only if the same is true for the polynomial. Motivation The formulation of the question takes into account that there are non-negative polynomials, for example the Motzkin polynomial x^4y^2 + x^2y^4 − 3x^2y^2 + 1, which cannot be represented as a sum of squares of other polynomials. In 1888, Hilbert showed that every non-negative homogeneous polynomial in n variables and degree 2d can be represented as a sum of squares of other polynomials if and only if either (a) n = 2 or (b) 2d = 2 or (c) n = 3 and 2d = 4. Hilbert's proof did not exhibit any explicit counterexample: only in 1967 was the first explicit counterexample constructed by Motzkin. Furthermore, if the polynomial has a degree 2d greater than two, there are significantly more non-negative polynomials that cannot be expressed as sums of squares. The following table summarizes in which cases every non-negative homogeneous polynomial (or a polynomial of even degree) can be represented as a sum of squares: Solution and generalizations The particular case of n = 2 was already solved by Hilbert in 1893. The general problem was solved in the affirmative, in 1927, by Emil Artin, for positive semidefinite functions over the reals or more generally real-closed fields. An algorithmic solution was found by Charles Delzell in 1984. A result of Albrecht Pfister shows that a positive semidefinite form in n variables can be expressed as a s
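For concreteness, the Motzkin polynomial mentioned above is non-negative by the arithmetic–geometric mean inequality, yet is not a sum of squares of polynomials; by Artin's theorem it can nevertheless be written as a sum of squares of rational functions (the explicit identity is not reproduced here):

```latex
M(x,y) = x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1 \;\ge\; 0,
\qquad\text{since}\qquad
\frac{x^4 y^2 + x^2 y^4 + 1}{3} \;\ge\; \sqrt[3]{x^4 y^2 \cdot x^2 y^4 \cdot 1} = x^2 y^2 .
```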
https://en.wikipedia.org/wiki/Hilbert%27s%20eighteenth%20problem
Hilbert's eighteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by mathematician David Hilbert. It asks three separate questions about lattices and sphere packing in Euclidean space. Symmetry groups in n dimensions The first part of the problem asks whether there are only finitely many essentially different space groups in n-dimensional Euclidean space. This was answered affirmatively by Bieberbach. Anisohedral tiling in 3 dimensions The second part of the problem asks whether there exists a polyhedron which tiles 3-dimensional Euclidean space but is not the fundamental region of any space group; that is, which tiles but does not admit an isohedral (tile-transitive) tiling. Such tiles are now known as anisohedral. In asking the problem in three dimensions, Hilbert was probably assuming that no such tile exists in two dimensions; this assumption later turned out to be incorrect. The first such tile in three dimensions was found by Karl Reinhardt in 1928. The first example in two dimensions was found by Heesch in 1935. The related einstein problem asks for a shape that can tile space but not with an infinite cyclic group of symmetries. Sphere packing The third part of the problem asks for the densest sphere packing or packing of other specified shapes. Although it expressly includes shapes other than spheres, it is generally taken as equivalent to the Kepler conjecture. In 1998, American mathematician Thomas Callister Hales gave a computer-aided proof of the Kepler conjecture. It shows that the most space-efficient way to pack spheres is in a pyramid shape.
https://en.wikipedia.org/wiki/Hilbert%27s%20nineteenth%20problem
Hilbert's nineteenth problem is one of the 23 Hilbert problems, set out in a list compiled in 1900 by David Hilbert. It asks whether the solutions of regular problems in the calculus of variations are always analytic. Informally, and perhaps less directly, since Hilbert's concept of a "regular variational problem" identifies precisely a variational problem whose Euler–Lagrange equation is an elliptic partial differential equation with analytic coefficients, Hilbert's nineteenth problem, despite its seemingly technical statement, simply asks whether, in this class of partial differential equations, any solution function inherits the relatively simple and well understood structure from the solved equation. Hilbert's nineteenth problem was solved independently in the late 1950s by Ennio De Giorgi and John Forbes Nash, Jr. History The origins of the problem David Hilbert presented what is now called Hilbert's nineteenth problem in his speech at the second International Congress of Mathematicians. In it he states that, in his opinion, one of the most remarkable facts of the theory of analytic functions is that there exist classes of partial differential equations which admit only such kinds of functions as solutions, adducing Laplace's equation, Liouville's equation, the minimal surface equation and a class of linear partial differential equations studied by Émile Picard as examples. He then notes the fact that most of the partial differential equations sharing this property are the Euler–Lagrange equation of a well defined kind of variational problem, featuring the following three properties: (1) the functional to be minimized is a double integral ∬ F(p, q, z; x, y) dx dy, with p = ∂z/∂x and q = ∂z/∂y; (2) ∂²F/∂p² · ∂²F/∂q² − (∂²F/∂p∂q)² > 0; (3) F is an analytic function of all its arguments p, q, z, x and y. Hilbert calls this kind of variational problem a "regular variational problem": property (1) means that such variational problems are minimum problems, property (2) is the ellipticity condition on the Euler–Lagrange equations associated to the given functional, while property (3) is a simple regularity assumption on the function F. H
https://en.wikipedia.org/wiki/Hilbert%27s%20twentieth%20problem
Hilbert's twentieth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It asks whether all boundary value problems can be solved (that is, do variational problems with certain boundary conditions have solutions). Introduction Hilbert noted that there existed methods for solving partial differential equations where the function's values were given at the boundary, but the problem asked for methods for solving partial differential equations with more complicated conditions on the boundary (e.g., involving derivatives of the function), or for solving calculus of variation problems in more than 1 dimension (for example, minimal surface problems or minimal curvature problems) Problem statement The original problem statement in its entirety is as follows: An important problem closely connected with the foregoing [referring to Hilbert's nineteenth problem] is the question concerning the existence of solutions of partial differential equations when the values on the boundary of the region are prescribed. This problem is solved in the main by the keen methods of H. A. Schwarz, C. Neumann, and Poincaré for the differential equation of the potential. These methods, however, seem to be generally not capable of direct extension to the case where along the boundary there are prescribed either the differential coefficients or any relations between these and the values of the function. Nor can they be extended immediately to the case where the inquiry is not for potential surfaces but, say, for surfaces of least area, or surfaces of constant positive gaussian curvature, which are to pass through a prescribed twisted curve or to stretch over a given ring surface. It is my conviction that it will be possible to prove these existence theorems by means of a general principle whose nature is indicated by Dirichlet's principle. This general principle will then perhaps enable us to approach the question: Has not every regular varia
https://en.wikipedia.org/wiki/Rosetta%20Stone%20%28software%29
Rosetta Stone Language Learning is proprietary, computer-assisted language learning (CALL) software published by Rosetta Stone Inc, part of the IXL Learning family of products. The software uses images, text, and sound to teach words and grammar by spaced repetition, without translation. Rosetta Stone calls its approach Dynamic Immersion. The software's name and logo allude to the ancient stone slab of the same name on which the Decree of Memphis is inscribed in three writing systems. IXL Learning acquired Rosetta Stone in March 2021. Dynamic Immersion In a Rosetta Stone Language Learning exercise, the learner pairs sound or text to one of several images. The number of images per screen varies. For example, the software shows the learner four photographs. A native speaker makes a statement that describes one of the photographs, and the statement is printed on the screen; the learner chooses the photograph that the speaker described. In another variation, the learner completes a textual description of a photograph. In writing exercises, the software provides an on-screen keyboard for the user to type characters that are not in the Latin alphabet or accents that may not be in their native language. Grammar lessons cover grammatical tense and grammatical mood. In grammar lessons, the program firstly shows the learner several examples of a grammatical concept, and in some levels, the word or words the learner should focus on are highlighted. Then the learner is given a sentence with several options for a word or phrase, and the learner chooses the correct option. If the learner has a microphone, the software will evaluate word pronunciation using the embedded speech recognition engine, TrueAccent. Each unit contains reviews of the content in those lessons, and each unit concludes with a Milestone activity, which is a simulated conversation that covers the content of the unit. Scoring The program immediately informs the learner whether the answer is right or wro
https://en.wikipedia.org/wiki/Flight%20envelope
In aerodynamics, the flight envelope, service envelope, or performance envelope of an aircraft or spacecraft refers to the capabilities of a design in terms of airspeed and load factor or atmospheric density, often simplified to altitude. The term is somewhat loosely applied, and can also refer to other measurements such as maneuverability. When a plane is pushed, for instance by diving it at high speeds, it is said to be flown "outside the envelope", something considered rather dangerous. Flight envelope is one of a number of related terms that are all used in a similar fashion. It is perhaps the most common term because it is the oldest, first being used in the early days of test flying. It is closely related to more modern terms known as extra power and a doghouse plot which are different ways of describing a flight envelope. In addition, the term has been widened in scope outside the field of engineering, to refer to the strict limits in which an event will take place or more generally to the predictable behavior of a given phenomenon or situation, and hence, its "flight envelope". Extra power Extra power, or specific excess power, is a very basic method of determining an aircraft's flight envelope. It is easily calculated, but as a downside does not tell very much about the actual performance of the aircraft at different altitudes. Choosing any particular set of parameters will generate the needed power for a particular aircraft for those conditions. For instance, a Cessna 150 at a given altitude and speed needs about 60 hp to fly straight and level. The C150 is normally equipped with a 100 hp engine, so in this particular case the plane has 40 hp of extra power. In overall terms this is very little extra power: 60% of the engine's output is already used up just keeping the plane in the air. The leftover 40 hp is all that the aircraft has to maneuver with, meaning it can climb, turn, or speed up only a small amount. To put this in perspective, the C150 could not maintain a 2g (2
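The "extra power" in the example is usually formalized as specific excess power, a standard aircraft-performance quantity (not spelled out in the excerpt): the power available beyond that required for steady level flight, per unit weight, which also equals the maximum steady rate of climb.

```latex
P_s \;=\; \frac{P_{\text{available}} - P_{\text{required}}}{W}
\;=\; \frac{(T - D)\,V}{W},
\qquad \text{maximum steady rate of climb} \approx P_s .
```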
https://en.wikipedia.org/wiki/Cambria%20%28typeface%29
Cambria is a transitional serif typeface commissioned by Microsoft and distributed with Windows and Office. It was designed by Dutch typeface designer Jelle Bosma in 2004, with input from Steve Matteson and Robin Nicholas. It is intended as a serif font that is suitable for body text, that is very readable printed small or displayed on a low-resolution screen and has even spacing and proportions. It is part of the ClearType Font Collection, a suite of fonts from various designers released with Windows Vista. All start with the letter C to reflect that they were designed to work well with Microsoft's ClearType text rendering system, a text rendering engine designed to make text clearer to read on LCD monitors. The other fonts in the same group are Calibri, Candara, Consolas, Constantia and Corbel. Design Diagonal and vertical hairlines and serifs are relatively strong, while horizontal serifs are small and intend to emphasize stroke endings rather than stand out themselves. This principle is most noticeable in the italics where the lowercase characters are subdued in style. It is somewhat more condensed than average for a font of its kind. A profile of Bosma for the Monotype website commented: "One of the defining features of the typeface is its contrast between heavy vertical serifs and hairlines—which keep the font sturdy, and ensures the design is preserved at small sizes—and its relatively thin horizontals, which ensure the typeface remains crisp when used at larger sizes." Bosma describes it as a "transitional slab-serif hybrid." Many aspects of the design are somewhat blocky to render well on screen, and full stops are square rather than round. Designers have recommended avoiding using it in printed text because of this: designer Matthew Butterick described it as too monotonous to be attractive on paper. Bosma compared it to optical sizes of fonts designed to be printed small: "The design is a bit like an old metal type font. In those days sizes had their ow
https://en.wikipedia.org/wiki/Lanczos%20resampling
Lanczos filtering and Lanczos resampling are two applications of a mathematical formula. It can be used as a low-pass filter or used to smoothly interpolate the value of a digital signal between its samples. In the latter case, it maps each sample of the given signal to a translated and scaled copy of the Lanczos kernel, which is a sinc function windowed by the central lobe of a second, longer, sinc function. The sum of these translated and scaled kernels is then evaluated at the desired points. Lanczos resampling is typically used to increase the sampling rate of a digital signal, or to shift it by a fraction of the sampling interval. It is often used also for multivariate interpolation, for example to resize or rotate a digital image. It has been considered the "best compromise" among several simple filters for this purpose. The filter is named after its inventor, Cornelius Lanczos. Definition Lanczos kernel The effect of each input sample on the interpolated values is defined by the filter's reconstruction kernel L(x), called the Lanczos kernel. It is the normalized sinc function sinc(x), windowed (multiplied) by the Lanczos window, or sinc window, which is the central lobe of a horizontally stretched sinc function sinc(x/a) for −a ≤ x ≤ a. Equivalently, L(x) = sinc(x) sinc(x/a) if −a < x < a and L(x) = 0 otherwise; written out, L(x) equals 1 at x = 0, equals a sin(πx) sin(πx/a)/(π²x²) for 0 < |x| < a, and equals 0 for |x| ≥ a. The parameter a is a positive integer, typically 2 or 3, which determines the size of the kernel. The Lanczos kernel has 2a − 1 lobes: a positive one at the center, and a − 1 alternating negative and positive lobes on each side. Interpolation formula Given a one-dimensional signal with samples s_i, for integer values of i, the value S(x) interpolated at an arbitrary real argument x is obtained by the discrete convolution of those samples with the Lanczos kernel: S(x) = Σ_{i=⌊x⌋−a+1}^{⌊x⌋+a} s_i L(x − i), where a is the filter size parameter, and ⌊x⌋ is the floor function. The bounds of this sum are such that the kernel is zero outside of them. Properties As long as the parameter a is a positive integer, the Lanczos kernel is continuous everywhere, and its derivative is defined and con
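A direct transcription of the kernel and the interpolation formula into code; the clamping of indices at the ends of the signal is a choice made for this sketch, not part of the definition.

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos kernel L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0,
    with the normalized sinc, sinc(x) = sin(pi x) / (pi x)."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_interpolate(samples, x, a=3):
    """Interpolate a 1-D signal at the real position x from its integer-indexed
    samples: S(x) = sum_{i=floor(x)-a+1}^{floor(x)+a} s_i L(x - i)."""
    fx = math.floor(x)
    total = 0.0
    for i in range(fx - a + 1, fx + a + 1):
        j = min(max(i, 0), len(samples) - 1)   # clamp indices at the boundaries
        total += samples[j] * lanczos_kernel(x - i, a)
    return total

# Example: evaluate a short signal at half-sample positions (a = 2).
sig = [0.0, 1.0, 0.0, -1.0, 0.0]
print([round(lanczos_interpolate(sig, t / 2, a=2), 3) for t in range(9)])
```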
https://en.wikipedia.org/wiki/WRSP-TV
WRSP-TV (channel 55) is a television station licensed to Springfield, Illinois, United States, affiliated with the Fox network. It is owned by GOCOM Media, LLC, alongside Decatur-licensed CW affiliate WBUI (channel 23). GOCOM maintains joint sales and shared services agreements (JSA/SSA) with the Sinclair Broadcast Group, owner of Springfield-licensed ABC affiliate WICS, channel 20 (and its semi-satellite, Champaign-licensed WICD, channel 15), for the provision of certain services. WRSP's transmitter is located west of Mechanicsburg, in unincorporated Sangamon County; the station shares studios with WBUI and WICS on East Cook Street in Springfield's Eastside. However, WBUI also operates an advertising sales office on South Main Street/US 51 in downtown Decatur. WCCU (channel 27) in Urbana–Champaign operates as a semi-satellite of WRSP for the eastern portion of the Central Illinois market, including Danville. As such, it clears all network and syndicated programming from its parent but airs separate local commercial inserts and legal identifications. WCCU's transmitter is located northeast of Homer, along the Vermilion–Champaign county line; it shares studios with WICD on South Country Fair Drive in downtown Champaign. History What is now WRSP signed on June 1, 1979, as WBHW, a religious independent (the call letters stood for "We Believe His Word"). It aired an analog signal on UHF channel 55 and was built by the Windmill Broadcasting Company, which had received the construction permit in September 1978. It was the first new commercial station in the market (not counting satellite stations) since WCIA launched back in 1953. On November 24, 1982, it was sold to new owners who changed the call letters to WRSP-TV and turned it into the area's first general entertainment independent. In the winter of 1985, WRSP announced it would join the upstart Fox network the following year. As part of the agreement, on February 19, 1986, it added full-time satellite WCCU in Urba
https://en.wikipedia.org/wiki/Geodesic%20deviation
In general relativity, if two objects are set in motion along two initially parallel trajectories, the presence of a tidal gravitational force will cause the trajectories to bend towards or away from each other, producing a relative acceleration between the objects. Mathematically, the tidal force in general relativity is described by the Riemann curvature tensor, and the trajectory of an object solely under the influence of gravity is called a geodesic. The geodesic deviation equation relates the Riemann curvature tensor to the relative acceleration of two neighboring geodesics. In differential geometry, the geodesic deviation equation is more commonly known as the Jacobi equation. Mathematical definition To quantify geodesic deviation, one begins by setting up a family of closely spaced geodesics indexed by a continuous variable s and parametrized by an affine parameter τ. That is, for each fixed s, the curve swept out by γs(τ) as τ varies is a geodesic. When considering the geodesic of a massive object, it is often convenient to choose τ to be the object's proper time. If xμ(s, τ) are the coordinates of the geodesic γs(τ), then the tangent vector of this geodesic is Tμ = ∂xμ/∂τ. If τ is the proper time, then Tμ is the four-velocity of the object traveling along the geodesic. One can also define a deviation vector, which is the displacement of two objects travelling along two infinitesimally separated geodesics: Xμ = ∂xμ/∂s. The relative acceleration Aμ of the two objects is defined, roughly, as the second derivative of the separation vector Xμ as the objects advance along their respective geodesics. Specifically, Aμ is found by taking the directional covariant derivative of X along T twice: Aμ = (∇T ∇T X)μ. The geodesic deviation equation relates Aμ, Tμ, Xμ, and the Riemann tensor Rμνρσ: Aμ = −Rμνρσ Tν Xρ Tσ. An alternate notation for the directional covariant derivative ∇T is D/dτ, so the geodesic deviation equation may also be written as D²Xμ/dτ² = −Rμνρσ Tν Xρ Tσ. The geodesic deviation equation can be derived from the second variation of the point
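For orientation, in the weak-field, slow-motion limit the geodesic deviation equation reduces to the Newtonian tidal equation (a standard correspondence, not stated in the excerpt), with Φ the Newtonian potential:

```latex
\frac{\mathrm{d}^2 X^i}{\mathrm{d}t^2} \;\approx\; -\,\frac{\partial^2 \Phi}{\partial x^i\,\partial x^j}\, X^j .
```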
https://en.wikipedia.org/wiki/Evidence%20of%20common%20descent
Evidence of common descent of living organisms has been discovered by scientists researching in a variety of disciplines over many decades, demonstrating that all life on Earth comes from a single ancestor. This forms an important part of the evidence on which evolutionary theory rests, demonstrates that evolution does occur, and illustrates the processes that created Earth's biodiversity. It supports the modern evolutionary synthesis—the current scientific theory that explains how and why life changes over time. Evolutionary biologists document evidence of common descent, all the way back to the last universal common ancestor, by developing testable predictions, testing hypotheses, and constructing theories that illustrate and describe its causes. Comparison of the DNA genetic sequences of organisms has revealed that organisms that are phylogenetically close have a higher degree of DNA sequence similarity than organisms that are phylogenetically distant. Genetic fragments such as pseudogenes, regions of DNA that are orthologous to a gene in a related organism, but are no longer active and appear to be undergoing a steady process of degeneration from cumulative mutations support common descent alongside the universal biochemical organization and molecular variance patterns found in all organisms. Additional genetic information conclusively supports the relatedness of life and has allowed scientists (since the discovery of DNA) to develop phylogenetic trees: a construction of organisms' evolutionary relatedness. It has also led to the development of molecular clock techniques to date taxon divergence times and to calibrate these with the fossil record. Fossils are important for estimating when various lineages developed in geologic time. As fossilization is an uncommon occurrence, usually requiring hard body parts and death near a site where sediments are being deposited, the fossil record only provides sparse and intermittent information about the evolution of lif
https://en.wikipedia.org/wiki/Language%20construct
In computer programming, a language construct is "a syntactically allowable part of a program that may be formed from one or more lexical tokens in accordance with the rules of the programming language", as defined in the ISO/IEC 2382 standard (ISO/IEC JTC 1). A term is defined as a "linguistic construct in a conceptual schema language that refers to an entity". Although the term "language construct" may often be used as a synonym for control structure, other kinds of logical constructs of a computer program include variables, expressions, functions, or modules. Control flow statements (such as conditionals, foreach loops, while loops, etc.) are language constructs, not functions. So while (true) is a language construct, while add(10) is a function call. Examples of language constructs In PHP print is a language construct. <?php print 'Hello world'; ?> is the same as: <?php print('Hello world'); ?> Programming constructs In Java a class is written in this format: public class MyClass { //Code . . . . . . } In C++ a class is written in this format: class MyCPlusPlusClass { //Code . . . . };
https://en.wikipedia.org/wiki/Merge%20%28version%20control%29
In version control, merging (also called integration) is a fundamental operation that reconciles multiple changes made to a version-controlled collection of files. Most often, it is necessary when a file is modified on two independent branches and subsequently merged. The result is a single collection of files that contains both sets of changes. In some cases, the merge can be performed automatically, because there is sufficient history information to reconstruct the changes, and the changes do not conflict. In other cases, a person must decide exactly what the resulting files should contain. Many revision control software tools include merge capabilities. Types of merges There are two types of merges: unstructured and structured. Unstructured merge Unstructured merge operates on raw text, typically using lines of text as atomic units. This is what Unix tools (diff/patch) and version control systems such as CVS, SVN, and Git use. This is limited, as a line of text does not represent the structure of source code. Structured merge Structured merge tools, or AST merge, turn the source code into a fully resolved AST. This allows for a fine-grained merge that avoids spurious conflicts. Workflow Automatic merging is what version control software does when it reconciles changes that have happened simultaneously (in a logical sense). Also, other pieces of software deploy automatic merging if they allow for editing the same content simultaneously. For instance, Wikipedia allows two people to edit the same article at the same time; when the latter contributor saves, their changes are merged into the article instead of overwriting the previous set of changes. Manual merging is what people have to resort to (possibly assisted by merging tools) when they have to reconcile files that differ. For instance, if two systems have slightly differing versions of a configuration file and a user wants to have the good stuff in both, this can usually be achieved by merging the configuration files by hand, pi
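To make the "lines of text as atomic units" point concrete, here is a toy diff3-style three-way merge built on Python's difflib. It is an illustration only: the function names and conflict markers are choices made here, and real merge tools handle many more cases.

```python
from difflib import SequenceMatcher

def _matches(base, other):
    """Map base line index -> other line index for the lines SequenceMatcher pairs up."""
    m = {}
    for b, o, size in SequenceMatcher(None, base, other).get_matching_blocks():
        for k in range(size):
            m[b + k] = o + k
    return m

def three_way_merge(base, ours, theirs):
    """Naive line-based three-way merge: base lines matched in both descendants act
    as anchors, and each chunk between anchors is taken from whichever side changed
    it, with conflict markers when both sides changed it differently."""
    mo, mt = _matches(base, ours), _matches(base, theirs)
    anchors = sorted(set(mo) & set(mt))
    out, pb, po, pt = [], 0, 0, 0
    for b in anchors + [len(base)]:
        o, t = mo.get(b, len(ours)), mt.get(b, len(theirs))
        cb, co, ct = base[pb:b], ours[po:o], theirs[pt:t]
        if co == cb:                 # ours left this chunk alone (or nobody changed it)
            out.extend(ct)
        elif ct == cb or co == ct:   # only ours changed it, or both made the same change
            out.extend(co)
        else:                        # both sides changed the same chunk differently
            out += ["<<<<<<< ours", *co, "=======", *ct, ">>>>>>> theirs"]
        if b < len(base):
            out.append(base[b])      # the anchor line itself
        pb, po, pt = b + 1, o + 1, t + 1
    return out

# One branch edits line 2, the other appends a line; the changes do not conflict.
print(three_way_merge(["a", "b", "c"], ["a", "B", "c"], ["a", "b", "c", "d"]))
# -> ['a', 'B', 'c', 'd']
```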
https://en.wikipedia.org/wiki/Hunting%20hypothesis
In paleoanthropology, the hunting hypothesis is the hypothesis that human evolution was primarily influenced by the activity of hunting for relatively large and fast animals, and that the activity of hunting distinguished human ancestors from other hominins. While it is undisputed that early humans were hunters, the importance of this fact for the final steps in the emergence of the genus Homo out of earlier australopithecines, with its bipedalism and production of stone tools (from about 2.5 million years ago), and eventually also control of fire (from about 1.5 million years ago), is emphasized in the "hunting hypothesis", and de-emphasized in scenarios that stress the omnivore status of humans as their recipe for success, and social interaction, including mating behaviour as essential in the emergence of language and culture. Advocates of the hunting hypothesis tend to believe that tool use and toolmaking essential to effective hunting were an extremely important part of human evolution, and trace the origin of language and religion to a hunting context. As societal evidence David Buss cites that modern tribal population deploy hunting as their primary way of acquiring food. The Aka pygmies in the Central African Republic spend 56% of their quest for nourishment hunting, 27% gathering, and 17% processing food. Additionally, the !Kung in Botswana retain 40% of their calories from hunting and this percentage varies from 20% to 90% depending on the season. For physical evidence Buss first looks to the guts of humans and apes. The human gut consists mainly of the small intestines, which are responsible for the rapid breakdown of proteins and absorption of nutrients. The ape's gut is primarily colon, which indicates a vegetarian diet. This structural difference supports the hunting hypothesis in being an evolutionary branching point between modern humans and modern primates. Buss also cites human teeth in that fossilized human teeth have a thin enamel coating w
https://en.wikipedia.org/wiki/Arithmetical%20set
In mathematical logic, an arithmetical set (or arithmetic set) is a set of natural numbers that can be defined by a formula of first-order Peano arithmetic. The arithmetical sets are classified by the arithmetical hierarchy. The definition can be extended to an arbitrary countable set A (e.g. the set of n-tuples of integers, the set of rational numbers, the set of formulas in some formal language, etc.) by using Gödel numbers to represent elements of the set and declaring a subset of A to be arithmetical if the set of corresponding Gödel numbers is arithmetical. A function is called arithmetically definable if the graph of is an arithmetical set. A real number is called arithmetical if the set of all smaller rational numbers is arithmetical. A complex number is called arithmetical if its real and imaginary parts are both arithmetical. Formal definition A set X of natural numbers is arithmetical or arithmetically definable if there is a first-order formula φ(n) in the language of Peano arithmetic such that each number n is in X if and only if φ(n) holds in the standard model of arithmetic. Similarly, a k-ary relation is arithmetical if there is a formula such that holds for all k-tuples of natural numbers. A function is called arithmetical if its graph is an arithmetical (k+1)-ary relation. A set A is said to be arithmetical in a set B if A is definable by an arithmetical formula that has B as a set parameter. Examples The set of all prime numbers is arithmetical. Every recursively enumerable set is arithmetical. Every computable function is arithmetically definable. The set encoding the halting problem is arithmetical. Chaitin's constant Ω is an arithmetical real number. Tarski's indefinability theorem shows that the (Gödel numbers of the) set of true formulas of first-order arithmetic is not arithmetically definable. Properties The complement of an arithmetical set is an arithmetical set. The Turing jump of an arithmetical set is an
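As a concrete instance of the definition, the set of prime numbers is arithmetical because membership can be expressed by a single first-order formula of Peano arithmetic (one standard way of writing it):

```latex
\mathrm{Prime}(n) \;:\Longleftrightarrow\; 1 < n \;\wedge\; \forall a\,\forall b\,\bigl(a \cdot b = n \rightarrow a = 1 \vee b = 1\bigr).
```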
https://en.wikipedia.org/wiki/Water%20detector
A water detector is an electronic device that is designed to detect the presence of water for purposes such as to provide an alert in time to allow the prevention of water leakage. A common design is a small cable or device that lies flat on a floor and relies on the electrical conductivity of water to decrease the resistance across two contacts. The device then sounds an audible alarm together with providing onward signaling in the presence of enough water to bridge the contacts. These are useful in a normally occupied area near any infrastructure that has the potential to leak water, such as HVAC, water pipes, drain pipes, vending machines, dehumidifiers, or water tanks. Water leak detection Water leak detection is an expression more commonly used for larger, integrated systems installed in modern buildings or those containing valuable artifacts, materials or other critical assets where early notification of a potentially damaging leak would be beneficial. In particular, water leak detection has become a necessity in data centers, trading floors, banks, archives and other mission-critical infrastructure. The water leak detection industry is small and specialized with only a few manufacturers operating worldwide. The original application was in the void created by "computer room" floors in the days of large main-frame computer systems. These use a modular, raised floor based around a structural "floor tile" usually 600 mm square and supported at the corners by pedestals. The void created gave easy access and routing for the mass of power, networking and other interconnecting cables associated with larger computer systems - processors, drives, routers etc. mainframe computers also generated large amounts of heat so a void under the floor could also be used as a plenum to distribute and diffuse chilled air around the computer room. The void therefore was likely to have chilled water pipes running through it along with the drains for condensates associated with ref
https://en.wikipedia.org/wiki/Zero-copy
"Zero-copy" describes computer operations in which the CPU does not perform the task of copying data from one memory area to another or in which unnecessary data copies are avoided. This is frequently used to save CPU cycles and memory bandwidth in many time consuming tasks, such as when transmitting a file at high speed over a network, etc., thus improving the performance of programs (processes) executed by a computer. Principle Zero-copy programming techniques can be used when exchanging data within a user space process (i.e. between two or more threads, etc.) and/or between two or more processes (see also producer–consumer problem) and/or when data has to be accessed / copied / moved inside kernel space or between a user space process and kernel space portions of operating systems (OS). Usually when a user space process has to execute system operations like reading or writing data from/to a device (i.e. a disk, a NIC, etc.) through their high level software interfaces or like moving data from one device to another, etc., it has to perform one or more system calls that are then executed in kernel space by the operating system. If data has to be copied or moved from source to destination and both are located inside kernel space (i.e. two files, a file and a network card, etc.) then unnecessary data copies, from kernel space to user space and from user space to kernel space, can be avoided by using special (zero-copy) system calls, usually available in most recent versions of popular operating systems. Zero-copy versions of operating system elements, such as device drivers, file systems, network protocol stacks, etc., greatly increase the performance of certain application programs (that become processes when executed) and more efficiently utilize system resources. Performance is enhanced by allowing the CPU to move on to other tasks while data copies / processing proceed in parallel in another part of the machine. Also, zero-copy operations reduce the number
https://en.wikipedia.org/wiki/Optical%20storage
Optical storage refers to a class of data storage systems that use light to read or write data to an underlying optical media. Although a number of optical formats have been used over time, the most common examples are optical disks like the compact disc (CD) and DVD. Reading and writing methods have also varied over time, but most modern systems use lasers as the light source and use it both for reading and writing to the discs. Britannica notes that it "uses low-power laser beams to record and retrieve digital (binary) data." Overview Optical storage is the storage of data on an optically readable medium. Data is recorded by making marks in a pattern that can be read back with the aid of light, usually a beam of laser light precisely focused on a spinning optical disc. An older example of optical storage that does not require the use of computers, is microform. There are other means of optically storing data and new methods are in development. An optical disc drive is a device in a computer that can read CD-ROMs or other optical discs, such as DVDs and Blu-ray discs. Optical storage differs from other data storage techniques that make use of other technologies such as magnetism, such as floppy disks and hard disks, or semiconductors, such as flash memory. Optical storage in the form of discs grants the ability to record onto a compact disc in real time. Compact discs held many advantages over audio tape players, such as higher sound quality and the ability to play back digital sound. Optical storage also gained importance for its green qualities and its efficiency with high energies. Optical storage can range from a single drive reading a single CD-ROM to multiple drives reading multiple discs such as an optical jukebox. Single CDs (compact discs) can hold around 700 MB (megabytes) and optical jukeboxes can hold much more. Single-layer DVDs can hold 4.7 GB, while dual-layered can hold 8.5 GB. This can be doubled to 9.4 GB and 17 GB by making the DVDs double-si
https://en.wikipedia.org/wiki/Deterministic%20encryption
A deterministic encryption scheme (as opposed to a probabilistic encryption scheme) is a cryptosystem which always produces the same ciphertext for a given plaintext and key, even over separate executions of the encryption algorithm. Examples of deterministic encryption algorithms include RSA cryptosystem (without encryption padding), and many block ciphers when used in ECB mode or with a constant initialization vector. Leakage Deterministic encryption can leak information to an eavesdropper, who may recognize known ciphertexts. For example, when an adversary learns that a given ciphertext corresponds to some interesting message, they can learn something every time that ciphertext is transmitted. To gain information about the meaning of various ciphertexts, an adversary might perform a statistical analysis of messages transmitted over an encrypted channel, or attempt to correlate ciphertexts with observed actions (e.g., noting that a given ciphertext is always received immediately before a submarine dive). This concern is particularly serious in the case of public key cryptography, where any party can encrypt chosen messages using a public encryption key. In this case, the adversary can build a large "dictionary" of useful plaintext/ciphertext pairs, then observe the encrypted channel for matching ciphertexts. Applications While deterministic encryption schemes can never be semantically secure, they have some advantages over probabilistic schemes. Database searching of encrypted data One primary motivation for the use of deterministic encryption is the efficient searching of encrypted data. Suppose a client wants to outsource a database to a possibly untrusted database service provider. If each entry is encrypted using a public-key cryptosystem, anyone can add to the database, and only the distinguished "receiver" who has the private key can decrypt the database entries. If, however, the receiver wants to search for a specific record in the databa
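A small sketch of both the searching benefit and the leakage described above, assuming the third-party cryptography package is installed; AES in ECB mode with naive zero padding is used here purely as an example of a deterministic scheme, not as a recommendation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def det_encrypt(plaintext: bytes) -> bytes:
    """Deterministic by construction: same key and plaintext -> same ciphertext."""
    padded = plaintext.ljust(16 * (len(plaintext) // 16 + 1), b"\x00")  # naive zero padding
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()

# Determinism lets an encrypted index be searched by exact ciphertext match ...
index = {det_encrypt(m): m for m in [b"dive at dawn", b"hold position"]}
print(index[det_encrypt(b"dive at dawn")])

# ... but an eavesdropper who can guess plaintexts can build the same dictionary
# and recognize repeated messages, which is exactly the leakage described above.
assert det_encrypt(b"dive at dawn") == det_encrypt(b"dive at dawn")
```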
https://en.wikipedia.org/wiki/DreamHost
DreamHost is a Los Angeles-based web hosting provider and domain name registrar. It is owned by New Dream Network, LLC, founded in 1996 by Dallas Bethune, Josh Jones, Michael Rodriguez and Sage Weil, undergraduate students at Harvey Mudd College in Claremont, California, and registered in 1997 by Michael Rodriguez. DreamHost began hosting customers' sites in 1997. In May 2012, DreamHost spun off Inktank. Inktank is a professional services and support company for the open source Ceph file system. In November 2014, DreamHost spun off Akanda, an open source network virtualization project. As of February 2016, Dreamhost employs about 200 people and has close to 400,000 customers. Web hosting DreamHost's shared, VPS, and dedicated hosting network consists of Apache, nginx and lighttpd web servers running on the Ubuntu operating system. DreamHost also offers cloud storage and computing services for entrepreneurs and developers, launched in 2012. The control panel for users to manage all services is a custom application designed in-house, and includes integrated billing and a support ticket system. DreamHost's staff contribute to an official blog and a customer support wiki. DreamHost does not offer call-in phone support, but customers can pay extra to request callbacks from support staff. Furthermore, a live chat option is available for all accounts when the level of support emails is low. This option is always available for customers that already pay the monthly fee for callbacks. The company hosts in excess of one million domains. File hosting In 2006, the company began a beta version file hosting service they called "Files Forever". The company stated that existing customers could store files "forever" after paying a one-time storage fee, and redistribute or sell them with DreamHost handling the transactions. As of November 2012, this service was no longer offered to new customers. In April 2013, DreamHost mentioned that the Files Forever service had been discontinu
https://en.wikipedia.org/wiki/Temperature%20measurement
Temperature measurement (also known as thermometry) describes the process of measuring a current local temperature for immediate or later evaluation. Datasets consisting of repeated standardized measurements can be used to assess temperature trends. History Attempts at standardized temperature measurement prior to the 17th century were crude at best. For instance in 170 AD, physician Claudius Galenus mixed equal portions of ice and boiling water to create a "neutral" temperature standard. The modern scientific field has its origins in the works by Florentine scientists in the 1600s including Galileo constructing devices able to measure relative change in temperature, but subject also to confounding with atmospheric pressure changes. These early devices were called thermoscopes. The first sealed thermometer was constructed in 1654 by the Grand Duke of Tuscany, Ferdinand II. The development of today's thermometers and temperature scales began in the early 18th century, when Gabriel Fahrenheit produced a mercury thermometer and scale, both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius and Kelvin scales. Technologies Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increase causes the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid in the thermometer. Another type of thermometer that is not really used much in practice, but is important from a theoretical standpoint, is the gas thermometer. Other important devices for measuring temperature inc
https://en.wikipedia.org/wiki/University%20of%20Michigan%20Library
The University of Michigan Library is the academic library system of the University of Michigan. The university's 38 constituent and affiliated libraries together make it the second largest research library by number of volumes in the United States. As of 2019–20, the University Library contained more than 14,543,814 volumes, while all campus library systems combined held more than 16,025,996 volumes. As of the 2019–2020 fiscal year, the Library also held 221,979 serials, and over 4,239,355 annual visits. Founded in 1838, the University Library is the university's main library and is housed in 12 buildings with more than 20 libraries, among the most significant of which are the Shapiro Undergraduate Library, Hatcher Graduate Library, Special Collections Library, and Taubman Health Sciences Library. However, several U-M libraries are independent of the University Library: the Bentley Historical Library, the William L. Clements Library, the Gerald R. Ford Library, the Kresge Business Administration Library of the Ross School of Business, and the Law Library of the University of Michigan Law School. The University Library is also separate from the libraries of the University of Michigan–Dearborn (Mardigian Library) and the University of Michigan–Flint (Frances Willson Thompson Library). The University of Michigan was the original home of the JSTOR database, which contains about 750,000 digitized pages from the entire pre-1990 backfile of ten journals of history and economics. In December 2004, the University of Michigan announced a book digitization program in collaboration with Google (known as Michigan Digitization Project), which is both revolutionary and controversial. Books scanned by Google are included in HathiTrust, a digital library created by a partnership of major research institutions. As of March 2014, the following collections had been digitized: Art, Architecture and Engineering Library; Bentley Historical Library; Buhr Building (large portions); Dent
https://en.wikipedia.org/wiki/Fasciation
Fasciation (pronounced , from the Latin root meaning "band" or "stripe"), also known as cresting, is a relatively rare condition of abnormal growth in vascular plants in which the apical meristem (growing tip), which normally is concentrated around a single point and produces approximately cylindrical tissue, instead becomes elongated perpendicularly to the direction of growth, thus producing flattened, ribbon-like, crested (or "cristate"), or elaborately contorted tissue. Fasciation may also cause plant parts to increase in weight and volume in some instances. The phenomenon may occur in the stem, root, fruit, or flower head. Some plants are grown and prized aesthetically for their development of fasciation. Any occurrence of fasciation has several possible causes, including hormonal, genetic, bacterial, fungal, viral and environmental causes. Cause Fasciation can be caused by hormonal imbalances in the meristematic cells of plants, which are cells where growth can occur. Fasciation can also be caused by random genetic mutation. Bacterial and viral infections can also cause fasciation. The bacterial phytopathogen Rhodococcus fascians has been demonstrated as one cause of fasciation, such as in sweet pea (Lathyrus odoratus) plants, but many fasciated plants have tested negative for the bacteria in studies, hence bacterial infection is not an exclusive causation. Additional environmental factors that can cause fasciation include fungi, mite or insect attack and exposure to chemicals. General damage to a plant's growing tip and exposure to cold and frost can also cause fasciation. Some plants, such as peas and cockscomb Celosia, may inherit the trait. Genetic fasciation is not contagious, but infectious fasciation can be spread from infected plants to others from contact with wounds on infected plants, and from water that carries the bacteria to other plants. Occurrence Although fasciation is rare overall, it has been observed in over 100 vascular plant families
https://en.wikipedia.org/wiki/Certified%20Server%20Validation
Certified Server Validation (CSV) is a technical method of email authentication intended to fight spam. Its focus is the SMTP HELO-identity of mail transfer agents. Purpose CSV was designed to address the problems of MARID and the ASRG, as defined in detail as the intent of Lightweight MTA Authentication Protocol (LMAP) in an expired ASRG draft. As of January 3, 2007, all Internet Drafts have expired and the mailing list has been closed down since there had been no traffic for 6 months. Principles of operation CSV considers two questions at the start of each SMTP session: Does a domain's management authorize this MTA to be sending email? Do reputable independent accreditation services consider that domain's policies and practices sufficient for controlling email abuse? CSV answers these questions as follows: to validate an SMTP session from an unknown sending SMTP client using CSV, the receiving SMTP server: Obtains the remote IP address of the TCP connection. Extracts the domain name from the HELO command sent by the SMTP client. Queries DNS to confirm the domain name is authorized for use by the IP (CSA). Asks a reputable Accreditation Service if it has a good reputation (DNA). Determines the level of trust to give to the sending SMTP client, based on the results of (3) and (4) If the level of trust is high enough, process all email from that session in the traditional manner, delivering or forwarding without the need for further validation. If the level of trust is too low, return an error showing the reason for not trusting the sending SMTP client. If the level of trust is in between, document the result in a header in each email delivered or forwarded, and/or perform additional checks. If the answers to both of the questions at the top of this article are 'Yes', then receivers can expect the email received to be email they want. Mail sources are motivated to make the answers yes, and it's easy for them to do so (unless their email flow is so toxic that
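As a rough illustration of the receiver-side flow only, the decision logic might be sketched as below; the helper functions and the trust thresholds are hypothetical placeholders invented for this sketch, not part of any real CSV implementation.

def csa_authorized(helo_domain: str, client_ip: str) -> bool:
    """Placeholder for step 3, the DNS-based CSA check; real code would query DNS."""
    return True   # stubbed for illustration

def dna_reputation(helo_domain: str) -> float:
    """Placeholder for step 4, the accreditation (DNA) lookup; returns 0..1."""
    return 0.9    # stubbed for illustration

def evaluate_smtp_session(client_ip: str, helo_domain: str) -> str:
    """Return 'accept', 'reject' or 'extra-checks' for a new SMTP session."""
    if not csa_authorized(helo_domain, client_ip):
        return "reject"                 # domain does not authorize this MTA
    reputation = dna_reputation(helo_domain)
    if reputation >= 0.8:
        return "accept"                 # deliver/forward without further validation
    if reputation <= 0.2:
        return "reject"                 # error response naming the reason
    return "extra-checks"               # annotate headers and/or run additional checks

print(evaluate_smtp_session("192.0.2.1", "mail.example.org"))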
https://en.wikipedia.org/wiki/Gene%20mapping
Gene mapping or genome mapping describes the methods used to identify the location of a gene on a chromosome and the distances between genes. Gene mapping can also describe the distances between different sites within a gene. The essence of all genome mapping is to place a collection of molecular markers onto their respective positions on the genome. Molecular markers come in all forms. Genes can be viewed as one special type of genetic markers in the construction of genome maps, and mapped the same way as any other markers. In some areas of study, gene mapping contributes to the creation of new recombinants within an organism. Gene maps help describe the spatial arrangement of genes on a chromosome. Genes are designated to a specific location on a chromosome known as the locus and can be used as molecular markers to find the distance between other genes on a chromosome. Maps provide researchers with the opportunity to predict the inheritance patterns of specific traits, which can eventually lead to a better understanding of disease-linked traits. The genetic basis to gene maps is to provide an outline that can potentially help researchers carry out DNA sequencing. A gene map helps point out the relative positions of genes and allows researchers to locate regions of interest in the genome. Genes can then be identified quickly and sequenced quickly. Two approaches to generating gene maps (gene mapping) include physical mapping and genetic mapping. Physical mapping utilizes molecular biology techniques to inspect chromosomes. These techniques consequently allow researchers to observe chromosomes directly so that a map may be constructed with relative gene positions. Genetic mapping on the other hand uses genetic techniques to indirectly find association between genes. Techniques can include cross-breeding (hybrid) experiments and examining pedigrees. These technique allow for maps to be constructed so that relative positions of genes and other important sequences
https://en.wikipedia.org/wiki/Razer%20Inc.
Razer Inc. (stylized as RΛZΞR) is an American-Singaporean multinational technology company that designs, develops, and sells consumer electronics, financial services, and gaming hardware. The company was founded in 1998 by Min-Liang Tan and Robert "RazerGuy" Krakoff. It is dual-headquartered in the one-north subzone of Queenstown, Singapore, and Irvine, California, US. History Razer began as a San Diego, California-based subsidiary of kärna LLC in 1998, created to develop and market a high-end computer gaming mouse, the Boomslang, targeted to computer gamers. Kärna ceased operations in 2000 due to financial difficulties. The current iteration of Razer was founded in 2005 by Min-Liang Tan, a Singaporean NUS graduate, and Robert Krakoff after they procured the rights to the Razer brand following a large investment from Hong Kong tycoon Li Ka-shing and Singaporean holding company Temasek Holdings. Razer bought the software assets of the Android-based microconsole Ouya from its parent company Ouya Inc. on 27 July 2015, while the hardware was discontinued. Ouya's technical team joined Razer's team in developing their own microconsole, which was called the Forge TV. It was discontinued in 2016. In October 2016, Razer purchased THX from Creative Technology according to THX CEO Ty Ahmad-Taylor. In January 2017, Razer bought manufacturer Nextbit, the startup behind the Robin smartphone. Shortly afterwards, in November of that year, Razer unveiled the Razer Phone, its first smartphone, whose design is based on that of the Robin. In July 2017, Razer filed to go public through an IPO in Hong Kong. In October, it was confirmed that Razer planned to offer 1,063,600,000 shares at a range of $0.38–$0.51. On 14 November, Razer was officially listed on the Hong Kong Stock Exchange under the stock code 1337, a reference to leet speak commonly used by gamers. Razer's IPO closed 18% up on the first day of trading and was the 2nd most successful IPO of 2017 in Hong Kong. In April 2018, Razer announce
https://en.wikipedia.org/wiki/Jackscrew
A jackscrew, or screw jack, is a type of jack that is operated by turning a leadscrew. It is commonly used to lift moderately heavy weights, such as vehicles; to raise and lower the horizontal stabilizers of aircraft; and as adjustable supports for heavy loads, such as the foundations of houses. Description A screw jack consists of a heavy-duty vertical screw with a load table mounted on its top, which screws into a threaded hole in a stationary support frame with a wide base resting on the ground. A rotating collar on the head of the screw has holes into which the handle, a metal bar, fits. When the handle is turned clockwise, the screw moves further out of the base, lifting the load resting on the load table. In order to support large load forces, the screw is usually formed with Acme threads. Advantages An advantage of jackscrews over some other types of jack is that they are self-locking, which means when the rotational force on the screw is removed, it will remain motionless where it was left and will not rotate backwards, regardless of how much load it is supporting. This makes them inherently safer than hydraulic jacks, for example, which will move backwards under load if the force on the hydraulic actuator is accidentally released. Mechanical advantage The ideal mechanical advantage of a screw jack, the ratio of the force the jack exerts on the load to the input force on the lever, ignoring friction, is MA = F_load / F_in = 2πr / l, where F_load is the force the jack exerts on the load, F_in is the rotational force exerted on the handle of the jack, r is the length of the jack handle, from the screw axis to where the force is applied, and l is the lead of the screw. The screw jack consists of two simple machines in series; the long operating handle serves as a lever whose output force turns the screw. So the mechanical advantage is increased by a longer handle as well as a finer screw thread. However, most screw jacks have large amounts of friction which increase the input force necessary, so th
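A quick worked example of that ideal ratio (the numbers are chosen for illustration, not taken from the article): a 0.4 m handle driving a screw with a 5 mm lead gives a mechanical advantage of roughly 500 before friction is accounted for.

import math

def ideal_mechanical_advantage(handle_length_m: float, screw_lead_m: float) -> float:
    # MA = F_load / F_in = 2 * pi * r / l  (friction ignored)
    return 2 * math.pi * handle_length_m / screw_lead_m

print(ideal_mechanical_advantage(0.4, 0.005))   # ~502.7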
https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg%20dithering
Floyd–Steinberg dithering is an image dithering algorithm first published in 1976 by Robert W. Floyd and Louis Steinberg. It is commonly used by image manipulation software, for example when an image is converted into GIF format that is restricted to a maximum of 256 colors. Implementation The algorithm achieves dithering using error diffusion, meaning it pushes (adds) the residual quantization error of a pixel onto its neighboring pixels, to be dealt with later. It spreads the debt out according to the distribution (shown as a map of the neighboring pixels): The pixel indicated with a star (*) indicates the pixel currently being scanned, and the blank pixels are the previously-scanned pixels. The algorithm scans the image from left to right, top to bottom, quantizing pixel values one by one. Each time, the quantization error is transferred to the neighboring pixels, while not affecting the pixels that already have been quantized. Hence, if a number of pixels have been rounded downwards, it becomes more likely that the next pixel is rounded upwards, such that on average, the quantization error is close to zero. The diffusion coefficients have the property that if the original pixel values are exactly halfway in between the nearest available colors, the dithered result is a checkerboard pattern. For example, 50% grey data could be dithered as a black-and-white checkerboard pattern. For optimal dithering, the counting of quantization errors should be in sufficient accuracy to prevent rounding errors from affecting the result. In some implementations, the horizontal direction of scan alternates between lines; this is called "serpentine scanning" or boustrophedon transform dithering. The algorithm described above is in the following pseudocode. This works for any approximately linear encoding of pixel values, such as 8-bit integers, 16-bit integers or real numbers in the range [0, 1]. for each y from top to bottom do for each x from left to right do
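A minimal grayscale implementation of the loop described above, assuming the standard published weights (7/16 to the pixel on the right, 3/16 below-left, 5/16 below, 1/16 below-right) and NumPy; it is an illustrative sketch rather than production code.

import numpy as np

def floyd_steinberg(image: np.ndarray) -> np.ndarray:
    """image: 2-D float array in [0, 1]; returns a binary (0/1) dithered copy."""
    img = image.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # quantize to the nearest of {0, 1}
            img[y, x] = new
            err = old - new                    # residual quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# Example: dither a small 50% grey patch.
print(floyd_steinberg(np.full((4, 4), 0.5)))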
https://en.wikipedia.org/wiki/Terminology%20server
A terminology server is a piece of software providing a range of terminology-related software services through an applications programming interface to its client applications. Typical terminology services might include: Matching an arbitrary, user-defined text entry string (or regular expression) against a fixed internal list of natural language expressions, possibly using word equivalent, alternate spelling, abbreviation or part-of-speech substitution tables, and other lexical resources, to increase the recall and precision of the matching algorithm Retrieving any asserted associations between a fixed list of terminology expressions in one language and translations in another natural language Retrieving any asserted associations between a fixed list of terminology expressions, and entities in a concept system or ontology (information science) Retrieving any asserted or inferrable semantic links between concepts in a concept system or ontology (information science), particularly subsumption (Is-a) relationships Retrieving any directly asserted, or the best approximate indirectly inferrable, associations between concepts in an ontology and entities in one or more external resources (e.g. libraries of images, decision support rules or statistical classifications) See also Clinical terminology server References Servers (computing)
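One way to picture such a service boundary (purely illustrative; the method names below are assumptions for this sketch, not a published API) is as a small client-side interface:

from typing import Optional, Protocol

class TerminologyServer(Protocol):
    def match(self, text: str) -> list[str]:
        """Match free-text input against the server's internal list of terms."""
    def translate(self, term: str, language: str) -> list[str]:
        """Return asserted translations of a term into another natural language."""
    def concepts_for(self, term: str) -> list[str]:
        """Return identifiers of ontology concepts associated with a term."""
    def is_a(self, concept: str, ancestor: str) -> bool:
        """Test an asserted or inferable subsumption (Is-a) link between concepts."""
    def mapping(self, concept: str, resource: str) -> Optional[str]:
        """Return the best asserted or inferable mapping into an external resource."""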
https://en.wikipedia.org/wiki/De%20Bruijn%20graph
In graph theory, an -dimensional De Bruijn graph of symbols is a directed graph representing overlaps between sequences of symbols. It has vertices, consisting of all possible sequences of the given symbols; the same symbol may appear multiple times in a sequence. For a set of symbols the set of vertices is: If one of the vertices can be expressed as another vertex by shifting all its symbols by one place to the left and adding a new symbol at the end of this vertex, then the latter has a directed edge to the former vertex. Thus the set of arcs (that is, directed edges) is Although De Bruijn graphs are named after Nicolaas Govert de Bruijn, they were invented independently by both De Bruijn and I. J. Good. Much earlier, Camille Flye Sainte-Marie implicitly used their properties. Properties If , then the condition for any two vertices forming an edge holds vacuously, and hence all the vertices are connected, forming a total of edges. Each vertex has exactly incoming and outgoing edges. Each -dimensional De Bruijn graph is the line digraph of the De Bruijn graph with the same set of symbols. Each De Bruijn graph is Eulerian and Hamiltonian. The Euler cycles and Hamiltonian cycles of these graphs (equivalent to each other via the line graph construction) are De Bruijn sequences. The line graph construction of the three smallest binary De Bruijn graphs is depicted below. As can be seen in the illustration, each vertex of the -dimensional De Bruijn graph corresponds to an edge of the De Bruijn graph, and each edge in the -dimensional De Bruijn graph corresponds to a two-edge path in the De Bruijn graph. Dynamical systems Binary De Bruijn graphs can be drawn in such a way that they resemble objects from the theory of dynamical systems, such as the Lorenz attractor: This analogy can be made rigorous: the -dimensional -symbol De Bruijn graph is a model of the Bernoulli map The Bernoulli map (also called the map for ) is an ergodic dynamical system,
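A direct construction of the vertex and arc sets described above (an illustrative sketch): B(2, 2) has 2^2 = 4 vertices and 2^3 = 8 arcs, and every vertex has 2 incoming and 2 outgoing arcs.

from itertools import product

def de_bruijn_graph(symbols: str, n: int):
    # Vertices are all length-n strings over the symbols; (u, v) is an arc when v is
    # u shifted left by one symbol with a new symbol appended.
    vertices = ["".join(p) for p in product(symbols, repeat=n)]
    edges = [(v, v[1:] + s) for v in vertices for s in symbols]
    return vertices, edges

verts, arcs = de_bruijn_graph("01", 2)
print(verts)        # ['00', '01', '10', '11']
print(len(arcs))    # 8 == 2**(2+1)
print(arcs)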
https://en.wikipedia.org/wiki/Microformat
Microformats (μF) are a set of defined HTML classes created to serve as consistent and descriptive metadata about an element, designating it as representing a certain type of data (such as contact information, geographic coordinates, events, blog posts, products, recipes, etc.). They allow software to process the information reliably by having set classes refer to a specific type of data rather than being arbitrary. Microformats emerged around 2005 and were predominantly designed for use by search engines, web syndication and aggregators such as RSS. Although the content of web pages has been capable of some "automated processing" since the inception of the web, such processing is difficult because the markup elements used to display information on the web do not describe what the information means. Microformats can bridge this gap by attaching semantics, and thereby obviating other, more complicated, methods of automated processing, such as natural language processing or screen scraping. The use, adoption and processing of microformats enables data items to be indexed, searched for, saved or cross-referenced, so that information can be reused or combined. Microformats allow the encoding and extraction of event details, contact information, social relationships and similar information. Microformats2, abbreviated as mf2, is the updated version of microformats. Mf2 provides an easier way of interpreting HTML (Hypertext Markup Language) structured syntax and vocabularies than the earlier approaches that made use of RDFa and microdata. Background Microformats emerged around 2005 as part of a grassroots movement to make recognizable data items (such as events, contact details or geographical locations) capable of automated processing by software, as well as directly readable by end-users. Link-based microformats emerged first. These include vote links that express opinions of the linked page, which search engines can tally into instant polls. CommerceNet, a nonprofit or
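For instance, a page might mark up contact details with microformats2 h-card classes and a consumer could then extract them; the sketch below is only meant to show the class-as-metadata idea and assumes the third-party mf2py parser is available.

import mf2py   # third-party microformats2 parser (assumed available)

html = """
<div class="h-card">
  <a class="p-name u-url" href="https://example.org">Example Person</a>
  <span class="p-org">Example Org</span>
</div>
"""

parsed = mf2py.parse(doc=html)                 # returns a dict with an 'items' list
for item in parsed["items"]:
    print(item["type"], item["properties"])    # e.g. ['h-card'] {'name': [...], ...}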
https://en.wikipedia.org/wiki/Ambient%20network
Ambient networks is a network integration design that seeks to solve problems relating to switching between networks to maintain contact with the outside world. This project aims to develop a network software-driven infrastructure that will run on top of all current or future network physical infrastructures to provide a way for devices to connect to each other, and through each other to the outside world. The concept of Ambient Networks comes from the IST Ambient Network project, which was a research project sponsored by the European Commission within the Sixth Framework Programme (FP6). The Ambient Networks Project Ambient Networks was a collaborative project within the European Union's Sixth Framework Programme that investigates future communications systems beyond fixed and 3rd generation mobile networks. It is part of the Wireless World Initiative. The project worked at a new concept called Ambient Networking, to provide suitable mobile networking technology for the future mobile and wireless communications environment. Ambient Networks aimed to provide a unified networking concept that can adapt to the very heterogeneous environment of different radio technologies and service and network environments. Special focus was put on facilitating both competition and cooperation of various market players by defining interfaces, which allow the instant negotiation of agreements. This approach went beyond interworking of well-defined protocols and was expected to have a long-term effect on the business landscape in the wireless world. Central to the project was the concept of composition of networks, an approach to address the dynamic nature of the target environment, based on an open framework for network control functionality, which can be extended with new capabilities as well as operating over existing connectivity infrastructure. Phase 1 of the project (2004–2005) laid the conceptual foundations. The Deliverable D1-5 "Ambient Networks Framework Architecture" su
https://en.wikipedia.org/wiki/WiCell
WiCell Research Institute is a scientific research institute in Madison, Wisconsin that focuses on stem cell research. Independently governed and supported as a 501(c)(3) organization, WiCell operates as an affiliate of the Wisconsin Alumni Research Foundation and works to advance stem cell research at the University of Wisconsin–Madison and beyond. History Established in 1998 to develop stem cell technology, WiCell Research Institute is a nonprofit organization that creates and distributes human pluripotent stem cell lines worldwide. WiCell also provides cytogenetic and technical services, establishes scientific protocols and supports basic research on the UW-Madison campus. WiCell serves as home to the Wisconsin International Stem Cell Bank. This stem cell repository stores, characterizes and provides access to stem cell lines for use in research and clinical development. The cell bank originally stored the first five human Embryonic stem cell lines derived by Dr. James Thomson of UW–Madison. It currently houses human embryonic stem cell lines, induced pluripotent stem cell lines, clinical grade cell lines developed in accordance with Good Manufacturing Practices (GMP) and differentiated cell lines including neural progenitor cells. To support continued progress in the field and help unlock the therapeutic potential of stem cells, in 2005 WiCell began providing cytogenetic services and quality control testing services. These services allow scientists to identify genetic abnormalities in cells or changes in stem cell colonies that might affect research results. Organization Chartered with a mission to support scientific investigation and research at UW–Madison, WiCell collaborates with faculty members and provides support with stem cell research projects. The institute established its cytogenetic laboratory to meet the growing needs of academic and commercial researchers to monitor genetic stability in stem cell cultures. Facilities WiCell maintains its stem c
https://en.wikipedia.org/wiki/Nomen%20nudum
In taxonomy, a nomen nudum ('naked name'; plural nomina nuda) is a designation which looks exactly like a scientific name of an organism, and may have originally been intended to be one, but it has not been published with an adequate description. This makes it a "bare" or "naked" name, which cannot be accepted as it stands. A largely equivalent but much less frequently used term is nomen tantum ("name only"). Sometimes, "nomina nuda" is erroneously considered a synonym for the term "unavailable names". However, not all unavailable names are nomina nuda. In zoology According to the rules of zoological nomenclature a nomen nudum is unavailable; the glossary of the International Code of Zoological Nomenclature gives this definition: And among the rules of that same Zoological Code: In botany According to the rules of botanical nomenclature a nomen nudum is not validly published. The glossary of the International Code of Nomenclature for algae, fungi, and plants gives this definition: The requirements for the diagnosis or description are covered by articles 32, 36, 41, 42, and 44. Nomina nuda that were published before 1 January 1959 can be used to establish a cultivar name. For example, Veronica sutherlandii, a nomen nudum, has been used as the basis for Hebe pinguifolia 'Sutherlandii'. See also Glossary of scientific naming Unavailable name Nomen dubium References Botanical nomenclature Zoological nomenclature Latin biological phrases
https://en.wikipedia.org/wiki/Nomen%20dubium
In binomial nomenclature, a nomen dubium (Latin for "doubtful name", plural nomina dubia) is a scientific name that is of unknown or doubtful application. Zoology In case of a nomen dubium, it may be impossible to determine whether a specimen belongs to that group or not. This may happen if the original type series (i. e. holotype, isotype, syntype or paratype) is lost or destroyed. The zoological and botanical codes allow for a new type specimen, or neotype, to be chosen in this case. A name may also be considered a nomen dubium if its name-bearing type is fragmentary or lacking important diagnostic features (this is often the case for species known only as fossils). To preserve stability of names, the International Code of Zoological Nomenclature allows a new type specimen, or neotype, to be chosen for a nomen dubium in this case. 75.5. Replacement of unidentifiable name-bearing type by a neotype. When an author considers that the taxonomic identity of a nominal species-group taxon cannot be determined from its existing name-bearing type (i.e. its name is a nomen dubium), and stability or universality are threatened thereby, the author may request the Commission to set aside under its plenary power [Art. 81] the existing name-bearing type and designate a neotype. For example, the crocodile-like archosaurian reptile Parasuchus hislopi Lydekker, 1885 was described based on a premaxillary rostrum (part of the snout), but this is no longer sufficient to distinguish Parasuchus from its close relatives. This made the name Parasuchus hislopi a nomen dubium. In 2001 a paleontologist proposed that a new type specimen, a complete skeleton, be designated. The International Commission on Zoological Nomenclature considered the case and agreed in 2003 to replace the original type specimen with the proposed neotype. Bacteriology In bacteriological nomenclature, nomina dubia may be placed on the list of rejected names by the Judicial Commission. The meaning of these names
https://en.wikipedia.org/wiki/Guillotine%20cutting
Guillotine cutting is the process of producing small rectangular items of fixed dimensions from a given large rectangular sheet, using only guillotine-cuts. A guillotine-cut (also called an edge-to-edge cut) is a straight bisecting line going from one edge of an existing rectangle to the opposite edge, similarly to a paper guillotine. Guillotine cutting is particularly common in the glass industry. Glass sheets are scored along horizontal and vertical lines, and then broken along these lines to obtain smaller panels. It is also useful for cutting steel plates, cutting of wood sheets to make furniture, and cutting of cardboard into boxes. There are various optimization problems related to guillotine cutting, such as: maximize the total area of the produced pieces, or their total value; minimize the amount of waste (unused parts) of the large sheet, or the total number of sheets. They have been studied in combinatorial geometry, operations research and industrial engineering. A related but different problem is guillotine partition. In that problem, the dimensions of the small rectangles are not fixed in advance. The challenge comes from the fact that the original sheet might not be rectangular - it can be any rectilinear polygon. In particular, it might contain holes (representing defects in the raw material). The optimization goal is usually to minimize the number of small rectangles, or minimize the total length of the cuts. Terminology and assumptions The following terms and notations are often used in the literature on guillotine cutting. The large rectangle, also called the stock sheet, is the raw rectangular sheet which should be cut. It is characterized by its width W0 and height H0, which are the primary inputs to the problem The small rectangles, also called items, are the required outputs of the cutting. They are characterized by their width wi and height hi and for i in 1,...,m, where m is the number of rectangles. Often, it is allowed to have sever
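As one concrete illustration of the value-maximization variant, here is a small recursive dynamic program over sub-rectangles. It is a sketch under simplifying assumptions (integer dimensions, unlimited copies of each item, no rotation) rather than an algorithm taken from the literature cited above.

from functools import lru_cache

def max_value(W0: int, H0: int, items):
    """items: list of (w, h, value) triples; returns the best achievable total value."""
    @lru_cache(maxsize=None)
    def best(w, h):
        # Option 1: cut nothing further and place the single most valuable item that fits.
        v = max([val for (iw, ih, val) in items if iw <= w and ih <= h], default=0)
        # Option 2: try every vertical guillotine cut ...
        for x in range(1, w // 2 + 1):
            v = max(v, best(x, h) + best(w - x, h))
        # ... and every horizontal guillotine cut.
        for y in range(1, h // 2 + 1):
            v = max(v, best(w, y) + best(w, h - y))
        return v
    return best(W0, H0)

print(max_value(10, 10, [(3, 5, 4), (5, 5, 7), (10, 3, 6)]))   # 28: four 5 x 5 items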
https://en.wikipedia.org/wiki/Myrmecophyte
Myrmecophytes (; literally "ant-plant") are plants that live in a mutualistic association with a colony of ants. There are over 100 different genera of myrmecophytes. These plants possess structural adaptations that provide ants with food and/or shelter. These specialized structures include domatia, food bodies, and extrafloral nectaries. In exchange for food and shelter, ants aid the myrmecophyte in pollination, seed dispersal, gathering of essential nutrients, and/or defense. Specifically, domatia adapted to ants may be called myrmecodomatia. Mutualism Myrmecophytes share a mutualistic relationship with ants, benefiting both the plants and ants. This association may be either facultative or obligate. Obligate In obligate mutualisms, both of the organisms involved are interdependent; they cannot survive on their own. An example of this type of mutualism can be found in the plant genus Macaranga. All species of this genus provide food for ants in various forms, but only the obligate species produce domatia. Some of the most common species of myrmecophytic Macaranga interact with ants in the genus Crematogaster. C. borneensis have been found to be completely dependent on its partner plant, not being able to survive without the provided nesting spaces and food bodies. In laboratory tests, the worker ants did not survive away from the plants, and in their natural habitat they were never found anywhere else. Facultative Facultative mutualism is a type of relationship where the survival of both parties (plant and ants, in this instance) is not dependent upon the interaction. Both organisms can survive without the other species. Facultative mutualisms most often occur in plants that have extrafloral nectaries but no other specialized structures for the ants. These non-exclusive nectaries allow a variety of animal species to interact with the plant. Facultative relationships can also develop between non-native plant and ant species, where co-evolution
https://en.wikipedia.org/wiki/Leecher%20%28computing%29
In computing and specifically in Internet slang, a leech is one who benefits, usually deliberately, from others' information or effort but does not offer anything in return, or makes only token offerings in an attempt to avoid being called a leech. In economics, this type of behavior is called "free riding" and is associated with the free rider problem. The term originated in the bulletin board system era, when it referred to users that would download files and upload nothing in return. Depending on context, leeching does not necessarily refer to illegal use of computer resources, but often instead to greedy use according to etiquette: to wit, using too much of what is freely given without contributing a reasonable amount back to the community that provides it. The word is also used without any pejorative connotations, simply meaning to download large sets of information: for example the offline reader Leech, the Usenet newsreader NewsLeecher, the audio recording software SoundLeech, or LeechPOP, a utility to download attachments from POP3 mailboxes. The name derives from the leech, an animal that sucks blood and then tries to leave unnoticed. Other terms are used, such as "freeloader", "mooch" and "sponge", but leech is the most commonly used. Examples Wi-Fi leeches attach to open wireless networks without the owner's knowledge in order to access the Internet. One example of this is someone who connects to a café's free wireless service from their car in the parking lot in order to download large amounts of data. Piggybacking is a term used to describe this phenomenon. Direct linking (or hot-linking) is a form of bandwidth leeching that occurs when placing an unauthorized linked object, often an image, from one site in a web page belonging to a second site (the leech). In most P2P-networks, leeching can be defined as behavior consisting of downloading more data, over time, than the individual is uploading to other clients, thus draining speed from the net
https://en.wikipedia.org/wiki/Enom
Enom, Inc. is a domain name registrar and Web hosting company that also sells other products closely tied to domain names, such as SSL certificates, e-mail services, and website-building software. As of May 2016, it manages over 15 million domains. Company history Enom was founded in 1997 in Kirkland, Washington, operating as a wholesale business, allowing resellers to sell domains and other services under their own branding. Enom also operates retail sites enomcentral.com and bulkregister.com. In May 2006, Enom was one of the original businesses that were acquired to form privately held Demand Media, headquartered in Santa Monica, California. Within Demand Media, Enom operated as a domain name registrar and as the registrar platform for its media properties, until separating from Demand Media as a brand of Rightside Group, Ltd in 2014. In July 2006, Enom bought out competitor BulkRegister. Prior to its purchase, BulkRegister was a member-supported service where clients were not resellers, but companies large enough to pay an annual membership fee to acquire low registration fees on their domain name registrations, due to the volume they potentially register. With this acquisition, Enom rose to become the second largest domain name registrar. Enom maintained BulkRegister as a separate service until Tucows discontinued it after acquiring Enom. In June 2016, Enom officially launched its revitalized retail experience in a major series of improvements to its developer platform. The changes affected over 14 million domains handled through Enom's channel of partners and resellers, as well as directly through the company's retail interface, which boasts a revitalized aesthetic, including a new color palette and an updated logo. Enom has a team in electronic sports (CS:GO, BF3, SMITE); this team has 20 world championships on its record: 15 as first team and 5 as second team. In January 2017, Enom was sold to Canadian domain seller Tucows for US$83.5M. In January 2022, Enom experience
https://en.wikipedia.org/wiki/Jacobi%27s%20formula
In matrix calculus, Jacobi's formula expresses the derivative of the determinant of a matrix A in terms of the adjugate of A and the derivative of A. If is a differentiable map from the real numbers to matrices, then where is the trace of the matrix . (The latter equality only holds if A(t) is invertible.) As a special case, Equivalently, if stands for the differential of , the general formula is The formula is named after the mathematician Carl Gustav Jacob Jacobi. Derivation Via Matrix Computation We first prove a preliminary lemma: Lemma. Let A and B be a pair of square matrices of the same dimension n. Then Proof. The product AB of the pair of matrices has components Replacing the matrix A by its transpose AT is equivalent to permuting the indices of its components: The result follows by taking the trace of both sides: Theorem. (Jacobi's formula) For any differentiable map A from the real numbers to n × n matrices, Proof. Laplace's formula for the determinant of a matrix A can be stated as Notice that the summation is performed over some arbitrary row i of the matrix. The determinant of A can be considered to be a function of the elements of A: so that, by the chain rule, its differential is This summation is performed over all n×n elements of the matrix. To find ∂F/∂Aij consider that on the right hand side of Laplace's formula, the index i can be chosen at will. (In order to optimize calculations: Any other choice would eventually yield the same result, but it could be much harder). In particular, it can be chosen to match the first index of ∂ / ∂Aij: Thus, by the product rule, Now, if an element of a matrix Aij and a cofactor adjT(A)ik of element Aik lie on the same row (or column), then the cofactor will not be a function of Aij, because the cofactor of Aik is expressed in terms of elements not in its own row (nor column). Thus, so All the elements of A are independent of each other, i.e. where δ is the Kronecker delta, so
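Taking the statement as d/dt det A(t) = tr(adj(A(t)) · A'(t)), a quick numerical check with NumPy (an illustration with an arbitrary invertible test matrix, not part of the article) compares both sides by finite differences; the adjugate is computed as det(A)·A⁻¹, which is valid here because the test matrix is invertible.

import numpy as np

def A(t):
    return np.array([[np.cos(t), t,         1.0],
                     [t**2,      np.exp(t), 0.0],
                     [1.0,       2.0,       t + 3.0]])

def dA(t, eps=1e-6):                      # numerical derivative of A(t)
    return (A(t + eps) - A(t - eps)) / (2 * eps)

def ddet(t, eps=1e-6):                    # numerical derivative of det A(t)
    return (np.linalg.det(A(t + eps)) - np.linalg.det(A(t - eps))) / (2 * eps)

t = 0.7
adj = np.linalg.det(A(t)) * np.linalg.inv(A(t))
print(ddet(t))                            # left-hand side
print(np.trace(adj @ dA(t)))              # right-hand side: agrees with the line above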
https://en.wikipedia.org/wiki/Change%20order
In project management, change orders are also called variations or variation orders. Any modification or change to works agreed in the contract is treated as a variation. Types These modifications can be divided into three main categories Addition to the work agreed in the contract. Omission of work agreed in the contract. Substitution or alteration of work agreed in the contract. Purpose A change order is work that is added to or deleted from the original scope of work of a contract. Depending on the magnitude of the change, it may or may not alter the original contract amount and/or completion date. A change order may force a new project to handle significant changes to the current project. Change orders are common to most projects, and very common with large projects. After the original scope (or contract) is formed, complete with the total price to be paid and the specific work to be completed, a client may decide that the original plans do not best represent his or her definition for the finished project. Accordingly, the client will suggest an alternate approach. Causes and resolution Common causes for change orders to be created are: The project's work was incorrectly estimated The customer or project team discovers obstacles or possible efficiencies that require them to deviate from the original plan The customer or project team are inefficient or incapable of completing their required deliverables within budget, and additional money, time, or resources must be added to the project During the course of the project, additional features or options are perceived and requested. If the contractor has to add work items to the original scope of work at a later time in order to achieve the customer's demands, a fair price for the work items and fees must be added for the materials and labor. A project manager then typically generates a change order that describes the new work to be done (or not done in some cases), and the price to be paid for this ne
https://en.wikipedia.org/wiki/Road%20roller
A road roller (sometimes called a roller-compactor, or just roller) is a compactor-type engineering vehicle used to compact soil, gravel, concrete, or asphalt in the construction of roads and foundations. Similar rollers are used also at landfills or in agriculture. Road rollers are frequently referred to as steamrollers, regardless of their method of propulsion. History The first road rollers were horse-drawn, and were probably borrowed farm implements (see Roller). Since the effectiveness of a roller depends to a large extent on its weight, self-powered vehicles replaced horse-drawn rollers from the mid-19th century. The first such vehicles were steam rollers. Single-cylinder steam rollers were generally used for base compaction and run with high engine revs with low gearing to promote bounce and vibration from the crankshaft through to the rolls in much the same way as a vibrating roller. The double cylinder or compound steam rollers became popular from around 1910 onwards and were used mainly for the rolling of hot-laid surfaces due to their smoother running engines, but both cylinder types are capable of rolling the finished surface. Steam rollers were often dedicated to a task by their gearing as the slower engines were for base compaction whereas the higher geared models were often referred to as "chip chasers" which followed the hot tar and chip laying machines. Some road companies in the US used steamrollers through the 1950s. In the UK some remained in service until the early 1970s. As internal combustion engines improved during the 20th century, kerosene-, gasoline- (petrol), and diesel-powered rollers gradually replaced their steam-powered counterparts. The first internal-combustion powered road rollers were similar to the steam rollers they replaced. They used similar mechanisms to transmit power from the engine to the wheels, typically large, exposed spur gears. Some users disliked them in their infancy, as the engines of the era were typically di
https://en.wikipedia.org/wiki/Carte%20Bleue
Carte Bleue () was a major debit card payment system operating in France. Unlike Visa Electron or Maestro debit cards, Carte Bleue transactions worked without requiring authorization from the cardholder's bank. In many situations, the card worked like a credit card but without fees for the cardholder. The system has now been integrated into a wider scheme called CB or carte bancaire ("banking card"). All Carte Bleue cards were part of CB, but not all CB cards were Carte Bleue. The system was national, and pure Carte Bleue cards did not operate outside France. However, it is possible and commonplace to get a CB Visa card that operates outside France. Carte Bleue was, technically speaking, the local Visa affiliate. Carte Bleue started in 1967, associating six French banks: BNP, CCF, Crédit du Nord, CIC, Crédit Lyonnais, and Société Générale. Combined Visa cards have existed since 1973 under the name Carte Bleue Internationale, changing to Carte Bleue Visa in 1976. From 1992 on, all Cartes Bleues / CB have been smart cards. When using a Carte Bleue at a French merchant, the PIN of the card must be used, and a microchip on the card verifies and authenticates the transaction. Only some very limited transactions, such as motorway tolls or parking fees, are paid without PIN. Since automatic teller machines also check for the PIN, this measure strongly reduces the incentive to steal Cartes Bleues, since the cards are essentially useless without the PIN (though one may try using the card number for mail-order or e-retailing). Foreign cards without microchips can still be used at French merchants if they accept them, with the usual procedure of swiping the magnetic stripe and signing the receipt. In 2000, Serge Humpich, after failing to convince the makers of a serious flaw he had found two years before, purchased some metro tickets to prove it. He sent the proof to Groupement des Cartes Bancaires. They then initiated criminal action against him, and he was convicted and
https://en.wikipedia.org/wiki/Ethyl%20cinnamate
Ethyl cinnamate is the ester of cinnamic acid and ethanol. It is present in the essential oil of cinnamon. Pure ethyl cinnamate has a "fruity and balsamic odor, reminiscent of cinnamon with an amber note". The p-methoxy derivative is reported to be a monoamine oxidase inhibitor. It can be synthesized by the esterification reaction involving ethanol and cinnamic acid in the presence of sulfuric acid. List of plants that contain the chemical Kaempferia galanga References Ethyl esters Phenyl compounds Food additives Flavors Phenylpropanoids
https://en.wikipedia.org/wiki/Thermal%20runaway
Thermal runaway describes a process that is accelerated by increased temperature, in turn releasing energy that further increases temperature. Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback. In chemistry (and chemical engineering), thermal runaway is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering, thermal runaway is typically associated with increased current flow and power dissipation. Thermal runaway can occur in civil engineering, notably when the heat released by large amounts of curing concrete is not controlled. In astrophysics, runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the "helium flash". Chemical engineering Chemical reactions involving thermal runaway are also called thermal explosions in chemical engineering, or runaway reactions in organic chemistry. It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents, most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship's hold, and the 1976 explosion of zoalene, in a drier, at King's Lynn. Frank-Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of rapidly increasing reaction rate. Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy. Many reactions are highly exothermic, so ma
https://en.wikipedia.org/wiki/Broadcast%20reference%20monitor
A video reference monitor, also called a broadcast reference monitor or just reference monitor, is a specialized display device similar to a television set, used to monitor the output of a video-generating device, such as playout from a video server, IRD, video camera, VCR, or DVD player. It may or may not have professional audio monitoring capability. Unlike a television set, a video monitor has no tuner and, as such, is unable independently to tune into an over-the-air broadcast like a television receiver. One common use of video monitors is in television stations, television studios, production trucks and in outside broadcast vehicles, where broadcast engineers use them for confidence checking of analog signal and digital signals throughout the system. They can also be used for color grading if calibrated, during post-production. Common display types for video monitors Cathode ray tube Liquid crystal display Plasma display OLED Common monitoring formats for security Composite video S-Video Broadcast reference monitor Broadcast reference monitors must be used for video compliance at television or television studio facilities, because they do not perform any video enhancements and try to produce as accurate an image as possible. For quality control purposes, it is necessary for a broadcast reference monitor to produce (reasonably) consistent images from facility to facility, to reveal any flaws in the material, and also not to introduce any image artifacts (such as aliasing) that is not in the source material. Broadcast monitors will try to avoid post processing such as a video scaler, line doubling and any image enhancements such as dynamic contrast. However, display technologies with fixed pixel structures (e.g. LCD, plasma) must perform image scaling when displaying SD signals as the signal contains non-square pixels while the display has square pixels. LCDs and plasmas are also inherently progressive displays and may need to perform deinterlacing
https://en.wikipedia.org/wiki/Philips%20circle%20pattern
The Philips circle pattern (also referred to as the Philips pattern or PTV Circle pattern) refers to a family of related electronically generated complex television station colour test cards. The content and layout of the original colour circle pattern was designed by Danish engineer (1939–2011) in the Philips TV & Test Equipment laboratory in Brøndby Municipality near Copenhagen under supervision of chief engineer Erik Helmer Nielsen in 1966–67, largely building on their previous work with the monochrome PM5540 pattern. The first piece of equipment, the PM5544 colour pattern generator, which generates the pattern, was made by Finn Hendil and his group in 1968–69. The same team would also develop the Spanish TVE colour test card in 1973. Since the widespread introduction of the original PM5544 from the early-1970s, the Philips Pattern has become one of the most commonly used test cards, with only the SMPTE and EBU colour bars as well as the BBC's Test Card F coming close to its usage. The Philips circle pattern was later incorporated into other test pattern generators from Philips itself, as well as test pattern generators from various other manufacturers. Equipment from Philips and succeeding companies which generate the circle pattern are the PM5544, PM5534, PM5535, PM5644, PT5210, PT5230 and PT5300. Other related (non circle pattern) test card generators by Philips are the PM5400 (TV serviceman) family, PM5515/16/18, PM5519, PM5520 (monochrome), PM5522 (PAL), PM5540 (monochrome), PM5547, PM5552 and PM5631. Operation Rather than previous test card approaches that worked by a live camera or monoscope filming a printed card, the Philips PM5544 generates the test patterns fully using electronic circuits, with separate paths for Y, R-Y and B-Y colour components (), allowing engineers to reliably test and adjust transmitters and receivers for signal disturbances and colour separation, for instance for PAL broadcasts. In simple terms, the displayed pattern provid
https://en.wikipedia.org/wiki/Strategic%20dominance
In game theory, strategic dominance (commonly called simply dominance) occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play. Terminology When a player tries to choose the "best" strategy among a multitude of options, that player may compare two strategies A and B to see which one is better. The result of the comparison is one of: B is equivalent to A: choosing B always gives the same outcome as choosing A, no matter what the other players do. B strictly dominates A: choosing B always gives a better outcome than choosing A, no matter what the other players do. B weakly dominates A: choosing B always gives at least as good an outcome as choosing A, no matter what the other players do, and there is at least one set of opponents' action for which B gives a better outcome than A. (Notice that if B strictly dominates A, then B weakly dominates A. Therefore, we can say "B dominates A" as synonymous of "B weakly dominates A".) B and A are intransitive: B and A are not equivalent, and B neither dominates, nor is dominated by, A. Choosing A is better in some cases, while choosing B is better in other cases, depending on exactly how the opponent chooses to play. For example, B is "throw rock" while A is "throw scissors" in Rock, Paper, Scissors. B is weakly dominated by A: there is at least one set of opponents' actions for which B gives a worse outcome than A, while all other sets of opponents' actions give A the same payoff as B. (Strategy A weakly dominates B). B is strictly dominated by A: choosing B always gives a worse outcome than choosing A, no matter what the other player(s) do. (Strategy A strictly dominates B). This notion can be generalized beyond the comparison of two s
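The comparison rules above translate almost directly into code; here is a small sketch (the example payoffs are invented for illustration) comparing two of the row player's strategies across every possible opponent action in a payoff matrix.

def strictly_dominates(payoffs, b, a):
    """True if row strategy b strictly dominates row strategy a."""
    return all(pb > pa for pb, pa in zip(payoffs[b], payoffs[a]))

def weakly_dominates(payoffs, b, a):
    """True if b is never worse than a and is strictly better for at least one column."""
    rows = list(zip(payoffs[b], payoffs[a]))
    return all(pb >= pa for pb, pa in rows) and any(pb > pa for pb, pa in rows)

# Row player's payoffs only; columns are the opponent's possible actions.
payoffs = [
    [3, 2, 5],   # strategy 0
    [3, 1, 4],   # strategy 1
    [2, 1, 4],   # strategy 2
]
print(weakly_dominates(payoffs, 0, 1))    # True
print(strictly_dominates(payoffs, 0, 1))  # False (tie in the first column)
print(strictly_dominates(payoffs, 0, 2))  # True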
https://en.wikipedia.org/wiki/Joint%20constraints
Joint constraints are rotational constraints on the joints of an artificial system. They are used in an inverse kinematics chain, in fields including 3D animation or robotics. Joint constraints can be implemented in a number of ways, but the most common method is to limit rotation about the X, Y and Z axis independently. An elbow, for instance, could be represented by limiting rotation on X and Z axis to 0 degrees, and constraining the Y-axis rotation to 130 degrees. To simulate joint constraints more accurately, dot-products can be used with an independent axis to repulse the child bones orientation from the unreachable axis. Limiting the orientation of the child bone to a border of vectors tangent to the surface of the joint, repulsing the child bone away from the border, can also be useful in the precise restriction of shoulder movement. References Computer graphics 3D computer graphics Computational physics Robot kinematics Anatomical simulation
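A minimal sketch of that per-axis limiting scheme (illustrative only; production rigs often use quaternions or swing-twist decompositions for better behaviour near singularities):

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def constrain_joint(angles, limits):
    """angles: (x, y, z) Euler angles in degrees; limits: dict axis -> (min, max)."""
    return tuple(clamp(a, *limits[axis]) for a, axis in zip(angles, "xyz"))

# The elbow example from the text: X and Z locked at 0 degrees, Y limited to 130 degrees.
elbow_limits = {"x": (0.0, 0.0), "y": (0.0, 130.0), "z": (0.0, 0.0)}
print(constrain_joint((10.0, 150.0, -5.0), elbow_limits))   # -> (0.0, 130.0, 0.0)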
https://en.wikipedia.org/wiki/Eisenstein%20integer
In mathematics, the Eisenstein integers (named after Gotthold Eisenstein), occasionally also known as Eulerian integers (after Leonhard Euler), are the complex numbers of the form where and are integers and is a primitive (hence non-real) cube root of unity. The Eisenstein integers form a triangular lattice in the complex plane, in contrast with the Gaussian integers, which form a square lattice in the complex plane. The Eisenstein integers are a countably infinite set. Properties The Eisenstein integers form a commutative ring of algebraic integers in the algebraic number field – the third cyclotomic field. To see that the Eisenstein integers are algebraic integers note that each is a root of the monic polynomial In particular, satisfies the equation The product of two Eisenstein integers and is given explicitly by The 2-norm of an Eisenstein integer is just its squared modulus, and is given by which is clearly a positive ordinary (rational) integer. Also, the complex conjugate of satisfies The group of units in this ring is the cyclic group formed by the sixth roots of unity in the complex plane: , the Eisenstein integers of norm . Euclidean domain The ring of Eisenstein integers forms a Euclidean domain whose norm is given by the square modulus, as above: A division algorithm, applied to any dividend and divisor , gives a quotient and a remainder smaller than the divisor, satisfying: Here, , , , are all Eisenstein integers. This algorithm implies the Euclidean algorithm, which proves Euclid's lemma and the unique factorization of Eisenstein integers into Eisenstein primes. One division algorithm is as follows. First perform the division in the field of complex numbers, and write the quotient in terms of : for rational . Then obtain the Eisenstein integer quotient by rounding the rational coefficients to the nearest integer: Here may denote any of the standard rounding-to-integer functions. The reason this satisfi
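The arithmetic and the rounding division algorithm described above can be made concrete in a short sketch, written from the standard identities ω² = −ω − 1 and N(a + bω) = a² − ab + b²; it is an illustration, not library code.

OMEGA = complex(-0.5, 3 ** 0.5 / 2)          # ω = e^(2πi/3)

class Eisenstein:
    def __init__(self, a: int, b: int):      # represents a + bω
        self.a, self.b = a, b

    def __mul__(self, o):
        # (a + bω)(c + dω) = (ac − bd) + (bc + ad − bd)ω, using ω² = −ω − 1
        a, b, c, d = self.a, self.b, o.a, o.b
        return Eisenstein(a * c - b * d, b * c + a * d - b * d)

    def norm(self) -> int:                   # squared modulus
        return self.a ** 2 - self.a * self.b + self.b ** 2

    def to_complex(self) -> complex:
        return self.a + self.b * OMEGA

    def __repr__(self):
        return f"{self.a} + {self.b}ω"

def divmod_eisenstein(x: Eisenstein, y: Eisenstein):
    # Divide in the complex numbers, then round the {1, ω} coordinates to integers.
    q = x.to_complex() / y.to_complex()
    v = q.imag / OMEGA.imag                  # q = u + vω with v = Im(q)/Im(ω)
    u = q.real + v / 2                       # since Re(ω) = −1/2
    quot = Eisenstein(round(u), round(v))
    rem = Eisenstein(x.a - (quot * y).a, x.b - (quot * y).b)
    return quot, rem

q, r = divmod_eisenstein(Eisenstein(7, 3), Eisenstein(2, 1))
print(q, r, r.norm() < Eisenstein(2, 1).norm())   # remainder norm < divisor norm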
https://en.wikipedia.org/wiki/One-dimensional%20symmetry%20group
A one-dimensional symmetry group is a mathematical group that describes symmetries in one dimension (1D). A pattern in 1D can be represented as a function f(x) for, say, the color at position x. The only nontrivial point group in 1D is a simple reflection. It can be represented by the simplest Coxeter group, A1, [ ], or Coxeter-Dynkin diagram . Affine symmetry groups represent translation. Isometries which leave the function unchanged are translations x + a with a such that f(x + a) = f(x) and reflections a − x with a such that f(a − x) = f(x). The reflections can be represented by the affine Coxeter group [∞], or Coxeter-Dynkin diagram representing two reflections, and the translational symmetry as [∞]+, or Coxeter-Dynkin diagram as the composite of two reflections. Point group For a pattern without translational symmetry there are the following possibilities (1D point groups): the symmetry group is the trivial group (no symmetry) the symmetry group is one of the groups each consisting of the identity and reflection in a point (isomorphic to Z2) Discrete symmetry groups These affine symmetries can be considered limiting cases of the 2D dihedral and cyclic groups: Translational symmetry Consider all patterns in 1D which have translational symmetry, i.e., functions f(x) such that for some a > 0, f(x + a) = f(x) for all x. For these patterns, the values of a for which this property holds form a group. We first consider patterns for which the group is discrete, i.e., for which the positive values in the group have a minimum. By rescaling we make this minimum value 1. Such patterns fall in two categories, the two 1D space groups or line groups. In the simpler case the only isometries of R which map the pattern to itself are translations; this applies, e.g., for the pattern − −−− − −−− − −−− − −−− Each isometry can be characterized by an integer, namely plus or minus the translation distance. Therefore the symmetry group is Z. In the other case, amon
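The defining property of translational symmetry, f(x + a) = f(x), can be checked on a sampled pattern. The short Python sketch below uses an illustrative periodic motif in the spirit of the "− −−− − −−−" example above; the motif, its period, and the sampling range are assumptions for illustration only.

# Minimal sketch: testing f(x + a) = f(x) on a discrete, periodic 1D pattern.
# The motif and period are illustrative.

PERIOD = 6
MOTIF = "- --- "          # one period of a pattern like "- --- - --- - ---"

def f(x):
    return MOTIF[x % PERIOD]

def is_translation_symmetry(a, samples=100):
    return all(f(x + a) == f(x) for x in range(samples))

print(is_translation_symmetry(6))   # True: a multiple of the period
print(is_translation_symmetry(12))  # True
print(is_translation_symmetry(3))   # False

The translations that pass the test are exactly the integer multiples of the period, matching the statement that the symmetry group of such a pattern is Z (after rescaling the minimal translation to 1).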
https://en.wikipedia.org/wiki/The%20Advanced%20Visualizer
The Advanced Visualizer (TAV), a 3D graphics software package, was the flagship product of Wavefront Technologies from the 1980s until the 1990s. History TAV became famous for its use in the production of numerous Oscar-winning movies such as The Abyss, Terminator 2: Judgment Day and Jurassic Park. Alias|Wavefront Merger The merger was widely seen as the result of Microsoft purchasing Softimage in an attempt to take over the 3D computer graphics market. Silicon Graphics responded by purchasing Alias Systems Corporation and its two major competitors, Wavefront and the French company TDI (Thomson Digital Images), for their Explore, IPR, and GUI technologies. Thus SGI created the super-company Alias|Wavefront. Wavefront's programmers continued to reside in California, but the management of the company was carried out in Toronto, Canada. Autodesk Era In 1996 Alias|Wavefront announced the release of Maya, which incorporated aspects of all three software suites. Wavefront was renamed to Alias Technologies and acquired by Autodesk in 2005. Some of the technology under Autodesk's ownership is still sold today as part of Maya. Architecture In contrast to much modern-day (2011) computer graphics animation software, TAV was not a monolithic product but a set of independent programs, each focused on one aspect of image synthesis. The collection of these smaller programs formed the entire suite, based on simple interchange of mostly ASCII file formats such as OBJ. The major components of the TAV software suite included Model, Paint, Dynamation, Kinemation, Preview, and fcheck. Composer was also available as an add-on for compositing of imagery. Many primitive utility programs, such as graphics format converters, were included in the toolkit and were frequently employed for batch processing via shell scripts. The modular nature allowed these loosely coupled, lightweight programs to start up quickly with relatively small memory footprints. It was not uncommon to run several
https://en.wikipedia.org/wiki/Jericho%20Forum
The Jericho Forum was an international group working to define and promote de-perimeterisation. It was initiated by David Lacey of the Royal Mail and grew out of a loose affiliation of interested corporate CISOs (Chief Information Security Officers) who had been discussing the topic since the summer of 2003, after an initial meeting hosted by Cisco; it was officially founded in January 2004. It declared success, and merged with The Open Group industry consortium's Security Forum in 2014. The problem It was created because the founding members claimed that no one else was appropriately discussing the problems surrounding de-perimeterisation. They felt the need to create a forum to define and consistently solve such issues. One of the earlier outputs of the group is a position paper entitled the Jericho Forum Commandments, a set of principles that describe how best to survive in a de-perimeterised world. Membership The Jericho Forum consisted of "user members" and "vendor members". Originally, only user members were allowed to stand for election. In December 2008 this was relaxed, allowing either vendor or user members to be eligible for election. The day-to-day management was provided by the Open Group. While the Jericho Forum had its foundations in the UK, nearly all the initial members worked for corporations and had global responsibilities, and involvement grew to Europe, North America and Asia Pacific. Results After the initial focus on defining the problem, de-perimeterisation, the Forum then moved on to defining the solution, which it delivered in the publication of the Collaboration Oriented Architecture (COA) paper and COA Framework paper. The next focus of the Jericho Forum was "Securely Collaborating in Clouds", which involved applying the COA concepts to the emerging Cloud Computing paradigm. The basic premise is that a collaborative approach is essential to gain the most value from "the cloud". Much of this work was transferred to the Cloud
https://en.wikipedia.org/wiki/Droste%20effect
The Droste effect, known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. The effect is named after Droste, a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. The Droste effect has since been used in the packaging of a variety of products. Apart from advertising, the effect is also seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. The effect has been widely used on the covers of comic books, mainly in the 1940s. Effect Origins The Droste effect is named after the image on the tins and boxes of Droste cocoa powder, which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image, designed by Jan Misset. This familiar image was introduced in 1904 and maintained for decades, with slight variations from 1912 onward by artists including Adolphe Mouron. The poet and columnist Nico Scheepmaker introduced wider usage of the term in the late 1970s. Mathematics The appearance is recursive: the smaller version contains an even smaller version of the picture, and so on. Only in theory could this go on forever, as fractals do; practically, it continues only as long as the resolution of the picture allows, which is relatively short, since each iteration geometrically reduces the picture's size. Medieval art The Droste effect was anticipated by Giotto early in the 14th century, in his Stefaneschi Triptych. The altarpiece portrays in its centre panel Cardinal Giacomo Gaetani Stefaneschi offering the triptych itself to St. Peter. There are also several examples from medieval times of books featuring images containing the book itself or window panels in churches depicting miniature copies of the w
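The point made under "Mathematics" above, that each iteration geometrically reduces the picture's size until the resolution limit is reached, can be made concrete with a short calculation. The image width and per-iteration scale factor in this Python sketch are illustrative assumptions, not values taken from any Droste packaging.

# Minimal sketch: how many nested copies stay visible before the recursion
# hits the resolution limit. The numbers below are illustrative.
import math

def visible_copies(width_px, scale, min_px=1):
    """Copies n = 1, 2, ... remain resolvable while width_px * scale**n >= min_px,
    i.e. n <= log(width_px / min_px) / log(1 / scale)."""
    return math.floor(math.log(width_px / min_px, 1 / scale))

print(visible_copies(1024, 0.15))  # each copy 15% the size of the one containing it

With these illustrative numbers only three nested copies remain at least one pixel wide, which is why the loop ends so quickly in practice.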
https://en.wikipedia.org/wiki/Heat%20shock%20response
The heat shock response (HSR) is a cell stress response that increases the number of molecular chaperones to combat the negative effects on proteins caused by stressors such as increased temperatures, oxidative stress, and heavy metals. In a normal cell, proteostasis (protein homeostasis) must be maintained because proteins are the main functional units of the cell. Many proteins take on a defined configuration in a process known as protein folding in order to perform their biological functions. If these structures are altered, critical processes could be affected, leading to cell damage or death. The heat shock response can be employed under stress to induce the expression of heat shock proteins (HSP), many of which are molecular chaperones, that help prevent or reverse protein misfolding and provide an environment for proper folding. Protein folding is already challenging due to the crowded intracellular space where aberrant interactions can arise; it becomes more difficult when environmental stressors can denature proteins and cause even more non-native folding to occur. If the work by molecular chaperones is not enough to prevent incorrect folding, the protein may be degraded by the proteasome or autophagy to remove any potentially toxic aggregates. Misfolded proteins, if left unchecked, can lead to aggregation that prevents the protein from moving into its proper conformation and eventually leads to plaque formation, which may be seen in various diseases. Heat shock proteins induced by the HSR can help prevent protein aggregation that is associated with common neurodegenerative diseases such as Alzheimer's, Huntington's, or Parkinson's disease. Induction of the heat shock response With the introduction of environmental stressors, the cell must be able to maintain proteostasis. Acute or chronic subjection to these harmful conditions elicits a cytoprotective response to promote stability to the proteome. HSPs (e.g. HSP70, HSP90, HSP60, etc.) are present under
https://en.wikipedia.org/wiki/River%20Raid
River Raid is a vertically scrolling shooter designed and programmed by Carol Shaw and published by Activision in 1982 for the Atari 2600 video game console. Over a million game cartridges were sold. Activision later ported the title to the Atari 5200, ColecoVision, and Intellivision consoles, as well as to the Commodore 64, IBM PCjr, MSX, ZX Spectrum, and Atari 8-bit family. Shaw did the Atari 8-bit and Atari 5200 ports herself. Activision published a less successful sequel in 1988 without Shaw's involvement. Gameplay Viewed from a top-down perspective, the player flies a fighter jet over the River of No Return in a raid behind enemy lines. The player's jet can only move left and right—it cannot maneuver up and down the screen—but it can accelerate and decelerate. The player's jet crashes if it collides with the riverbank or an enemy craft, or if the jet runs out of fuel. Assuming fuel can be replenished, and if the player evades damage, gameplay is essentially unlimited. The player scores points for shooting enemy tankers (30 points), helicopters (60 points), fuel depots (80 points), jets (100 points), and bridges (500 points). The jet refuels when it flies over a fuel depot. A bridge marks the end of a game level. Non-Atari 2600 ports of the game add hot air balloons that are worth 60 points when shot, as well as tanks along the sides of the river that shoot at the player's jet. Destroyed bridges also serve as the game's checkpoints: if the player crashes, they will start their next jet at the last destroyed bridge. Development For its time, River Raid provided an inordinate amount of non-random, repeating terrain despite constrictive computer memory limits. For the Atari 2600, the game, with its program code and graphics, had to fit into a 4 KB ROM. The game program does not actually store the sequence of terrain and other objects. Instead, a procedural generation algorithm employing a linear-feedback shift register with a hard-coded starting value
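As a hedged illustration of the technique named above, the Python sketch below implements a generic 16-bit linear-feedback shift register with a fixed seed. It is not Carol Shaw's actual routine: the taps, the seed, and the use of the low bits as "river widths" are assumptions, but it shows how a hard-coded starting value makes the same pseudo-random terrain reproducible on the fly instead of being stored in ROM.

# Minimal sketch of an LFSR with a hard-coded seed (illustrative, not River Raid's code).

def lfsr16(seed=0xACE1, taps=(0, 2, 3, 5)):
    """Generic 16-bit Fibonacci LFSR; taps and seed are illustrative."""
    state = seed
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = (state >> 1) | (bit << 15)
        yield state

gen = lfsr16()
# The same seed always yields the same stream, so the terrain sequence can be
# regenerated deterministically each time the level scrolls past.
print([next(gen) & 0x7 for _ in range(10)])  # e.g. interpret low bits as segment widths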
https://en.wikipedia.org/wiki/Systems%20integrator
A systems integrator (or system integrator) is a person or company that specializes in bringing together component subsystems into a whole and ensuring that those subsystems function together, a practice known as system integration. They also solve problems of automation. Systems integrators may work in many fields, but the term is generally used in the information technology (IT) field, in areas such as computer networking, the defense industry, the mass media, enterprise application integration, business process management or manual computer programming. Data quality issues are an important part of the work of systems integrators. Required skills A system integration engineer needs a broad range of skills and is likely to be defined by a breadth of knowledge rather than a depth of knowledge. These skills are likely to include software, systems and enterprise architecture, software and hardware engineering, interface protocols, and general problem-solving skills. It is likely that the problems to be solved have not been solved before except in the broadest sense. They are likely to include new and challenging problems with an input from a broad range of engineers where the system integration engineer "pulls it all together." Performance technology integration Systems integrators generally have to be good at matching clients' needs with existing products. An inductive reasoning aptitude is useful for quickly understanding how to operate a system or a GUI. A systems integrator will tend to benefit from being a generalist, knowing a lot about a large number of products. Systems integration includes a substantial amount of diagnostic and troubleshooting work. The ability to research existing products and software components is also helpful. Creation of these information systems may include designing or building customized prototypes or concepts. In the defense industry In the defense industry, the job of 'Systems Integration' engineer is growing in importance as defense sys
https://en.wikipedia.org/wiki/M%C3%BCllerian%20mimicry
Müllerian mimicry is a natural phenomenon in which two or more well-defended species, often foul-tasting and sharing common predators, have come to mimic each other's honest warning signals, to their mutual benefit. The benefit to Müllerian mimics is that predators only need one unpleasant encounter with one member of a set of Müllerian mimics, and thereafter avoid all similar coloration, whether or not it belongs to the same species as the initial encounter. It is named after the German naturalist Fritz Müller, who first proposed the concept in 1878, supporting his theory with the first mathematical model of frequency-dependent selection, one of the first such models anywhere in biology. Müllerian mimicry was first identified in tropical butterflies that shared colourful wing patterns, but it is found in many groups of insects such as bumblebees, and other animals such as poison frogs and coral snakes. The mimicry need not be visual; for example, many snakes share auditory warning signals. Similarly, the defences involved are not limited to toxicity; anything that tends to deter predators, such as foul taste, sharp spines, or defensive behaviour can make a species unprofitable enough to predators to allow Müllerian mimicry to develop. Once a pair of Müllerian mimics has formed, other mimics may join them by advergent evolution (one species changing to conform to the appearance of the pair, rather than mutual convergence), forming mimicry rings. Large rings are found for example in velvet ants. Since the frequency of mimics is positively correlated with survivability, rarer mimics are likely to adapt to resemble commoner models, favouring both advergence and larger Müllerian mimicry rings. Where mimics are not strongly protected by venom or other defences, honest Müllerian mimicry becomes, by degrees, the better-known bluffing of Batesian mimicry. History Origins Müllerian mimicry was proposed by the German zoologist and naturalist Fritz Müller (1821–1897). An
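The article mentions Müller's mathematical model without reproducing it. The Python sketch below is a common textbook reconstruction of the argument, not a quotation of Müller or of the text above: if predators must attack roughly n warning-patterned individuals per season while learning to avoid a pattern, then two species sharing one pattern split that loss in proportion to their abundances, so the rarer species gains proportionally more. The numbers are illustrative.

# Hedged sketch of a standard reconstruction of Müller's argument (not from the text).

def per_capita_loss(n, a1, a2=0):
    """Fraction of species 1 lost to predator education.
    Alone: n / a1.  Sharing a pattern with a2 co-mimics: (n * a1 / (a1 + a2)) / a1."""
    killed_from_1 = n if a2 == 0 else n * a1 / (a1 + a2)
    return killed_from_1 / a1

n, a1, a2 = 1200, 2000, 10000
print(per_capita_loss(n, a1))       # 0.6  (rare species bearing the cost alone)
print(per_capita_loss(n, a1, a2))   # 0.1  (sharing the pattern with a commoner species)

Under these illustrative numbers the rarer species cuts its per-capita loss sixfold by resembling the commoner one, which is consistent with the observation above that rarer mimics tend to adapt toward commoner models.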
https://en.wikipedia.org/wiki/Ethnobiology
Ethnobiology is the scientific study of the way living things are treated or used by different human cultures. It studies the dynamic relationships between people, biota, and environments, from the distant past to the immediate present. "People-biota-environment" interactions around the world are documented and studied through time, across cultures, and across disciplines in a search for valid, reliable answers to two 'defining' questions: "How and in what ways do human societies use nature, and how and in what ways do human societies view nature?" History Beginnings (15th century–19th century) Biologists have been interested in local biological knowledge since the time Europeans started colonising the world, from the 15th century onwards. Paul Sillitoe wrote that: Local biological knowledge, collected and sampled over these early centuries significantly informed the early development of modern biology: during the 17th century Georg Eberhard Rumphius benefited from local biological knowledge in producing his catalogue, "Herbarium Amboinense", covering more than 1,200 species of the plants in Indonesia; during the 18th century, Carl Linnaeus relied upon Rumphius's work, and also corresponded with other people all around the world when developing the biological classification scheme that now underlies the arrangement of much of the accumulated knowledge of the biological sciences. during the 19th century, Charles Darwin, the 'father' of evolutionary theory, on his Voyage of the Beagle took interest in the local biological knowledge of peoples he encountered. Phase I (1900s–1940s) Ethnobiology itself, as a distinctive practice, only emerged during the 20th century as part of the records then being made about other peoples, and other cultures. As a practice, it was nearly always ancillary to other pursuits when documenting others' languages, folklore, and natural resource use. Roy Ellen commented that: This 'first phase' in the development of ethnobiology as a
https://en.wikipedia.org/wiki/Folk%20biology
Folk biology (or folkbiology) is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into obvious species-like groups. The relationship between a folk taxonomy and a scientific classification can assist in understanding how evolutionary theory deals with the apparent constancy of "common species" and the organic processes centering on them. From the vantage of evolutionary psychology, such natural systems are arguably routine "habits of mind", a sort of heuristic used to make sense of the natural world. References External links Scott Atran (1999) Folk Biology (PDF), in Robert Wilson and Frank Keil, Ed. The MIT Encyclopedia of the Cognitive Sciences, pages 316-317. MIT Press. Branches of biology Ethnobiology Evolutionary psychology Scientific folklore
https://en.wikipedia.org/wiki/Microsoft%20Cluster%20Server
Microsoft Cluster Server (MSCS) is a computer program that allows server computers to work together as a computer cluster, to provide failover and increased availability of applications, or parallel computing power in the case of high-performance computing (HPC) clusters (as in supercomputing). Microsoft has three technologies for clustering: Microsoft Cluster Service (MSCS, an HA clustering service), Component Load Balancing (CLB) (part of Application Center 2000), and Network Load Balancing Services (NLB). With the release of Windows Server 2008, the MSCS service was renamed to Windows Server Failover Clustering (WSFC), and the Component Load Balancing (CLB) feature became deprecated. Prior to Windows Server 2008, clustering required (per Microsoft KBs) that all nodes in the cluster be as identical as possible, from hardware, drivers, and firmware all the way to software. After Windows Server 2008, however, Microsoft modified the requirements to state that only the operating system needs to be at the same level (such as patch level). Background Cluster Server was codenamed "Wolfpack" during its development. Windows NT Server 4.0, Enterprise Edition was the first version of Windows to include the MSCS software. The software has since been updated with each new server release. The cluster software evaluates the resources of servers in the cluster and chooses which are used based on criteria set in the administration module. In June 2006, Microsoft released Windows Compute Cluster Server 2003, the first high-performance computing (HPC) cluster technology offering from Microsoft. History During Microsoft's first attempt at developing a cluster server, originally priced at $10,000, the company ran into problems: buggy software caused fail-overs that forced the workload from two servers onto a single server. This resulted in poor allocation of resources, poor performance of the servers, and very poor reviews from analysts. The announc