https://en.wikipedia.org/wiki/Serotype
A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms, viruses, or cells are classified together based on their surface antigens, allowing the epidemiologic classification of organisms to the subspecies level. A group of serovars with common antigens is called a serogroup or sometimes serocomplex. Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae, the species of bacteria that causes cholera, has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139. Serotypes were discovered by the American microbiologist Rebecca Lancefield in 1933. Role in organ transplantation The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex. Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination. Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection. In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match. Human leukocyte antigens Serotyping of Salmonella The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella. To date, more
https://en.wikipedia.org/wiki/Formula%20Student
Formula Student is a student engineering competition held annually all over the world. Student teams from around the world design, build, test, and race a small-scale formula style racing car. The cars are judged on a number of criteria as listed below. It is run by the Institution of Mechanical Engineers and uses the same rules as the original Formula SAE with supplementary regulations. Ambassadors of Formula Student include David Brabham, Paddy Lowe, Willem Toet, Leena Gade, Dallas Campbell, Mike Gascoyne, and James Allison. Formula Student partnered with Racing Pride in 2019 to support greater inclusivity across the British motorsport industry for LGBT+ fans, employees and drivers. Class definitions There are two entry classes in Formula Student, designed to allow progressive learning. Formula Student Class (formerly Class 1) This is the main event, where teams compete with the cars they have designed and built. Teams are judged across 6 categories and must pass a rigorous inspection by judges before being allowed to compete for the dynamic events. There are usually 100-120 teams in this class. Concept Class (formerly Class 2) This is a concept class for teams who only have a project and plan for a Class 1 car. It can include any parts or work that has been completed in the project so far but this is not necessary. Teams are judged on business presentation, cost and design. Schools can enter both FS Class and Concept Class cars, allowing Concept Class to be used for inexperienced students to practise their development in advance of a full Formula Student Class entry. Class 1A (pre-2012) This was an alternative fueled class with the emphasis placed upon the environmental impact of racing. A car from the previous year's Class 1 entry could be re-entered and re-engineered allowing the students to concentrate on the low carbon aspect of the competition without having to redesign a new chassis and ancillaries. Cars in Class 1A were judged in the same events alo
https://en.wikipedia.org/wiki/Wireless%20Transport%20Layer%20Security
Wireless Transport Layer Security (WTLS) is a security protocol, part of the Wireless Application Protocol (WAP) stack. It sits between the WTP and WDP layers in the WAP communications stack. Overview WTLS is derived from TLS. WTLS uses similar semantics adapted for a low bandwidth mobile device. The main changes are: Compressed data structures — Where possible packet sizes are reduced by using bit-fields, discarding redundancy and truncating some cryptographic elements. New certificate format — WTLS defines a compressed certificate format. This broadly follows the X.509 v3 certificate structure, but uses smaller data structures. Packet based design — TLS is designed for use over a data stream. WTLS adapts that design to be more appropriate on a packet based network. A significant amount of the design is based on a requirement that it be possible to use a packet network such as SMS as a data transport. WTLS has been superseded in the WAP Wireless Application Protocol 2.0 standard by the End-to-end Transport Layer Security Specification. Security WTLS uses cryptographic algorithms and in common with TLS allows negotiation of cryptographic suites between client and server. Algorithms An incomplete list: Key Exchange and Signature RSA Elliptic Curve Cryptography (ECC) Symmetric Encryption DES Triple DES RC5 Message Digest MD5 SHA1 Security criticisms Encryption/Decryption at the gateway — in the WAP architecture the content is typically stored on the server as uncompressed WML (an XML DTD). That content is retrieved by the gateway using HTTP and compressed into WBXML, in order to perform that compression the gateway must be able to handle the WML in cleartext, so even if there is encryption between the client and the gateway (using WTLS) and between the gateway and the originating server (using HTTPS) the gateway acts as a man-in-the-middle. This gateway architecture serves a number of purposes: transcoding between HTML and WML; content provide
https://en.wikipedia.org/wiki/Keystone%20Kapers
Keystone Kapers is a platform game developed by Garry Kitchen for Activision and published for the Atari 2600 in April 1983. It was ported to the Atari 5200, Atari 8-bit family, ColecoVision, and in 1984, MSX. Inspired by Mack Sennett's slapstick Keystone Cops series of silent films, the object of the game is for Officer Keystone Kelly to catch Harry Hooligan before he can escape from a department store. Gameplay The game takes place in side-view of a three-story department store and its roof. The store is eight times wider than the screen; reaching the edge of the screen flips to the next section of the store. A mini-map at the bottom of the screen provides an overall view of the store and characters. Officer Kelly begins the game in the lower right on the first floor. The joystick moves Kelly left and right. Vertical movement between floors is accomplished by escalators at the ends of the map and a central elevator. Harry Hooligan starts the game in the center of the second floor. He immediately begins running to the right to reach the elevator to the third floor. Hooligan continues moving up the floor. If he succeeds, then he escapes. This trip takes 50 seconds, and a timer at the top of the screen counts down the remaining time. Kelly runs significantly faster than Hooligan and can normally catch him in that time in a straight run with no penalties. Kelly can use the elevator to get ahead of Hooligan, causing Hooligan to reverse direction and start back down through the store. He can jump down between levels at either end of the map, something Kelly cannot do because the escalators only go up for Kelly. Kelly has to use the elevator carefully, or risk being stuck on a higher floor while the timer runs out. Slowing Kelly's progress are obstacles like radios, beach balls, and shopping carts, which can be jumped over or ducked under by pushing up or down on the joystick. Hitting any of these objects causes a nine-second penalty. In later levels, flying toy bi
https://en.wikipedia.org/wiki/Doujin%20soft
Doujin soft is software created by Japanese hobbyists or hobbyist groups (referred to as "circles"), more for fun than for profit. The term includes digital doujin games, which are essentially the Japanese equivalent of independent video games or fangames (the term "doujin game" also includes things like doujin-made board games and card games). Doujin soft is considered part of doujin katsudou, of which it accounts for about 5% of all doujin works altogether (as of 2015). Doujin soft began with microcomputers in Japan, and spread to platforms such as the MSX and X68000. Since the 1990s, however, it has primarily been made for Microsoft Windows. Most doujin soft sales occur at doujin conventions such as Comiket, with several that deal with doujin soft or doujin games exclusively, such as Freedom Game (which further only allows games distributed for free) and Digital Games Expo. There is also a growing number of specialized internet sites that sell doujin soft. Additionally, more doujin games have been sold as downloads on consoles and PC stores such as Steam in recent years, through publishers such as Mediascape picking them up. Digital doujin games Doujin video games, like doujin soft, began with microcomputers in Japan, such as the PC-98 and PC-88, and spread to platforms such as the MSX, FM Towns and X68000. From the 1990s to the 2000s, however, they were primarily exclusive to Microsoft Windows. In recent years, more doujin games have been released on mobile platforms and home consoles, as well as other operating systems like macOS and Linux. Though doujin games used to primarily be for home computers, more doujin games have been made available on gaming consoles in recent years. There are also doujin groups that develop software for retro consoles such as the Game Boy and Game Gear. Like fangames, doujin games frequently use characters from existing games, anime, or manga ("niji sousaku"). These unauthorized uses of characters are generally ignored and accepted by the copyright holders, an
https://en.wikipedia.org/wiki/Robustness%20principle
In computing, the robustness principle is a design guideline for software that states: "be conservative in what you do, be liberal in what you accept from others". It is often reworded as: "be conservative in what you send, be liberal in what you accept". The principle is also known as Postel's law, after Jon Postel, who used the wording in an early specification of TCP. In other words, programs that send messages to other machines (or to other programs on the same machine) should conform completely to the specifications, but programs that receive messages should accept non-conformant input as long as the meaning is clear. Among programmers, to produce compatible functions, the principle is also known in the form be contravariant in the input type and covariant in the output type. Interpretation RFC 1122 (1989) expanded on Postel's principle by recommending that programmers "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect". Protocols should allow for the addition of new codes for existing fields in future versions of protocols by accepting messages with unknown codes (possibly logging them). Programmers should avoid sending messages with "legal but obscure protocol features" that might expose deficiencies in receivers, and design their code "not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility". Criticism In 2001, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol. For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often
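To make the principle concrete, here is a minimal, hypothetical sketch (the line-based format and all names are invented for illustration, not taken from any specification) of a Postel-style sender/receiver pair in Python:

```python
# Sketch of Postel's law for a toy "key: value" line protocol: the sender
# emits one strict canonical form; the receiver tolerates benign
# deviations as long as the meaning is clear.

def send_fields(fields):
    # Conservative in what you send: sorted keys, exactly "key: value",
    # one field per line, nothing else.
    return "".join("%s: %s\n" % (k, v) for k, v in sorted(fields.items()))

def receive_fields(raw):
    # Liberal in what you accept: tolerate blank lines, stray whitespace
    # and mixed-case keys, and keep unknown keys rather than rejecting
    # them, so future protocol versions can add new codes (cf. RFC 1122).
    fields = {}
    for line in raw.splitlines():
        if not line.strip():
            continue                  # ignore padding the spec never allowed
        key, sep, value = line.partition(":")
        if not sep:
            continue                  # unparseable line: skip, don't fail hard
        fields[key.strip().lower()] = value.strip()
    return fields

print(send_fields({"host": "example.org"}))            # 'host: example.org\n'
print(receive_fields("Host :  example.org \n\nX-New: kept\n"))
# {'host': 'example.org', 'x-new': 'kept'}
```

As Rose's criticism above notes, exactly this kind of tolerance can let a defective sender go unnoticed until it meets a stricter receiver.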
https://en.wikipedia.org/wiki/Time%20reversibility
A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed. A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process. Mathematics In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation $U_{-t} = \pi \, U_{t} \, \pi$. Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π. Physics In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. $\pi : (\mathbf{q}, \mathbf{p}) \mapsto (\mathbf{q}, -\mathbf{p})$ (T-symmetry). In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry. Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws that underlie the thermodynamic processes are all time-reversible (classical laws of motion and laws of electrodynamics), which means that
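As a purely illustrative numerical check of classical time reversibility (all parameter values are arbitrary), the following Python sketch evolves a frictionless harmonic oscillator forward in time, applies the involution π by negating the momentum, evolves forward again, and recovers the initial state:

```python
# Time reversibility demo: run the dynamics forward, flip the momentum
# with pi, run the *same* forward dynamics again, and arrive back at the
# starting state (up to floating-point error).
def step(q, p, dt, k=1.0, m=1.0, n=1000):
    for _ in range(n):                # leapfrog integrator, itself time-symmetric
        p -= 0.5 * dt * k * q
        q += dt * p / m
        p -= 0.5 * dt * k * q
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = step(q0, p0, dt=0.001)       # evolve forward in time
q2, p2 = step(q1, -p1, dt=0.001)      # apply pi (negate momentum), evolve again
print(q2, -p2)                        # ~ (1.0, 0.0): the initial state returns
```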
https://en.wikipedia.org/wiki/Puzzle%20jewelry
A piece of puzzle jewelry is a puzzle which can be worn by a person as jewelry. These puzzles can be both fully mechanically functional and aesthetically pleasing as pieces of wearable jewelry. Examples of available puzzle jewelry The following list implies that a small version of the cited puzzle is available with suitable design and finish to be worn as jewelry. Puzzle ring, often made in Turkey having four interconnected rings See also Puzzle box References Types of jewellery Mechanical puzzles
https://en.wikipedia.org/wiki/Puzzle%20ring
A puzzle ring is a jewelry ring made up of multiple interconnected bands, which is a type of mechanical puzzle most likely developed as an elaboration of the European gimmal ring. The puzzle ring is also sometimes called a "Turkish wedding ring" or "harem ring." According to popular legend, the ring would be given by the husband as a wedding ring, because if the wife removed it (presumably to commit adultery), the bands of the ring would fall apart, and she would be unable to reassemble it before its absence would be noticed. However, a puzzle ring can be easily removed without the bands falling apart. In Sweden, Norway and Finland, puzzle rings are often carried by military veterans (in Norway the rings are often called the "Lebanon ring" after military people have served in the United Nations Interim Forces in Lebanon UNIFIL), where the number of rings correspond to the number of tours made, starting at 4 rings for 1 tour (mostly for Sweden, not usually for Norwegian veterans). In Finland you can use the ring if you have served more than 6 months. References Mechanical puzzles Rings (jewellery)
https://en.wikipedia.org/wiki/Axial%20ratio
Axial ratio, for any structure or shape with two or more axes, is the ratio of the length (or magnitude) of those axes to each other - the longer axis divided by the shorter. In chemistry or materials science, the axial ratio (symbol P) is used to describe rigid rod-like molecules. It is defined as the length of the rod divided by the rod diameter. In physics, the axial ratio describes electromagnetic radiation with elliptical, or circular, polarization. The axial ratio is the ratio of the magnitudes of the major and minor axis defined by the electric field vector. See also Aspect ratio Degree of polarization Ratios Polymer physics
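A short illustrative computation (the field magnitudes are assumed for the example, not from the article) of the polarization axial ratio, which antenna engineers often quote in decibels:

```python
# Axial ratio of an elliptically polarized wave from the electric-field
# magnitudes along the major and minor axes (values assumed).
import math

e_major, e_minor = 2.0, 0.5     # |E| along major and minor axes
ar = e_major / e_minor          # longer axis divided by the shorter
ar_db = 20 * math.log10(ar)     # conventional dB form for field ratios
print(ar, ar_db)                # 4.0, ~12.04 dB
```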
https://en.wikipedia.org/wiki/Hyperstructure
Hyperstructures are algebraic structures equipped with at least one multi-valued operation, called a hyperoperation. The largest classes of the hyperstructures are the ones called $H_{v}$-structures. A hyperoperation $\star$ on a nonempty set $H$ is a mapping from $H \times H$ to the nonempty power set $P^{*}(H)$, meaning the set of all nonempty subsets of $H$, i.e. $\star : H \times H \to P^{*}(H)$. For $A, B \subseteq H$ we define $A \star B = \bigcup_{a \in A,\, b \in B} a \star b$, $A \star x = A \star \{x\}$ and $x \star B = \{x\} \star B$. $(H, \star)$ is a semihypergroup if $\star$ is an associative hyperoperation, i.e. $x \star (y \star z) = (x \star y) \star z$ for all $x, y, z \in H$. Furthermore, a hypergroup is a semihypergroup $(H, \star)$ where the reproduction axiom is valid, i.e. $a \star H = H \star a = H$ for all $a \in H$. References AHA (Algebraic Hyperstructures & Applications). A scientific group at Democritus University of Thrace, School of Education, Greece. aha.eled.duth.gr Applications of Hyperstructure Theory, Piergiulio Corsini, Violeta Leoreanu, Springer, 2003. Functional Equations on Hypergroups, László Székelyhidi, World Scientific Publishing, 2012. Abstract algebra
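As a sanity check of these axioms, the following Python sketch verifies a standard toy example (chosen here for illustration, not drawn from the references above): on any nonempty set H, the hyperoperation x ⋆ y = {x, y} is associative and satisfies the reproduction axiom, so (H, ⋆) is a hypergroup.

```python
# Brute-force check of the semihypergroup and hypergroup axioms for the
# hyperoperation x*y = {x, y} on a small set H.
from itertools import product

H = {0, 1, 2}

def star(x, y):                      # hyperoperation: H x H -> P*(H)
    return {x, y}

def set_star(A, B):                  # extend * to subsets: A*B = union of a*b
    return set().union(*(star(a, b) for a in A for b in B))

assoc = all(set_star(star(x, y), {z}) == set_star({x}, star(y, z))
            for x, y, z in product(H, repeat=3))
repro = all(set_star({a}, H) == H == set_star(H, {a}) for a in H)
print(assoc, repro)                  # True True
```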
https://en.wikipedia.org/wiki/Friis%20formulas%20for%20noise
Friis formula or Friis's formula (sometimes Friis' formula), named after Danish-American electrical engineer Harald T. Friis, is either of two formulas used in telecommunications engineering to calculate the signal-to-noise ratio of a multistage amplifier. One relates to noise factor while the other relates to noise temperature. The Friis formula for noise factor Friis's formula is used to calculate the total noise factor of a cascade of stages, each with its own noise factor and power gain (assuming that the impedances are matched at each stage). The total noise factor can then be used to calculate the total noise figure. The total noise factor is given as $F_{\text{total}} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots + \frac{F_n - 1}{G_1 G_2 \cdots G_{n-1}}$, where $F_i$ and $G_i$ are the noise factor and available power gain, respectively, of the i-th stage, and n is the number of stages. Both magnitudes are expressed as ratios, not in decibels. Consequences An important consequence of this formula is that the overall noise figure of a radio receiver is primarily established by the noise figure of its first amplifying stage. Subsequent stages have a diminishing effect on signal-to-noise ratio. For this reason, the first stage amplifier in a receiver is often called the low-noise amplifier (LNA). The overall receiver noise factor is then $F_{\text{receiver}} = F_{\text{LNA}} + \frac{F_{\text{rest}} - 1}{G_{\text{LNA}}}$, where $F_{\text{rest}}$ is the overall noise factor of the subsequent stages. According to the equation, the overall noise factor, $F_{\text{receiver}}$, is dominated by the noise factor of the LNA, $F_{\text{LNA}}$, if the gain $G_{\text{LNA}}$ is sufficiently high. The resultant noise figure expressed in dB is $\mathrm{NF}_{\text{receiver}} = 10 \log_{10} F_{\text{receiver}}$. Derivation For a derivation of Friis' formula for the case of three cascaded amplifiers ($n = 3$) consider the image below. A source outputs a signal of power $S_i$ and noise of power $N_i$. Therefore the SNR at the input of the receiver chain is $\mathrm{SNR}_i = S_i / N_i$. The signal of power $S_i$ gets amplified by all three amplifiers. Thus the signal power at the output of the third amplifier is $S_o = S_i \, G_1 G_2 G_3$. The noise power at the output of the amplifier chain consists of four parts: The amplified noise of the source ($N_i \, G_1 G_2 G_3$) The output referred noise o
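The following Python sketch (all stage values are assumed for illustration) applies the cascade formula and shows the LNA-dominance effect numerically:

```python
# Total noise factor of a cascade via Friis's formula,
# F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
import math

def db_to_ratio(db):
    return 10 ** (db / 10.0)          # dB -> linear power ratio

def friis_noise_factor(stages):
    # stages: list of (noise figure in dB, available gain in dB), in order
    f_total, g_running = 0.0, 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_ratio(nf_db)
        f_total += f if i == 0 else (f - 1.0) / g_running
        g_running *= db_to_ratio(gain_db)
    return f_total

# LNA first (1 dB NF, 20 dB gain), then a lossy mixer, then an IF amplifier.
f = friis_noise_factor([(1.0, 20.0), (6.0, -3.0), (4.0, 30.0)])
print(f, 10 * math.log10(f))          # ~1.32 -> overall NF ~1.2 dB, set by the LNA
```

With the assumed 20 dB LNA gain, the noisy later stages add only about 0.2 dB to the receiver's noise figure, illustrating why the first stage dominates.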
https://en.wikipedia.org/wiki/Sum-free%20sequence
In mathematics, a sum-free sequence is an increasing sequence of positive integers, such that no term can be represented as a sum of any subset of the preceding elements of the sequence. This differs from a sum-free set, where only pairs of sums must be avoided, but where those sums may come from the whole set rather than just the preceding terms. Example The powers of two, 1, 2, 4, 8, 16, ... form a sum-free sequence: each term in the sequence is one more than the sum of all preceding terms, and so cannot be represented as a sum of preceding terms. Sums of reciprocals A set of integers is said to be small if the sum of its reciprocals converges to a finite value. For instance, by the prime number theorem, the prime numbers are not small. Erdős proved that every sum-free sequence is small, and asked how large the sum of reciprocals could be. For instance, the sum of the reciprocals of the powers of two (a geometric series) is two. If $R$ denotes the largest possible sum of reciprocals of a sum-free sequence, then subsequent research has established increasingly tight upper and lower bounds on $R$. Density It follows from the fact that sum-free sequences are small that they have zero Schnirelmann density; that is, if $A(x)$ is defined to be the number of sequence elements that are less than or equal to $x$, then $A(x) = o(x)$. Erdős showed that for every sum-free sequence there exists an unbounded sequence of numbers $x_i$ for which $A(x_i) = O(x_i^{\varphi - 1})$, where $\varphi$ is the golden ratio, and he exhibited a sum-free sequence for which, for all values of $x$, $A(x) = \Omega(x^{2/7})$, subsequently improved to $\Omega(x^{\sqrt{2} - 1})$ by Deshouillers, Erdős and Melfi in 1999 and to $\Omega(x^{1/2 - \varepsilon})$ by Łuczak and Schoen in 2000, who also proved that the exponent 1/2 cannot be further improved. Notes References Additive combinatorics Integer sequences
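The defining property is easy to test mechanically; this small Python checker (illustrative, not from the article) tracks all subset sums of the preceding terms:

```python
# Verify that no term of a sequence is a sum of any subset of the
# preceding terms, by maintaining the set of reachable subset sums.
def is_sum_free_sequence(seq):
    reachable = {0}                     # sums of subsets of preceding terms
    for term in seq:
        if term in reachable:
            return False                # term equals a sum of earlier elements
        reachable |= {s + term for s in reachable}
    return True

print(is_sum_free_sequence([1, 2, 4, 8, 16]))   # True: powers of two
print(is_sum_free_sequence([1, 2, 3]))          # False: 3 = 1 + 2
```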
https://en.wikipedia.org/wiki/Relocation%20%28computing%29
Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses. Prior to the advent of multiprocess systems, and still in many embedded systems, the addresses for objects were absolute starting at a known location, often zero. Since multiprocessing systems dynamically link and switch between programs it became necessary to be able to relocate objects using position-independent code. A linker usually performs relocation in conjunction with symbol resolution, the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program. Relocation is typically done by the linker at link time, but it can also be done at load time by a relocating loader, or at run time by the running program itself. Some architectures avoid relocation entirely by deferring address assignment to run time; as, for example, in stack machines with zero address arithmetic or in some segmented architectures where every compilation unit is loaded into a separate segment. Segmentation Object files are segmented into various memory segment types. Example segments include code segment (.text), initialized data segment (.data), uninitialized data segment (.bss), or others. Relocation table The relocation table is a list of pointers created by the translator (a compiler or assembler) and stored in the object or executable file. Each entry in the table, or "fixup", is a pointer to an absolute address in the object code that must be changed when the loader relocates the program so that it will refer to the correct location. Fixups are designed to support relocation of the program as a complete unit. In some cases, each fixup in the table is itself relative to a base address of zero, so the fixups themselves must be changed as the loader moves through the table. In some architectures a fixup that crosses ce
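As a simplified illustration of applying fixups (the 12-byte "image" and its layout are invented here, not a real object-file format), a loader rebasing a program adds the load delta to every absolute address named in the relocation table:

```python
# A loader rebases an image by patching each absolute address listed in
# the relocation table ("fixups") with the link-base/load-base delta.
import struct

link_base, load_base = 0x1000, 0x5000
delta = load_base - link_base

# Toy image holding two absolute 32-bit little-endian addresses
# at byte offsets 0 and 8 (offsets 4..7 are padding).
image = bytearray(struct.pack("<IxxxxI", 0x1010, 0x1020))
fixups = [0, 8]                        # relocation table: offsets to patch

for off in fixups:
    (addr,) = struct.unpack_from("<I", image, off)
    struct.pack_into("<I", image, off, addr + delta)

print([hex(struct.unpack_from("<I", image, o)[0]) for o in fixups])
# ['0x5010', '0x5020']
```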
https://en.wikipedia.org/wiki/Shuttle%20%28weaving%29
A shuttle is a tool designed to neatly and compactly store a holder that carries the thread of the weft yarn while weaving with a loom. Shuttles are thrown or passed back and forth through the shed, between the yarn threads of the warp in order to weave in the weft. The simplest shuttles, known as "stick shuttles", are made from a flat, narrow piece of wood with notches on the ends to hold the weft yarn. More complicated shuttles incorporate bobbins or pirns. In the United States, shuttles are often made of wood from the flowering dogwood, because it is hard, resists splintering, and can be polished to a very smooth finish. In the United Kingdom shuttles were usually made of boxwood, cornel, or persimmon. Flying shuttle Shuttles were originally passed back and forth by hand. However, John Kay invented a loom in 1733 that incorporated a flying shuttle. This shuttle could be thrown through the warp, which allowed much wider cloth to be woven much more quickly and made the development of machine looms much simpler. Though air-jet and water-jet looms are common in large operations, many companies still use flying shuttle looms. This is due in large part to their being easier to maintain than the more modern looms. In modern flying shuttle looms, the shuttle itself is made of rounded steel, with a hook in the back which carries the filler, or "pick." Health issues The act of "kissing the shuttle", in which weavers used their mouths to pull thread through the eye of a shuttle when the pirn was replaced, contributed to the spread of disease. Gallery References Chandler, Deborah (1995). Learning to Weave, Loveland, Colorado: Interweave Press LLC. External links Pak Shuttle Company (Pvt) Ltd. Heraldic charges Weaving equipment
https://en.wikipedia.org/wiki/DIMES
DIMES (Distributed Internet Measurements & Simulations) was a subproject of the EVERGROW Integrated Project in the EU Information Society Technologies, Future and Emerging Technologies programme. It studied the structure and topology of the Internet in order to map it and annotate the map with delay, loss and link capacity. DIMES used measurements by software agents downloaded by volunteers and installed on their privately owned machines. Once installed, the agent operates at a very low rate so as to have minimal impact on the machine's performance and on its network connection. DIMES intended to explore relationships between the data gathered on the Internet's growth and geographical and socio-economic data, in particular for fast-developing countries, to see if they can provide a measure of economic development and societal openness. The project published periodic maps at several aggregation levels on the web. Over 12,500 agents were installed by over 5,500 users residing in about 95 nations and in several hundred ASes. The project collected over 2.2 billion measurements. See also ETOMIC List of volunteer computing projects Network mapping References External links The DIMES home page DIMES agent source code Science Magazine: Data-Bots Chart the Internet (Subscription) Visualizing Internet Topology at a Macroscopic Scale - CAIDA M-PASM (Multiple Perspectives of Autonomous System Mapping) Internet architecture Volunteer computing projects
https://en.wikipedia.org/wiki/Nokia%20770%20Internet%20Tablet
The Nokia 770 Internet Tablet is a wireless Internet appliance from Nokia, originally announced at the LinuxWorld Summit in New York City on 25 May 2005. It is designed for wireless Internet browsing and email functions and includes software such as Internet radio, an RSS news reader, ebook reader, image viewer and media players for selected types of media. The device went on sale in Europe on 3 November 2005, at a suggested retail price of €349 to €369 (£245 in the United Kingdom). In the United States, the device became available for purchase through Nokia USA's web site on 14 November 2005 for $359.99. On 8 January 2007, Nokia announced the Nokia N800, the successor to the 770. In July 2007, the price for the Nokia 770 fell to under US$150 / 150 EUR / 100 GBP. Specifications Dimensions: 141×79×19 mm (5.5×3.1×0.7 in) Weight: 230 g (8.1 oz) with protective cover or 185 g (6.5 oz) without. Processor: Texas Instruments OMAP 1710 CPU running at 252 MHz. It combines the ARM architecture of the ARM926TEJ core subsystem with a Texas Instruments TMS320C55x digital signal processor. Memory: 64 MB (64 × 2^20 bytes) of DDR RAM, and 128 MB of internal flash memory, of which about 64 MB should be available to the user. Option for extended virtual memory (RS-MMC up to 1 GB (2 GB after flash upgrade)). Display and resolution: 4.1 inches, 800×480 pixels at 225 pixels per inch with up to 65,536 colors Connectivity: WLAN (IEEE 802.11b/g), Bluetooth 1.2, dial-up access, USB (both user mode and non-powered host mode) Expansion: RS-MMC (both RS-MMC and DV-RS-MMC cards are supported). Audio: speaker and a microphone The device was manufactured in Estonia and Germany. Maemo The 770, like all Nokia Internet Tablets, runs Maemo, which is similar to many handheld operating systems and provides a "Home" screen—the central point from which all applications and settings are accessed. The home screen is divided into areas for launching applications, a menu bar, and a large custo
https://en.wikipedia.org/wiki/Decoding%20methods
In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel. Notation $C \subset \mathbb{F}_2^n$ is considered a binary code with the length $n$; $x$ and $y$ shall be elements of $\mathbb{F}_2^n$; and $d(x, y)$ is the Hamming distance between those elements. Ideal observer decoding One may be given the message $x \in \mathbb{F}_2^n$; then ideal observer decoding generates the codeword $y \in C$. The process results in this solution: $y = \operatorname{arg\,max}_{y \in C} \mathbb{P}(y \text{ sent} \mid x \text{ received})$. For example, a person can choose the codeword $y$ that is most likely to be received as the message $x$ after transmission. Decoding conventions Each codeword does not have an expected possibility: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include: Request that the codeword be resent (automatic repeat-request). Choose any random codeword from the set of most likely codewords. If another code follows, mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them. Maximum likelihood decoding Given a received vector $x$, maximum likelihood decoding picks a codeword $y \in C$ that maximizes $\mathbb{P}(x \text{ received} \mid y \text{ sent})$, that is, the codeword that maximizes the probability that $x$ was received, given that $y$ was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, by Bayes' theorem, $\mathbb{P}(x \text{ received} \mid y \text{ sent}) = \mathbb{P}(y \text{ sent} \mid x \text{ received}) \cdot \mathbb{P}(x \text{ received}) / \mathbb{P}(y \text{ sent})$. Upon fixing $x$, $\mathbb{P}(x \text{ received})$ is constant, and $\mathbb{P}(y \text{ sent})$ is constant because all codewords are equally likely to be sent. Therefore, $\mathbb{P}(x \text{ received} \mid y \text{ sent})$ is maximised as a function of $y$ precisely when $\mathbb{P}(y \text{ sent} \mid x \text{ received})$ is maximised, and the claim follows. As with ideal observer decoding, a convention must be agreed to for non-unique decoding. The maximum likelihood decoding problem can also be modeled as an integer programming problem. The
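For a binary symmetric channel with crossover probability p < 1/2, maximizing $p^{d}(1-p)^{n-d}$ over codewords is the same as minimizing the Hamming distance d, so maximum likelihood decoding reduces to nearest-codeword search. A toy Python sketch (the codebook is invented for illustration):

```python
# Maximum likelihood decoding on a BSC (p < 1/2) as minimum-distance
# decoding: pick the codeword nearest the received word.
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def ml_decode(received, codebook):
    # P(x received | y sent) = p^d * (1-p)^(n-d) is largest for minimal d.
    return min(codebook, key=lambda c: hamming(received, c))

code = ["00000", "01011", "10101", "11110"]     # a small linear code
print(ml_decode("10001", code))                  # "10101": one flipped bit corrected
```

Note that min() breaks ties arbitrarily, which is exactly the non-unique-decoding situation the conventions above are meant to settle.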
https://en.wikipedia.org/wiki/6bone
The 6bone was a testbed for Internet Protocol version 6; it was an outgrowth of the IETF IPng project that created the IPv6 protocols intended to eventually replace the current Internet network layer protocols known as IPv4. The 6bone was started outside the official IETF process at the March 1996 IETF meetings, and became a worldwide informal collaborative project, with eventual oversight from the "NGtrans" (IPv6 Transition) Working Group of the IETF. The original mission of the 6bone was to establish a network to foster the development, testing, and deployment of IPv6 using a model based upon the experiences from the Mbone, hence the name "6bone". The 6bone started as a virtual network (using IPv6 over IPv4 tunneling/encapsulation) operating over the IPv4-based Internet to support IPv6 transport, and slowly added native links specifically for IPv6 transport. Although the initial 6bone focus was on testing of standards and implementations, the eventual focus became more on testing of transition and operational procedures, as well as actual IPv6 network usage. The 6bone operated under the IPv6 Testing Address Allocation, which specified the 3FFE::/16 IPv6 prefix for 6bone testing purposes. At its peak in mid-2003, over 150 6bone top level 3FFE::/16 network prefixes were routed, interconnecting over 1000 sites in more than 50 countries. When it became obvious that the availability of IPv6 top level production prefixes was assured, and that commercial and private IPv6 networks were being operated outside the 6bone using these prefixes, a plan was developed to phase out the 6bone. The phaseout plan called for a halt to new 6bone prefix allocations on 1 January 2004 and the complete cessation of 6bone operation and routing over the 6bone testing prefixes on 6 June 2006. Addresses within the 6bone testing prefix have now reverted to the IANA. Related RFCs IPv6 Testing Address Allocation 6bone (IPv6 Testing Address Allocation) Phaseout Ext
https://en.wikipedia.org/wiki/Distributed%20object
In distributed computing, distributed objects are objects (in the sense of object-oriented programming) that are distributed across different address spaces, either in different processes on the same computer, or even in multiple computers connected via a network, but which work together by sharing data and invoking methods. This often involves location transparency, where remote objects appear the same as local objects. The main method of distributed object communication is with remote method invocation, generally by message-passing: one object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object. Distributed objects were popular in the late 1990s and early 2000s, but have since fallen out of favor. The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects. Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior. Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have a distinct identity, and that can encapsulate distributed state and behavior. See also Internet protocol suite. Local vs. distributed objects Local and distributed objects differ in many respects. Here are s
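One minimal way to see remote method invocation in practice is Python's standard-library XML-RPC support; the Counter object and its method are invented for illustration, and real distributed-object systems (CORBA, Java RMI, DCOM) add naming, security and lifecycle machinery omitted here:

```python
# A remote object served over XML-RPC; the client-side proxy makes the
# remote call look like a local method invocation.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Counter:                       # the "remote object"
    def __init__(self):
        self.value = 0
    def increment(self, by):         # invoked across address spaces
        self.value += by
        return self.value            # result is marshalled back to the caller

server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_instance(Counter())
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:8000")   # local stub for the remote object
print(proxy.increment(5))            # 5 -- looks local, runs in the server
print(proxy.increment(2))            # 7 -- the state lives in the server process
server.shutdown()
```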
https://en.wikipedia.org/wiki/Local%20Management%20Interface
Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet. Frame Relay LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI. There are three standards for LMI: Using DLCI 0: ANSI's T1.617 Annex D standard ITU-T's Q.933 Annex A standard Using DLCI 1023: The "Gang of Four" standard, developed by Cisco, DEC, StrataCom and Nortel Carrier Ethernet Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices. References External links Additional information on Frame Relay LMI Computer networking Frame Relay
https://en.wikipedia.org/wiki/In-memory%20database
An in-memory database (IMDB, or main memory database system (MMDB) or memory resident database) is a database management system that primarily relies on main memory for computer data storage. It is contrasted with database management systems that employ a disk storage mechanism. In-memory databases are faster than disk-optimized databases because disk access is slower than memory access and the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk. Applications where response time is critical, such as those running telecommunications network equipment and mobile advertising networks, often use main-memory databases. IMDBs have gained much traction, especially in the data analytics space, starting in the mid-2000s – mainly due to multi-core processors that can address large memory and due to less expensive RAM. A potential technical hurdle with in-memory data storage is the volatility of RAM. Specifically in the event of a power loss, intentional or otherwise, data stored in volatile RAM is lost. With the introduction of non-volatile random-access memory technology, in-memory databases will be able to run at full speed and maintain data in the event of power failure. ACID support In its simplest form, main memory databases store data on volatile memory devices. These devices lose all stored information when the device loses power or is reset. In this case, IMDBs can be said to lack support for the "durability" portion of the ACID (atomicity, consistency, isolation, durability) properties. Volatile memory-based IMDBs can, and often do, support the other three ACID properties of atomicity, consistency and isolation. Many IMDBs have added durability via the following mechanisms: Snapshot files, or checkpoint images, which record the state of the database at a given moment in time. The system typically
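The transaction-logging idea behind such durability mechanisms can be sketched in a few lines of Python (a toy illustration, not any particular product's mechanism; the file name and record format are invented):

```python
# An in-memory store that appends every write to a log before applying it,
# and rebuilds its volatile state by replaying the log after a restart.
import json, os

class MiniIMDB:
    def __init__(self, log_path="imdb.log"):
        self.data, self.log_path = {}, log_path
        if os.path.exists(log_path):            # recover after power loss
            with open(log_path) as log:
                for line in log:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as log:   # write-ahead: log first...
            log.write(json.dumps([key, value]) + "\n")
            log.flush()
            os.fsync(log.fileno())              # force the record to stable storage
        self.data[key] = value                  # ...then update volatile memory

db = MiniIMDB()
db.put("user:1", "alice")
print(MiniIMDB().data)        # survives a "restart": {'user:1': 'alice'}
```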
https://en.wikipedia.org/wiki/Architecture%20of%20Windows%20NT
The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems. Programs and subsystems in user mode are limited in terms of what system resources they have access to, while the kernel mode has unrestricted access to the system memory and external devices. Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named Executive), which all exist in kernel mode. User mode in Windows NT is made of subsystems capable of passing I/O requests to the appropriate kernel mode device drivers by using the I/O manager. The user mode layer of Windows NT is made up of the "Environment subsystems", which run applications written for many different types of operating systems, and the "Integral subsystem", which operates system-specific functions on behalf of environment subsystems. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to. The Executive interfaces with all the user mode subsystems and deals with I/O, object management, security and process management. The kernel sits between the hardware abstraction layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, an
https://en.wikipedia.org/wiki/WeatherBug
WeatherBug is a brand based in New York City that provides location-based advertising to businesses. WeatherBug consists of a mobile app reporting live and forecast data on hyperlocal weather to consumer users. History Originally owned by Automated Weather Source, the WeatherBug brand was founded by Bob Marshall and other partners in 1993. It started in the education market by selling weather tracking stations and educational software to public and private schools and then used the data from the stations on their website. Later, the company began partnering with TV stations so that broadcasters could use WeatherBug's local data and camera shots in their weather reports. In 2000, the WeatherBug desktop application and website were launched. Later, the company launched WeatherBug and WeatherBug Elite as smartphone apps for iOS and Android, which won an APPY app design award in 2013. The company also sells a lightning tracking safety system that is used by schools and parks in southern Florida and elsewhere. The company used lightning detection sensors throughout Guinea in Africa to track storms as they develop and has more than 50 lightning detection sensors in Brazil. Earth Networks received The Award for Outstanding Services to Meteorology by a Corporation in 2014 from the American Meteorological Society for "developing innovative lightning detection data products that improve severe-storm monitoring and warnings." WeatherBug announced in 2004 it had been certified to display the TRUSTe privacy seal on its website. In 2005, Microsoft AntiSpyware flagged the application as a low-risk spyware threat. According to the company, the desktop application is not spyware because it is incapable of tracking users' overall Web use or deciphering anything on their hard drive. In early 2011, AWS Convergence Technologies, Inc. (formerly Automated Weather Source) changed its name to Earth Networks, Inc. In April 2013, WeatherBug was the second most popular weather informat
https://en.wikipedia.org/wiki/Henderson%20limit
The Henderson limit is the X-ray dose (energy per unit mass) a cryo-cooled crystal can absorb before the diffraction pattern decays to half of its original intensity. Its value is defined as 2 × 10⁷ Gy (J/kg). Decay of diffraction patterns with increasing X-ray dose Although the process is still not fully understood, diffraction patterns of crystals typically decay with X-ray exposure due to a number of processes which non-uniformly and irreversibly modify molecules that compose the crystal. These modifications induce disorder and thus decrease the intensity of Bragg diffraction. The processes behind these modifications include primary damage via the photoelectric effect, covalent modification by free radicals, oxidation (methionine residues), reduction (disulfide bonds) and decarboxylation (glutamate, aspartate residues). Practical significance Although generalizable, the limit is defined in the context of biomolecular X-ray crystallography, where a typical experiment consists of exposing a single frozen crystal of a macromolecule (generally protein, DNA or RNA) to an intense X-ray beam. The diffracted beams are then analyzed to obtain an atomically resolved model of the crystal. Such decay presents a problem for crystallographers, who require that the diffraction intensities decay as little as possible in order to maximize the signal-to-noise ratio and determine accurate atomic models that describe the crystal. References Molecular biology
https://en.wikipedia.org/wiki/Puzzle%20box
A puzzle box (also called a secret box or trick box) is a box that can be opened only by solving a puzzle. Some require only a simple move and others a series of discoveries. Modern puzzle boxes developed from furniture and jewelry boxes with secret compartments and hidden openings, known since the Renaissance. Puzzle boxes produced for entertainment first appeared in Victorian England in the 19th century and as tourist souvenirs in the Interlaken region in Switzerland and in the Hakone region of Japan at the end of the 19th and the beginning of the 20th century. Boxes with secret openings appeared as souvenirs at other tourist destinations during the early 20th century, including the Amalfi Coast, Madeira, and Sri Lanka, though these were mostly 'one-trick' traditions. Chinese cricket boxes represent another example of intricate boxes with secret openings. Interest in puzzle boxes subsided during and after the two World Wars. The art was revived in the 1980s by three pioneers of this genre: Akio Kamei in Japan, Trevor Wood in England, and Frank Chambers in Ireland. There are currently a number of artists producing puzzle boxes, including the Karakuri group in Japan set up by Akio Kamei, US puzzle box specialists Robert Yarger and Kagen Sound, as well as a number of other designers and puzzle makers who produce puzzle boxes across the globe. Clive Barker's horror novella The Hellbound Heart (later adapted into a film, Hellraiser, followed by numerous original sequels) centers on the fictional Lemarchand's box, a puzzle box which opens the gates to another dimension when manipulated. See also Mechanical puzzle References External links Mechanical puzzles
https://en.wikipedia.org/wiki/Greek%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used. The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters. In mathematical finance, the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments. Typography The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography. The OpenType font format has the feature tag "mgrk" ("Mathematical Greek") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts. The table below shows a comparison of Greek letters rendered in TeX and HTML. The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics. Concepts represented by a Greek letter Αα (alpha) repr
https://en.wikipedia.org/wiki/Indel
Indel (insertion-deletion) is a molecular biology term for an insertion or deletion of bases in the genome of an organism. Indels ≥ 50 bases in length are classified as structural variants. In coding regions of the genome, unless the length of an indel is a multiple of 3, it will produce a frameshift mutation. For example, a common microindel which results in a frameshift causes Bloom syndrome in the Jewish or Japanese population. Indels can be contrasted with a point mutation. An indel inserts or deletes nucleotides from a sequence, while a point mutation is a form of substitution that replaces one of the nucleotides without changing the overall number in the DNA. Indels can also be contrasted with Tandem Base Mutations (TBM), which may result from fundamentally different mechanisms. A TBM is defined as a substitution at adjacent nucleotides (primarily substitutions at two adjacent nucleotides, but substitutions at three adjacent nucleotides have been observed). Indels, being either insertions or deletions, can be used as genetic markers in natural populations, especially in phylogenetic studies. It has been shown that genomic regions with multiple indels can also be used for species-identification procedures. An indel change of a single base pair in the coding part of an mRNA results in a frameshift during mRNA translation that could lead to an inappropriate (premature) stop codon in a different frame. Indels that are not multiples of 3 are particularly uncommon in coding regions but relatively common in non-coding regions. There are approximately 192-280 frameshifting indels in each person. Indels are likely to represent between 16% and 25% of all sequence polymorphisms in humans. In fact, in most known genomes, including humans, indel frequency tends to be markedly lower than that of single nucleotide polymorphisms (SNP), except near highly repetitive regions, including homopolymers and microsatellites. The term "indel" has been co-opted in recent years by
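The frameshift effect is easy to demonstrate; in this illustrative Python sketch (the DNA sequence is made up, and only three codons of the standard genetic code are included), deleting a single base shifts every downstream codon and here creates a premature stop:

```python
# A one-base deletion (not a multiple of 3) shifts the reading frame.
CODONS = {"ATG": "M", "AAA": "K", "TGA": "*"}   # tiny subset of the genetic code

def translate(dna):
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return "".join(CODONS.get(c, "?") for c in codons)

original = "ATGAAAATGAAA"                 # reads ATG AAA ATG AAA -> "MKMK"
mutated = original[:3] + original[4:]     # delete one base -> frameshift
print(translate(original), translate(mutated))   # MKMK MK*  (premature stop)
```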
https://en.wikipedia.org/wiki/Lists%20of%20microcomputers
For an overview of microcomputers of different kinds, see the following lists of microcomputers: List of early microcomputers List of home computers List of home computers by video hardware Lists of computer hardware
https://en.wikipedia.org/wiki/Ettercap%20%28software%29
Ettercap is a free and open source network security tool for man-in-the-middle attacks on a LAN. It can be used for computer network protocol analysis and security auditing. It runs on various Unix-like operating systems including Linux, Mac OS X, BSD and Solaris, and on Microsoft Windows. It is capable of intercepting traffic on a network segment, capturing passwords, and conducting active eavesdropping against a number of common protocols. Its original developers later founded Hacking Team. Functionality Ettercap works by putting the network interface into promiscuous mode and by ARP poisoning the target machines. Thereby it can act as a 'man in the middle' and unleash various attacks on the victims. Ettercap has plugin support so that the features can be extended by adding new plugins. Features Ettercap supports active and passive dissection of many protocols (including ciphered ones) and provides many features for network and host analysis. Ettercap offers four modes of operation: IP-based: packets are filtered based on IP source and destination. MAC-based: packets are filtered based on MAC address, useful for sniffing connections through a gateway. ARP-based: uses ARP poisoning to sniff on a switched LAN between two hosts (full-duplex). PublicARP-based: uses ARP poisoning to sniff on a switched LAN from a victim host to all other hosts (half-duplex). In addition, the software also offers the following features: Character injection into an established connection: characters can be injected into a server (emulating commands) or to a client (emulating replies) while maintaining a live connection. SSH1 support: the sniffing of a username and password, and even the data of an SSH1 connection. Ettercap is the first software capable of sniffing an SSH connection in full duplex. HTTPS support: the sniffing of HTTP SSL secured data—even when the connection is made through a proxy. Remote traffic through a GRE tunnel: the sniffing of remote traffic through a
https://en.wikipedia.org/wiki/Phi%20Sigma%20Rho
Phi Sigma Rho (ΦΣΡ; also known as Phi Rho or PSR) is a social sorority for individuals who identify as female or non-binary in science, technology, engineering, and mathematics. The sorority was founded in 1984 at Purdue University. It has since expanded to more than 40 colleges across the United States. History Phi Sigma Rho was founded on September 24, 1984, at Purdue University by Rashmi Khanna and Abby McDonald. Khanna and McDonald were unable to participate in traditional sorority rush due to the demands of the sororities and their engineering program, so they decided to start a new sorority that would take their academic program's demands into consideration. The Alpha chapter at Purdue University was founded with ten charter members: Gail Bonney, Anita Chatterjea, Ann Cullinan, Pam Kabbes, Rashmi Khanna, Abby McDonald, Christine Mooney, Tina Kershner, Michelle Self, and Kathy Vargo. Phi Sigma Rho accepts students pursuing degrees in science, technology, engineering, and mathematics who identify as female or who identify as non-binary. The sorority made the decision to include non-binary students in all chapters in the summer of 2021. Phi Sigma Rho has grown to more than 40 chapters nationally. Its headquarters is located in Northville, Michigan. Its online magazine is The Key. Symbols The colors of Phi Sigma Rho are wine red and silver. The sorority's flower is the orchid, and its jewel is the pearl. Its mascot is Sigmand the penguin. Its motto is "together we build the future." Objectives The objectives of Phi Sigma Rho are: To foster and provide the broadening experience of sorority living with its social and moral challenges and responsibilities for the individual and the chapter. To develop the highest standard of personal integrity and character. To promote academic excellence and support personal achievement, while providing a social balance. To aid the individual in the transition from academic to the professional community. To maintain soro
https://en.wikipedia.org/wiki/Roman%20dodecahedron
A Roman dodecahedron or Gallo-Roman dodecahedron is a small hollow object made of copper alloy which has been cast into a regular dodecahedral shape: twelve flat pentagonal faces, each face having a circular hole of varying diameter in the middle, the holes connecting to the hollow center. Roman dodecahedra date from the 2nd to 4th centuries AD and their purpose remains unknown. They rarely show signs of wear, and do not have any inscribed numbers or letters. History The first dodecahedron was found in 1739. Since then, at least 116 similar objects have been found in Italy, Austria, Belgium, France, Germany, Hungary, Luxembourg, the Netherlands, Switzerland and the United Kingdom. Instances vary in size. A Roman icosahedron has also been discovered after having long been misclassified as a dodecahedron. This icosahedron was excavated near Arloff in Germany and is currently on display in the Rheinisches Landesmuseum in Bonn. Purpose No mention of dodecahedra has been found in contemporary accounts or pictures of the time. Speculative uses include as a survey instrument for estimating distances to (or sizes of) distant objects, though this is questioned as there are no markings to indicate that they would be a mathematical instrument; as spool knitting devices for making gloves, though the earliest known reference to spool knitting is from 1535; as part of a child's toy; or for decorative purposes. Contemporary craftspeople and YouTubers have successfully created knitted gloves using 3-D printed reconstructions of the dodecahedra, demonstrating their functionality for this purpose and suggesting this as the historic use case. Several dodecahedra were found in coin hoards, providing evidence that their owners either considered them valuable objects, or believed their only use was connected with coins. It has been suggested that they might have been religious artifacts, or even fortune-telling devices. This latter speculation is based on the fact that mo
https://en.wikipedia.org/wiki/Link%20budget
A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a communication medium such as radio waves, cable, waveguide, or optical fiber, to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio. Randomly varying channel gains such as fading are taken into account by adding some margin depending on the anticipated severity of its effects. The amount of margin required can be reduced by the use of mitigating techniques such as antenna diversity or MIMO (multiple-input multiple-output). A simple link budget equation looks like this: Received power (dBm) = transmitted power (dBm) + gains (dB) − losses (dB) Power levels are expressed in dBm, and power gains and losses are expressed in decibels (dB), which is a logarithmic measurement, so adding decibels is equivalent to multiplying the actual power ratios. In radio systems For a line-of-sight radio system, the primary source of loss is the decrease of the signal power as it spreads over an increasing area while it propagates, proportional to the square of the distance (geometric spreading). Transmitting antennas can be omnidirectional, directional, or sectorial, depending on the way in which the antenna power is oriented. An omnidirectional antenna will distribute the power equally in every direction of a plane, so the radiation pattern has the shape of a sphere squeezed between two parallel flat surfaces. They are widely used in many applications, for in
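Here is that equation applied end to end in Python for a hypothetical 2.4 GHz line-of-sight hop (every number below is assumed for illustration), with the geometric spreading expressed as free-space path loss:

```python
# Link budget: received power = TX power + antenna gains - losses - path loss.
import math

def fspl_db(distance_m, freq_hz):
    # free-space path loss 20*log10(4*pi*d*f/c): the geometric-spreading term
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

tx_power_dbm = 20.0                      # 100 mW transmitter (assumed)
tx_gain_db, rx_gain_db = 12.0, 12.0      # antenna gains (assumed)
cable_losses_db = 3.0                    # feedline and connector losses (assumed)
path_loss_db = fspl_db(5_000, 2.4e9)     # 5 km hop at 2.4 GHz

rx_power_dbm = (tx_power_dbm + tx_gain_db + rx_gain_db
                - cable_losses_db - path_loss_db)
print(round(path_loss_db, 1), round(rx_power_dbm, 1))   # ~114.0 dB, ~-73.0 dBm
```

Whether roughly -73 dBm is adequate then depends on the receiver sensitivity plus the fading margin discussed above.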
https://en.wikipedia.org/wiki/Renewal%20theory
Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite mean. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times. A renewal process has asymptotic properties analogous to the strong law of large numbers and central limit theorem. The renewal function m(t) (expected number of arrivals) and reward function g(t) (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m'(t) with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes. Applications include calculating the best strategy for replacing worn-out machinery in a factory and comparing the long-term benefits of different insurance policies. The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval. Renewal processes Introduction The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i before advancing to the next integer, i + 1. In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean. Formal definition Let (S_i) be a sequence of positive independen
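These limiting properties are easy to check by simulation. The following minimal sketch (all names invented) estimates the renewal function m(t) = E[N(t)] by Monte Carlo for Uniform(0, 2) holding times, whose mean is 1; by the elementary renewal theorem, m(t)/t should approach 1/mean = 1 as t grows:

```python
import random

def count_renewals(t: float, holding_time) -> int:
    """Number of renewals by time t for one sample path."""
    n, s = 0, holding_time()
    while s <= t:
        n += 1
        s += holding_time()
    return n

def renewal_function(t, holding_time, runs=10_000):
    """Monte Carlo estimate of m(t) = E[N(t)]."""
    return sum(count_renewals(t, holding_time) for _ in range(runs)) / runs

# IID Uniform(0, 2) holding times: mean 1, so m(t)/t -> 1 as t grows
hold = lambda: random.uniform(0.0, 2.0)
for t in (1, 10, 100):
    m = renewal_function(t, hold)
    print(f"t={t:>3}  m(t)={m:8.2f}  m(t)/t={m / t:.3f}")
```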
https://en.wikipedia.org/wiki/Sound%20transmission%20class
Sound Transmission Class (or STC) is an integer rating of how well a building partition attenuates airborne sound. In the US, it is widely used to rate interior partitions, ceilings, floors, doors, windows and exterior wall configurations. Outside the US, the ISO Sound Reduction Index (SRI) is used. The STC rating very roughly reflects the decibel reduction of noise that a partition can provide. The STC is useful for evaluating annoyance due to speech sounds, but not music or machinery noise as these sources contain more low frequency energy than speech. There are many ways to improve the sound transmission class of a partition, though the two most basic principles are adding mass and increasing the overall thickness. In general, the sound transmission class of a double wythe wall (e.g. two 4"-thick block walls separated by a 2" airspace) is greater than a single wall of equivalent mass (e.g. homogeneous 8" block wall). Definition The STC or sound transmission class is a single number method of rating how well wall partitions reduce sound transmission. The STC provides a standardized way to compare products such as doors and windows made by competing manufacturers. A higher number indicates more effective sound insulation than a lower number. The STC is a standardized rating provided by ASTM E413 based on laboratory measurements performed in accordance with ASTM E90. ASTM E413 can also be used to determine similar ratings from field measurements performed in accordance with ASTM E336. Sound Isolation and Sound Insulation are used interchangeably, though the term "Insulation" is preferred outside the US. The term "sound proofing" is typically avoided in architectural acoustics as it is a misnomer and connotes inaudibility. Subjective correlation Through research, acousticians have developed tables that pair a given STC rating with a subjective experience. The table below is used to determine the degree of sound isolation provided by typical multi-family construc
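The rating procedure itself is defined by ASTM E413. As commonly summarized, a 16-band reference contour is raised until either the total deficiency (transmission loss falling below the contour) would exceed 32 dB or any single band would fall more than 8 dB below the contour; the STC is then the contour's value at 500 Hz. The sketch below encodes that summary; it is an illustration of the fitting rules as commonly stated, not a substitute for the standard, and the example data are invented:

```python
# STC reference contour relative to its 500 Hz value, for the 16
# one-third-octave bands from 125 Hz to 4000 Hz (as commonly tabulated).
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl):
    """Return the highest contour position (its 500 Hz value) such that
    the 16 transmission-loss values (dB) have total deficiency <= 32 dB
    and no single deficiency greater than 8 dB."""
    assert len(tl) == 16
    for rating in range(150, 0, -1):  # search downward from a high value
        deficiencies = [max(0, (rating + c) - m) for c, m in zip(CONTOUR, tl)]
        if sum(deficiencies) <= 32 and max(deficiencies) <= 8:
            return rating
    return 0

# Invented third-octave TL data for a partition (125 Hz ... 4000 Hz)
tl = [23, 26, 30, 33, 36, 38, 40, 42, 44, 45, 46, 47, 48, 48, 49, 50]
print(stc(tl))  # -> 43 for this invented data
```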
https://en.wikipedia.org/wiki/Hammerhead%20Networks
Hammerhead Networks was a computer networking company based in Billerica, Massachusetts. It produced software solutions for the delivery of Internet Protocol service features. History It was founded in April 2000 by Eddie Sullivan, who also served as its CEO. It was acquired by Cisco Systems on May 1, 2002, in a stock transaction worth up to US$173M. Cisco had previously owned a minority interest. It had 85 employees at the time of the acquisition. References Defunct software companies of the United States Networking companies of the United States Cisco Systems acquisitions Software companies based in Massachusetts Companies based in Billerica, Massachusetts Software companies established in 2000 Software companies disestablished in 2002 Defunct companies based in Massachusetts
https://en.wikipedia.org/wiki/Logos%20Dictionary
Logos Dictionary is a large multilingual online dictionary provided by Logos Group, a European translation company. It was started in 1995, and as of 2005 contains over 7 million terms in over 200 languages, some of them minority languages such as Breton, Leonese, Scots or Venetian. The dictionary offers a variety of search options, and requires free registration in order to add or update translations. There is also a Logos Dictionary for Children, which mainly contains words for children, and a collection of quotations in several languages. See also Endangered languages External links Logos dictionary Online dictionaries Leonese language Breton language Scots language Venetian language
https://en.wikipedia.org/wiki/HyperWRT
HyperWRT is a GPL firmware project for the Linksys WRT54G and WRT54GS wireless routers based on the stock Linksys firmware. The original goal of the HyperWRT project was to add a set of features—such as power boost—to the latest Linux-based Linksys firmware, extending its possibilities but staying close to the official firmware. Over time, it continued to be updated with newer Linksys firmware, and added many more features typically found in enterprise routing equipment. HyperWRT is no longer maintained and has been succeeded by Tomato. History The original HyperWRT project was started in 2004 by Timothy Jans (a.k.a. Avenger 2.0), with continued development into early 2005. Another programmer called Rupan then continued HyperWRT development by integrating newer Linksys code as it was released. Later in 2005, two developers called tofu and Thibor picked up HyperWRT development with HyperWRT +tofu for the WRT54G and HyperWRT Thibor for the WRT54GS. Both developers frequently collaborated and added features from each other's releases, and both developed WRTSL54GS versions of their firmware. After February 2006, tofu discontinued development and his code was incorporated into HyperWRT Thibor. HyperWRT Thibor15c (July 2006) was the last version of HyperWRT and was compatible with the WRT54G (v1-v4), WRT54GL (v1-v1.1), WRT54GS (v1-v4), and WRTSL54GS (later unfinished beta 17rc3 released February 2008). See also List of wireless router firmware projects References External links Linksys GPL Code Center – Source code for stock Linksys firmware, previous versions were base for HyperWRT Rupan HyperWRT – Updated HyperWRT for WRT54G (no longer maintained) HyperWRT +tofu – Updated HyperWRT for WRT54G/WRT54GL and WRTSL54GS (no longer maintained) HyperWRT – Updated HyperWRT for WRT54G/WRT54GL and WRTSL54GS (no longer maintained) Custom firmware Free routing software Wireless access points
https://en.wikipedia.org/wiki/Moyal%20product
In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, ⋆, on the functions on R^2n, equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the ⋆-product of the "algebra of symbols" of a universal enveloping algebra. Historical comments The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation of the Weyl correspondence. Moyal actually appears not to know about the product in his celebrated article and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography. The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture. Definition The product for smooth functions f and g on R^2n takes the form f ⋆ g = fg + Σ_{n≥1} ħ^n C_n(f, g), where each C_n is a certain bidifferential operator of order 2n characterized by the following properties (see below for an explicit formula): Deformation of the pointwise product — implicit in the formula above. Deformation of the Poisson bracket, called Moyal bracket. The 1 of the undeformed algebra is also the identity in the new algebra. The complex conjugate is an antilinear antiautomorphism. Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the i in the second condition and eliminates the fourth condition. If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra A_n, and the two offer alternative realizations of the Weyl map of the space of polynomials in n variables (or the symmetric algebra of a vector space of dimension 2n). To provide an explicit formu
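Truncating the series at first order in ħ gives the pointwise product plus (iħ/2) times the Poisson bracket, which is already exact for linear functions. A small symbolic check (using sympy; the function names are mine):

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def poisson(f, g):
    """Canonical Poisson bracket {f, g} on the (x, p) phase plane."""
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def star_first_order(f, g):
    """Moyal product truncated at first order in hbar:
    f * g = f g + (i hbar / 2) {f, g} + O(hbar**2)."""
    return sp.expand(f * g + sp.I * hbar / 2 * poisson(f, g))

# x * p and p * x differ by i*hbar, mirroring the canonical commutator;
# for linear functions the higher-order terms vanish, so this is exact.
print(star_first_order(x, p) - star_first_order(p, x))  # -> I*hbar
```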
https://en.wikipedia.org/wiki/Critical%20period
In developmental psychology and developmental biology, a critical period is a maturational stage in the lifespan of an organism during which the nervous system is especially sensitive to certain environmental stimuli. If, for some reason, the organism does not receive the appropriate stimulus during this "critical period" to learn a given skill or trait, it may be difficult, ultimately less successful, or even impossible, to develop certain associated functions later in life. Functions that are indispensable to an organism's survival, such as vision, are particularly likely to develop during critical periods. The term also applies to the ability to acquire one's first language: researchers have found that people who were not exposed to a first language until after the critical period rarely acquire it fluently. Some researchers differentiate between 'strong critical periods' and 'weak critical periods' (a.k.a. 'sensitive' periods) — defining 'weak critical periods' / 'sensitive periods' as more extended periods, after which learning is still possible. Other researchers consider these the same phenomenon. For example, the critical period for the development of a human child's binocular vision is thought to be between three and eight months, with sensitivity to damage extending up to at least three years of age. Further critical periods have been identified for the development of hearing and the vestibular system. Strong versus weak critical periods Examples of strong critical periods include monocular deprivation, filial imprinting, monaural occlusion, and Prefrontal Synthesis acquisition. These traits cannot be acquired after the end of the critical period. Examples of weak critical periods include phoneme tuning, grammar processing, articulation control, vocabulary acquisition, music training, auditory processing, sport training, and many other traits that can be significantly improved by training at any age. Critical period mechanisms Critical period opening Critical per
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare
Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers. On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite, Microsoft Security Essentials, to be made available before then. However, virus definitions and support for OneCare would continue until a user's subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode. History Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States. The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007. Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare. Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5. Features Windows Live OneCare features integrated anti-virus, personal firewall, and backup utilities, and a tune-up utility with the integrated functionality o
https://en.wikipedia.org/wiki/Account%20aggregation
Account aggregation, sometimes also known as financial data aggregation, is a method that involves compiling information from different accounts, which may include bank accounts, credit card accounts, investment accounts, and other consumer or business accounts, into a single place. This may be provided through connecting via an API to the financial institution or provided through "screen scraping" where a user provides the requisite account-access information for an automated system to gather and compile the information into a single page. The security of the account access details as well as the financial information is key to users having confidence in the service. The database either resides in a web-based application or in client-side software. While such services are primarily designed to aggregate financial information, they sometimes also display other things such as the contents of e-mail boxes and news headlines. Account Aggregator System An account aggregator system is a data-sharing system, which helps lenders to conduct an easy and speedy assessment of the creditworthiness of the borrower. Components of Account Aggregator system The Account Aggregator system essentially has three important components – Financial Information Provider (FIP) Financial Information User (FIU) Account Aggregators A Financial Information Provider has the necessary data about the customer, which it provides to Financial Information Users. The Financial Information Provider can be a bank, a Non-Banking Financial Company (NBFC), mutual fund, insurance repository, pension fund repository, or even your wealth supervisor. The account aggregators act as intermediaries by collecting data from FIPs that hold the customer’s financial data and sharing it with FIUs such as lending banks/agencies that provide financial services. History The ideas around account aggregation first emerged in the mid 1990s when banks started releasing Internet banking applications. In the late
https://en.wikipedia.org/wiki/Heat%20capacity%20ratio
In thermal physics and thermodynamics, the heat capacity ratio, also known as the adiabatic index, the ratio of specific heats, or Laplace's coefficient, is the ratio of the heat capacity at constant pressure (C_P) to heat capacity at constant volume (C_V). It is sometimes also known as the isentropic expansion factor and is denoted by γ (gamma) for an ideal gas or κ (kappa), the isentropic exponent for a real gas. The symbol γ is used by aerospace and chemical engineers. γ = C_P/C_V = c_P/c_V, where C is the heat capacity, the overbarred C̄ the molar heat capacity (heat capacity per mole), and c the specific heat capacity (heat capacity per unit mass) of a gas. The suffixes P and V refer to constant-pressure and constant-volume conditions respectively. The heat capacity ratio is important for its applications in thermodynamical reversible processes, especially involving ideal gases; the speed of sound depends on this factor. Thought experiment To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals C_V ΔT, with ΔT representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked
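The "about 40% more" figure follows directly from the definitions for a diatomic ideal gas such as air, for which C_V = (5/2)R and Mayer's relation gives C_P = C_V + R. A minimal numerical check (a sketch, not from the source text):

```python
R = 8.314  # J/(mol K), molar gas constant

# Molar heat capacities of an ideal diatomic gas (e.g., air near room temp)
cv = 5 / 2 * R          # constant volume
cp = cv + R             # constant pressure (Mayer's relation)
gamma = cp / cv         # heat capacity ratio

print(f"gamma = {gamma:.2f}")                 # 1.40
print(f"extra heat = {(cp - cv) / cv:.0%}")   # 40% more at constant pressure
```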
https://en.wikipedia.org/wiki/Yerevan%20TV%20Tower
Yerevan TV Tower (Yerevani herustaashtarak) is a high lattice tower built in 1977 on Nork Hill near downtown Yerevan, Armenia. It is the tallest structure in the Caucasus, fourth-tallest tower in Western Asia (The Milad Tower in Tehran being the tallest), sixth-tallest free-standing lattice tower and thirty-eighth-tallest tower in the world. Construction In the late 1960s it was decided to replace the -high TV tower in Yerevan due to the insufficient capacity of the latter. The preparatory work on Yerevan TV tower and Tbilisi Tower, which also needed replacement, started simultaneously at the Ukrainian Institute for Steel Structures. The project leaders were Isaak Zatulovsky, Anatoli Perelmuter, Mark Grinberg, Yuri Shevernitsky, and Boris But. Tbilisi and Yerevan were the first among the Soviet capitals where towers were built. The same group later worked on the project of Kyiv TV Tower (though it was finished earlier than the tower in Yerevan). Tbilisi Tower is lower, lighter and slightly tilted compared to Yerevan Tower. Construction started in 1974 and finished 3 years later. The steel was shipped from Rustavi Metallurgical Plant in Georgia. The old tower was moved to Leninakan, current Gyumri, where it is still functional today. Structure The structure of the TV tower in Yerevan is virtually divided into three parts: base, body, and antenna. The base is a truss-steel tetrahedron that at the height of 71 metres becomes a closed-platform observation deck and technical offices. On the roof of this lower-tower basket are radio antennas. The triangular truss-steel structure continues to the height of 137 metres where a two-storey 18-metre structure in the shape of an inverted-truncated cone is situated. The lattice grid structure continues another 30 meters. In the centre of this structure there is a concrete vertical pipe structure with a diameter of 4.2 metres, in which, among other things, the lift-shaft is hidden. The pipe projecting from the basement continues as
https://en.wikipedia.org/wiki/Motion%20control
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know if the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, it becomes a closed-loop system. Typically the position or velocity of a machine is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries. Motion control encompasses every technology related to the movement of objects. It covers every motion system from micro-sized systems such as silicon-type micro induction actuators to macro-sized systems such as a space platform. But, these days, the focus of motion control is the special control technology of motion systems with electric actuators such as dc/ac servo motors. Control of robotic manipulators is also included in the field of
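The feedback idea can be made concrete with a toy example. The sketch below (all names, gains and dynamics are invented and highly idealized) implements a proportional closed-loop position controller: each cycle the measured position is compared with the setpoint and the command is corrected by the error:

```python
def closed_loop_position(setpoint: float, kp: float = 0.5,
                         cycles: int = 50) -> float:
    """Toy proportional (P) controller: drive position toward setpoint
    by feeding the measured error back into the actuator command."""
    position = 0.0
    for _ in range(cycles):
        error = setpoint - position      # feedback: measure and compare
        command = kp * error             # proportional control law
        position += command              # idealized actuator response
    return position

print(closed_loop_position(10.0))  # converges toward 10.0
```

An open-loop system, by contrast, would issue the command once and never learn whether the position was actually reached.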
https://en.wikipedia.org/wiki/Dependent%20type
In computer science and logic, a dependent type is a type whose definition depends on a value. It is an overlapping feature of type theory and type systems. In intuitionistic type theory, dependent types are used to encode logic's quantifiers like "for all" and "there exists". In functional programming languages like Agda, ATS, Coq, F*, Epigram, and Idris, dependent types help reduce bugs by enabling the programmer to assign types that further restrain the set of possible implementations. Two common examples of dependent types are dependent functions and dependent pairs. The return type of a dependent function may depend on the value (not just type) of one of its arguments. For instance, a function that takes a positive integer n may return an array of length n, where the array length is part of the type of the array. (Note that this is different from polymorphism and generic programming, both of which include the type as an argument.) A dependent pair may have a second value the type of which depends on the first value. Sticking with the array example, a dependent pair may be used to pair an array with its length in a type-safe way. Dependent types add complexity to a type system. Deciding the equality of dependent types in a program may require computations. If arbitrary values are allowed in dependent types, then deciding type equality may involve deciding whether two arbitrary programs produce the same result; hence the decidability of type checking may depend on the given type theory's semantics of equality, that is, whether the type theory is intensional or extensional. History In 1934, Haskell Curry noticed that the types used in typed lambda calculus, and in its combinatory logic counterpart, followed the same pattern as axioms in propositional logic. Going further, for every proof in the logic, there was a matching function (term) in the programming language. One of Curry's examples was the correspondence between simply typed lambda calculus and intuition
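The length-indexed array example reads naturally in a dependently typed language. The sketch below uses Lean 4 rather than the languages named above (they express the same idea with their own syntax); it is an illustration with invented names:

```lean
-- Vectors indexed by their length: the Nat is part of the type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- A dependent function: the *return type* depends on the argument n.
def replicate {α : Type} (n : Nat) (a : α) : Vec α n :=
  match n with
  | 0     => .nil
  | k + 1 => .cons a (replicate k a)

-- A dependent pair packages a length together with a vector of that length.
def someVec : (n : Nat) × Vec Bool n := ⟨2, .cons true (.cons false .nil)⟩
```

Here an attempt to return a vector of the wrong length is rejected by the compiler, which is exactly the "restraining the set of possible implementations" described above.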
https://en.wikipedia.org/wiki/Elan%20Graphics
Elan Graphics is a computer graphics architecture for Silicon Graphics computer workstations. Elan Graphics was developed in 1991 and was available as a high-end graphics option on workstations released during the mid-1990s as part of the Express Graphics architectures family. Elan Graphics gives the workstation real-time 2D and 3D graphics rendering capability similar to that of even high-end PCs made over ten years after Elan's introduction, with the exception of texture mapping, which had to be performed in software. In the Silicon Graphics Indigo, the Elan graphics option consists of four GE7 Geometry Engines capable of a combined 128 MFLOPS and one RE3 Raster Engine. Together, they are capable of rendering 180K Z-buffered, lit, Gouraud-shaded triangles per second. The framebuffer has 56 bits per pixel, so a double-buffered, depth-buffered RGB layout uses 12 bits per pixel (dithered RGB 4/4/4). When double-buffering isn't required, it is possible to run in full 24-bit color. Similarly, when Z-buffering is not required, a double-buffered 24-bit RGB framebuffer configuration is possible. The Elan Graphics system also implemented hardware stencil buffering by allocating 4 bits from the Z-buffer to produce a combined 20-bit Z, 4-bit stencil buffer. Elan Graphics consists of five graphics subsystems: the HQ2 Command Engine, GE7 Geometry Subsystem, RE3 Raster Engine, VM2 framebuffer and VC1 Display Subsystem. Elan Graphics can produce resolutions up to 1280 x 1024 pixels with 24-bit color and can also process unencoded NTSC and PAL analog television signals. The Elan Graphics system is made up of five daughterboards that plug into the main workstation motherboard. The Elan Graphics architecture was superseded by SGI's Extreme Graphics architecture on Indigo2 models and eventually by the IMPACT graphics architecture in 1995. Features Subpixel positioning Advanced lighting models: Multiple colored light sources (up to 8) Ambient, diffuse, and spec
https://en.wikipedia.org/wiki/Navarro%20Networks
Navarro Networks, Inc., was a developer of Ethernet-based ASIC components based in Plano, Texas, in the United States. They produced a network processor for Ethernet and other applications. Navarro Networks was founded in 2000. Their CEO was Mark Bluhm, who was formerly a vice president at Cyrix. A group of nine employees left the Cyrix division of Via on March 21, 2000 to staff the company. The employee walkout had occurred just a day after Via announced that they would be spinning off the Cyrix division as a separate company. Cisco Systems announced their intent to acquire Navarro Networks on May 1, 2002; on the same day, Cisco also announced their bid to acquire Hammerhead Networks. The acquisition was completed in June that year, with Cisco acquiring Navarro in a stock swap worth $85 million. Most of the 25 employees of Navarro joined the Internet Systems Business Unit to enhance Cisco's internal ASIC capability in Ethernet switching platforms. References External links 2000 establishments in Texas 2002 disestablishments in Texas American companies established in 2000 American companies disestablished in 2002 Cisco Systems acquisitions Computer companies established in 2000 Computer companies disestablished in 2002 Defunct computer companies of the United States Defunct networking companies Networking hardware companies
https://en.wikipedia.org/wiki/List%20of%20reporting%20software
The following is a list of notable report generator software. Reporting software is used to generate human-readable reports from various data sources. Commercial software ActiveReports Actuate Corporation BOARD Business Objects Cognos BI Crystal Reports CyberQuery GoodData icCube I-net Crystal-Clear InetSoft Information Builders' FOCUS and WebFOCUS Jaspersoft Jedox List & Label Logi Analytics m-Power MATLAB MicroStrategy Navicat OBIEE Oracle Discoverer Oracle Reports Hyperion Oracle XML Publisher Parasoft DTP PolyAnalyst Power BI Plotly Proclarity QlikView RapidMiner Roambi RW3 Technologies SiSense Splunk SQL Server Reporting Services Stimulsoft Reports Style Report Tableau Targit Telerik Reporting TIBCO Text Control Windward Reports XLCubed Zoomdata Zoho Analytics (as part of the Zoho Office Suite) Free software BIRT Project D3.js JasperReports KNIME LibreOffice Base OpenOffice Base Pentaho See also Business intelligence software List of information graphics software References Reporting software
https://en.wikipedia.org/wiki/TrueOS
TrueOS (formerly PC-BSD or PCBSD) is a discontinued Unix-like, server-oriented operating system built upon the most recent releases of FreeBSD-CURRENT. Up to 2018 it aimed to be easy to install by using a graphical installation program, and easy and ready to use immediately by providing KDE SC, Lumina, LXDE, MATE, or Xfce as the desktop environment. In June 2018 the developers announced that since TrueOS had become the core OS to provide a basis for other projects, the graphical installer had been removed. Graphical end-user-orientated OSes formerly based on TrueOS were GhostBSD and Trident. TrueOS provided official binary Nvidia and Intel drivers for hardware acceleration and an optional 3D desktop interface through KWin, and Wine was ready to use for running Microsoft Windows software. TrueOS was also able to run Linux software in addition to software from the FreeBSD Ports collection, and it had its own .txz package manager. TrueOS supported OpenZFS and the installer offered disk encryption with geli. Development of TrueOS ended in 2020. History TrueOS was founded by FreeBSD professional Kris Moore in early 2005 as PC-BSD. In August 2006 it was voted the most beginner-friendly operating system by OSWeekly.com. The first beta of PC-BSD consisted of only a GUI installer to get the user up and running with a FreeBSD 6 system with KDE3 pre-configured. This was a major innovation for the time as anyone wishing to install FreeBSD would have to manually tweak and run through a text installer. Kris Moore's goal was to make FreeBSD easy for everyone to use on the desktop, and the project has since diverged even more in the direction of usability by including additional GUI administration tools and .pbi application installers. PC-BSD's application installer management involved a different approach to installing software than many other Unix-like operating systems, up to and including version 8.2, by means of the pbiDIR website. Instead of using the FreeBSD Ports tree directly (although it remain
https://en.wikipedia.org/wiki/Crowbar%20%28circuit%29
A crowbar circuit is an electrical circuit used for preventing an overvoltage or surge condition of a power supply unit from damaging the circuits attached to the power supply. It operates by putting a short circuit or low resistance path across the voltage output (Vo), like dropping a crowbar across the output terminals of the power supply. Crowbar circuits are frequently implemented using a thyristor, TRIAC, trisil or thyratron as the shorting device. Once triggered, they depend on the current-limiting circuitry of the power supply or, if that fails, the blowing of the line fuse or tripping the circuit breaker. The name is derived from having the same effect as throwing a crowbar over exposed power supply terminals to short the output. An example crowbar circuit is shown to the right. This particular circuit uses an LM431 adjustable zener regulator to control the gate of the TRIAC. The resistor divider of R1 and R2 provides the reference voltage for the LM431. The divider is set so that during normal operating conditions, the voltage across R2 is slightly lower than VREF of the LM431. Since this voltage is below the minimum reference voltage of the LM431, it remains off and very little current is conducted through the LM431. If the cathode resistor is sized accordingly, very little voltage will be dropped across it and the TRIAC gate terminal will be essentially at the same potential as MT1, keeping the TRIAC off. If the supply voltage increases, the voltage across R2 will exceed VREF and the LM431 cathode will begin to draw current. The voltage at the gate terminal will be pulled down, exceeding the gate trigger voltage of the TRIAC and latching it on. Overview A crowbar circuit is distinct from a clamp in that, once triggered, it pulls the voltage down below the trigger level, usually close to ground voltage. A clamp prevents the voltage from exceeding a preset level. Thus, a crowbar will not automatically return to normal operation when the overvoltage condition is remo
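The trip point of the divider described above is easy to compute. Assuming the LM431's nominal reference of about 2.495 V (a typical datasheet figure) and treating its input current as negligible, the crowbar fires when the divider puts VREF across R2; the resistor values below are invented for illustration:

```python
VREF = 2.495  # V, nominal LM431 reference (typical datasheet value)

def trip_voltage(r1: float, r2: float) -> float:
    """Supply voltage at which the R1/R2 divider puts VREF across R2,
    turning on the LM431 and firing the TRIAC."""
    return VREF * (r1 + r2) / r2

# Invented values: protect a 5 V rail, trip at roughly 6.2 V
r1, r2 = 15_000.0, 10_000.0
print(f"crowbar fires at about {trip_voltage(r1, r2):.2f} V")  # ~6.24 V
```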
https://en.wikipedia.org/wiki/Enoxolone
Enoxolone (INN, BAN; also known as glycyrrhetinic acid or glycyrrhetic acid) is a pentacyclic triterpenoid derivative of the beta-amyrin type obtained from the hydrolysis of glycyrrhizic acid, which was obtained from the herb liquorice. It is used in flavoring and it masks the bitter taste of drugs like aloe and quinine. It is effective in the treatment of peptic ulcer and also has expectorant (antitussive) properties. It has some additional pharmacological properties with possible antiviral, antifungal, antiprotozoal, and antibacterial activities. Mechanism of action Glycyrrhetinic acid inhibits the enzymes (15-hydroxyprostaglandin dehydrogenase and delta-13-prostaglandin reductase) that metabolize the prostaglandins PGE-2 and PGF-2α to their respective, inactive 15-keto-13,14-dihydro metabolites. This increases prostaglandins in the digestive system. Prostaglandins inhibit gastric secretion, stimulate pancreatic secretion and mucous secretion in the intestines, and markedly increase intestinal motility. They also cause cell proliferation in the stomach. The effect on gastric acid secretion, and promotion of mucous secretion and cell proliferation shows why licorice has potential in treating peptic ulcers. Licorice should not be taken during pregnancy, because PGF-2α stimulates activity of the uterus during pregnancy and can cause abortion. The structure of glycyrrhetinic acid is similar to that of cortisone. Both molecules are flat and similar at positions 3 and 11. This might be the basis for licorice's anti-inflammatory action. 3-β-D-(Monoglucuronyl)-18-β-glycyrrhetinic acid, a metabolite of glycyrrhetinic acid, inhibits the conversion of 'active' cortisol to 'inactive' cortisone in the kidneys. This occurs via inhibition of the enzyme 11-β-hydroxysteroid dehydrogenase. As a result, cortisol levels become high within the collecting duct of the kidney. Cortisol has intrinsic mineralocorticoid properties (that is, it acts like aldosterone and increases sodium reabsorpt
https://en.wikipedia.org/wiki/Dependent%20ML
Dependent ML is an experimental functional programming language proposed by Hongwei Xi and Frank Pfenning. Dependent ML extends ML by a restricted notion of dependent types: types may be dependent on static indices of type Nat (natural numbers). Dependent ML employs a constraint theorem prover to decide a strong equational theory over the index expressions. DML's types are not dependent on runtime values; there is still a phase distinction between compilation and execution of the program. By restricting the generality of full dependent types, type checking remains decidable, but type inference becomes undecidable. Dependent ML has been superseded by ATS and is no longer under active development. References Further reading David Aspinall and Martin Hofmann (2005). "Dependent Types". In Pierce, Benjamin C. (ed.) Advanced Topics in Types and Programming Languages. MIT Press. External links The home page of DML ML programming language family Declarative programming languages Functional languages Dependently typed languages Programming languages created in the 1990s Discontinued programming languages
https://en.wikipedia.org/wiki/Critical%20point%20%28thermodynamics%29
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field. Liquid–vapor critical point Overview For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one. The figure to the right shows the schematic P-T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point. The critical point of water occurs at 647.096 K (373.946 °C) and 22.064 MPa. In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with both phases becoming ever more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor diele
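A standard worked example (not from the excerpt above) is locating the critical point of a van der Waals gas: the critical isotherm has an inflection with zero slope, so both the first and second derivatives of p with respect to V vanish there. A short symbolic sketch with sympy:

```python
import sympy as sp

V, T, a, b, R = sp.symbols('V T a b R', positive=True)

# van der Waals equation of state, solved for pressure
p = R * T / (V - b) - a / V**2

# At the critical point the isotherm has an inflection with zero slope:
# dp/dV = 0 and d2p/dV2 = 0 simultaneously.
sol = sp.solve([sp.diff(p, V), sp.diff(p, V, 2)], [T, V], dict=True)[0]

Tc, Vc = sol[T], sol[V]
pc = p.subs({T: Tc, V: Vc})
print(sp.simplify(Tc), sp.simplify(Vc), sp.simplify(pc))
# -> 8*a/(27*R*b), 3*b, a/(27*b**2)
```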
https://en.wikipedia.org/wiki/Critical%20point%20%28mathematics%29
Critical point is a term used in many branches of mathematics. When dealing with functions of a real variable, a critical point is a point in the domain of the function where the function is either not differentiable or the derivative is equal to zero. Similarly, when dealing with complex variables, a critical point is a point in the function's domain where it is either not holomorphic or the derivative is equal to zero. Likewise, for a function of several real variables, a critical point is a value in its domain where the gradient is undefined or is equal to zero. The value of the function at a critical point is a critical value. This sort of definition extends to differentiable maps between R^m and R^n, a critical point being, in this case, a point where the rank of the Jacobian matrix is not maximal. It extends further to differentiable maps between differentiable manifolds, as the points where the rank of the Jacobian matrix decreases. In this case, critical points are also called bifurcation points. In particular, if C is a plane curve, defined by an implicit equation f(x, y) = 0, the critical points of the projection onto the x-axis, parallel to the y-axis, are the points where the tangent to C is parallel to the y-axis, that is the points where ∂f/∂y(x, y) = 0. In other words, the critical points are those where the implicit function theorem does not apply. The notion of a critical point allows the mathematical description of an astronomical phenomenon that was unexplained before the time of Copernicus. A stationary point in the orbit of a planet is a point of the trajectory of the planet on the celestial sphere, where the motion of the planet seems to stop before restarting in the other direction. This occurs because of a critical point of the projection of the orbit into the ecliptic circle. Critical point of a single variable function A critical point of a function of a single real variable, f(x), is a value x0 in the domain of f where f is not differentiable or its derivative is 0 (i.e. f'(x0) = 0). A criti
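As a short worked example of the single-variable case (names invented): the critical points of f(x) = x³ − 3x are the roots of f′(x) = 3x² − 3:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x

critical_points = sp.solve(sp.diff(f, x), x)      # solve f'(x) = 3x^2 - 3 = 0
critical_values = [f.subs(x, c) for c in critical_points]

print(critical_points)   # [-1, 1]
print(critical_values)   # [2, -2]  (a local maximum and a local minimum)
```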
https://en.wikipedia.org/wiki/Extreme%20Graphics
Extreme Graphics is a computer graphics architecture for Silicon Graphics computer workstations. Extreme Graphics was developed in 1993 and was available as a high-end graphics option on workstations such as the Indigo2, released during the mid-1990s. Extreme Graphics gives the workstation real-time 2D and 3D graphics rendering capability similar to that of even high-end PCs made many years after Extreme's introduction, with the exception of texture rendering which is performed in software. Extreme Graphics systems consist of eight Geometry Engines and two Raster Engines, twice as many units as the Elan/XZ graphics used in the Indy, Indigo, and Indigo2. The eight geometry engines are rated at 256 MFLOPS maximum, far faster than the MIPS R4400 CPU used in the workstation. Extreme Graphics consists of five graphics subsystems: the Command Engine, Geometry Subsystem, Raster Engine, framebuffer and Display Subsystem. Extreme Graphics can produce resolutions up to 1280 x 1024 pixels with 24-bit color and can also process unencoded NTSC and PAL analog television signals. It is reported by the PROM as GU1-Extreme. The Extreme Graphics architecture was superseded by SGI's IMPACT graphics architecture in 1995. External links Indigo2 and POWER Indigo2 Technical Report 2nd-hand Indigo2 Buyers' Guide Graphics chips SGI graphics
https://en.wikipedia.org/wiki/APC%20by%20Schneider%20Electric
APC by Schneider Electric (formerly American Power Conversion Corporation) is a manufacturer of uninterruptible power supplies (UPS), electronics peripherals, and data center products. In 2007, Schneider Electric acquired APC and combined it with MGE UPS Systems to form Schneider Electric's Critical Power & Cooling Services Business Unit, which recorded 2007 revenue of US$3.5 billion (EUR 2.4 billion) and employed 12,000 people worldwide. Until February 2007, when it was acquired, it had been a member of the S&P 500 list of the largest publicly traded companies in the United States. Schneider Electric, with 113,900 employees and operations in 102 countries, had 2008 annual sales of $26 billion (EUR 18.3 billion). In 2011, APC by Schneider Electric became a product brand only, while the company was rebranded as the IT Business Unit of Schneider Electric. History APC was founded in 1981 by three MIT Lincoln Lab electronic power engineers. Originally, the engineers focused on solar power research and development. When government funding for their research ended, APC shifted its focus to power protection by introducing its first UPS in 1984. Acquisition by Schneider Schneider Electric announced its acquisition of APC on October 30, 2006 and completed it on February 14, 2007. APC share-holders approved the deal on January 16, 2007. The European Union authorized the merger, provided that Schneider divest itself of the MGE UPS SYSTEMS global UPS business below 10kVA. Late in 2007 Eaton Powerware bought the MGE Office Protection Systems division of Schneider. Product lines The company focuses its efforts on four application areas: Home/home office Business networks Access provider networks Data centers and facilities Symmetra APC Symmetra LX is a line of uninterruptible power supply products, aimed at network and server applications. Symmetras come in power configurations ranging from 4 kVA to 16 kVA. Symmetras are built for use in a data center (in a 19-in
https://en.wikipedia.org/wiki/Pinnacle%20Studio
Pinnacle Studio is a video editing program originally developed by Pinnacle Systems as consumer-level software. Upon Pinnacle Systems' acquisition of Munich-based FAST Multimedia, Pinnacle integrated the professional code base of FAST's editing software (since re-branded as Pinnacle Liquid) beginning with Pinnacle Studio version 10. It was acquired by Avid and later by Corel in July 2012. Pinnacle Studio allows users to author video content in Video CD, DVD-Video, AVCHD or Blu-ray format, add complementary menus and burn them to disc. In the second half of 2007, Pinnacle introduced VideoSpin, a shareware version of Studio with fewer features; it was discontinued in March 2009. Versions Since version 9, Studio has been sold in several editions: Studio, Studio Plus and Studio Ultimate, all of which are commercial software. There is some additional functionality in the Plus and Ultimate editions, notably a second video track. This allowed Overlay, A-B Edits, Chroma Key, and Picture-in-Picture. Pinnacle Studio 24 was released on August 11, 2020. This version included unlimited tracks plus 4K video support, multi-camera editing, enhanced motion tracking, enhanced video masking, and many advanced technical features. It has no support for HEVC (H.265) on AMD hardware. iOS In addition to the desktop versions of Pinnacle Studio, two versions of Pinnacle Studio also exist for iPad and iPhone: Pinnacle Studio for iOS and Pinnacle Studio Pro for iOS. The latter has additional features, for example frame-by-frame trimming using the Dual Viewer Precision Trimmer and export to cloud services (such as Dropbox, Google Drive and OneDrive). The last version of Pinnacle Studio for iOS (5.6.1) requires iOS 9.3 or later, iPad 2 or higher, iPhone 4s or higher, iPod Touch Series 5 or higher. Reception MacUser rated Pinnacle Studio 4 for iOS as 4 out of 5, saying that it provides "a more fully featured movie editor than iMovie for iPad", but complained that the extra in-app purchase needed for
https://en.wikipedia.org/wiki/PCI-SIG
PCI-SIG, or Peripheral Component Interconnect Special Interest Group, is an electronics industry consortium responsible for specifying the Peripheral Component Interconnect (PCI), PCI-X, and PCI Express (PCIe) computer buses. It is based in Beaverton, Oregon. The PCI-SIG is distinct from the similarly named and adjacently-focused PCI Industrial Computer Manufacturers Group. It has produced the PCI, PCI-X and PCI Express specifications. As of 2022, the board of directors of the PCI-SIG has representatives from: AMD, ARM, Dell EMC, IBM, Intel, Synopsys, Keysight, NVIDIA, and Qualcomm. The chairman and president of the PCI-SIG is Al Yanes, a "Distinguished Engineer" from IBM. The executive director of the PCI-SIG is Reen Presnell, president of VTM Group. Formation The PCI Special Interest Group was formed in 1992, initially as a "compliance program" to help computer manufacturers implement the Intel specification. The organization became a nonprofit corporation, officially named "PCI-SIG" in the year 2000. Membership Membership of PCI-SIG is open to all of the microcomputer industry with a $4,000 annual fee. PCI-SIG has a membership of over 800 companies that develop differentiated, interoperable products based on its specifications. PCI-SIG specifications are available to members of the organization as free downloads. Non-members can purchase hard-copy specifications. See also PICMG References External links PCI-SIG Technology consortia Standards organizations in the United States Beaverton, Oregon Peripheral Component Interconnect
https://en.wikipedia.org/wiki/HyperTransport%20Consortium
The HyperTransport Consortium is an industry consortium responsible for specifying and promoting the computer bus technology called HyperTransport. Organizational form The Technical Working Group along with several Task Forces manage the HyperTransport specification and drive new developments. A Marketing Working Group promotes the use of the technology and the consortium. History It was founded in 2001 by Advanced Micro Devices, Alliance Semiconductor, Apple Computer, Broadcom Corporation, Cisco Systems, NVIDIA, PMC-Sierra, Sun Microsystems, and Transmeta. As of 2009 it has over 50 members. Executives As of 2009, Mike Uhler of AMD is the President of the Consortium, Mario Cavalli is the General Manager, Brian Holden of PMC-Sierra is both the Vice President and the Chair of the Technical Working Group, Deepika Sarai is the Treasurer. External links HyperTransport Consortium web site Technology Page Link Technical Specifications HTX and DUT Connector Specifications White Papers Technology consortia Computer buses
https://en.wikipedia.org/wiki/Link%20aggregation
In computer networking, link aggregation is the combining (aggregating) of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain, and provides redundancy where all but one of the physical links may fail without losing connectivity. A link aggregation group (LAG) is the combined collection of physical ports. Other umbrella terms used to describe the concept include trunking, bundling, bonding, channeling or teaming. Implementation may follow vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet, defined in IEEE 802.1AX or the previous IEEE 802.3ad, but also proprietary protocols. Motivation Link aggregation increases the bandwidth and resilience of Ethernet connections. Bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased tenfold each generation: 10 megabit/s, 100 Mbit/s, 1000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation, which could be cost prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to use link aggregation to combine two physical Ethernet links into one logical link. Most of these early solutions required manual configuration and identical equipment on both sides of the connection. There are three single points of failure inherent to a typical port-cable-port connection, in either a computer-to-switch or a switch-to-switch configuration: the cable itself or either of the ports the cable is plugged into can fail. Multiple logical connections can be made, but many of the higher level protocols were not designed to fail over completely seamlessly. Combining multiple physical connections into one logical connection using link aggregation provides more resilient communications. Architecture Network architects can implement aggregation at an
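Frame distribution across the member links of a LAG is typically done by hashing header fields, so that all frames of a given flow use the same link and arrive in order; a consequence is that one flow cannot exceed the bandwidth of a single member link. A minimal sketch of the idea (the field choice and names are invented, not any vendor's actual algorithm):

```python
import hashlib

def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick a member link for a frame by hashing its flow identifiers.
    All frames of the same flow map to the same link, preserving order."""
    key = f"{src_mac}-{dst_mac}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Two flows over a 4-link aggregation group: each flow sticks to one link
print(choose_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4))
print(choose_link("00:11:22:33:44:56", "66:77:88:99:aa:bb", 4))
```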
https://en.wikipedia.org/wiki/Freifunk
Freifunk (German for: "free radio") is a non-commercial open grassroots initiative to support free computer networks in the German region. Freifunk is part of the international movement for a wireless community network. The initiative counts about 400 local communities with over 41,000 access points. Among them, Münster, Aachen, Munich, Hanover, Stuttgart, and Uelzen are the biggest communities, with more than 1,000 access points each. Aim The main goals of Freifunk are to build a large-scale free wireless Wi-Fi network that is decentralized, owned by those who run it and to support local communication. The initiative is based on the Picopeering Agreement. In this agreement, participants agree upon a network that is free from discrimination, in the sense of net neutrality. Similar grassroots initiatives in Austria and in Switzerland are FunkFeuer and Openwireless. Technology Like many other free community-driven networks, Freifunk uses mesh technology to bring up ad hoc networks by interconnecting multiple Wireless LANs. In a Wi-Fi mobile ad hoc network, all routers connect to each other using special routing software. When a router fails, this software automatically calculates a new route to the destination. This software, the Freifunk firmware, is based on OpenWrt and other free software. There are several different implementations of the firmware depending on the hardware and protocols local communities use. The first Wi-Fi ad hoc network was done in Georgia, USA in 1999 as demonstrated by Toh. It was a six-node implementation running the Associativity-Based Routing protocol on Linux kernel and WAVELAN WiFi. But ABR was a patented protocol. Following that experience, Freifunk worked on standard IETF protocols - the two common standard proposals are OLSR and B.A.T.M.A.N. The development of B.A.T.M.A.N. is driven by Freifunk activists on a volunteering basis. History One of the results of the BerLon workshop in October 2002 on free wireless community networ
https://en.wikipedia.org/wiki/Redundancy%20%28engineering%29
In engineering and systems theory, redundancy is the intentional duplication of critical components or functions of a system with the goal of increasing reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-threaded computer processing. In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of the control system may be triplicated, which is formally termed triple modular redundancy (TMR). An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three sub components, all three of which must fail before the system fails. Since each one rarely fails, and the sub components are designed to preclude common failure modes (which can then be modelled as independent failure), the probability of all three failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such as human error. Electrical surges arising from lightning strikes are an example of a failure mode which is difficult to fully isolate, unless the components are powered from independent power busses and have no direct electrical pathway in their interconnect (communication by some means is required for voting). Redundancy may also be known by the terms "majority voting systems" or "voting logic". Redundancy sometimes produces less, instead of greater, reliability: it creates a more complex system which is prone to various issues, it may lead to human neglect of duty, and it may lead to higher production demands which, by overstressing the system, may make it less safe. Redundancy is one form of robustness as practiced in computer science. Geographic redundancy has become important in the data center industry, to safeguard data against natural disasters and political instability (see below). Forms of redundancy In computer science, there are four major forms of redundancy:
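The reliability gain from triplication can be quantified under the independence assumption described above: a 2-of-3 majority-voted system works when at least two components work, while a purely parallel arrangement fails only when all three fail. A minimal numerical sketch (the reliability figures are invented for illustration):

```python
def all_three_must_fail(r: float) -> float:
    """Parallel redundancy: the system fails only if all three fail."""
    return 1 - (1 - r) ** 3

def majority_vote(r: float) -> float:
    """2-of-3 voted TMR: the system works if at least two components work."""
    return r**3 + 3 * r**2 * (1 - r)

for r in (0.9, 0.99, 0.999):
    print(f"single: {r}   parallel: {all_three_must_fail(r):.9f}"
          f"   TMR: {majority_vote(r):.9f}")
```

Note that voted TMR improves on a single component only when each component is itself reliable (r > 0.5); below that, the voters out-vote the correct unit more often than not.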
https://en.wikipedia.org/wiki/Redundancy%20%28information%20theory%29
In information theory, redundancy measures the fractional difference between the entropy H(X) of an ensemble X, and its maximum possible value log(|A_X|). Informally, it is the amount of wasted "space" used to transmit certain data. Data compression is a way to reduce or eliminate unwanted redundancy, while forward error correction is a way of adding desired redundancy for purposes of error detection and correction when communicating over a noisy channel of limited capacity. Quantitative definition In describing the redundancy of raw data, the rate of a source of information is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the most general case of a stochastic process, it is the limit, as n goes to infinity, of the joint entropy of the first n symbols divided by n. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply the entropy of each symbol, since by definition there is no interdependence of the successive messages of a memoryless source. The absolute rate of a language or source is simply the logarithm of the cardinality of the message space, or alphabet. (This formula is sometimes called the Hartley function.) This is the maximum possible rate of information that can be transmitted with that alphabet. (The logarithm should be taken to a base appropriate for the unit of measurement in use.) The absolute rate is equal to the actual rate if the source is memoryless and has a uniform distribution. The absolute redundancy can then be defined as the difference between the absolute rate and the rate. The ratio of the absolute redundancy to the absolute rate is called the relative redundancy and gives the maximum possible data compression ratio, when expressed as the percentage by which a file size can be decreased. (When expressed as a ratio of original file size to compressed file size, the quantity gives the maxim
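A small numerical illustration of these definitions (the toy distribution is invented): for a memoryless source over a four-symbol alphabet, the rate is the entropy, the absolute rate is the logarithm of the alphabet size, and their difference is the absolute redundancy:

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits per symbol of a memoryless source."""
    return -sum(q * log2(q) for q in p if q > 0)

# Toy memoryless source over a 4-symbol alphabet
p = [0.5, 0.25, 0.125, 0.125]

rate = entropy(p)              # 1.75 bits/symbol
absolute_rate = log2(len(p))   # 2 bits/symbol (the uniform maximum)
absolute_redundancy = absolute_rate - rate
relative_redundancy = absolute_redundancy / absolute_rate

print(rate, absolute_rate, absolute_redundancy, relative_redundancy)
# -> 1.75, 2.0, 0.25, 0.125 (the source could be compressed by 12.5%)
```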
https://en.wikipedia.org/wiki/Global%20Information%20Grid
The Global Information Grid (GIG) is a network of information transmission and processing maintained by the United States Department of Defense. More descriptively, it is a worldwide network of information transmission, of associated processes, and of personnel serving to collect, process, safeguard, transmit, and manage this information. It is an all-encompassing communications project of the United States Department of Defense. The GIG makes this immediately available to military personnel, to those responsible for military politics, and for support personnel. It includes all infrastructure, bought or loaned, of communications, electronics, informatics (including software and databases), and security. It is the most visible manifestation of network-centric warfare. It is the combination of technology and human activity that enables warfighters to access information on demand. It is defined as a "globally interconnected, end-to-end set of information capabilities for collecting, processing, storing, disseminating, and managing information on demand to warfighters, policy makers, and support personnel". The GIG includes owned and leased communications and computing systems and services, software (including applications), data, security services, other associated services, and National Security Systems. Non-GIG Information Technology (IT) includes stand-alone, self-contained, or embedded IT that is not, and will not be, connected to the enterprise network. This new definition removes references to the National Security Systems as defined in section 5142 of the Clinger-Cohen Act of 1996. Further, this new definition removes the references to the GIG providing capabilities from all operating locations (bases, posts, camps, stations, facilities, mobile platforms, and deployed sites). And lastly, this definition removes the part of the definition that discusses the interfaces to coalition, allied, and non-Department of Defense users and systems. The DoD's use of t
https://en.wikipedia.org/wiki/Stanford%20Research%20Institute%20Problem%20Solver
The Stanford Research Institute Problem Solver, known by its acronym STRIPS, is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International. The same name was later used to refer to the formal language of the inputs to this planner. This language is the base for most of the languages for expressing automated planning problem instances in use today; such languages are commonly known as action languages. This article only describes the language, not the planner. Definition A STRIPS instance is composed of: An initial state; The specification of the goal states – situations that the planner is trying to reach; A set of actions. For each action, the following are included: preconditions (what must be established before the action is performed); postconditions (what is established after the action is performed). Mathematically, a STRIPS instance is a quadruple ⟨P, O, I, G⟩, in which each component has the following meaning: P is a set of conditions (i.e., propositional variables); O is a set of operators (i.e., actions); each operator is itself a quadruple ⟨α, β, γ, δ⟩, each element being a set of conditions. These four sets specify, in order, which conditions must be true for the action to be executable, which ones must be false, which ones are made true by the action and which ones are made false; I is the initial state, given as the set of conditions that are initially true (all others are assumed false); G is the specification of the goal state; this is given as a pair ⟨N, M⟩, which specify which conditions are true and false, respectively, in order for a state to be considered a goal state. A plan for such a planning instance is a sequence of operators that can be executed from the initial state and that leads to a goal state. Formally, a state is a set of conditions: a state is represented by the set of conditions that are true in it. Transitions between states are modeled by a transition function, which is a function mapping states into new states that result from the execution of actions.
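A hypothetical sketch of these semantics in Python (the "move from room A to room B" operator and condition names are invented for illustration): states are sets of true conditions, and an operator is a quadruple (pre_true, pre_false, add, delete).

    def applicable(state, op):
        pre_true, pre_false, add, delete = op
        return pre_true <= state and not (pre_false & state)

    def apply_op(state, op):
        # execute an applicable operator: remove deleted conditions, add new ones
        pre_true, pre_false, add, delete = op
        assert applicable(state, op)
        return (state - delete) | add

    move_a_b = (frozenset({"at_a"}), frozenset(), frozenset({"at_b"}), frozenset({"at_a"}))
    initial = frozenset({"at_a"})
    goal_true, goal_false = {"at_b"}, {"at_a"}

    state = apply_op(initial, move_a_b)
    print(goal_true <= state and not (goal_false & state))  # True: a goal state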
https://en.wikipedia.org/wiki/Hackathon
A hackathon (also known as a hack day, hackfest, datathon or codefest; a portmanteau of hacking and marathon) is an event where people engage in rapid and collaborative engineering over a relatively short period of time such as 24 or 48 hours. They are often run using agile software development practices, such as sprint-like design wherein computer programmers and others involved in software development, including graphic designers, interface designers, product managers, project managers, domain experts, and others collaborate intensively on engineering projects, such as software engineering. The goal of a hackathon is to create functioning software or hardware by the end of the event. Hackathons tend to have a specific focus, which can include the programming language used, the operating system, an application, an API, or the subject and the demographic group of the programmers. In other cases, there is no restriction on the type of software being created or the design of the new system. Etymology The word "hackathon" is a portmanteau of the words "hack" and "marathon", where "hack" is used in the sense of exploratory programming, not its alternate meaning as a reference to breaching computer security. OpenBSD's apparent first use of the term referred to a cryptographic development event held in Calgary on June 4, 1999, where ten developers came together to avoid legal problems caused by United States export regulations on cryptographic software. Since then, a further three to six events per year have occurred around the world to advance development, generally on university campuses. For Sun Microsystems, the usage referred to an event at the JavaOne conference from June 15 to June 19, 1999; there John Gage challenged attendees to write a program in Java for the new Palm V using the infrared port to communicate with other Palm users and register it on the Internet. Starting in the mid to late 2000s, hackathons became significantly more widespread.
https://en.wikipedia.org/wiki/Fredholm%27s%20theorem
In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of the Fredholm operator on Banach spaces. The Fredholm alternative is one of the Fredholm theorems. Linear algebra Fredholm's theorem in linear algebra is as follows: if M is a matrix, then the orthogonal complement of the row space of M is the null space of M: (row M)^⊥ = ker M. Similarly, the orthogonal complement of the column space of M is the null space of the adjoint: (col M)^⊥ = ker M*. Integral equations Fredholm's theorem for integral equations is expressed as follows. Let K(x, y) be an integral kernel, and consider the homogeneous equation ∫_a^b K(x, y) φ(y) dy = λ φ(x) and its complex adjoint ∫_a^b K*(y, x) ψ(y) dy = λ* ψ(x). Here, λ* denotes the complex conjugate of the complex number λ, and similarly for K*(y, x). Then, Fredholm's theorem is that, for any fixed value of λ, these equations have either the trivial solution ψ(x) = φ(x) = 0 or have the same number of linearly independent solutions φ_1(x), ..., φ_n(x) and ψ_1(x), ..., ψ_n(x). A sufficient condition for this theorem to hold is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity). Here, the integral is expressed as a one-dimensional integral on the real number line. In Fredholm theory, this result generalizes to integral operators on multi-dimensional spaces, including, for example, Riemannian manifolds. Existence of solutions One of Fredholm's theorems, closely related to the Fredholm alternative, concerns the existence of solutions to the inhomogeneous Fredholm equation λ φ(x) − ∫_a^b K(x, y) φ(y) dy = f(x). Solutions to this equation exist if and only if the function f(x) is orthogonal to the complete set of solutions ψ_n(x) of the corresponding homogeneous adjoint equation: ∫_a^b ψ_n*(x) f(x) dx = 0, where ψ_n*(x) is the complex conjugate of ψ_n(x) and the former is one of the complete set of solutions to λ* ψ_n(x) − ∫_a^b K*(y, x) ψ_n(y) dy = 0. A sufficient condition for this theorem to hold is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b].
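A small numerical check of the linear-algebra statement (an illustration added here, not from the article), using numpy's SVD to compute a null-space basis; the test matrix is an arbitrary rank-1 example:

    import numpy as np

    M = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])        # rank 1, so the null space is 2-dimensional

    # right singular vectors with (numerically) zero singular value span ker M
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > 1e-12).sum())
    N = Vt[rank:].T

    print(np.allclose(M @ N, 0))           # True: every row of M is orthogonal to ker M
    print(N.shape[1])                      # 2 = n - rank(M), consistent with rank-nullity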
https://en.wikipedia.org/wiki/Lipofuscin
Lipofuscin is the name given to fine yellow-brown pigment granules composed of lipid-containing residues of lysosomal digestion. It is considered to be one of the aging or "wear-and-tear" pigments, found in the liver, kidney, heart muscle, retina, adrenals, nerve cells, and ganglion cells. Formation and turnover Lipofuscin appears to be the product of the oxidation of unsaturated fatty acids and may be symptomatic of membrane damage, or damage to mitochondria and lysosomes. Aside from a large lipid content, lipofuscin is known to contain sugars and metals, including mercury, aluminium, iron, copper and zinc. Lipofuscin is also accepted as consisting of oxidized proteins (30–70%) as well as lipids (20–50%). It is a type of lipochrome and is specifically arranged around the nucleus. The accumulation of lipofuscin-like material may be the result of an imbalance between formation and disposal mechanisms. Such accumulation can be induced in rats by administering a protease inhibitor (leupeptin); after a period of three months, the levels of the lipofuscin-like material return to normal, indicating the action of a significant disposal mechanism. However, this result is controversial, as it is questionable if the leupeptin-induced material is true lipofuscin. There exists evidence that "true lipofuscin" is not degradable in vitro; whether this holds in vivo over longer time periods is not clear. The ABCR -/- knockout mouse has delayed dark adaptation but normal final rod threshold relative to controls. Bleaching the retina with strong light leads to formation of the toxic cationic bis-pyridinium salt, N-retinylidene-N-retinyl-ethanolamine (A2E), which causes dry and wet age-related macular degeneration. From this experiment, it was concluded that ABCR has a significant role in preventing formation of A2E in extracellular photoreceptor surfaces during bleach recovery. Relation to diseases Lipofuscin accumulation in the eye is a major risk factor implicated in macular degeneration.
https://en.wikipedia.org/wiki/Product%20order
In mathematics, given partial orders ≤_A and ≤_B on sets A and B, respectively, the product order (also called the coordinatewise order or componentwise order) is a partial ordering ≤ on the Cartesian product A × B. Given two pairs (a₁, b₁) and (a₂, b₂) in A × B, declare that (a₁, b₁) ≤ (a₂, b₂) if a₁ ≤_A a₂ and b₁ ≤_B b₂. Another possible ordering on A × B is the lexicographical order, which is a total ordering. However the product order of two total orders is not in general total; for example, the pairs (0, 1) and (1, 0) are incomparable in the product order of the ordering 0 < 1 with itself. The lexicographic combination of two total orders is a linear extension of their product order, and thus the product order is a subrelation of the lexicographic order. The Cartesian product with the product order is the categorical product in the category of partially ordered sets with monotone functions. The product order generalizes to arbitrary (possibly infinitary) Cartesian products. Suppose A is a set and for every a ∈ A, (I_a, ≤) is a preordered set. Then the product preorder on ∏_{a∈A} I_a is defined by declaring for any i = (i_a)_{a∈A} and j = (j_a)_{a∈A} in ∏_{a∈A} I_a that i ≤ j if and only if i_a ≤ j_a for every a ∈ A. If every (I_a, ≤) is a partial order then so is the product preorder. Furthermore, given a set A, the product order over the Cartesian product ∏_{a∈A} {0, 1} can be identified with the inclusion ordering of subsets of A. The notion applies equally well to preorders. The product order is also the categorical product in a number of richer categories, including lattices and Boolean algebras. See also Direct product of binary relations Examples of partial orders Star product, a different way of combining partial orders Orders on the Cartesian product of totally ordered sets Ordinal sum of partial orders Order theory
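A minimal Python sketch (added for illustration) of the componentwise comparison, contrasted with the lexicographic order that Python tuples use natively:

    def product_le(p, q):
        # (a1, b1) <= (a2, b2) in the product order iff a1 <= a2 and b1 <= b2
        return all(a <= b for a, b in zip(p, q))

    print(product_le((0, 0), (1, 1)))   # True
    print(product_le((0, 1), (1, 0)))   # False: incomparable...
    print(product_le((1, 0), (0, 1)))   # False: ...in both directions
    print((0, 1) <= (1, 0))             # True: tuple comparison is lexicographic,
                                        # a linear extension of the product order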
https://en.wikipedia.org/wiki/Tom%20DeMarco
Tom DeMarco (born August 20, 1940) is an American software engineer, author, and consultant on software engineering topics. He was an early developer of structured analysis in the 1970s. Early life and education Tom DeMarco was born in Hazleton, Pennsylvania. He received a BSEE degree in Electrical Engineering from Cornell University, an M.S. from Columbia University, and a diplôme from the University of Paris. Career DeMarco started working at Bell Telephone Laboratories in 1963, where he participated in the ESS-1 project to develop the first large scale Electronic Switching System, which was installed in telephone offices all over the world. Later in the 1960s he started working for a French IT consulting firm, where he worked on the development of a conveyor system for the new merchandise mart at La Villette in Paris, and in the 1970s on the development of on-line banking systems in Sweden, Holland, France and New York. In the 1970s DeMarco was one of the major figures in the development of structured analysis and structured design in software engineering. In January 1978 he published Structured Analysis and System Specification, a major milestone in the field. In the 1980s with Tim Lister, Stephen McMenamin, John F. Palmer, James Robertson and Suzanne Robertson, he founded the consulting firm "The Atlantic Systems Guild" in New York. The firm initially shared offices with the Dorset House Publishing owned by Wendy Eachan, Tim Lister's wife. Their company developed into a New York- and London-based consulting company specializing in methods and management of software development. DeMarco has lectured and consulted throughout the Americas, Europe, Africa, Australia and the Far East. He has also been a technical advisor for ZeniMax Media, the parent company of video game publisher Bethesda Softworks. He is a member of the ACM and a Fellow of the IEEE. He lives in Camden, Maine, and is a principal of the Atlantic Systems Guild, and a fellow and Senior Consultant of the Cutter Consortium.
https://en.wikipedia.org/wiki/Fan%20film
A fan film is a film or video inspired by a film, television program, comic book, book, or video game created by fans rather than by the source's copyright holders or creators. Fan filmmakers have traditionally been amateurs, but some of the more notable films have actually been produced by professional filmmakers as film school class projects or as demonstration reels. Fan films vary tremendously in quality, as well as in length, from short faux-teaser trailers for non-existent motion pictures to full-length motion pictures. Fan films are also examples of fan labor and the remix culture. Closely related concepts are fandubs, fansubs and vidding, which are reworks by fans of already released film material. History The earliest known fan film is Anderson 'Our Gang', which was produced in 1926 by a pair of itinerant filmmakers. Shot in Anderson, South Carolina, the short is based on the Our Gang film series; the only known copy resides in the University of South Carolina's Newsfilm Library. Various amateur filmmakers created their own fan films throughout the ensuing decades, including a teenaged Hugh Hefner, but the technology required to make fan films was a limiting factor until relatively recently. In the 1960s UCLA film student Don Glut filmed a series of short black and white "underground films", based on adventure and comic book characters from 1940s and 1950s motion picture serials. Around the same time, artist Andy Warhol produced a film called Batman Dracula which could be described as a fan film. But it was not until the 1970s that the popularization of science fiction conventions allowed fans to show their films to the wider fan community. In the 1960s, 1970s, and 1980s, there were many unofficial foreign remakes of American films that today may be considered fan films, such as Süpermenler (Superman), 3 Dev Adam (Spider-Man), Mahakaal (A Nightmare on Elm Street), and Dünyayı Kurtaran Adam (Star Wars). Most of the more prominent science fiction films and t
https://en.wikipedia.org/wiki/Osmoconformer
Osmoconformers are marine organisms that maintain an internal environment which is isotonic to their external environment. This means that the osmotic pressure of the organism's cells is equal to the osmotic pressure of their surrounding environment. By minimizing the osmotic gradient, this subsequently minimizes the net influx and efflux of water into and out of cells. Even though osmoconformers have an internal environment that is isosmotic to their external environment, the types of ions in the two environments differ greatly in order to allow critical biological functions to occur. An advantage of osmoconformation is that such organisms don’t need to expend as much energy as osmoregulators in order to regulate ion gradients. However, to ensure that the correct types of ions are in the desired location, a small amount of energy is expended on ion transport. A disadvantage to osmoconformation is that the organisms are subject to changes in the osmolarity of their environment. Examples Invertebrates Most osmoconformers are marine invertebrates such as echinoderms (such as starfish), mussels, marine crabs, lobsters, jellyfish, ascidians (sea squirts - primitive chordates), and scallops. Some insects are also osmoconformers. Some osmoconformers, such as echinoderms, are stenohaline, which means they can only survive in a limited range of external osmolarities. The survival of such organisms is thus contingent on their external osmotic environment remaining relatively constant. On the other hand, some osmoconformers are classified as euryhaline, which means they can survive in a broad range of external osmolarities. Mussels are a prime example of a euryhaline osmoconformer. Mussels have adapted to survive in a broad range of external salinities due to their ability to close their shells which allows them to seclude themselves from unfavorable external environments. Craniates There are a couple of examples of osmoconformers that are craniates, such as hagfish.
https://en.wikipedia.org/wiki/Brookings%20Report
Proposed Studies on the Implications of Peaceful Space Activities for Human Affairs, often referred to as "the Brookings Report", was a 1960 report commissioned by NASA and created by the Brookings Institution in collaboration with NASA's Committee on Long-Range Studies. It was submitted to the House Committee on Science and Astronautics of the United States House of Representatives in the 87th United States Congress on April 18, 1961. Significance The report has become noted for one short section entitled "The implications of a discovery of extraterrestrial life", which examines the potential implications of such a discovery on public attitudes and values. The section briefly considers possible public reactions to some possible scenarios for the discovery of extraterrestrial life, stressing a need for further research in this area. It recommended continuing studies to determine the likely social impact of such a discovery and its effects on public attitudes, including study of the question of how leadership should handle information about such a discovery and under what circumstances leaders might or might not find it advisable to withhold such information from the public. The significance of this section of the report is a matter of controversy. Persons who believe that extraterrestrial life has already been confirmed and that this information is being withheld by government from the public sometimes turn to this section of the report as support for their view. Frequently cited passages from this section of the report are drawn both from its main body and from its footnotes. The report has been mentioned in newspapers such as The New York Times, The Baltimore Sun, The Washington Times, and the Huffington Post. Background and context The report was entered into the Congressional Record, which is currently archived at over 1110 libraries as part of the Federal Depository Library Program. The main author Donald N. Michael was a "social psychologist with a backg
https://en.wikipedia.org/wiki/FAAC
FAAC or Freeware Advanced Audio Coder is a software project which includes the AAC encoder FAAC and decoder FAAD2. It supports MPEG-2 AAC as well as MPEG-4 AAC. It supports several MPEG-4 Audio object types (LC, Main, LTP for encoding and SBR, PS, ER, LD for decoding), file formats (ADTS AAC, raw AAC, MP4), multichannel and gapless encoding/decoding and MP4 metadata tags. The encoder and decoder are compatible with standard-compliant audio applications using one or more of these object types and facilities. It also supports Digital Radio Mondiale. FAAC and FAAD2, being distributed in C source code form, can be compiled on various platforms and are distributed free of charge. FAAD2 is free software. FAAC contains some code which is published as Free Software, but as a whole it is only distributed under a proprietary license. FAAC was originally written by Menno Bakker. FAAC encoder FAAC stands for Freeware Advanced Audio Coder. The FAAC encoder is an audio compression computer program that creates AAC (MPEG-2 AAC/MPEG-4 AAC) sound files from other formats (usually, CD-DA audio files). It contains a library (libfaac) that can be used by other programs. AAC files are commonly used in computer programs and portable music players, being Apple Inc.'s recommended format for the company's iPod music player. Some of the features that FAAC has are: cross-platform support, "reasonably" fast encoding, support for more than one "object type" of the AAC format, multi-channel encoding, and support for Digital Radio Mondiale streams. It also supports multi-channel streams, like 5.1. The MPEG-4 object types of the AAC format supported by FAAC are the "Low Complexity" (LC), "Main", and "Long Term Prediction" (LTP). The MPEG-2 AAC profiles supported by FAAC are LC and Main. The SBR and PS object types are not supported, so the HE-AAC and HE-AACv2 profiles are also not supported. The object type "Low Complexity" is the default and also happens to be used in videos meant to be play
https://en.wikipedia.org/wiki/Fuzzy%20electronics
Fuzzy electronics is an electronic technology that uses fuzzy logic, instead of the two-state Boolean logic more commonly used in digital electronics. Fuzzy electronics is fuzzy logic implemented on dedicated hardware. This is to be compared with fuzzy logic implemented in software running on a conventional processor. Fuzzy electronics has a wide range of applications, including control systems and artificial intelligence. History The first fuzzy electronic circuit was built by Takeshi Yamakawa et al. in 1980 using discrete bipolar transistors. The first industrial fuzzy application was in a cement kiln in Denmark in 1982. The first VLSI fuzzy electronics was by Masaki Togai and Hiroyuki Watanabe in 1984. In 1987, Yamakawa built the first analog fuzzy controller. The first digital fuzzy processors came in 1988 by Togai (Russo, pp. 2-6). In the early 1990s, the first fuzzy logic chips were presented to the public. Two companies, Omron and NEC, announced the development of dedicated fuzzy electronic hardware in 1991. Two years later, the Japanese Omron Corporation demonstrated a working fuzzy chip at a technical fair. See also Defuzzification Fuzzy set Fuzzy set operations References Bibliography Ahmad M. Ibrahim, Introduction to Applied Fuzzy Electronics. Abraham Kandel, Gideon Langholz (eds), Fuzzy Hardware: Architectures and Applications, Springer Science & Business Media, 2012. Marco Russo, "Fuzzy hardware research from historical point of view", op. cit. chapt. 1. Further reading Yamakawa, T.; Inoue, T.; Ueno, F.; Shirai, Y., "Implementation of Fuzzy Logic hardware systems-Three fundamental arithmetic circuits", Transactions of the Institute of Electronics and Communications Engineers, vol. 63, 1980, pp. 720-721. Togai, M.; Watanabe, H., "A VLSI implementation of a fuzzy inference engine: towards an expert system on a chip", Information Sciences, vol. 38, iss. 2, April 1986, pp. 147-163
https://en.wikipedia.org/wiki/Amyloid%20beta
Amyloid beta (Aβ or Abeta) denotes peptides of 36–43 amino acids that are the main component of the amyloid plaques found in the brains of people with Alzheimer's disease. The peptides derive from the amyloid-beta precursor protein (APP), which is cleaved by beta secretase and gamma secretase to yield Aβ in a cholesterol-dependent process and substrate presentation. Aβ molecules can aggregate to form flexible soluble oligomers which may exist in several forms. It is now believed that certain misfolded oligomers (known as "seeds") can induce other Aβ molecules to also take the misfolded oligomeric form, leading to a chain reaction akin to a prion infection. The oligomers are toxic to nerve cells. The other protein implicated in Alzheimer's disease, tau protein, also forms such prion-like misfolded oligomers, and there is some evidence that misfolded Aβ can induce tau to misfold. A study has suggested that APP and its amyloid potential are of ancient origin, dating as far back as early deuterostomes. Normal function The normal function of Aβ is not well understood. Though some animal studies have shown that the absence of Aβ does not lead to any obvious loss of physiological function, several potential activities have been discovered for Aβ, including activation of kinase enzymes, protection against oxidative stress, regulation of cholesterol transport, functioning as a transcription factor, and anti-microbial activity (potentially associated with Aβ's pro-inflammatory activity). The glymphatic system clears metabolic waste from the mammalian brain, and in particular amyloid beta. A number of proteases have been implicated by both genetic and biochemical studies as being responsible for the recognition and degradation of amyloid beta; these include insulin degrading enzyme and presequence protease. The rate of removal is significantly increased during sleep. However, the significance of the glymphatic system in Aβ clearance in Alzheimer's disease is unknown.
https://en.wikipedia.org/wiki/Vanguard%20%28microkernel%29
Vanguard is a discontinued experimental microkernel developed at Apple Computer, in the research-oriented Apple Advanced Technology Group (ATG) in the early 1990s. Based on the V-System, Vanguard introduced standardized object identifiers and a unique message chaining system for improved performance. Vanguard was not used in any of Apple's commercial products. Development ended in 1993 when Ross Finlayson, the project's main investigator, left Apple. Basic concepts Vanguard was generally very similar to the V-System, but added support for true object-oriented programming of the operating system. This meant that kernel and server interfaces were exported as objects, which could be inherited and extended in new code. This change has no visible effect on the system; it is mainly a change in the source code that makes programming easier. For example, Vanguard had an input/output (I/O) class which was supported by several different servers, for example, networking and file servers, which new applications could interact with by importing the I/O interface and calling methods. This also made writing new servers much easier, because they had a standard to program to, and were able to share code more easily. V messaging semantics A key concept to almost all microkernels is to break down one larger kernel into a set of communicating servers. Instead of having one larger program controlling all of the hardware of a computer system, the various duties are apportioned among smaller programs that are given rights to control different parts of the machine. For example, one server can be given control of the networking hardware, while another has the task of managing the hard disk drives. Another server would handle the file system, calling both of these lower-level servers. User applications ask for services by sending messages to these servers, using some form of inter-process communications (IPC), in contrast to asking the kernel to do this work via a system call (syscall) or
https://en.wikipedia.org/wiki/Location%20transparency
In computer networks, location transparency is the use of names to identify network resources, rather than their actual location. For example, files are accessed by a unique file name, but the actual data is stored in physical sectors scattered around a disk in either the local computer or in a network. In a location transparency system, the actual location where the file is stored doesn't matter to the user. A distributed system will need to employ a networked scheme for naming resources. The main benefit of location transparency is that it no longer matters where the resource is located. Depending on how the network is set up, the user may be able to obtain files that reside on another computer connected to the particular network. This means that the location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development. An additional benefit is the flexibility it provides. System resources can be moved to a different computer at any time without disrupting any software systems running on them. By simply updating the location that goes with the named resource, every program using that resource will be able to find it. Location transparency also benefits end users: data can be accessed by anyone who can connect to the network, knows the right file names, and has the proper security credentials. See also Transparency (computing)
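A hypothetical sketch of the idea in Python (the registry contents and server names are invented): a name service maps stable resource names to current locations, so clients never hard-code where a resource lives.

    registry = {"reports/q3.pdf": "fileserver-1.example.net:/vol0/q3.pdf"}

    def resolve(name):
        # clients look up resources by name only, never by physical location
        return registry[name]

    print(resolve("reports/q3.pdf"))   # fileserver-1.example.net:/vol0/q3.pdf

    # migrating the file is invisible to clients: only the registry entry changes
    registry["reports/q3.pdf"] = "fileserver-2.example.net:/vol7/q3.pdf"
    print(resolve("reports/q3.pdf"))   # same name, new location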
https://en.wikipedia.org/wiki/Error%20catastrophe
Error catastrophe refers to the cumulative loss of genetic information in a lineage of organisms due to high mutation rates. The mutation rate above which error catastrophe occurs is called the error threshold. Both terms were coined by Manfred Eigen in his mathematical evolutionary theory of the quasispecies. The term is most widely used to refer to mutation accumulation to the point of inviability of the organism or virus, where it cannot produce enough viable offspring to maintain a population. This use of Eigen's term was adopted by Lawrence Loeb and colleagues to describe the strategy of lethal mutagenesis to cure HIV by using mutagenic ribonucleoside analogs. There was an earlier use of the term introduced in 1963 by Leslie Orgel in a theory for cellular aging, in which errors in the translation of proteins involved in protein translation would amplify the errors until the cell was inviable. This theory has not received empirical support. Error catastrophe is predicted in certain mathematical models of evolution and has also been observed empirically. Like every organism, viruses 'make mistakes' (or mutate) during replication. The resulting mutations increase biodiversity among the population and help subvert the ability of a host's immune system to recognise it in a subsequent infection. The more mutations the virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see the article on biodiversity for an explanation of the selective advantages of this). However, if it makes too many mutations, it may lose some of its biological features which have evolved to its advantage, including its ability to reproduce at all. The question arises: how many mutations can be made during each replication before the population of viruses begins to lose self-identity? Basic mathematical model Consider a virus which has a genetic identity modeled by a string of ones and zeros (e.g.
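A toy simulation of the bit-string genome model sketched above (an illustration under simplified assumptions, not Eigen's quasispecies equations): replicate a genome with per-bit mutation rate u and measure how many offspring remain identical to the master sequence.

    import random

    def replicate(genome, u):
        # flip each bit independently with probability u
        return [b ^ (random.random() < u) for b in genome]

    def fraction_intact(u, length=100, offspring=1000):
        master = [0] * length
        copies = [replicate(master, u) for _ in range(offspring)]
        return sum(c == master for c in copies) / offspring

    for u in (0.001, 0.01, 0.1):
        # the expected intact fraction is (1 - u)**length, collapsing sharply
        # as u grows: past a threshold, the master sequence cannot persist
        print(u, fraction_intact(u))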
https://en.wikipedia.org/wiki/EMachines%20eOne
The eOne is an all-in-one desktop computer that was produced by eMachines in 1999. It resembles Apple's "Bondi Blue" iMac. Apple sued eMachines for allegedly infringing upon the distinctive trade dress of the iMac with the eOne. Apple and eMachines settled the case in 2000, which required the model to be discontinued. History and legal issues Upon its release in 1999, the eOne came with a translucent "cool blue" case, while the original iMac had a two-toned case with "Bondi Blue" accents. At US$799, the eOne was also cheaper than the US$1,199 iMac. eMachines hoped to avoid legal trouble because the shape of the computer was different from the iMac. However, Apple sued eMachines, alleging that the computer's design infringed upon the protected trade dress of the iMac. In March 2000, eMachines reached a settlement with Apple, under which it agreed to discontinue the infringing model. The eOne was available at Circuit City and Micro Center, but because of the Apple lawsuit it was on sale for only a few months and did not sell well; as a result, the eOne is widely considered a failure for eMachines. The eOne was discontinued in 2002, and due to its lackluster sales, is rare in the secondary market. Technical specifications The eOne had a 433 MHz Intel Celeron microprocessor, 64 megabytes of PC-100 SDRAM, a 15-inch CRT monitor, a 10BASE-T Ethernet port, a floppy drive, an 8 MB ATI video card, a 56k modem, and a CD-ROM drive, along with the ability to use two PC cards, which were commonly used to expand the capabilities of laptops. As a Wintel-based computer, the eOne ran Windows 98 or Windows Me depending on the time of manufacture, as opposed to the iMac running Mac OS 8 or Mac OS 9. Legacy In 2007, three years after acquiring eMachines, Gateway released the One, an all-in-one desktop computer similar to the eOne but in black and utilizing a flat-screen monitor.
https://en.wikipedia.org/wiki/Cleaning%20station
A cleaning station is a location where aquatic life congregate to be cleaned by smaller creatures. Such stations exist in both freshwater and marine environments, and are used by animals including fish, sea turtles and hippos, referred to as clients. The cleaning process includes the removal of parasites from the animal's body (both externally and internally), and is performed by various smaller animals including cleaner shrimp and numerous species of cleaner fish, especially wrasses and gobies (Elacatinus spp.), collectively referred to as cleaners. When the animal approaches a cleaning station, it will open its mouth wide or position its body in such a way as to signal that it needs to be cleaned. The cleaner fish will then remove and eat the parasites from the skin, even swimming into the mouth and gills of any fish being cleaned. This is a form of cleaning symbiosis. How predator clients recognize cleaners is still uncertain. It has been hypothesized that color, size, and pattern indicate to clients that an organism is a cleaner. For example, cleaning gobies tend to exhibit full-body lateral stripes, unlike their non-cleaning counterparts, which exhibit shorter lateral stripes. Cleaners also tend to be smaller because in fish species, usually juveniles are cleaners. Cleaning stations may be associated with coral reefs, located either on top of a coral head or in a slot between two outcroppings. Other cleaning stations may be located under large clumps of floating seaweed or at an accepted point in a river or lagoon. Cleaning stations are an example of mutualism between cleaners and clients. Cleaner fish may also impact species diversity around coral reefs. Some clients have smaller home ranges and can only access one cleaning station. Clients with larger home ranges are able to access a variety of cleaning stations and are capable of choosing between cleaning stations. Visitor clients travel long distances to a cleaning station and are not local to the
https://en.wikipedia.org/wiki/Machine%20coordinate%20system
In the manufacturing industry, with regard to numerically controlled machine tools, the phrase machine coordinate system refers to the physical limits of the motion of the machine in each of its axes, and to the numerical coordinate which is assigned (by the machine tool builder) to each of these limits. CNC Machinery refers to machines and devices that are controlled by using programmed commands which are encoded onto a storage medium, and NC refers to the automation of machine tools that are operated by abstract commands programmed and encoded onto a storage medium. Types of Machine Coordinate Systems The absolute coordinate system uses the cartesian coordinate system, where a point on the machine is specifically defined. The cartesian coordinate system is a set of three number lines labeled X, Y, and Z, which are used to determine the point in the workspace that the machine needs to operate in. This absolute coordinate system allows the machine operator to edit the machine code in a way where the specifically defined section is easy to pinpoint. Before putting in these coordinates, though, the machine operator needs to set a point of origin on the machine. The point of origin in the cartesian system is 0, 0, 0. This allows the machine operator to know which directions are positive and negative in the cartesian plane. It also makes sure that every move made is based on the distance from this origin point. The relative coordinate system, also known as the incremental coordinate system, also uses the cartesian coordinate system, but in a different manner. The relative coordinate system allows the machine operator to define a point in the workspace based on, or relative to, the previous point that the machine tool was at. This means that after every move the machine tool makes, the point that it ends up at is based on the distance from the previous point. So, the origin set on the machine changes after every move. The polar coordinate system does not use the cartesian coordinate system.
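A minimal Python sketch of the relationship between the two systems (the move list is an invented example): incremental moves are accumulated from the origin to recover the equivalent absolute coordinates.

    def to_absolute(origin, incremental_moves):
        # each incremental move is measured from the previous tool position
        x, y, z = origin
        path = []
        for dx, dy, dz in incremental_moves:
            x, y, z = x + dx, y + dy, z + dz
            path.append((x, y, z))
        return path

    moves = [(10, 0, 0), (0, 5, 0), (-2, 0, -1)]
    print(to_absolute((0, 0, 0), moves))
    # [(10, 0, 0), (10, 5, 0), (8, 5, -1)] - the same cuts as absolute points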
https://en.wikipedia.org/wiki/Van%20der%20Pol%20oscillator
In the study of dynamical systems, the van der Pol oscillator (named for Dutch physicist Balthasar van der Pol) is a non-conservative, oscillating system with non-linear damping. It evolves in time according to the second-order differential equation d²x/dt² − μ(1 − x²) dx/dt + x = 0, where x is the position coordinate—which is a function of the time t—and μ is a scalar parameter indicating the nonlinearity and the strength of the damping. History The Van der Pol oscillator was originally proposed by the Dutch electrical engineer and physicist Balthasar van der Pol while he was working at Philips. Van der Pol found stable oscillations, which he subsequently called relaxation-oscillations and are now known as a type of limit cycle, in electrical circuits employing vacuum tubes. When these circuits are driven near the limit cycle, they become entrained, i.e. the driving signal pulls the current along with it. Van der Pol and his colleague, van der Mark, reported in the September 1927 issue of Nature that at certain drive frequencies an irregular noise was heard, which was later found to be the result of deterministic chaos. The Van der Pol equation has a long history of being used in both the physical and biological sciences. For instance, in biology, Fitzhugh and Nagumo extended the equation in a planar field as a model for action potentials of neurons. The equation has also been utilised in seismology to model the two plates in a geological fault, and in studies of phonation to model the right and left vocal fold oscillators. Two-dimensional form Liénard's theorem can be used to prove that the system has a limit cycle. Applying the Liénard transformation y = x − x³/3 − ẋ/μ, where the dot indicates the time derivative, the Van der Pol oscillator can be written in its two-dimensional form: ẋ = μ(x − x³/3 − y), ẏ = x/μ. Another commonly used form based on the transformation y = ẋ leads to: ẋ = y, ẏ = μ(1 − x²)y − x. Results for the unforced oscillator When μ = 0, i.e. there is no damping function, the equation becomes d²x/dt² + x = 0. This is a form of the simple harmonic oscillator, and there is always conservation of energy.
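A minimal numerical sketch of the two-dimensional (Liénard) form above, integrated with a fixed-step Euler loop to avoid external dependencies; the initial condition, μ value, and step size are arbitrary choices for illustration.

    def van_der_pol_step(x, y, mu, dt):
        # the two-dimensional form: dx/dt = mu*(x - x^3/3 - y), dy/dt = x/mu
        dx = mu * (x - x**3 / 3 - y)
        dy = x / mu
        return x + dt * dx, y + dt * dy

    x, y, mu, dt = 0.5, 0.0, 2.0, 1e-3
    for _ in range(200000):                        # integrate for 200 time units,
        x, y = van_der_pol_step(x, y, mu, dt)      # long enough to reach the limit cycle
    print(round(x, 2), round(y, 2))                # a point on the limit cycle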
https://en.wikipedia.org/wiki/KRON-TV
KRON-TV (channel 4) is a television station licensed to San Francisco, California, United States, serving as the San Francisco Bay Area's outlet for The CW Television Network. The station also maintains a secondary affiliation with MyNetworkTV. Owned and operated by The CW's majority owner, Nexstar Media Group, KRON-TV has studios on Front Street in the city's historic Northeast Waterfront, in the same building as ABC owned-and-operated station KGO-TV, channel 7 (but with completely separate operations from that station). The transmitting antenna is located atop Sutro Tower in San Francisco. History NBC affiliation (1949–2001) In 1948, the Federal Communications Commission (FCC) authorized a construction permit by the Chronicle Publishing Company, publishers of the San Francisco Chronicle daily newspaper, for a new television station in San Francisco, KRON-TV. Chronicle Publishing was founded by brothers Charles and Michael de Young. The company already owned radio station KRON-FM. Managed by Michael de Young's grandson Charles de Young Thieriot, KRON signed on the air on November 15, 1949, as a full-time NBC affiliate. Its opening night program schedule included a special about San Francisco entertainment followed by the usual NBC prime time lineup of the Texaco Star Theater with Milton Berle, The Life of Riley, Mohawk Showroom, and The Chesterfield Supper Club. KRON-TV was the third television outlet in the Bay Area behind KGO-TV (channel 7) and KPIX-TV (channel 5), all going on the air within a year, and the last license before the FCC placed a moratorium on new television station licenses that would last the next four years. KRON-TV originally broadcast from studios located in the basement of the Chronicle Building at Fifth and Mission Streets. Newscasts benefited from the resources of the Chronicle and there was cooperation between KRON-TV and the newspaper. It originally maintained transmitter facilities, master control and a small insert studio on San Bruno Mountain.
https://en.wikipedia.org/wiki/Geobiology
Geobiology is a field of scientific research that explores the interactions between the physical Earth and the biosphere. It is a relatively young field, and its borders are fluid. There is considerable overlap with the fields of ecology, evolutionary biology, microbiology, paleontology, and particularly soil science and biogeochemistry. Geobiology applies the principles and methods of biology, geology, and soil science to the study of the ancient history of the co-evolution of life and Earth as well as the role of life in the modern world. Geobiologic studies tend to be focused on microorganisms, and on the role that life plays in altering the chemical and physical environment of the pedosphere, which exists at the intersection of the lithosphere, atmosphere, hydrosphere and/or cryosphere. It differs from biogeochemistry in that the focus is on processes and organisms over space and time rather than on global chemical cycles. Geobiological research synthesizes the geologic record with modern biologic studies. It deals with process - how organisms affect the Earth and vice versa - as well as history - how the Earth and life have changed together. Much research is grounded in the search for fundamental understanding, but geobiology can also be applied, as in the case of microbes that clean up oil spills. Geobiology employs molecular biology, environmental microbiology, organic geochemistry, and the geologic record to investigate the evolutionary interconnectedness of life and Earth. It attempts to understand how the Earth has changed since the origin of life and what it might have been like along the way. Some definitions of geobiology even push the boundaries of this time frame - to understanding the origin of life and to the role that humans have played and will continue to play in shaping the Earth in the Anthropocene. History The term geobiology was coined by Lourens Baas Becking in 1934. In his words, geobiology "is an attempt to describe the relationship b
https://en.wikipedia.org/wiki/Szemer%C3%A9di%20regularity%20lemma
Szemerédi's regularity lemma is one of the most powerful tools in extremal graph theory, particularly in the study of large dense graphs. It states that the vertices of every large enough graph can be partitioned into a bounded number of parts so that the edges between different parts behave almost randomly. According to the lemma, no matter how large a graph is, we can approximate it with the edge densities between a bounded number of parts. Between any two parts, the distribution of edges will be pseudorandom as per the edge density. These approximations provide essentially correct values for various properties of the graph, such as the number of embedded copies of a given subgraph or the number of edge deletions required to remove all copies of some subgraph. Statement To state Szemerédi's regularity lemma formally, we must formalize what the edge distribution between parts behaving 'almost randomly' really means. By 'almost random', we're referring to a notion called ε-regularity. To understand what this means, we first state some definitions. In what follows G is a graph with vertex set V. Definition 1. Let X, Y be disjoint subsets of V. The edge density of the pair (X, Y) is defined as: d(X, Y) := |E(X, Y)| / (|X| |Y|), where E(X, Y) denotes the set of edges having one end vertex in X and one in Y. We call a pair of parts ε-regular if, whenever you take a large subset of each part, their edge density isn't too far off the edge density of the pair of parts. Formally, Definition 2. For ε > 0, a pair of vertex sets X and Y is called ε-regular, if for all subsets A ⊆ X, B ⊆ Y satisfying |A| ≥ ε|X|, |B| ≥ ε|Y|, we have |d(X, Y) − d(A, B)| ≤ ε. The natural way to define an ε-regular partition should be one where each pair of parts is ε-regular. However, some graphs, such as the half graphs, require many pairs of partitions (but a small fraction of all pairs) to be irregular. So we shall define ε-regular partitions to be one where most pairs of parts are ε-regular. Definition 3. A partition of V into k sets P = {V₁, ..., V_k} is called an ε-regular partition if the sum of |V_i| |V_j| over all pairs (V_i, V_j) that are not ε-regular is at most ε |V|². Now we can state the lemma: Szemerédi's Regularity Lemma. For every ε > 0 and positive integer m there exists an integer M such that every graph with at least M vertices admits an ε-regular partition of its vertex set into k parts, with m ≤ k ≤ M.
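A small Python sketch of Definition 1 (an illustration added here; a full brute-force ε-regularity check is exponential and only hinted at by the subset comparison):

    def edge_density(edges, X, Y):
        # fraction of possible X-Y edges that are present
        crossing = sum(1 for u, v in edges
                       if (u in X and v in Y) or (u in Y and v in X))
        return crossing / (len(X) * len(Y))

    # complete bipartite graph between {0, 1} and {2, 3}
    edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
    print(edge_density(edges, {0, 1}, {2, 3}))  # 1.0
    print(edge_density(edges, {0}, {3}))        # 1.0 - every subset pair keeps the
                                                # same density, the hallmark of an
                                                # epsilon-regular pair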
https://en.wikipedia.org/wiki/Tux%20Paint
Tux Paint is a free and open source raster graphics editor geared towards young children. The project was started in 2002 by Bill Kendrick who continues to maintain and improve it, with help from numerous volunteers. Tux Paint is seen by many as a free software alternative to Kid Pix, a similar proprietary educational software product. History Tux Paint was initially created for the Linux operating system, as there was no suitable drawing program for young children available for Linux at that time. It is written in the C programming language and uses various free and open source helper libraries, including the Simple DirectMedia Layer (SDL), and has since been made available for Microsoft Windows, Apple macOS, Android, Haiku, and other platforms. Selected milestone releases: 2002.06.16 (June 16, 2002) - Initial release (brushes, stamps, lines, eraser), two days after coding started 2002.06.30 (June 30, 2002) - First Magic Tools added (blur, blocks, negative) 2002.07.31 (July 31, 2002) - Localization support added 0.9.11 (June 17, 2003) - Right-to-left support, UTF-8 support in Text tool 0.9.14 (October 12, 2004) - Tux Paint Config. configuration tool released, Starter image support 0.9.16 (October 21, 2006) - Slideshow feature, animated and directional brushes 0.9.17 (July 1, 2007) - Arbitrary screen size and orientation support, SVG support, input method support 0.9.18 (November 21, 2007) - Magic Tools turned into plug-ins, Pango text rendering 0.9.25 (December 20, 2020) - Support for exporting individual drawings and slideshows (as animated GIFs) 0.9.28 (June 4, 2022) - 20-year milestone release, adds the ability to use any colour by setting hue and saturation instead of a static palette. 0.9.29 (April 2, 2023) - Introduces fifteen new Magic Tools, improvements to the Stamp and Shapes tool and a new quick start guide. Features Tux Paint stands apart from typical graphics editing software (such as GIMP or Photoshop) in that it was designed to be usable by young children.
https://en.wikipedia.org/wiki/HCLTech
HCL Technologies Limited, d/b/a HCLTech (formerly Hindustan Computers Pvt. Limited), is an Indian multinational information technology (IT) services and consulting company headquartered in Noida. The founder of HCLTech is Shiv Nadar. It emerged as an independent company in 1991 when HCL entered into the software services business. The company has offices in 52 countries and over 225,944 employees. HCLTech is on the Forbes Global 2000 list. It is among the top 20 largest publicly traded companies in India with a market capitalization of ₹281,209 crore as of March 2022. It is one of the top Big Tech (India) companies. History Formation and early years In 1976, a group of eight engineers, all former employees of Delhi Cloth & General Mills, led by Shiv Nadar, started a company that would make personal computers. Initially floated as Microcomp Limited, Nadar and his team (which also included Arjun Malhotra, Ajai Chowdhry, D.S. Puri, Yogesh Vaidya and Subhash Arora) started selling teledigital calculators to gather capital for their main product. On 11 August 1976, the company was renamed Hindustan Computers Limited (HCL). The company originally was focused on hardware but, via HCL Technologies, software and services became a main focus. HCL Technologies began as the R&D Division of HCL Enterprise, a company which was a contributor to the development and growth of the IT and computer industry in India. HCL Enterprise developed an indigenous microcomputer in 1978, and a networking OS and client-server architecture in 1983. On 12 November 1991, HCL Technologies was spun off as a separate unit to provide software services. Later subsidiaries included HCL Infosystems and HCL Healthcare. On 12 November 1991, a company called HCL Overseas Limited was incorporated as a provider of technology development services. It received the certificate of commencement of business on 10 February 1992 after which it began its operations. Two years later, in July 1994, the company name was changed to HCL Consulting Limited.
https://en.wikipedia.org/wiki/Bloom%20%28novel%29
Bloom, written in 1998, is the fifth science fiction novel written by Wil McCarthy. It was first released as a hardcover in September 1998. Almost a year later, in August 1999, its first mass market edition was published. An ebook reprint was published in 2011. Bloom is one of Borders' "Best 10 Books of 1998" and is a New York Times Notable Book. The premise of the book is how to handle human technology that has evolved beyond human control. Plot summary Bloom is set in the year 2106, in a world where self-replicating nanomachines called "Mycora" have consumed Earth and other planets of the inner Solar System, forcing humankind to eke out a bleak living in the asteroids and Galilean moons. Two groups of humanity are described—The Immunity, who use "ladderdown" technology and augmented reality and live on the moons of Jupiter, and The Gladholders, who use human intelligence amplification and artificial intelligence and live in the asteroid belt. The story begins on Ganymede with an article about a "bloom", or outbreak of Mycora, that serves to emphasize the danger and horror of this technogenic life (TGL). The article is written by Strasheim, the primary narrator character. He is first seen in the office of Chief of Immunology Lottick, the effective ruler of Ganymede, who has called him there for an unknown purpose. Lottick tells Strasheim that the Mycora have apparently been stealing or assimilating human designed defensive nanotech and may soon develop resistance to the coldness of the outer Solar System, which incites concern. A mission is planned to drop TGL detectors onto the polar ice caps of Mars, Earth, and the Moon, and Lottick asks Strasheim to go along as a reporter. For the longer term, a starship is being constructed to colonize other star systems before the Mycora. Strasheim agrees, and goes to meet the other crew-members and inspect the ship, which is called the Louis Pasteur. The ship is technologically camouflaged to protect the crew.
https://en.wikipedia.org/wiki/Multiresolution%20analysis
A multiresolution analysis (MRA) or multiscale approximation (MSA) is the design method of most of the practically relevant discrete wavelet transforms (DWT) and the justification for the algorithm of the fast wavelet transform (FWT). It was introduced in this context in 1988/89 by Stephane Mallat and Yves Meyer and has predecessors in the microlocal analysis in the theory of differential equations (the ironing method) and the pyramid methods of image processing as introduced in 1981/83 by Peter J. Burt, Edward H. Adelson and James L. Crowley. Definition A multiresolution analysis of the Lebesgue space L²(R) consists of a sequence of nested subspaces ⋯ ⊂ V_1 ⊂ V_0 ⊂ V_{−1} ⊂ ⋯ that satisfies certain self-similarity relations in time-space and scale-frequency, as well as completeness and regularity relations. Self-similarity in time demands that each subspace V_k is invariant under shifts by integer multiples of 2^k. That is, for each g ∈ V_k and m ∈ Z, the function g_m defined as g_m(x) = g(x − m 2^k) is also contained in V_k. Self-similarity in scale demands that all subspaces are time-scaled versions of each other, with scaling respectively dilation factor 2^(k−l). I.e., for each g ∈ V_k there is a h ∈ V_l with g(x) = h(2^(k−l) x). In the sequence of subspaces, for k > l the space resolution 2^l of the l-th subspace is higher than the resolution 2^k of the k-th subspace. Regularity demands that the model subspace V_0 be generated as the linear hull (algebraically or even topologically closed) of the integer shifts of one or a finite number of generating functions φ or φ_1, ..., φ_r. Those integer shifts should at least form a frame for the subspace V_0 ⊂ L²(R), which imposes certain conditions on the decay at infinity. The generating functions are also known as scaling functions or father wavelets. In most cases one demands of those functions to be piecewise continuous with compact support. Completeness demands that those nested subspaces fill the whole space, i.e., their union should be dense in L²(R), and that they are not too redundant, i.e., their intersection should only contain the zero element.
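A hedged sketch of the simplest case, the Haar system, in Python: one filter-bank step splits a signal regarded as living in a fine space into a coarser approximation plus the detail lost in going coarser, which is the computation the fast wavelet transform repeats at every scale (the sample signal is an invented example).

    def haar_step(signal):
        # averages give the coarse approximation, half-differences the detail
        assert len(signal) % 2 == 0
        approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        return approx, detail

    s = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
    approx, detail = haar_step(s)
    print(approx)  # [5.0, 11.0, 7.0, 5.0]  - the signal at half the resolution
    print(detail)  # [-1.0, -1.0, 1.0, 0.0] - what is needed to reconstruct s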
https://en.wikipedia.org/wiki/Institution%20of%20Incorporated%20Engineers
The Institution of Incorporated Engineers (IIE) was a multidisciplinary engineering institution in the United Kingdom. In 2006 it merged with the Institution of Electrical Engineers (IEE) to form the Institution of Engineering and Technology (IET). Before the merger the IIE had approximately 40,000 members. The IET is now the second largest engineering society in the world next to the IEEE. The IET has the authority to establish professional registration of engineers (Chartered Engineer or Incorporated Engineer) through the Engineering Council. The IEEE has no equivalent authority to register engineers in its own environment. History The IIE traces its heritage to the Vulcanic Society that was founded in 1884. The Vulcanic Society was formed by a group of apprentices from the works of Maudslay, Son & Field Ltd, in Lambeth, London. This society went through three name changes before it became the Junior Institution of Engineers in 1902, which became the Institution of General Technician Engineers in 1970 and the Institute of Mechanical and General Technician Engineers (IMGTechE) in 1976. In 1982 the IMGTechE and Institution of Technician Engineers in Mechanical Engineering (ITEME) merged to form the Institution of Mechanical Incorporated Engineers (IMechIE). The Institution of Electrical and Electronic Incorporated Engineers (IEEIE) and the Society of Electronic and Radio Technicians (SERT) merged in 1990 to form the Institution of Electronics and Electrical Incorporated Engineers (IEEIE). The IIE was formed in April 1998 by the merger of the IMechIE, the IEEIE and The Institute of Engineers and Technicians (IET). In 1999 The Institution of Incorporated Executive Engineers (IIExE) merged with the IIE. In October 2001, IIE received a Royal Charter in recognition of the significant contribution of its members to the UK economy and society. In 2005 The Society of Engineers also merged with the IIE. Formation of the IET Discussions
https://en.wikipedia.org/wiki/Institution%20of%20Engineering%20and%20Technology
The Institution of Engineering and Technology (IET) is a multidisciplinary professional engineering institution. The IET was formed in 2006 from two separate institutions: the Institution of Electrical Engineers (IEE), dating back to 1871, and the Institution of Incorporated Engineers (IIE) dating back to 1884. Its worldwide membership is currently in excess of 158,000 in 153 countries. The IET's main offices are in Savoy Place in London, England, and at Michael Faraday House in Stevenage, England. In the United Kingdom, the IET has the authority to establish professional registration for the titles of Chartered Engineer, Incorporated Engineer, Engineering Technician, and ICT Technician, as a licensed member institution of the Engineering Council. The IET is registered as a charity in England, Wales and Scotland. Formation Discussions started in 2004 between the IEE and the IIE about merging to form a new institution. In September 2005, both institutions held votes of the merger and the members voted in favour (73.5% IEE, 95.7% IIE). This merger also needed government approval, so a petition was then made to the Privy Council of the United Kingdom for a Supplemental Charter, to allow the creation of the new institution. This was approved by the Privy Council on 14 December 2005, and the new institution emerged on 31 March 2006. History of the IEE The Society of Telegraph Engineers (STE) was formed on 17 May 1871, and it published the Journal of the Society of Telegraph Engineers from 1872 through 1880. Carl Wilhelm Siemens was the society's first President, in 1872. On 22 December 1880, the STE was renamed as the Society of Telegraph Engineers and of Electricians and, as part of this change, it renamed its journal the Journal of the Society of Telegraph Engineers and of Electricians (1881–1882) and later the Journal of the Society of Telegraph-Engineers and Electricians (1883–1888). Following a meeting of its Council on 10 November 1887, it was decided to adopt the name of the Institution of Electrical Engineers (IEE).
https://en.wikipedia.org/wiki/Content-addressable%20network
The content-addressable network (CAN) is a distributed, decentralized P2P infrastructure that provides hash table functionality on an Internet-like scale. CAN was one of the original four distributed hash table proposals, introduced concurrently with Chord, Pastry, and Tapestry. Overview Like other distributed hash tables, CAN is designed to be scalable, fault tolerant, and self-organizing. The architectural design is a virtual multi-dimensional Cartesian coordinate space, a type of overlay network, on a multi-torus. This n-dimensional coordinate space is a virtual logical address, completely independent of the physical location and physical connectivity of the nodes. Points within the space are identified with coordinates. The entire coordinate space is dynamically partitioned among all the nodes in the system such that every node possesses at least one distinct zone within the overall space. Routing A CAN node maintains a routing table that holds the IP address and virtual coordinate zone of each of its neighbors. A node routes a message towards a destination point in the coordinate space. The node first determines which neighboring zone is closest to the destination point, and then looks up that zone's node's IP address via the routing table. Node joining To join a CAN, a joining node must: Find a node already in the overlay network. Identify a zone that can be split. Update the routing tables of nodes neighboring the newly split zone. To find a node already in the overlay network, bootstrapping nodes may be used to inform the joining node of IP addresses of nodes currently in the overlay network. After the joining node receives an IP address of a node already in the CAN, it can attempt to identify a zone for itself. The joining node randomly picks a point in the coordinate space and sends a join request, directed to the random point, to one of the received IP addresses. The nodes already in the overlay network route the join request to the correct destination.
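A hypothetical sketch of CAN-style greedy routing on a 2-d coordinate space (the grid of zone centres and neighbor lists are invented): each hop forwards the message to whichever neighbor's zone is closest to the destination point.

    def dist2(p, q):
        # squared Euclidean distance in the virtual coordinate space
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    def route(neighbors_of, start, target):
        node, path = start, [start]
        while dist2(node, target) > 0:
            nxt = min(neighbors_of[node], key=lambda n: dist2(n, target))
            if dist2(nxt, target) >= dist2(node, target):
                break                    # no neighbor is closer: deliver locally
            node = nxt
            path.append(node)
        return path

    # zone centres on a small grid, each node knowing its grid neighbors
    neighbors_of = {
        (0, 0): [(0, 1), (1, 0)], (0, 1): [(0, 0), (1, 1)],
        (1, 0): [(0, 0), (1, 1)], (1, 1): [(0, 1), (1, 0)],
    }
    print(route(neighbors_of, (0, 0), (1, 1)))  # [(0, 0), (0, 1), (1, 1)]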
https://en.wikipedia.org/wiki/Codan
Codan Limited is a manufacturer and supplier of communications, metal detection, and mining technology, headquartered in Adelaide, South Australia, with revenue of A$348.0 million (2020). Codan Limited is both the parent company of the Codan group and its communications business unit, doing business through its Radio Communications operating segment. This product range is sold to customers in more than 150 countries. In addition to its global service and support network, the Codan group has regional sales offices in Perth (Western Australia), Washington, D.C., and Chicago (United States), Victoria, BC (Canada), Farnham (UK), Cork (Ireland), Florianópolis (Brazil), Penang (Malaysia) and Dubai (United Arab Emirates). The company maintains quality assurance systems approved to the ISO 9001:2000 standard. The company was established in 1959 by three friends from the University of Adelaide: Alastair Wood, Ian Wall and Jim Bettison. It was founded as the Electronics, Instrument and Lighting Company Limited (EILCO) and renamed Codan in 1970. Codan was listed on the Australian Stock Exchange in 2003 and expanded into military technology in 2006. In 2005, CEO Mike Heard denied that Codan had knowingly supplied technology to an Al-Qaeda operative in 2001. Heard had served as the company's CEO since the 1990s and held the position until his retirement in 2010. In 2009, Codan established its Military and Security Division in the US. On 30 June 2012, Codan Limited sold its Satellite Communications assets to CPI International Holding Corp and its wholly owned subsidiary CPI International, Inc (CPI). In 2016, Codan Defence Electronics was established to "leverage core competencies in military radio and countermine technology." Codan Radio Communications Codan designs and manufactures a range of HF equipment including transceivers (base, portable and mobile), modems, power supplies, amplifiers, antennas and accessories. It also provides HF solutions ran
https://en.wikipedia.org/wiki/Zobrist%20hashing
Zobrist hashing (also referred to as Zobrist keys or Zobrist signatures) is a hash function construction used in computer programs that play abstract board games, such as chess and Go, to implement transposition tables, a special kind of hash table that is indexed by a board position and used to avoid analyzing the same position more than once. Zobrist hashing is named for its inventor, Albert Lindsey Zobrist. It has also been applied as a method for recognizing substitutional alloy configurations in simulations of crystalline materials. Zobrist hashing is the first known instance of the generally useful underlying technique called tabulation hashing. Calculation of the hash value Zobrist hashing starts by randomly generating bitstrings for each possible element of a board game, i.e. for each combination of a piece and a position (in the game of chess, that's 12 pieces × 64 board positions, or 18 × 64 if kings and rooks that may still castle, and pawns that may capture en passant, are treated separately for both colors). Now any board configuration can be broken up into independent piece/position components, which are mapped to the random bitstrings generated earlier. The final Zobrist hash is computed by combining those bitstrings using bitwise XOR. Example pseudocode for the game of chess:

 constant indices
     white_pawn := 1
     white_rook := 2
     # etc.
     black_king := 12

 function init_zobrist():
     # fill a table of random numbers/bitstrings
     table := a 2-d array of size 64×12
     for i from 1 to 64:  # loop over the board, represented as a linear array
         for j from 1 to 12:      # loop over the pieces
             table[i][j] := random_bitstring()
     table.black_to_move := random_bitstring()

 function hash(board):
     h := 0
     if is_black_turn(board):
         h := h XOR table.black_to_move
     for i from 1 to 64:      # loop over the board positions
         if board[i] ≠ empty:
             j := the piece at board[i], as listed in the constant indices above
             h := h XOR table[i][j]
     return h
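For concreteness, here is a runnable Python version of the same construction (a sketch under my own assumptions: a 64-bit hash width, pieces encoded as integers 0–11, and a board represented as a dict from square index to piece index; none of these names come from the article). It also demonstrates the property that makes Zobrist hashing attractive in practice: after a move, the hash can be updated incrementally with two XORs instead of being recomputed from scratch.

import random

random.seed(2024)                      # fixed seed so the example is reproducible
PIECES = range(12)                     # 6 piece types x 2 colors, encoded 0..11
SQUARES = range(64)

# One random 64-bit bitstring per (square, piece) pair, plus one for the side to move.
zobrist_table = [[random.getrandbits(64) for _ in PIECES] for _ in SQUARES]
black_to_move = random.getrandbits(64)

def full_hash(board, black_turn=False):
    # board maps square index -> piece index; absent squares are empty.
    h = black_to_move if black_turn else 0
    for square, piece in board.items():
        h ^= zobrist_table[square][piece]
    return h

def move_hash(h, piece, from_sq, to_sq):
    # Incremental update: XOR the piece out of its old square and into the new one.
    # (A real engine would also XOR black_to_move here, since the turn flips.)
    return h ^ zobrist_table[from_sq][piece] ^ zobrist_table[to_sq][piece]

board = {8: 0, 60: 11}                            # white pawn on square 8, black king on 60
h = full_hash(board)
h = move_hash(h, piece=0, from_sq=8, to_sq=16)    # push the pawn one square forward
assert h == full_hash({16: 0, 60: 11})            # incremental and full hashes agree

Because XOR is its own inverse, undoing a move is the same two XORs again, which is why engines can maintain the hash cheaply while searching the game tree.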
https://en.wikipedia.org/wiki/Boris%20Babayan
Boris Artashesovich Babayan (born 20 December 1933 in Baku) is a Soviet and Russian computer scientist of Armenian descent, notable as the pioneering creator of supercomputers in the former Soviet Union and Russia. Biography Babayan was born in Baku, Soviet Union, to an Armenian family. He graduated from the Moscow Institute of Physics and Technology in 1957, completed his Ph.D. in 1964 and earned his doctorate of science in 1971. From 1956 to 1996, Babayan worked at the Lebedev Institute of Precision Mechanics and Computer Engineering, where he eventually became chief of the hardware and software division. Babayan and his team built their first computers during the 1950s. In the 1970s, as one of the 15 deputies of chief architect V. S. Burtsev, he worked on the first superscalar computer, the Elbrus-1, and on the programming language Эль-76 (El-76). Using these computers from 1978, ten years before comparable commercial applications appeared in the West, the Soviet Union developed its missile systems and its nuclear and space programs. A team headed by Babayan designed the Elbrus-3 computer using an architecture named Explicitly Parallel Instruction Computing (EPIC). From 1992 to 2004, Babayan held senior positions in the Moscow Center for SPARC Technology (MCST) and Elbrus International. In these roles he led the development of the Elbrus 2000 (a single-chip implementation of the Elbrus-3) and the Elbrus90micro (a SPARC computer based on a domestically developed microprocessor). Since August 2004, Babayan has been the Director of Architecture for the Software and Solutions Group at Intel Corporation and scientific advisor of the Intel R&D center in Moscow. He leads efforts in such areas as compilers, binary translation and security technologies. He became the second European to hold the Intel Fellow title (after the Norwegian Tryggve Fossum). Babayan was awarded the two highest honors of the former Soviet Union: the USSR State Prize in 1974 for his achievements in the field of computer-aided design, and the Lenin Prize.
https://en.wikipedia.org/wiki/Calculator%20input%20methods
There are various ways in which calculators interpret keystrokes. These can be categorized into two main types: on a single-step or immediate-execution calculator, the user presses a key for each operation, calculating all the intermediate results, before the final value is shown; on an expression or formula calculator, one types in an expression and then presses a key, such as "=" or "Enter", to evaluate the expression. There are various systems for typing in an expression, as described below. Immediate execution The immediate execution mode of operation (also known as single-step, algebraic entry system (AES) or chain calculation mode) is commonly employed on most general-purpose calculators. In most simple four-function calculators, such as the Windows calculator in Standard mode and those included with most early operating systems, each binary operation is executed as soon as the next operator is pressed, and therefore the order of operations in a mathematical expression is not taken into account. Scientific calculators, including the Scientific mode in the Windows calculator and most modern software calculators, have buttons for brackets and can take the order of operations into account. Also, for unary operations, like √ or x², the number is entered first, then the operator; this is largely because the display screens on these kinds of calculators are generally composed entirely of seven-segment characters and are thus capable of displaying only numbers, not the functions associated with them. This mode of operation also makes it impossible to change the expression being input without clearing the display entirely. On simple calculators, the operands of an expression must often be rearranged in order to get the correct result; on scientific calculators, where operator precedence is observed, the expression can be entered as written (a sketch contrasting the two modes follows below). Different forms of operator precedence schemes exist. In the algebraic entry system with
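To make the contrast concrete, here is a small Python sketch (my own illustration, not part of the article; the token-list representation of keystrokes is an assumption). The chain evaluator applies each operator as soon as the next one arrives, as a four-function calculator does, while the precedence-aware evaluator performs multiplication and division before addition and subtraction, as a scientific calculator does.

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b}

def chain_eval(tokens):
    # Immediate execution: apply each operator as soon as it is "pressed".
    acc = tokens[0]
    for op, num in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, num)
    return acc

def precedence_eval(tokens):
    # First pass: collapse runs of * and / into single numbers.
    stack = [tokens[0]]
    for op, num in zip(tokens[1::2], tokens[2::2]):
        if op in '*/':
            stack[-1] = OPS[op](stack[-1], num)
        else:
            stack += [op, num]
    # Second pass: the remaining + and - fold left to right.
    return chain_eval(stack)

expr = [2, '+', 3, '*', 4]
print(chain_eval(expr))       # 20: evaluated as (2 + 3) * 4
print(precedence_eval(expr))  # 14: evaluated as 2 + (3 * 4)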
https://en.wikipedia.org/wiki/Whitney%20umbrella
In geometry, the Whitney umbrella (or Whitney's umbrella, named after the American mathematician Hassler Whitney, and sometimes called a Cayley umbrella) is a specific self-intersecting ruled surface placed in three dimensions. It is the union of all straight lines that pass through points of a fixed parabola and are perpendicular to a fixed straight line which is parallel to the axis of the parabola and lies on its perpendicular bisecting plane. Formulas Whitney's umbrella can be given by the parametric equations in Cartesian coordinates x(u, v) = uv, y(u, v) = u, z(u, v) = v², where the parameters u and v range over the real numbers. It is also given by the implicit equation x² − y²z = 0; substituting the parametrization confirms this, since x² = (uv)² = u²v² = y²z. This formula also includes the negative z axis (which is called the handle of the umbrella): points (0, 0, z) with z < 0 satisfy the equation but are not reached by the parametrization, since z = v² is never negative. Properties Whitney's umbrella is a ruled surface and a right conoid. It is important in the field of singularity theory, as a simple local model of a pinch point singularity. The pinch point and the fold singularity are the only stable local singularities of maps from R² to R³. In string theory, a Whitney brane is a D7-brane wrapping a variety whose singularities are locally modeled by the Whitney umbrella. Whitney branes appear naturally when taking Sen's weak coupling limit of F-theory. See also Cross-cap Right conoid Ruled surface
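As a quick symbolic check (my own snippet, using the third-party sympy library; not part of the article), substituting the parametrization into the implicit equation gives zero identically:

import sympy as sp

u, v = sp.symbols('u v', real=True)
x, y, z = u * v, u, v**2             # the parametrization above
print(sp.expand(x**2 - y**2 * z))    # prints 0: the surface satisfies x² − y²z = 0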
https://en.wikipedia.org/wiki/Photoperiodism
Photoperiodism is the physiological reaction of organisms to the length of night or a dark period. It occurs in plants and animals. Plant photoperiodism can also be defined as the developmental responses of plants to the relative lengths of light and dark periods. Plants are classified into three groups according to the photoperiods they respond to: short-day plants, long-day plants, and day-neutral plants. In animals, photoperiodism (sometimes called seasonality) is the suite of physiological changes that occur in response to changes in day length. This allows animals to respond to a temporally changing environment associated with the changing seasons as the Earth orbits the Sun. Plants Many flowering plants (angiosperms) use a circadian rhythm together with photoreceptor proteins, such as phytochrome or cryptochrome, to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants are merely more likely to flower under one condition than the other. Phytochrome comes in two forms: Pr and Pfr. Red light (which is present during the day) converts phytochrome to its active form (Pfr), which then stimulates various processes such as germination, flowering or branching. In comparison, plants receive more far-red light in the shade, and this converts phytochrome from Pfr to its inactive form, Pr, inhibiting germination. This system of Pfr to Pr conversion allows the plant to sense when it is night and when it is day. Pfr can also be converted back to Pr by a process known as dark reversion, in which long periods of darkness trigger the conversion. This is important with regard to plant flowering. Experiments by Halliday et al. showed that manipulations of the red to far-red ratio in Arabidopsis can alter flowering. They discovered that plants tend to flower later when exposed to more red light, proving that red light is inhibitory to flowering.