https://en.wikipedia.org/wiki/F-14%20CADC
The F-14's Central Air Data Computer, also abbreviated as CADC, computes altitude, vertical speed, air speed, and mach number from sensor inputs such as pitot and static pressure and temperature. Earlier air data computer systems were electromechanical computers, such as in the F-111. From 1968 to 1970, the first CADC to use custom digital integrated circuits was developed for the F-14. History The CADC was a multi-chip integrated flight control system developed by Garrett AiResearch and used in early versions of the US Navy's F-14 Tomcat fighter. It is notable for early use of MOS custom integrated circuits and has been claimed as the first microprocessor. The first microprocessor existing on a single chip was the contemporary Intel 4004. However, the 4004 did not have nearly the computing power or interfacing capability required to perform the functions of the CADC; at the time, the best integrated circuit (chip) technology available lacked the scale (number of transistors per chip) necessary to build a single-chip microprocessor for a flight control system. The CADC was designed and built by a team led by Steve Geller and Ray Holt, and supported by the startup American Microsystems. Design work started in 1968 and was completed in June 1970, beating out a number of electromechanical systems that had also been designed for the F-14. It was classified by the Navy until 1998. Ray Holt's story of this design and development is presented in his autobiography The Accidental Engineer. The CADC consisted of an A-to-D converter, several quartz pressure sensors, and a number of MOS-based microchips. Inputs to the system included the primary flight controls, a number of switches, static and dynamic air pressure (for calculating stall points and aircraft speed) and a temperature gauge. The outputs controlled the primary flight controls, wing sweep, the F-14's leading edge "glove vane", and the flaps. The MP944 ran at 375 kHz. It contained six chips used to build the C
https://en.wikipedia.org/wiki/Test%20case
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing. Formal test cases In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test. If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted. A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition. Informal test cases For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and results are reported after the tests have been run. In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing envi
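To make the positive/negative pairing described above concrete, here is a minimal sketch using Python's built-in unittest framework. The requirement, the is_adult function and the 18-year threshold are hypothetical, chosen only to show one positive test (known input, expected output) and one negative test (a violated precondition).

```python
import unittest

def is_adult(age: int) -> bool:
    """Hypothetical unit under test: users aged 18 or over count as adults."""
    if age < 0:
        raise ValueError("age must be non-negative")
    return age >= 18

class TestIsAdult(unittest.TestCase):
    def test_positive_adult_age(self):
        # Positive test: known input 21, expected output True (postcondition holds).
        self.assertTrue(is_adult(21))

    def test_negative_invalid_age(self):
        # Negative test: an input violating the precondition must be rejected.
        with self.assertRaises(ValueError):
            is_adult(-1)

if __name__ == "__main__":
    unittest.main()
```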
https://en.wikipedia.org/wiki/Deontic%20logic
Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. It can be used to formalize imperative logic, or directive modality in natural languages. Typically, a deontic logic uses OA to mean it is obligatory that A (or it ought to be (the case) that A), and PA to mean it is permitted (or permissible) that A, which is defined as ¬O¬A. Note that in natural language, the statement "You may go to the zoo OR the park" should be understood as P(zoo) ∧ P(park) instead of P(zoo ∨ park), as both options are permitted by the statement; see Hans Kamp's paradox of free choice for more details. When there are multiple agents involved in the domain of discourse, the deontic modal operator can be specified to each agent to express their individual obligations and permissions. For example, by using a subscript for agent i, O_i A means that "It is an obligation for agent i (to bring it about/make it happen) that A". Note that A could be stated as an action by another agent; one example is "It is an obligation for Adam that Bob doesn't crash the car", which would be represented as O_Adam B, where B = "Bob doesn't crash the car". Etymology The term deontic is derived from the Ancient Greek déon (gen.: déontos), meaning "that which is binding or proper." Standard deontic logic In Georg Henrik von Wright's first system, obligatoriness and permissibility were treated as features of acts. Soon after this, it was found that a deontic logic of propositions could be given a simple and elegant Kripke-style semantics, and von Wright himself joined this movement. The deontic logic so specified came to be known as "standard deontic logic," often referred to as SDL, KD, or simply D. It can be axiomatized by adding the following axioms to a standard axiomatization of classical propositional logic: In English, these axioms say, respectively: If A is a tautology, then it ou
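The axioms referenced at the end of the excerpt appear to have been lost in extraction. As a sketch, the usual presentation of SDL (assuming the standard KD formulation on top of classical propositional logic) is:

```latex
% Standard deontic logic (SDL / KD), added to classical propositional logic:
\begin{align*}
  PA &\;:=\; \lnot O \lnot A                    && \text{(permission defined from obligation)}\\
  \text{N:}\quad & \text{if } \vdash A \text{ then } \vdash OA
                                                && \text{(if $A$ is a tautology, then it ought to be that $A$)}\\
  \text{K:}\quad & O(A \to B) \to (OA \to OB)   && \text{(obligation distributes over implication)}\\
  \text{D:}\quad & OA \to PA                    && \text{(whatever is obligatory is permitted)}
\end{align*}
```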
https://en.wikipedia.org/wiki/John%20N.%20Mather
John Norman Mather (June 9, 1942 – January 28, 2017) was a mathematician at Princeton University known for his work on singularity theory and Hamiltonian dynamics. He was descended from Atherton Mather (1663–1734), a cousin of Cotton Mather. His early work dealt with the stability of smooth mappings between smooth manifolds of dimensions n (for the source manifold N) and p (for the target manifold P). He determined the precise dimensions (n,p) for which smooth mappings are stable with respect to smooth equivalence by diffeomorphisms of the source and target (i.e., infinitely differentiable coordinate changes) (Mather, J. N., "Stability of C∞ mappings. VI: The nice dimensions", Proceedings of Liverpool Singularities-Symposium, I (1969/70), Lecture Notes in Math., Vol. 192, Springer, Berlin, 1971, pp. 207–253). Mather also proved the conjecture of the French topologist René Thom that under topological equivalence smooth mappings are generically stable: the subset of the space of smooth mappings between two smooth manifolds consisting of the topologically stable mappings is a dense subset in the smooth Whitney topology. His notes on the topic of topological stability are still a standard reference on the topic of topologically stratified spaces. In the 1970s, Mather switched to the field of dynamical systems. He made the following main contributions to dynamical systems that deeply influenced the field. 1. He introduced the concept of the Mather spectrum and gave a characterization of Anosov diffeomorphisms. 2. Jointly with Richard McGehee, he gave an example of a collinear four-body problem which has initial conditions leading to solutions that blow up in finite time. This was the first result that made the Painlevé conjecture plausible. 3. He developed a variational theory for the globally action minimizing orbits for twist maps (convex Hamiltonian systems of two degrees of freedom), along the line of the work of George David Birkhoff, Marston Morse, Gustav A
https://en.wikipedia.org/wiki/Distributed%20Objects%20Everywhere
Distributed Objects Everywhere (DOE) was a long-running Sun Microsystems project to build a distributed computing environment based on the CORBA system in the 'back end' and OpenStep as the user interface. First started in 1990 and announced soon thereafter, it remained vaporware for many years before it was finally released as NEO in 1995. It was sold for only a short period before being dropped (along with OpenStep) in 1996. In its place is what is today known as Enterprise JavaBeans. Background In the early 1990s the 'next big thing' in computing was to use desktop microcomputers to display and edit data being provided by mainframes and minicomputers. Although a number of methods for this sort of access already existed, the division of labor was not at all even. For instance, SQL required the workstation to download huge data sets and then process them locally, whereas use of terminal emulators left all of the work to the server and provided no GUI. It seemed that the proper split of duties would be to have a cooperative set of objects, the workstation being responsible for display and user interaction, with processing on the server. Standing in the way of this sort of solution was the massive differences in operating systems and programming languages between platforms. While it might be possible to build such a system that would work on any one combination of workstation and server, the same solution would not work on any other system. Oddly, the differences between any two programming languages on a single platform was almost as great. Each language had its own format for passing parameters into procedure calls, the file formats that they generated were often quite different. In general terms, it was not always possible to write different portions of a program in different languages, although doing so often has real utility. The problem was not so acute on minicomputers and mainframes where the vendor often specified standards for their libraries, but on mi
https://en.wikipedia.org/wiki/Data-flow%20diagram
A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and the process itself. A data-flow diagram has no control flow; there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart. There are several notations for displaying data-flow diagrams. One widely used notation was described in 1979 by Tom DeMarco as part of structured analysis. For each data flow, at least one of the endpoints (source and / or destination) must exist in a process. The refined representation of a process can be done in another data-flow diagram, which subdivides this process into sub-processes. The data-flow diagram is a tool that is part of structured analysis and data modeling. When using UML, the activity diagram typically takes over the role of the data-flow diagram. A special form of data-flow plan is a site-oriented data-flow plan. Data-flow diagrams can be regarded as inverted Petri nets, because places in such networks correspond to the semantics of data memories. Analogously, the semantics of transitions in Petri nets and of data flows and functions in data-flow diagrams should be considered equivalent. History The DFD notation draws on graph theory, originally used in operational research to model workflow in organizations. DFD originated from the activity diagram used in the structured analysis and design technique methodology at the end of the 1970s. DFD popularizers include Edward Yourdon, Larry Constantine, Tom DeMarco, Chris Gane and Trish Sarson. Data-flow diagrams (DFD) quickly became a popular way to visualize the major steps and data involved in software-system processes. DFDs were usually used to show data flow in a computer system, although they could in theory be applied to business process modeling. DFDs were useful to document the major data flows or to explore a new high-
https://en.wikipedia.org/wiki/Green%20building
Green building (also known as green construction or sustainable building) refers to both a structure and the application of processes that are environmentally responsible and resource-efficient throughout a building's life-cycle: from planning to design, construction, operation, maintenance, renovation, and demolition. This requires close cooperation of the contractor, the architects, the engineers, and the client at all project stages. The green building practice expands and complements the classical building design concerns of economy, utility, durability, and comfort. Green building also refers to saving resources to the maximum extent, including energy saving, land saving, water saving, material saving, etc., during the whole life cycle of the building, protecting the environment and reducing pollution, providing people with healthy, comfortable and efficient space, and creating buildings that exist in harmony with nature. Green building technology focuses on low consumption, high efficiency, economy, environmental protection, integration and optimization. Leadership in Energy and Environmental Design (LEED) is a set of rating systems for the design, construction, operation, and maintenance of green buildings which was developed by the U.S. Green Building Council. Other certificate systems that confirm the sustainability of buildings are the British BREEAM (Building Research Establishment Environmental Assessment Method) for buildings and large-scale developments or the DGNB System (Deutsche Gesellschaft für Nachhaltiges Bauen e.V.) which benchmarks the sustainability performance of buildings, indoor environments and districts. Currently, the World Green Building Council is conducting research on the effects of green buildings on the health and productivity of their users and is working with the World Bank to promote Green Buildings in Emerging Markets through EDGE (Excellence in Design for Greater Efficiencies) Market Transformation Program and
https://en.wikipedia.org/wiki/Online%20text-based%20role-playing%20game
An online text-based role playing game is a role-playing game played online using a solely text-based interface. Online text-based role playing games date to 1978, with the creation of MUD1, which began the MUD heritage that culminates in today's MMORPGs. Some online-text based role playing games are video games, but some are organized and played entirely by humans through text-based communication. Over the years, games have used TELNET, internet forums, IRC, email and social networking websites as their media. There are varied genres of online text-based roleplaying, including fantasy, drama, horror, anime, science fiction, and media-based fan role-play. Role-playing games based on popular media (for example, the Harry Potter series) are common, and the players involved tend to overlap with the relevant fandoms. Varieties MUDs Precursor to the now more popular MMORPGs of today are the branch of text-based games known as MUD, MOO, MUCK, MUSH et al., a broad family of server software tracing their origins back to MUD1 and being used to implement a variety of games and other services. Many of these platforms implement Turing-complete programming languages and can be used for any purpose, but various types of server have historical and traditional associations with particular uses: "mainstream" MUD servers like LPMud and DikuMUD are typically used to implement combat-focused games, while the TinyMUD family of servers, sometimes referred to by the term MU*, are more usually used to create "social MUDs" devoted to role-playing and socializing, or non-game services such as educational MUDs. While these are often seen as definitive boundaries, exceptions abound; many MUSHes have a software-supported combat system, while a "Role-Playing Intensive MUD" movement occurred primarily in the DikuMUD world, and both the first Internet talker (a type of purely social server) and the very popular talker software ew-too were based on LPMud code. Although interest in these ga
https://en.wikipedia.org/wiki/Schwartzian%20transform
In computer programming, the Schwartzian transform is a technique used to improve the efficiency of sorting a list of items. This idiom is appropriate for comparison-based sorting when the ordering is actually based on the ordering of a certain property (the key) of the elements, where computing that property is an intensive operation that should be performed a minimal number of times. The Schwartzian transform is notable in that it does not use named temporary arrays. The Schwartzian transform is a version of a Lisp idiom known as decorate-sort-undecorate, which avoids recomputing the sort keys by temporarily associating them with the input items. This approach is similar to memoization, which avoids repeating the calculation of the key corresponding to a specific input value. By comparison, this idiom assures that each input item's key is calculated exactly once, which may still result in repeating some calculations if the input data contains duplicate items. The idiom is named after Randal L. Schwartz, who first demonstrated it in Perl shortly after the release of Perl 5 in 1994. The term "Schwartzian transform" applied solely to Perl programming for a number of years, but it was later adopted by some users of other languages, such as Python, to refer to similar idioms in those languages. However, the algorithm was already in use in other languages (under no specific name) before it was popularized among the Perl community in the form of that particular idiom by Schwartz. The term "Schwartzian transform" indicates a specific idiom, and not the algorithm in general. For example, to sort the word list ("aaaa","a","aa") according to word length: first build the list (["aaaa",4],["a",1],["aa",2]), then sort it according to the numeric values, getting (["a",1],["aa",2],["aaaa",4]), then strip off the numbers and you get ("a","aa","aaaa"). That describes the algorithm in general, so it does not by itself count as the transform. To make it a true Schwartzian transform, it would be done in Pe
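The word-length example above can also be sketched outside Perl. The following Python fragment illustrates the generic decorate-sort-undecorate idea (not the Perl-specific idiom itself) and computes each key exactly once:

```python
words = ["aaaa", "a", "aa"]

# Decorate: pair each word with its key (here, its length), computing the key once.
decorated = [(len(w), w) for w in words]
# Sort: order by the precomputed key.
decorated.sort(key=lambda pair: pair[0])
# Undecorate: strip the keys, keeping the words in sorted order.
sorted_words = [w for _, w in decorated]

print(sorted_words)  # ['a', 'aa', 'aaaa']

# Idiomatic modern Python reaches the same result with a key function,
# which likewise evaluates len() exactly once per element:
assert sorted(words, key=len) == sorted_words
```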
https://en.wikipedia.org/wiki/Position%20%28geometry%29
In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents the position of a point P in space in relation to an arbitrary reference origin O. Usually denoted x, r, or s, it corresponds to the straight line segment from O to P. In other words, it is the displacement or translation that maps the origin to P: The term position vector is used mostly in the fields of differential geometry, mechanics and occasionally vector calculus. Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean spaces and affine spaces of any dimension. Relative position The relative position of a point Q with respect to point P is the Euclidean vector resulting from the subtraction of the two absolute position vectors (each with respect to the origin): where . The relative direction between two points is their relative position normalized as a unit vector: where the denominator is the distance between the two points, . A relative direction is a bound vector, in contrast to an ordinary direction, which is a free vector. Definition Three dimensions In three dimensions, any set of three-dimensional coordinates and their corresponding basis vectors can be used to define the location of a point in space—whichever is the simplest for the task at hand may be used. Commonly, one uses the familiar Cartesian coordinate system, or sometimes spherical polar coordinates, or cylindrical coordinates: where t is a parameter, owing to their rectangular or circular symmetry. These different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead and are in contexts like continuum mechanics and general relativity (in the latter case one needs an additional time coordinate). n dimensions Linear algebra allows for the abstraction of an n-dimensional position vector. A position vector can be e
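The coordinate expressions referred to above appear to have been dropped during extraction. A sketch of the usual forms, assuming standard notation (unit basis vectors and an origin O), is:

```latex
% Position vector of a point in three dimensions, in the three common systems,
% plus the relative position and direction of Q with respect to P:
\begin{align*}
  \mathbf{r} &= x\,\hat{\mathbf{e}}_x + y\,\hat{\mathbf{e}}_y + z\,\hat{\mathbf{e}}_z
      && \text{(Cartesian)}\\
  \mathbf{r} &= \rho\,\hat{\boldsymbol{\rho}} + z\,\hat{\mathbf{e}}_z
      && \text{(cylindrical)}\\
  \mathbf{r} &= r\,\hat{\mathbf{r}}(\theta,\varphi)
      && \text{(spherical)}\\[4pt]
  \Delta\mathbf{r} &= \mathbf{r}_Q - \mathbf{r}_P, \qquad
  \hat{\mathbf{n}} = \frac{\Delta\mathbf{r}}{\lVert\Delta\mathbf{r}\rVert}
      && \text{(relative position and relative direction)}
\end{align*}
```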
https://en.wikipedia.org/wiki/Widal%20test
The Widal test, developed in 1896 and named after its inventor, Georges-Fernand Widal, is an indirect agglutination test for enteric fever or undulant fever whereby bacteria causing typhoid fever are mixed with a serum containing specific antibodies obtained from an infected individual. In cases of Salmonella infection, it demonstrates the presence of O (somatic) agglutinins; other infections and conditions can produce a false-positive result. Test results need to be interpreted carefully to account for any history of enteric fever, typhoid vaccination, and the general level of antibodies in the populations in endemic areas of the world. As with all serological tests, the rise in antibody levels needed to perform the diagnosis takes 7–14 days, which limits its applicability in early diagnosis. Other means of diagnosing Salmonella typhi (and paratyphi) include cultures of blood, urine and faeces. These organisms produce H2S from thiosulfate and can be identified easily on differential media such as bismuth sulfite agar. Typhidot is the other test used to ascertain the diagnosis of typhoid fever. A new serological test called the Tubex test is neither superior to nor better performing than the Widal test. Therefore, the Tubex test is not recommended for diagnosis of typhoid fever. 2-mercaptoethanol is often added to the Widal test. This agent more easily denatures the IgM class of antibodies, so if a decrease in the titer is seen after using this agent, it means that the contribution of IgM has been removed, leaving the IgG component. This differentiation of antibody classes is important as it allows for the distinction of a recent (IgM) from an old infection (IgG). The Widal test is positive if the TO antigen titer is more than 1:160 in an active infection, or if the TH antigen titer is more than 1:160 in past infection or in immunized persons. A single Widal test is of little clinical relevance especially in endemic areas such as the Indian subcontinent, Africa and South-east Asia. This is due to recurrent exposure to the typhoid causing
https://en.wikipedia.org/wiki/Transfer%20operator
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system. The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator. Definition The iterated function to be studied is a map for an arbitrary set . The transfer operator is defined as an operator acting on the space of functions as where is an auxiliary valuation function. When has a Jacobian determinant , then is usually taken to be . The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Frobenius–Perron operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus. As a general rule, the transfer operator can usually be interpreted as a (left-)shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift. Applications Whereas the iteration of a function naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physic
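The defining formula appears to be missing from the excerpt. A sketch of the standard definition, assuming the notation f for the iterated map, Φ for a function on X and g for the auxiliary valuation, is:

```latex
% Transfer (Ruelle--Perron--Frobenius) operator of a map f : X -> X,
% acting on functions \Phi : X -> \mathbb{C}, with valuation g : X -> \mathbb{C}:
\begin{align*}
  (\mathcal{L}_f \Phi)(x) &= \sum_{y \in f^{-1}(x)} g(y)\,\Phi(y),\\
  g &= \frac{1}{\lvert \det J_f \rvert}
      \quad \text{(the usual choice when $f$ has Jacobian determinant $J_f$)}.
\end{align*}
```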
https://en.wikipedia.org/wiki/4th%20Dimension%20%28software%29
4D (4th Dimension, or Silver Surfer, as it was known during early development) is a relational database management system and integrated development environment developed by Laurent Ribardière. 4D was created in 1984 and had a slightly delayed public release for Macintosh in 1987 with its own programming language. The 4D product line has since expanded to an SQL back-end, integrated compiler, integration of PHP, and several productivity plug-ins and interfaces. Some of the plug-ins created by 4D include 4D Write (a word processor), 4D View (somewhat like a spreadsheet, but with extra functionality) and 4D Internet Commands (which allowed for the addition of Internet-related functionality to a database). There are also over 100 third-party plugins, free and commercial. 4D can also be used as a web server, to run compiled database applications. Today, 4D is published by the French company 4D SAS and has a sales, distribution and support presence in most major markets, with the United States, the United Kingdom, and France being the primary markets. The product is localized in more than a dozen languages. History Silver Surfer, as it was known during early development, was developed by Laurent Ribardière in 1984. Following negotiations with Ribardiere it was planned that Apple Inc. (formerly Apple Computer Inc) would publish the software but Apple canceled the plan, reportedly due to pressure from other potential database publishers who claimed that if Apple had their own "brand" database, third party products would be disadvantaged in the marketplace. Apple tried at the time to ensure well-known software publishers supported the Macintosh platform, and as a result the project reverted to Laurent Ribardière, who with the French businesswoman Marylene Delbourg-Delphis published 4th Dimension. Although independently published, Apple supported the new venture and used 4D extensively throughout the organization for projects including fitness center management and CIM
https://en.wikipedia.org/wiki/AMD%20Am29000
The AMD Am29000, commonly shortened to 29k, is a family of 32-bit RISC microprocessors and microcontrollers developed and fabricated by Advanced Micro Devices (AMD). Based on the seminal Berkeley RISC, the 29k added a number of significant improvements. They were, for a time, the most popular RISC chips on the market, widely used in laser printers from a variety of manufacturers. Developed since 1984–1985, announced in March 1987 and released in May 1988, the initial Am29000 was followed by several versions, ending with the Am29040 in 1995. The 29050 was notable for being early to feature a floating point unit capable of executing one multiply–add operation per cycle. AMD was designing a superscalar version until late 1995, when AMD dropped the development of the 29k because the design team was transferred to support the PC (x86) side of the business. What remained of AMD's embedded business was realigned towards the embedded 186 family of 80186 derivatives. By then the majority of AMD's resources were concentrated on their high-performance x86 processors for desktop PCs, using many of the ideas and individual parts of the 29k designs to produce the AMD K5. Design The 29k evolved from the same Berkeley RISC design that also led to the Sun SPARC, Intel i960, ARM and RISC-V. One design element used in some of the Berkeley RISC-derived designs is the concept of register windows, a technique used to speed up procedure calls significantly. The idea is to use a large set of registers as a stack, loading local data into a set of registers during a call, and marking them "dead" when the procedure returns. Values being returned from the routines would be placed in the "global page", the top eight registers in the SPARC (for instance). The competing early RISC design from Stanford University, the Stanford MIPS, also looked at this concept but decided that improved compilers could make more efficient use of general purpose registers than a hard-wired window. In the origin
https://en.wikipedia.org/wiki/File-system%20permissions
Most file systems include attributes of files and directories that control the ability of users to read, change, navigate, and execute the contents of the file system. In some cases, menu options or functions may be made visible or hidden depending on a user's permission level; this kind of user interface is referred to as permission-driven. Two types of permissions are widely available: POSIX file system permissions and access-control lists (ACLs) which are capable of more specific control. File system variations The original File Allocation Table file system has a per-file all-user read-only attribute. NTFS implemented in Microsoft Windows NT and its derivatives, use ACLs to provide a complex set of permissions. OpenVMS uses a permission scheme similar to that of Unix. There are four categories (system, owner, group, and world) and four types of access permissions (Read, Write, Execute and Delete). The categories are not mutually disjoint: World includes Group, which in turn includes Owner. The System category independently includes system users. HFS, and its successor HFS+, implemented in the Classic Mac OS operating systems, do not support permissions. macOS uses POSIX-compliant permissions. Beginning with version 10.4 ("Tiger"), it also supports the use of NFSv4 ACLs in addition to POSIX-compliant permissions. The Apple Mac OS X Server version 10.4+ File Services Administration Manual recommends using only traditional Unix permissions if possible. macOS also still supports the Classic Mac OS's "Protected" attribute. Solaris ACL support depends on the filesystem being used; older UFS filesystem supports POSIX.1e ACLs, while ZFS supports only NFSv4 ACLs. Linux supports ext2, ext3, ext4, Btrfs and other file systems many of which include POSIX.1e ACLs. There is experimental support for NFSv4 ACLs for ext3 and ext4 filesystems. FreeBSD supports POSIX.1e ACLs on UFS, and NFSv4 ACLs on UFS and ZFS. IBM z/OS implements file security using RACF (Resource A
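As a small illustration of the POSIX permission model described above, the following Python sketch sets and reads back the classic owner/group/other bits; the file name is hypothetical, and behaviour on non-POSIX systems (e.g. Windows) is limited.

```python
import os
import stat

path = "example.txt"  # hypothetical file used only for illustration

# Create the file, then restrict it to owner read/write only (mode 0o600).
with open(path, "w") as f:
    f.write("secret\n")
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# Read the permission bits back and print them in the familiar rwx form.
mode = os.stat(path).st_mode
print(oct(stat.S_IMODE(mode)))   # 0o600
print(stat.filemode(mode))       # -rw-------
```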
https://en.wikipedia.org/wiki/Software%20test%20documentation
Status of IEEE 829 Note: IEEE 829-2008 has been superseded by ISO/IEC/IEEE 29119-3:2013. Background to IEEE 829 IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, was an IEEE standard that specified the form of a set of documents for use in eight defined stages of software testing and system testing, each stage potentially producing its own separate type of document. The standard specified the format of these documents, but did not stipulate whether they must all be produced, nor did it include any criteria regarding adequate content for these documents. These were a matter of judgment outside the purview of the standard. Documents Required by IEEE 829 The documents are: Master Test Plan (MTP): The purpose of the Master Test Plan (MTP) is to provide an overall test planning and test management document for multiple levels of test (either within one project or across multiple projects). Level Test Plan (LTP): For each LTP the scope, approach, resources, and schedule of the testing activities for its specified level of testing need to be described. The items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the associated risk(s) need to be identified. Level Test Design (LTD): Detailing test cases and the expected results as well as test pass criteria. Level Test Case (LTC): Specifying the test data for use in running the test cases identified in the Level Test Design. Level Test Procedure (LTPr): Detailing how to run each test, including any set-up preconditions and the steps that need to be followed. Level Test Log (LTL): To provide a chronological record of relevant details about the execution of tests, e.g. recording which tests cases were run, who ran them, in what order, and whether each test passed or failed. Anomaly Report (AR): To document any event that occurs during the testing process that requires investigation. This may be called a problem, t
https://en.wikipedia.org/wiki/Partial%20isometry
In mathematical functional analysis a partial isometry is a linear map between Hilbert spaces such that it is an isometry on the orthogonal complement of its kernel. The orthogonal complement of its kernel is called the initial subspace and its range is called the final subspace. Partial isometries appear in the polar decomposition. General definition The concept of partial isometry can be defined in other equivalent ways. If U is an isometric map defined on a closed subset H1 of a Hilbert space H then we can define an extension W of U to all of H by the condition that W be zero on the orthogonal complement of H1. Thus a partial isometry is also sometimes defined as a closed partially defined isometric map. Partial isometries (and projections) can be defined in the more abstract setting of a semigroup with involution; the definition coincides with the one herein. Characterization in finite dimensions In finite-dimensional vector spaces, a matrix is a partial isometry if and only if is the projection onto its support. Contrast this with the more demanding definition of isometry: a matrix is an isometry if and only if . In other words, an isometry is an injective partial isometry. Any finite-dimensional partial isometry can be represented, in some choice of basis, as a matrix of the form , that is, as a matrix whose first columns form an isometry, while all the other columns are identically 0. Note that for any isometry , the Hermitian conjugate is a partial isometry, although not every partial isometry has this form, as shown explicitly in the given examples. Operator Algebras For operator algebras one introduces the initial and final subspaces: C*-Algebras For C*-algebras one has the chain of equivalences due to the C*-property: So one defines partial isometries by either of the above and declares the initial resp. final projection to be W*W resp. WW*. A pair of projections are partitioned by the equivalence relation: It plays an important
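The finite-dimensional conditions referred to above appear to have been lost in extraction. A sketch in standard notation (W a complex matrix, W† its Hermitian conjugate) is:

```latex
% Finite-dimensional characterization (standard notation, reconstructed):
%   W is a partial isometry  <=>  W^\dagger W is an orthogonal projection
%                            <=>  W W^\dagger W = W.
%   W is an isometry         <=>  W^\dagger W = I.
% Block form in a suitable basis: the first r columns form an isometry,
% the remaining columns are identically zero:
\[
  W = \begin{pmatrix} V & 0 \end{pmatrix}, \qquad V^\dagger V = I_r .
\]
```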
https://en.wikipedia.org/wiki/ISO/IEC%209126
ISO/IEC 9126 Software engineering — Product quality was an international standard for the evaluation of software quality. It has been replaced by ISO/IEC 25010:2011. The fundamental objective of the ISO/IEC 9126 standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definitions of "success". By clarifying, then agreeing on the project priorities and subsequently converting abstract priorities (compliance) to measurable values (output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals. The standard is divided into four parts: quality model external metrics internal metrics quality in use metrics. Quality The quality model presented in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows: Functionality - "A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs." Suitability Accuracy Interoperability Security Functionality compliance Reliability - "A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time." Maturity Fault tolerance Recoverability Reliability compliance Usability - "A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users." Understandability Learnability Operability Attractiveness Usability compliance Efficiency - "A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions."
https://en.wikipedia.org/wiki/Cetology
Cetology (from Greek , kētos, "whale"; and , -logia) or whalelore (also known as whaleology) is the branch of marine mammal science that studies the approximately eighty species of whales, dolphins, and porpoises in the scientific order Cetacea. Cetologists, or those who practice cetology, seek to understand and explain cetacean evolution, distribution, morphology, behavior, community dynamics, and other topics. History Observations about Cetacea have been recorded since at least classical times. Ancient Greek fishermen created an artificial notch on the dorsal fin of dolphins entangled in nets so that they could tell them apart years later. Approximately 2,300 years ago, Aristotle carefully took notes on cetaceans while traveling on boats with fishermen in the Aegean Sea. In his book Historia animalium (History of Animals), Aristotle was careful enough to distinguish between the baleen whales and toothed whales, a taxonomical separation still used today. He also described the sperm whale and the common dolphin, stating that they can live for at least twenty-five or thirty years. His achievement was remarkable for its time, because even today it is very difficult to estimate the life-span of advanced marine animals. After Aristotle's death, much of the knowledge he had gained about cetaceans was lost, only to be re-discovered during the Renaissance. Many of the medieval texts on cetaceans come mainly from Scandinavia and Iceland, most came about the mid-13th century. One of the better known is Speculum Regale. This text describes various species that lived around the island of Iceland. It mentions orcs that had dog-like teeth and would demonstrate the same kind of aggression towards other cetaceans as wild dogs would to other terrestrial animals. The text even illustrated the hunting technique of orcs, which are now called orcas. The Speculum Regale describes other cetaceans, including the sperm whale and narwhal. Many times they were seen as terrible monster
https://en.wikipedia.org/wiki/Skolem%27s%20paradox
In mathematical logic and philosophy, Skolem's paradox is a seeming contradiction that arises from the downward Löwenheim–Skolem theorem. Thoralf Skolem (1922) was the first to discuss the seemingly contradictory aspects of the theorem, and to discover the relativity of set-theoretic notions now known as non-absoluteness. Although it is not an actual antinomy like Russell's paradox, the result is typically called a paradox and was described as a "paradoxical state of affairs" by Skolem (1922: p. 295). Skolem's paradox is that every countable axiomatisation of set theory in first-order logic, if it is consistent, has a model that is countable. This appears contradictory because it is possible to prove, from those same axioms, a sentence that intuitively says (or that precisely says in the standard model of the theory) that there exist sets that are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first-order sentence that intuitively states "there are uncountable sets". A mathematical explanation of the paradox, showing that it is not a contradiction in mathematics, was given by Skolem (1922). Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic, but the result quickly came to be accepted by the mathematical community. The philosophical implications of Skolem's paradox have received much study. One line of inquiry questions whether it is accurate to claim that any first-order sentence actually states "there are uncountable sets". This line of thought can be extended to question whether any set is uncountable in an absolute sense. More recently, the paper "Models and Reality" by Hilary Putnam, and responses to it, led to renewed interest in the philosophical aspects of Skolem's result. Background One of the earliest results in set theory, published by Georg Cantor in 1874, was the existence of uncountable
https://en.wikipedia.org/wiki/Numbers%20%28TV%20series%29
Numbers (stylized as NUMB3RS) is an American crime drama television series that was broadcast on CBS from January 23, 2005, to March 12, 2010, for six seasons and 118 episodes. The series was created by Nicolas Falacci and Cheryl Heuton, and follows FBI Special Agent Don Eppes (Rob Morrow) and his brother Charlie Eppes (David Krumholtz), a college mathematics professor and prodigy, who helps Don solve crimes for the FBI. Brothers Ridley and Tony Scott produced Numbers; its production companies are the Scott brothers' Scott Free Productions and CBS Television Studios (originally Paramount Network Television, and later CBS Paramount Network Television). The show focuses equally on the relationships among Don Eppes, his brother Charlie Eppes, and their father, Alan Eppes (Judd Hirsch), and on the brothers' efforts to fight crime, usually in Los Angeles. A typical episode begins with a crime, which is subsequently investigated by a team of FBI agents led by Don and mathematically modeled by Charlie, with the help of Larry Fleinhardt (Peter MacNicol) and Amita Ramanujan (Navi Rawat). The insights provided by Charlie's mathematics were always in some way crucial to solving the crime. On May 18, 2010, CBS canceled the series after six seasons. Cast and characters The show revolved around three intersecting groups of characters: the FBI, scientists at the fictitious California Institute of Science (CalSci), and the Eppes family. Don Eppes (Rob Morrow), Charlie's older brother, is the lead FBI agent at the Los Angeles Violent Crimes Squad. Professor Charlie Eppes (David Krumholtz) is a mathematical genius, who in addition to teaching at CalSci, consults for the FBI and NSA. Alan Eppes (Judd Hirsch) is a former L.A. city planner, a widower, and the father of both Charlie and Don Eppes. Alan lives in a historic two-story California bungalow furnished with period Arts and Crafts furniture. David Sinclair (Alimi Ballard) is an FBI field agent and was later made Don's se
https://en.wikipedia.org/wiki/Handheld%20projector
A handheld projector (also known as a pocket projector, mobile projector, pico projector or mini beamer) is an image projector in a handheld device. It was developed as a computer display device for compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but are too small to accommodate a display screen that an audience can see easily. Handheld projectors involve miniaturized hardware, and software that can project digital images onto a nearby viewing surface. The system comprises five main parts: the battery, the electronics, the laser or LED light sources, the combiner optic, and in some cases, scanning micromirror devices. First, the electronics system turns the image into an electronic signal. Next, the electronic signals drive laser or LED light sources with different colors and intensities down different paths. In the combiner optic, the different light paths are combined into one path, defining a palette of colors. An important design characteristic of a handheld projector is the ability to project a clear image on various viewing surfaces. History Major advances in imaging technology have allowed the introduction of hand-held (pico) type video projectors. The concept was also introduced by Explay in 2003 to various consumer electronics players. Their solution was publicly announced through their relationship with Kopin in January 2005. Insight Media market research has divided the leading players in this application into various categories: Micro-display makers (e.g., TI's DLP, Himax, Microvision, Lemoptix and bTendo MEMS scanners) Light source makers (e.g., Philips Lumileds, Osram, Cree LEDs and Corning, Nichia, Mitsubishi Lasers) Module makers (e.g., Texas Instruments (DLP), 3M Liquid crystal on silicon (LCoS)) Various manufacturers have produced handheld projectors exhibiting high-resolution, good brightness, and low energy cons
https://en.wikipedia.org/wiki/Bell%20series
In mathematics, the Bell series is a formal power series used to study properties of arithmetical functions. Bell series were introduced and developed by Eric Temple Bell. Given an arithmetic function and a prime , define the formal power series , called the Bell series of modulo as: Two multiplicative functions can be shown to be identical if all of their Bell series are equal; this is sometimes called the uniqueness theorem: given multiplicative functions and , one has if and only if: for all primes . Two series may be multiplied (sometimes called the multiplication theorem): For any two arithmetic functions and , let be their Dirichlet convolution. Then for every prime , one has: In particular, this makes it trivial to find the Bell series of a Dirichlet inverse. If is completely multiplicative, then formally: Examples The following is a table of the Bell series of well-known arithmetic functions. The Möbius function has The Mobius function squared has Euler's totient has The multiplicative identity of the Dirichlet convolution has The Liouville function has The power function Idk has Here, Idk is the completely multiplicative function . The divisor function has The constant function, with value 1, satisfies , i.e., is the geometric series. If is the power of the prime omega function, then Suppose that f is multiplicative and g is any arithmetic function satisfying for all primes p and . Then If denotes the Möbius function of order k, then See also Bell numbers References Arithmetic functions Mathematical series
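The formulas for the definition, the uniqueness and multiplication theorems, and the tabled examples appear to have been dropped during extraction. A sketch of the standard statements, writing f_p for the Bell series of f at the prime p, is:

```latex
% Bell series of an arithmetic function f at a prime p (standard definitions, reconstructed):
\begin{align*}
  f_p(x) &= \sum_{n=0}^{\infty} f(p^n)\,x^n
      && \text{(definition)}\\
  f = g &\iff f_p(x) = g_p(x) \text{ for all primes } p
      && \text{(uniqueness theorem, $f,g$ multiplicative)}\\
  (f * g)_p(x) &= f_p(x)\,g_p(x)
      && \text{(multiplication theorem, $*$ the Dirichlet convolution)}\\
  f_p(x) &= \frac{1}{1 - f(p)\,x}
      && \text{($f$ completely multiplicative)}\\[4pt]
  \mu_p(x) &= 1 - x, \qquad
  \varphi_p(x) = \frac{1 - x}{1 - p x}, \qquad
  \mathbf{1}_p(x) = \frac{1}{1 - x}
      && \text{(examples: M\"obius, totient, constant 1)}
\end{align*}
```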
https://en.wikipedia.org/wiki/Evgenii%20Landis
Evgenii Mikhailovich Landis (, Yevgeny Mikhaylovich Landis; 6 October 1921 – 12 December 1997) was a Soviet mathematician who worked mainly on partial differential equations. Life Landis was born in Kharkiv, Ukrainian SSR, Soviet Union. He was Jewish. He studied and worked at the Moscow State University, where his advisor was Alexander Kronrod, and later Ivan Petrovsky. In 1946, together with Kronrod, he rediscovered Sard's lemma, unknown in USSR at the time. Later, he worked on uniqueness theorems for elliptic and parabolic differential equations, Harnack inequalities, and Phragmén–Lindelöf type theorems. With Georgy Adelson-Velsky, he invented the AVL tree data structure (where "AVL" stands for Adelson-Velsky Landis). He died in Moscow. His students include Yulij Ilyashenko. External links Biography of Y.M. Landis at the International Centre for Mathematical Sciences. 20th-century Russian mathematicians Soviet mathematicians Soviet inventors Moscow State University alumni 1921 births 1997 deaths Mathematical analysts Ukrainian Jews Ukrainian mathematicians Ukrainian inventors Russian scientists
https://en.wikipedia.org/wiki/Normalized%20number
In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point. Thus, a real number, when written out in normalized scientific notation, is as follows: where n is an integer, are the digits of the number in base 10, and is not zero. That is, its leading digit (i.e., leftmost) is not zero and is followed by the decimal point. Simply speaking, a number is normalized when it is written in the form of a × 10n where 1 ≤ a < 10 without leading zeros in a. This is the standard form of scientific notation. An alternative style is to have the first non-zero digit after the decimal point. Examples As examples, the number 918.082 in normalized form is while the number in normalized form is Clearly, any non-zero real number can be normalized. Other bases The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10. In base b a normalized number will have the form where again and the digits, are integers between and . In many computer systems, binary floating-point numbers are represented internally using this normalized form for their representations; for details, see normal number (computing). Although the point is described as floating, for a normalized floating-point number, its position is fixed, the movement being reflected in the different values of the power. See also Significand Normal number (computing) References Computer arithmetic
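A short worked illustration of the definitions above; the second numeric example in the original was lost, so only the 918.082 case and the general base-b form are reconstructed here:

```latex
% Normalized scientific notation: exactly one non-zero digit before the point.
\begin{align*}
  918.082 &= 9.18082 \times 10^{2},\\
  x &= \pm\, d_0.d_1 d_2 d_3 \ldots \times b^{\,n},
      \qquad 1 \le d_0 \le b-1,\quad 0 \le d_i \le b-1
      && \text{(general form in base } b\text{)}.
\end{align*}
```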
https://en.wikipedia.org/wiki/Micro-Controller%20Operating%20Systems
Micro-Controller Operating Systems (MicroC/OS, stylized as μC/OS, or Micrium OS) is a real-time operating system (RTOS) designed by Jean J. Labrosse in 1991. It is a priority-based preemptive real-time kernel for microprocessors, written mostly in the programming language C. It is intended for use in embedded systems. MicroC/OS allows defining several functions in C, each of which can execute as an independent thread or task. Each task runs at a different priority, and runs as if it owns the central processing unit (CPU). Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing. History The MicroC/OS kernel was published originally in a three-part article in Embedded Systems Programming magazine and the book μC/OS The Real-Time Kernel by Labrosse. He intended at first to simply describe the internals of a portable OS he had developed for his own use, but later developed it as a commercial product in his own company Micrium, Inc. in versions II and III. In 2016 Micrium, Inc. was acquired by Silicon Laboratories and it was subsequently released as open source under the Apache license. Silicon Labs continues to maintain an open-source product named Micrium OS for use on their own silicon, and a group of former Micrium, Inc. employees (including Labrosse) provides consultancy and support for both μC/OS and Cesium RTOS, a proprietary fork made just after the open-source release. μC/OS-II Based on the source code written for μC/OS, and introduced as a commercial product in 1998, μC/OS-II is a portable, ROM-able, scalable, preemptive, real-time, deterministic, multitasking kernel for microprocessors, and digital signal processors (DSPs). It manages up to 64 tasks. Its size can be scaled (between 5 and 24 Kbytes) to only contain the
https://en.wikipedia.org/wiki/Bead%20sort
Bead sort, also called gravity sort, is a natural sorting algorithm, developed by Joshua J. Arulanandham, Cristian S. Calude and Michael J. Dinneen in 2002, and published in The Bulletin of the European Association for Theoretical Computer Science. Both digital and analog hardware implementations of bead sort can achieve a sorting time of O(n); however, the implementation of this algorithm tends to be significantly slower in software and can only be used to sort lists of positive integers. Also, it would seem that even in the best case, the algorithm requires O(n2) space. Algorithm overview The bead sort operation can be compared to the manner in which beads slide on parallel poles, such as on an abacus. However, each pole may have a distinct number of beads. Initially, it may be helpful to imagine the beads suspended on vertical poles. In Step 1, such an arrangement is displayed using n=5 rows of beads on m=4 vertical poles. The numbers to the right of each row indicate the number that the row in question represents; rows 1 and 2 are representing the positive integer 3 (because they each contain three beads) while the top row represents the positive integer 2 (as it only contains two beads). If we then allow the beads to fall, the rows now represent the same integers in sorted order. Row 1 contains the largest number in the set, while row n contains the smallest. If the above-mentioned convention of rows containing a series of beads on poles 1..k and leaving poles k+1..m empty has been followed, it will continue to be the case here. The action of allowing the beads to "fall" in our physical example has allowed the larger values from the higher rows to propagate to the lower rows. If the value represented by row a is smaller than the value contained in row a+1, some of the beads from row a+1 will fall into row a; this is certain to happen, as row a does not contain beads in those positions to stop the beads from row a+1 from falling. The mechanism unde
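A software sketch of the falling-beads idea in Python (assuming non-negative integers; the variable names are illustrative). It also makes explicit why the software version is slower than the O(n) physical model: every bead must be simulated individually.

```python
def bead_sort(values):
    """Gravity/bead sort for a list of non-negative integers.

    Each value is a row of beads on poles 1..value; letting the beads 'fall'
    leaves the rows sorted. Software cost is O(n * max(values)) time and
    O(max(values)) extra space, so the physical O(n) bound does not carry over.
    """
    if not values:
        return []
    if any(v < 0 for v in values):
        raise ValueError("bead sort only handles non-negative integers")

    poles = [0] * max(values)           # number of beads stacked on each pole
    for v in values:
        for p in range(v):              # drop one bead onto the first v poles
            poles[p] += 1

    # Read the rows back off, from the top row (smallest) to the bottom (largest).
    result = []
    for row in range(len(values), 0, -1):
        result.append(sum(1 for stack in poles if stack >= row))
    return result                       # ascending order

print(bead_sort([3, 1, 4, 1, 5]))       # [1, 1, 3, 4, 5]
```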
https://en.wikipedia.org/wiki/Affine%20Lie%20algebra
In mathematics, an affine Lie algebra is an infinite-dimensional Lie algebra that is constructed in a canonical fashion out of a finite-dimensional simple Lie algebra. Given an affine Lie algebra, one can also form the associated affine Kac-Moody algebra, as described below. From a purely mathematical point of view, affine Lie algebras are interesting because their representation theory, like representation theory of finite-dimensional semisimple Lie algebras, is much better understood than that of general Kac–Moody algebras. As observed by Victor Kac, the character formula for representations of affine Lie algebras implies certain combinatorial identities, the Macdonald identities. Affine Lie algebras play an important role in string theory and two-dimensional conformal field theory due to the way they are constructed: starting from a simple Lie algebra , one considers the loop algebra, , formed by the -valued functions on a circle (interpreted as the closed string) with pointwise commutator. The affine Lie algebra is obtained by adding one extra dimension to the loop algebra and modifying the commutator in a non-trivial way, which physicists call a quantum anomaly (in this case, the anomaly of the WZW model) and mathematicians a central extension. More generally, if σ is an automorphism of the simple Lie algebra associated to an automorphism of its Dynkin diagram, the twisted loop algebra consists of -valued functions f on the real line which satisfy the twisted periodicity condition . Their central extensions are precisely the twisted affine Lie algebras. The point of view of string theory helps to understand many deep properties of affine Lie algebras, such as the fact that the characters of their representations transform amongst themselves under the modular group. Affine Lie algebras from simple Lie algebras Definition If is a finite-dimensional simple Lie algebra, the corresponding affine Lie algebra is constructed as a central extension of the lo
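A sketch of the central extension described above, in common notation (⟨·,·⟩ a suitably normalized invariant bilinear form on the simple Lie algebra, c the central element); the twisted periodicity condition mentioned for an automorphism σ is also spelled out:

```latex
% Untwisted affine Lie algebra: the loop algebra plus a one-dimensional centre \mathbb{C}c,
% with the bracket modified by the central term:
\[
  [\,a \otimes t^m + \alpha c,\; b \otimes t^n + \beta c\,]
     = [a,b] \otimes t^{m+n} + m\,\delta_{m+n,0}\,\langle a, b \rangle\, c .
\]
% Twisted case: the loop algebra is restricted to functions f satisfying
% f(x + 2\pi) = \sigma(f(x)) for the diagram automorphism \sigma.
```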
https://en.wikipedia.org/wiki/Ligand-gated%20ion%20channel
Ligand-gated ion channels (LICs, LGIC), also commonly referred to as ionotropic receptors, are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter. When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response. These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately). The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LICs are additionally modulated by allosteric ligands, by channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels. Cys-loop receptors The cys-loop receptors are named after a characteristic loop formed by a disulfide bond between two cysteine residues in the N terminal extracellular domain. They are part of a larger family of pentameric ligand-gated ion channels that usually lack this disulfide bond, hence the tentative name "Pro-loop receptors"
https://en.wikipedia.org/wiki/Avast
Avast Software s.r.o. is a Czech multinational cybersecurity software company headquartered in Prague, Czech Republic, that researches and develops computer security software, machine learning, and artificial intelligence. Avast has more than 435 million monthly active users and the second largest market share among anti-malware application vendors worldwide as of April 2020. The company has approximately 1,700 employees across its 25 offices worldwide. In July 2021, NortonLifeLock, an American cybersecurity company, announced that it was in talks to merge with Avast Software. In August 2021, Avast's board of directors agreed to an offer of US$8 billion. Avast was founded by Pavel Baudiš and Eduard Kučera in 1988 as a cooperative. It had been a private company since 2010 and had its IPO in May 2018. In July 2016, Avast acquired competitor AVG Technologies for $1.3 billion. At the time, AVG was the third-ranked antivirus product. It was dual-listed on the Prague Stock Exchange and on the London Stock Exchange and was a constituent of the FTSE 100 Index until it was acquired by NortonLifeLock in September 2022. The company's main product is Avast Antivirus, along with tools such as the Avast Secure Browser and the Avast SecureLine VPN. Avast produces Avast Online Security, which is its main extension, but it also has extensions like Avast SafePrice and Avast Passwords. History Avast was founded by Eduard Kučera and Pavel Baudiš in 1988. The founders met each other at the Research Institute for Mathematical Machines in Czechoslovakia. They studied math and computer science, because the Communist Party of Czechoslovakia would require them to join the communist party to study physics. At the institute, Pavel Baudiš discovered the Vienna virus on a floppy disk and developed the first program to remove it. Afterwards, he asked Eduard Kucera to join him in cofounding Avast as a cooperative. The cooperative was originally called Alwil and only the software was named Avas
https://en.wikipedia.org/wiki/Modulo
In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another (called the modulus of the operation). Given two positive numbers a and n, a modulo n (often abbreviated as a mod n) is the remainder of the Euclidean division of a by n, where a is the dividend and n is the divisor. For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" would evaluate to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0. Although typically performed with a and n both being integers, many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation of n is 0 to n − 1 (a mod 1 is always 0; a mod 0 is undefined, being a division by zero). When exactly one of a or n is negative, the basic definition breaks down, and programming languages differ in how these values are defined. Variants of the definition In mathematics, the result of the modulo operation is an equivalence class, and any member of the class may be chosen as representative; however, the usual representative is the least positive residue, the smallest non-negative integer that belongs to that class (i.e., the remainder of the Euclidean division). However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware. In nearly all computing systems, the quotient q and the remainder r of a divided by n satisfy the conditions that q is an integer, a = nq + r, and |r| < |n|. This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive, and two possible choices for the quotient occur. In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs of a or n. Stand
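A quick illustration of how the sign conventions differ in practice, as a minimal Python sketch: Python's % operator takes the sign of the divisor (floored division), while math.fmod, like C's fmod, takes the sign of the dividend (truncated division).

```python
import math

# Python's % gives a result with the sign of the divisor (floored division).
print(7 % 3, -7 % 3, 7 % -3)              # 1  2  -2

# math.fmod follows the sign of the dividend (truncated division), like C's fmod.
print(math.fmod(7, 3), math.fmod(-7, 3))  # 1.0  -1.0
```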
https://en.wikipedia.org/wiki/Multi-monitor
Multi-monitor, also called multi-display and multi-head, is the use of multiple physical display devices, such as monitors, televisions, and projectors, in order to increase the area available for computer programs running on a single computer system. Research studies show that, depending on the type of work, multi-head may increase the productivity by 50–70%. Measurements of the Institute for Occupational Safety and Health of the German Social Accident Insurance showed that the quality and quantity of worker performance varies according to the screen setup and type of task. Overall, the results of physiological studies and the preferences of the test persons favour a dual-monitor rather than single-monitor setup. Physiologically limiting factors observed during work on dual monitors were minor and not generally significant. There is no evidence that office work with dual-monitor setups presents a possible hazard to workers. Implementation Multiple computers can be connected to provide a single display, e.g. over Gigabit Ethernet/Ethernet to drive a large video wall. Display modes USB One way to extend the number of displays on one computer is to add displays via USB. Starting in 2006, DisplayLink released several chips for USB support on VGA/DVI/LVDS and other interfaces. Adoption In the office In many professions, including graphic design, architecture, communications, accounting, engineering and video editing, the idea of two or more monitors being driven from one machine is not a new one. While in the past, it has meant multiple graphics adapters and specialized software, it was common for engineers to have at least two, if not more, displays to enhance productivity. In video gaming Early versions of Doom permitted a three-monitor display mode, using three networked machines to show left, right, and center views. More recently, games have used multiple monitors to show a more absorbing interface to the player or to display game information. Vario
https://en.wikipedia.org/wiki/Decider%20%28Turing%20machine%29
In computability theory, a decider is a Turing machine that halts for every input. A decider is also called a total Turing machine as it represents a total function. Because it always halts, such a machine is able to decide whether a given string is a member of a formal language. The class of languages which can be decided by such machines is the set of recursive languages. Given an arbitrary Turing machine, determining whether it is a decider is an undecidable problem. This is a variant of the halting problem, which asks for whether a Turing machine halts on a specific input. Functions computable by total Turing machines In practice, many functions of interest are computable by machines that always halt. A machine that uses only finite memory on any particular input can be forced to halt for every input by restricting its flow control capabilities so that no input will ever cause the machine to enter an infinite loop. As a trivial example, a machine implementing a finitary decision tree will always halt. It is not required that the machine be entirely free of looping capabilities, however, to guarantee halting. If we restrict loops to be of a predictably finite size (like the FOR loop in BASIC), we can express all of the primitive recursive functions (Meyer and Ritchie, 1967). An example of such a machine is provided by the toy programming language PL-{GOTO} of Brainerd and Landweber (1974). We can further define a programming language in which we can ensure that even more sophisticated functions always halt. For example, the Ackermann function, which is not primitive recursive, nevertheless is a total computable function computable by a term rewriting system with a reduction ordering on its arguments (Ohlebusch, 2002, pp. 67). Despite the above examples of programming languages which guarantee termination of the programs, there exists no programming language which captures exactly the total recursive functions, i.e. the functions which can be compute
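As an illustration of the idea that bounding every loop in advance guarantees termination, here is a small sketch in Python rather than BASIC or PL-{GOTO}, so it is only an analogy: every loop runs over a range that is fixed before the loop starts, so each function halts on every input.

```python
# Each for-loop below iterates a number of times computed before the loop
# begins, mimicking the bounded FOR / PL-{GOTO} constructs mentioned above;
# no input can make these functions run forever.
def add(a, b):
    for _ in range(b):
        a += 1
    return a

def mul(a, b):
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

def power(a, b):
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

print(power(2, 10))  # 1024, computed by a program that is guaranteed to terminate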
https://en.wikipedia.org/wiki/Modulo%20%28mathematics%29
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form: A is the same as B modulo C which is often equivalent to "A is the same as B up to C", and means A and B are the same—except for differences accounted for or explained by C. History Modulo is a mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801. Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure." The term has gained many meanings over the years—some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. More informally, the term is found in statements of the form: A is the same as B modulo C which means A and B are the same—except for differences accounted for or explained by C. Usage Original use Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example: 13 is congruent to 63 modulo 10 means that 13 − 63 is a
https://en.wikipedia.org/wiki/HP%20200LX
The HP 200LX Palmtop PC (F1060A, F1061A, F1216A), also known as project Felix, is a personal digital assistant introduced by Hewlett-Packard in August 1994. It was often called a Palmtop PC, and it was notable that it was, with some minor exceptions, a MS-DOS-compatible computer in a palmtop format, complete with a monochrome graphic display, QWERTY keyboard, serial port, and PCMCIA expansion slot. Description Input is accomplished via a small QWERTY-keyboard with a numeric keypad, enclosed in a clamshell-style case, less than about 25% of the size of a standard notebook computer. The palmtop runs for about 30–40 hours on two size AA alkaline or Ni-Cd rechargeable cells and can charge batteries (both Ni-Cd and NiMH) via a 12 V DC wall adapter. The HP 200LX has an Intel 80186 compatible embedded central processing unit named "Hornet", which runs at ~7.91 megahertz (which can be upgraded or overclocked to up to 15.8 MHz) and 1, 2 or 4 MB of memory, of which 640 KB is RAM and the rest can be used for expanded memory (EMS) or memory-based storage space. After-market updates can bring the memory chips to up to 64 MB, which frees the PCMCIA slot for modem or Ethernet card use. The Silicom, Accton 2212/2216, Netgear FA411, and Sohoware ND5120 network cards were compatible. Being IBM PC/XT compatible and running MS-DOS 5.0 from ROM, the HP 200LX can run virtually any program that would run on a full-size PC compatible computer as long as the code is written for the Intel 8086, 8088 or 80186 CPU and can run using CGA graphics. It can also run programs written for the 80286 CPU, provided they do not require the use of protected mode. It has a 16-bit PCMCIA Type II expansion slot that supports 5 V at 150 mA maximum, a SIR compatible infrared port and a full serial port (but with a proprietary mini connector for space constraint reasons). The built-in software suite runs from ROM and includes the Lotus 1-2-3 Release 2.4 spreadsheet, a calendar, a phone book, a terminal, L
https://en.wikipedia.org/wiki/Microprobe
A microprobe is an instrument that applies a stable and well-focused beam of charged particles (electrons or ions) to a sample. Types When the primary beam consists of accelerated electrons, the probe is termed an electron microprobe, when the primary beam consists of accelerated ions, the term ion microprobe is used. The term microprobe may also be applied to optical analytical techniques, when the instrument is set up to analyse micro samples or micro areas of larger specimens. Such techniques include micro Raman spectroscopy, micro infrared spectroscopy and micro LIBS. All of these techniques involve modified optical microscopes to locate the area to be analysed, direct the probe beam and collect the analytical signal. A laser microprobe is a mass spectrometer that uses ionization by a pulsed laser and subsequent mass analysis of the generated ions. Uses Scientists use this beam of charged particles to determine the elemental composition of solid materials (minerals, glasses, metals). The chemical composition of the target can be found from the elemental data extracted through emitted X-rays (in the case where the primary beam consists of charged electrons) or measurement of an emitted secondary beam of material sputtered from the target (in the case where the primary beam consists of charged ions). When the ion energy is in the range of a few tens of keV (kilo-electronvolt) these microprobes are usually called FIB (Focused ion beam). An FIB makes a small portion of the material into a plasma; the analysis is done by the same basic techniques as the ones used in mass spectrometry. When the ion energy is higher, hundreds of keV to a few MeV (mega-electronvolt) they are called nuclear microprobes. Nuclear microprobes are extremely powerful tools that utilize ion beam analysis techniques as microscopies with spot sizes in the micro-/nanometre range. These instruments are applied to solve scientific problems in a diverse range of fields, from microelectronics to
https://en.wikipedia.org/wiki/Gaussian%20binomial%20coefficient
In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or q-binomial coefficients) are q-analogs of the binomial coefficients. The Gaussian binomial coefficient, written as or , is a polynomial in q with integer coefficients, whose value when q is set to a prime power counts the number of subspaces of dimension k in a vector space of dimension n over , a finite field with q elements; i.e. it is the number of points in the finite Grassmannian . Definition The Gaussian binomial coefficients are defined by: where m and r are non-negative integers. If , this evaluates to 0. For , the value is 1 since both the numerator and denominator are empty products. Although the formula at first appears to be a rational function, it actually is a polynomial, because the division is exact in Z[q] All of the factors in numerator and denominator are divisible by , and the quotient is the q-number: Dividing out these factors gives the equivalent formula In terms of the q factorial , the formula can be stated as Substituting into gives the ordinary binomial coefficient . The Gaussian binomial coefficient has finite values as : Examples Combinatorial descriptions Inversions One combinatorial description of Gaussian binomial coefficients involves inversions. The ordinary binomial coefficient counts the -combinations chosen from an -element set. If one takes those elements to be the different character positions in a word of length , then each -combination corresponds to a word of length using an alphabet of two letters, say with copies of the letter 1 (indicating the positions in the chosen combination) and letters 0 (for the remaining positions). So, for example, the words using 0s and 1s are . To obtain the Gaussian binomial coefficient , each word is associated with a factor , where is the number of inversions of the word, where, in this case, an inversion is a pair of positions where the left of the pai
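The subspace-counting interpretation can be checked numerically with a small sketch that uses the standard product formula (assumed here, since the displayed definition did not survive extraction): [m choose r]_q = Π_{i=0}^{r−1} (q^{m−i} − 1)/(q^{i+1} − 1), evaluated at an integer prime power q.

```python
from math import prod

def gaussian_binomial(m, r, q):
    """Evaluate the Gaussian binomial coefficient [m choose r]_q at an integer q > 1
    via the product formula; at a prime power q it counts the r-dimensional
    subspaces of an m-dimensional vector space over the field with q elements."""
    if r < 0 or r > m:
        return 0
    numerator = prod(q ** (m - i) - 1 for i in range(r))
    denominator = prod(q ** (i + 1) - 1 for i in range(r))
    return numerator // denominator  # the division is exact

print(gaussian_binomial(4, 2, 2))  # 35 = number of 2-dimensional subspaces of (F_2)^4
```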
https://en.wikipedia.org/wiki/Breakout%20box
A breakout box is a piece of electrical test equipment used to support integration testing, expedite maintenance, and streamline the troubleshooting process at the system, subsystem, and component-level by simplifying the access to test signals. Breakout boxes span a wide spectrum of functionality. Some serve to break out every signal connection coming into a unit, while others breakout only specific signals commonly monitored for either testing or troubleshooting purposes. Some have electrical connectors, and others have optical fiber connectors. A breakout box serves as a troubleshooting tool to determine the wiring of an electrical connector interface on a networking device or computer. Typically, a breakout box is inserted between two electrical devices to determine which signal or power interconnects are active. Breakout boxes are handy in troubleshooting connection problems resulting from manufacturing errors (e.g., miswiring) or defective interconnects resulting from broken wiring. Breakout boxes are specific examples of a more general category of network testing equipment called "status monitors". Various such monitoring devices are available for testing serial interfaces, including RS-232, RS-449, V.35, and X.21, as well as specialty interfaces. They generally come with several kinds of connectors and are quick and easy to use for isolating problems with serial transmission connections in networking, telecommunications, and industrial settings. Variants The term breakout box is derived from the mechanical enclosure in which a connector's aggregate connections are separated (i.e., broken out) into the individual signal or current-carrying wires or cables. Often, if there are only a few connections, then a breakout cable (also called an octopus cable) may be used, as is common on notebook computers. The most common breakout boxes use D-subminiature connectors (sometimes referred to as D-sub connectors and sometimes erroneously as DB connectors) a
https://en.wikipedia.org/wiki/Object%20REXX
The Object REXX programming language is a general-purpose object-oriented scripting language. Based on the Rexx programming language (often called "Classic Rexx"), Object REXX is designed to be easy to learn, use, and maintain. Object REXX retains all the features and syntax of Classic Rexx while adding full object-oriented programming capabilities. History Object REXX was initially introduced by IBM for the operating system OS/2. IBM later ported it to Microsoft Windows and IBM's AIX. Object REXX was a follow-on to and a significant extension of the "Classic Rexx" language. Classic Rexx is a cross-platform scripting language that runs on all popular operating systems. It was originally created for the Conversational Monitor System (CMS) component of the operating system VM/SP Release 3 and later implemented by IBM on Multiple Virtual Storage (MVS), OS/2, PC DOS, and AIX. Other organizations subsequently implemented Classic Rexx interpreters for Windows, Linux, Unix, macOS, Android, and many other operating systems. On October 12, 2004, IBM released Object REXX as free and open-source software. In this new incarnation, Object REXX was rechristened Open Object Rexx (ooREXX). Since 2004, the Rexx Language Association has supported, maintained, and further developed ooRexx. ooRexx is currently available for Windows, Linux, MacOS, and Unix. IBM's original Object REXX interpreter continues to be available in OS/2-derived operating systems, such as ArcaOS and eComStation. Features As supersets of Classic Rexx, ooRexx and Object REXX endeavor to retain all the features of Classic Rexx. To this, ooRexx and Object REXX add all the features typical of object-oriented languages, such as subclassing, polymorphism, and data encapsulation. Further features include multiple inheritance via the use of mixin classes. ooRexx and Object REXX are designed to be a compatible superset of Classic Rexx. They conform to the ANSI standard for the Rexx language (X3.274-1996, “
https://en.wikipedia.org/wiki/Heat%20death%20paradox
The heat death paradox, also known as the thermodynamic paradox, Clausius' paradox and Kelvin’s paradox, is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe. It was formulated in February 1862 by Lord Kelvin and expanded upon by Hermann von Helmholtz and William John Macquorn Rankine. The paradox This theoretical paradox is directed at the then-mainstream strand of belief in a classical view of a sempiternal universe, whereby its matter is postulated as everlasting and having always been recognisably the universe. The heat death paradox is born of a paradigm resulting from fundamental ideas about the cosmos; it is necessary to change the paradigm to resolve the paradox. The paradox was based upon the rigid mechanical point of view of the second law of thermodynamics postulated by Rudolf Clausius and Lord Kelvin, according to which heat can only be transferred from a warmer to a colder object. It notes: if the universe were eternal, as claimed classically, it should already be cold and isotropic (its objects should have the same temperature, and the distribution of matter or radiation should be even). Kelvin compared the universe to a clock that runs slower and slower, constantly dissipating energy in impalpable heat, although he was unsure whether it would stop forever (reach thermodynamic equilibrium). According to this model, the existence of usable energy, which can be used to perform work and produce entropy, means that the clock has not stopped, since a conversion of heat into mechanical energy (which Kelvin called a rejuvenating universe scenario) is not contemplated. According to the laws of thermodynamics, any hot object transfers heat to its cooler surroundings, until everything is at the same temperature. For two objects at the same temperature, as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough
https://en.wikipedia.org/wiki/Lichenometry
In archaeology, palaeontology, and geomorphology, lichenometry is a geomorphic method of geochronologic dating that uses lichen growth to determine the age of exposed rock, based on a presumed specific rate of increase in radial size over time. Measuring the diameter of the largest lichen of a species on a rock surface can therefore be used to determine the length of time the rock has been exposed. Lichen can be preserved on old rock faces for up to 10,000 years, providing the maximum age limit of the technique, though it is most accurate (within 10% error) when applied to surfaces that have been exposed for less than 1,000 years. (The practical limit of the technique might be 4,000 to 5,000 years.) Lichenometry is especially useful for dating surfaces less than 500 years old, as radiocarbon dating techniques are less accurate over this period. The lichens most commonly used for lichenometry are those of the genera Rhizocarpon (e.g. the species Rhizocarpon geographicum) and Xanthoria. The measured growth rates of R. geographicum tend to fall within the range of 0.3–0.9 millimeters per year, depending on several factors, including the size of the lichen patch. The technique was first employed by Knut Fægri in 1933, though the first exclusively lichenometric paper was not published until 1950, by the Austrian Roland Beschel, in a paper concerning the European Alps. Lichenometry can provide dates for glacial deposits in tundra environments, lake level changes, glacial moraines, trim lines, palaeofloods, rockfalls, seismic events associated with the rockfalls, talus (scree) stabilization and the former extent of permafrost or very persistent snow cover. It has also been explored as a tool for assessing the speed of glacier retreat due to climate change. Among the potential problems of the technique are the difficulty of correctly identifying the species, the delay between exposure and colonization, varying growth rates from region to region, as well as the fact that growth rates are not alway
https://en.wikipedia.org/wiki/SAE%20Institute
The SAE Institute (SAE), formerly the School of Audio Engineering and the SAE Technology College and badged SAE Creative Media Education, is a network of colleges around the world that provides creative media programmes. Founded in 1976 in Sydney, Australia, by Tom Misner, SAE is now owned by Navitas Limited. History SAE was established by Tom Misner in 1976 in Sydney, converting a small advertising studio into a classroom. Over the next six years, campuses in Melbourne, Brisbane, Adelaide, and Perth were established. In the mid-1980s, SAE began opening colleges outside of Australia, including locations in London, Munich, Frankfurt, Vienna, Berlin, Auckland, and Glasgow. In the 1990s, SAE opened a European head office in Amsterdam, and locations were opened in Paris, Hamburg, Zürich, Hobart, Cologne, Stockholm, Athens, and Milan. SAE also began expanding into Asia in the 1990s, opening locations in Singapore and Kuala Lumpur. In the late 1990s, SAE formed the SAE Entertainment Company and launched full university degree programs with the co-operation of Southern Cross University and Middlesex University. In 1999, SAE began opening facilities in the United States, and over the following decade opened locations in Nashville, Miami, San Francisco, Atlanta, Los Angeles, and Chicago. In 2000, SAE began licensing franchise schools in India, opening four that year. In 2000s, locations were opened in Liverpool, Madrid, Brussels, Bangkok, Leipzig, Barcelona, Dubai, Amman, Cape Town, Istanbul, and Serbia. Licence agreements were signed for new schools in Qatar, Bogotá Colombia, Mexico, Saudi Arabia and Egypt. The Dubai branch offers degree certification accredited by Middlesex University. In the 2000s SAE also acquired QANTM, an Australian production, media and training company, and relocated its head office to Littlemore Park, Oxford, and its headquarters to Byron Bay, Australia. In 2010, the SAE Institute was sold to Navitas, a publicly traded educational services company
https://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen%20algorithm
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971. It works by recursively applying fast Fourier transforms (FFTs) over the integers modulo 2^n + 1. The run-time bit complexity to multiply two n-digit numbers using the algorithm is O(n · log n · log log n) in big O notation. The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007. It is asymptotically faster than older methods such as Karatsuba and Toom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits. In 2007, Martin Fürer published an algorithm with faster asymptotic complexity. In 2019, David Harvey and Joris van der Hoeven demonstrated that multi-digit multiplication has theoretical complexity O(n log n); however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (see galactic algorithm). Applications of the Schönhage–Strassen algorithm include large computations done for their own sake, such as the Great Internet Mersenne Prime Search and approximations of π, as well as practical applications such as Lenstra elliptic curve factorization via Kronecker substitution, which reduces polynomial multiplication to integer multiplication. Description Every number in base B can be written as a polynomial: X = Σ_i x_i B^i. Furthermore, the multiplication of two numbers can be thought of as a product of two polynomials: XY = (Σ_i x_i B^i)(Σ_j y_j B^j). Because the coefficient of B^k in the product is c_k = Σ_{i+j=k} x_i y_j, we have a convolution. By using the FFT (fast Fourier transform), used in the original version rather than the NTT (number-theoretic transform), together with the convolution rule, we get that each coefficient of the product in Fourier space is the product of the corresponding Fourier coefficients of the factors. This can also be written as: fft(a * b) = fft(a) ● fft(b). We have the same coefficients due to linearity under the Fourier transform, and because these polynomials only consist of one unique term per coefficient: and Convolution rule: We have red
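The central reduction of integer multiplication to a convolution of digit sequences can be sketched with a floating-point FFT. This is only an illustration of the convolution idea under that simplifying assumption; the actual algorithm uses an exact number-theoretic transform modulo 2^n + 1 and careful recursion, which this toy code does not attempt.

```python
import numpy as np

def fft_multiply(a, b, base=10):
    """Multiply two non-negative integers by convolving their digit sequences
    with a floating-point FFT; a toy illustration, not Schönhage-Strassen itself."""
    da = [int(d) for d in str(a)[::-1]]           # little-endian digits of a
    db = [int(d) for d in str(b)[::-1]]           # little-endian digits of b
    size = 1 << (len(da) + len(db)).bit_length()  # pad so the circular convolution is not truncated
    coeffs = np.rint(np.fft.ifft(np.fft.fft(da, size) * np.fft.fft(db, size)).real)
    # Evaluate the product polynomial at the base, which also propagates carries.
    return sum(int(c) * base ** i for i, c in enumerate(coeffs.astype(np.int64)))

print(fft_multiply(123456789, 987654321) == 123456789 * 987654321)  # True
```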
https://en.wikipedia.org/wiki/A%20Million%20Random%20Digits%20with%20100%2C000%20Normal%20Deviates
A Million Random Digits with 100,000 Normal Deviates is a random number book by the RAND Corporation, originally published in 1955. The book, consisting primarily of a random number table, was an important 20th century work in the field of statistics and random numbers. Production and background It was produced starting in 1947 by an electronic simulation of a roulette wheel attached to a computer, the results of which were then carefully filtered and tested before being used to generate the table. The RAND table was an important breakthrough in delivering random numbers, because such a large and carefully prepared table had never before been available. In addition to being available in book form, one could also order the digits on a series of punched cards. The table is formatted as 400 pages, each containing 50 lines of 50 digits. Columns and lines are grouped in fives, and the lines are numbered 00000 through 19999. The standard normal deviates are another 200 pages (10 per line, lines 0000 through 9999), with each deviate given to three decimal places. There are 28 additional pages of front matter. Utility The main use of the tables was in statistics and the experimental design of scientific experiments, especially those that used the Monte Carlo method; in cryptography, they have also been used as nothing up my sleeve numbers, for example in the design of the Khafre cipher. The book was one of the last of a series of random number tables produced from the mid-1920s to the 1950s, after which the development of high-speed computers allowed faster operation through the generation of pseudorandom numbers rather than reading them from tables. 2001 edition The book was reissued in 2001 () with a new foreword by RAND Executive Vice President Michael D. Rich. It has generated many humorous user reviews on Amazon.com. Sample The digits begin: References Additional sources George W. Brown, "History of RAND's random digits—Summary," in A.S. Householder, G.E.
https://en.wikipedia.org/wiki/Numerical%20model%20of%20the%20Solar%20System
A numerical model of the Solar System is a set of mathematical equations, which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics. The results of this simulation can be compared with past measurements to check for accuracy and then be used to predict future positions. Its main use therefore is in preparation of almanacs. Older efforts The simulations can be done in either Cartesian or in spherical coordinates. The former are easier, but extremely calculation intensive, and only practical on an electronic computer. As such only the latter was used in former times. Strictly speaking, the latter was not much less calculation intensive, but it was possible to start with some simple approximations and then to add perturbations, as much as needed to reach the wanted accuracy. In essence this mathematical simulation of the Solar System is a form of the N-body problem. The symbol N represents the number of bodies, which can grow quite large if one includes the Sun, 8 planets, dozens of moons, and countless planetoids, comets and so forth. However the influence of the Sun on any other body is so large, and the influence of all the other bodies on each other so small, that the problem can be reduced to the analytically solvable 2-body problem. The result for each planet is an orbit, a simple description of its position as function of time. Once this is solved the influences moons and planets have on each other are added as small corrections. These are small compared to a full planetary orbit. Some corrections might be still several degrees large, while measurements can be made to an accuracy of better than 1″. Although this method is no longer used for simulations, it is still useful to find an approximate ephemeris as one can take the relatively simple main solution, perhaps add a few of the largest perturbations, and arrive without too much effort at
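For the Cartesian approach mentioned above, a minimal sketch of direct numerical integration is given below: a single planet around the Sun with a leapfrog (kick-drift-kick) scheme, in units of astronomical units and years so that GM_sun ≈ 4π². A real ephemeris integrator would include all mutual perturbations and a higher-order method; the step size and initial conditions here are illustrative choices.

```python
import math

GM = 4 * math.pi ** 2            # Sun's gravitational parameter in AU^3 / yr^2
x, y = 1.0, 0.0                  # planet starts 1 AU from the Sun
vx, vy = 0.0, 2 * math.pi        # speed for a circular one-year orbit, AU / yr
dt = 0.001                       # time step in years

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

for _ in range(int(1.0 / dt)):   # integrate one full year
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick

print(round(x, 4), round(y, 4))  # close to the starting point (1, 0) after one orbit
```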
https://en.wikipedia.org/wiki/After%20Dark%20%28software%29
After Dark is a series of computer screensaver software introduced by Berkeley Systems in 1989 for the Apple Macintosh, and in 1991 for Microsoft Windows. Following the original, additional editions included More After Dark, Before Dark, and editions themed around licensed properties such as Star Trek, The Simpsons, Looney Tunes, Marvel, and Disney characters. On top of the included animated screensavers, the program allowed for the development and use of third-party modules, many hundreds of which were created at the height of its popularity. Flying Toasters The most famous of the included screensaver modules is the iconic Flying Toasters, which featured 1940s-style chrome toasters sporting bird-like wings, flying across the screen with pieces of toast. Engineer Jack Eastman came up with the display after seeing a toaster in the kitchen during a late-night programming session and imagining the addition of wings. A slider in the Flying Toasters module enabled users to adjust the toast's darkness and an updated Flying Toasters Pro module added a choice of music—Richard Wagner's Ride of the Valkyries or a flying toaster anthem with optional karaoke lyrics. Yet another version called Flying Toasters! added bagels and pastries, baby toasters, and more elaborate toaster animation. The Flying Toasters were one of the key reasons that After Dark became popular, and Berkeley began to produce other merchandising products such as T-shirts with the Flying Toaster image and slogans such as "The 51st Flying Toaster Squadron: On a mission to save your screen!" The toasters were the subject of two lawsuits, the first in 1993, Berkeley Systems vs Delrina Corporation, over a module of Delrina's Opus 'N Bill screensaver in which Opus the penguin shoots down the toasters. After a U.S. District judge ruled that Delrina's "Death Toasters" was infringing, Delrina later changed the wings of the toasters to propellers. The second case was brought in 1994 by 1960s rock group Jefferson
https://en.wikipedia.org/wiki/Berkeley%20RISC
Berkeley RISC is one of two seminal research projects into reduced instruction set computer (RISC) based microprocessor design taking place under the Defense Advanced Research Projects Agency VLSI Project. RISC was led by David Patterson (who coined the term RISC) at the University of California, Berkeley between 1980 and 1984. The other project took place a short distance away at Stanford University under their MIPS effort starting in 1981 and running until 1984. Berkeley's project was so successful that it became the name for all similar designs to follow; even the MIPS would become known as a "RISC processor". The Berkeley RISC design was later commercialized by Sun Microsystems as the SPARC architecture, and inspired the ARM architecture. The RISC concept Both RISC and MIPS were developed from the realization that the vast majority of programs did not use the vast majority of a processor's instructions. In a famous 1978 paper, Andrew S. Tanenbaum demonstrated that a complex 10,000 line high-level program could be represented using a simplified instruction set architecture using an 8-bit fixed-length opcode. This was roughly the same conclusion reached at IBM, whose studies of their own code running on mainframes like the IBM 360 used only a small subset of all the instructions available. Both of these studies suggested that one could produce a much simpler CPU that would still run the vast majority of real-world code. Another finding, not fully explored at the time, was Tanenbaum's note that 81% of the constants were either 0, 1, or 2. These realizations were taking place as the microprocessor market was moving from 8 to 16-bit with 32-bit designs about to appear. These processors were designed on the premise of trying to replicate some of the more well-respected ISAs from the mainframe and minicomputer world. For instance, the National Semiconductor NS32000 started out as an effort to produce a single-chip implementation of the VAX-11, which had a rich inst
https://en.wikipedia.org/wiki/Blackboard%20system
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts. Metaphor The following scenario provides a simple metaphor that gives some insight into how a blackboard functions: A group of specialists are seated in a room with a large blackboard. They work as a team to brainstorm a solution to a problem, using the blackboard as the workplace for cooperatively developing the solution. The session begins when the problem specifications are written onto the blackboard. The specialists all watch the blackboard, looking for an opportunity to apply their expertise to the developing solution. When someone writes something on the blackboard that allows another specialist to apply their expertise, the second specialist records their contribution on the blackboard, hopefully enabling other specialists to then apply their expertise. This process of adding contributions to the blackboard continues until the problem has been solved. Components A blackboard-system application consists of three major components The software specialist modules, which are called knowledge sources (KSs). Like the human experts at a blackboard, each knowledge source provides specific expertise needed by the application. The blackboard, a shared repository of problems, partial solutions, suggestions, and contributed information. The blackboard can be thought of as a dynamic "library" of contributions to the current problem that have been
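A minimal sketch of a blackboard, a few knowledge sources, and a simple opportunistic control loop in Python; the knowledge sources, class names, and the toy temperature problem are invented here purely for illustration.

```python
class Blackboard(dict):
    """Shared repository of the problem, partial solutions and contributed facts."""

class KnowledgeSource:
    def can_contribute(self, bb): raise NotImplementedError
    def contribute(self, bb): raise NotImplementedError

class CelsiusToFahrenheit(KnowledgeSource):
    def can_contribute(self, bb): return "celsius" in bb and "fahrenheit" not in bb
    def contribute(self, bb): bb["fahrenheit"] = bb["celsius"] * 9 / 5 + 32

class Verdict(KnowledgeSource):
    def can_contribute(self, bb): return "fahrenheit" in bb and "verdict" not in bb
    def contribute(self, bb): bb["verdict"] = "hot" if bb["fahrenheit"] > 86 else "mild"

def control_loop(bb, sources):
    """Opportunistic control: let any ready knowledge source act until none can."""
    fired = True
    while fired:
        fired = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                fired = True

bb = Blackboard(celsius=35)
control_loop(bb, [Verdict(), CelsiusToFahrenheit()])   # order of specialists does not matter
print(bb)   # {'celsius': 35, 'fahrenheit': 95.0, 'verdict': 'hot'}
```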
https://en.wikipedia.org/wiki/Pell%20number
In mathematics, the Pell numbers are an infinite sequence of integers, known since ancient times, that comprise the denominators of the closest rational approximations to the square root of 2. This sequence of approximations begins 1/1, 3/2, 7/5, 17/12, and 41/29, so the sequence of Pell numbers begins with 1, 2, 5, 12, and 29. The numerators of the same sequence of approximations are half the companion Pell numbers or Pell–Lucas numbers; these numbers form a second infinite sequence that begins with 2, 6, 14, 34, and 82. Both the Pell numbers and the companion Pell numbers may be calculated by means of a recurrence relation similar to that for the Fibonacci numbers, and both sequences of numbers grow exponentially, proportionally to powers of the silver ratio 1 + √2. As well as being used to approximate the square root of two, Pell numbers can be used to find square triangular numbers, to construct integer approximations to the right isosceles triangle, and to solve certain combinatorial enumeration problems. As with Pell's equation, the name of the Pell numbers stems from Leonhard Euler's mistaken attribution of the equation and the numbers derived from it to John Pell. The Pell–Lucas numbers are also named after Édouard Lucas, who studied sequences defined by recurrences of this type; the Pell and companion Pell numbers are Lucas sequences. Pell numbers The Pell numbers are defined by the recurrence relation P0 = 0, P1 = 1, and Pn = 2Pn−1 + Pn−2 for n ≥ 2. In words, the sequence of Pell numbers starts with 0 and 1, and then each Pell number is the sum of twice the previous Pell number and the Pell number before that. The first few terms of the sequence are 0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860, … . Analogously to the Binet formula, the Pell numbers can also be expressed by the closed-form formula Pn = ((1 + √2)^n − (1 − √2)^n) / (2√2). For large values of n, the term (1 + √2)^n / (2√2) dominates this expression, so the Pell numbers are approximately proportional to powers of the silver ratio 1 + √2, analogous to the growth rate of Fibonacci numbers as powers of the
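A short sketch of the recurrence and of the √2 approximations built from the Pell numbers; the convergent numerators used below are the sums P(n−1) + P(n), i.e. half the companion Pell numbers.

```python
def pell(n):
    """n-th Pell number: P(0) = 0, P(1) = 1, P(n) = 2*P(n-1) + P(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

print([pell(n) for n in range(9)])      # [0, 1, 2, 5, 12, 29, 70, 169, 408]

# Rational approximations to sqrt(2): numerator P(n-1) + P(n), denominator P(n).
for n in range(1, 6):
    p, q = pell(n - 1) + pell(n), pell(n)
    print(f"{p}/{q} = {p / q:.6f}")     # 1/1, 3/2, 7/5, 17/12, 41/29 -> 1.414214...
```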
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent the link from being broken. Description Once a TCP connection has been established, that connection is defined to be valid until one side closes it. Once the connection has entered the connected state, it will remain connected indefinitely. But, in reality, the connection will not last indefinitely. Many firewall or NAT systems will close a connection if there has been no activity in some time period. The Keep Alive signal can be used to trick intermediate hosts to not close the connection due to inactivity. It is also possible that one host is no longer listening (e.g. application or system crash). In this case, the connection is closed, but no FIN was ever sent. In this case, a KeepAlive packet can be used to interrogate a connection to check if it is still intact. A keepalive signal is often sent at predefined intervals, and plays an important role on the Internet. After a signal is sent, if no reply is received, the link is assumed to be down and future data will be routed via another path until the link is up again. A keepalive signal can also be used to indicate to Internet infrastructure that the connection should be preserved. Without a keepalive signal, intermediate NAT-enabled routers can drop the connection after timeout. Since the only purpose is to find links that do not work or to indicate connections that should be preserved, keepalive messages tend to be short and not take much bandwidth. However, their precise format and usage terms depend on the communication protocol. TCP keepalive Transmission Control Protocol (TCP) keepalives are an optional feature, and if included must default to off. The keepalive packet contains no data. In an Ethernet network, this results in frames of minimum size (64 bytes). There are three parameters related to keepalive: Keepalive time is the duration between two keepalive transmissions in
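For example, TCP keepalive can be switched on per socket. The sketch below uses Python's socket module; the three tuning options shown are the Linux names, and other platforms expose different knobs, so treat those constants as platform-dependent.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The keepalive mechanism is optional and off by default; enable it explicitly.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: idle time before the first probe, interval between
# probes, and how many unanswered probes are sent before the peer is declared dead.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```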
https://en.wikipedia.org/wiki/Parallel%20evolution
Parallel evolution is the similar development of a trait in distinct species that are not closely related, but share a similar original trait in response to similar evolutionary pressure. Parallel vs. convergent evolution Given a trait that occurs in each of two lineages descended from a specified ancestor, it is possible in theory to define parallel and convergent evolutionary trends strictly, and distinguish them clearly from one another. However the criteria for defining convergent as opposed to parallel evolution are unclear in practice, so that arbitrary diagnosis is common. When two species share a trait, evolution is defined as parallel if the ancestors are known to have shared that similarity; if not, it is defined as convergent. However, the stated conditions are a matter of degree; all organisms share common ancestors. Scientists differ on whether the distinction is useful. Parallel evolution between marsupials and placentals A number of examples of parallel evolution are provided by the two main branches of the mammals, the placentals and marsupials, which have followed independent evolutionary pathways following the break-up of land-masses such as Gondwanaland roughly 100 million years ago. In South America, marsupials and placentals shared the ecosystem (before the Great American Interchange); in Australia, marsupials prevailed; and in the Old World and North America the placentals won out. However, in all these localities mammals were small and filled only limited places in the ecosystem until the mass extinction of dinosaurs sixty-five million years ago. At this time, mammals on all three landmasses began to take on a much wider variety of forms and roles. While some forms were unique to each environment, surprisingly similar animals have often emerged in two or three of the separated continents. Examples of these include the placental sabre-toothed cats (Machairodontinae) and the South American marsupial sabre-tooth (Thylacosmilus); the Tasmania
https://en.wikipedia.org/wiki/Recombinant%20DNA
Recombinant DNA (rDNA) molecules are DNA molecules formed by laboratory methods of genetic recombination (such as molecular cloning) that bring together genetic material from multiple sources, creating sequences that would not otherwise be found in the genome. Recombinant DNA is the general name for a piece of DNA that has been created by combining two or more fragments from different sources. Recombinant DNA is possible because DNA molecules from all organisms share the same chemical structure, differing only in the nucleotide sequence. Recombinant DNA molecules are sometimes called chimeric DNA because they can be made of material from two different species like the mythical chimera. rDNA technology uses palindromic sequences and leads to the production of sticky and blunt ends. The DNA sequences used in the construction of recombinant DNA molecules can originate from any species. For example, plant DNA can be joined to bacterial DNA, or human DNA can be joined with fungal DNA. In addition, DNA sequences that do not occur anywhere in nature can be created by the chemical synthesis of DNA and incorporated into recombinant DNA molecules. Using recombinant DNA technology and synthetic DNA, any DNA sequence can be created and introduced into living organisms. Proteins that can result from the expression of recombinant DNA within living cells are termed recombinant proteins. When recombinant DNA encoding a protein is introduced into a host organism, the recombinant protein is not necessarily produced. Expression of foreign proteins requires the use of specialized expression vectors and often necessitates significant restructuring by foreign coding sequences. Recombinant DNA differs from genetic recombination in that the former results from artificial methods while the latter is a normal biological process that results in the remixing of existing DNA sequences in essentially all organisms. DNA creation Molecular cloning is the laboratory process used to create rec
https://en.wikipedia.org/wiki/MLT-3%20encoding
MLT-3 encoding (Multi-Level Transmit) is a line code (a signaling method used in a telecommunication system for transmission purposes) that uses three voltage levels. An MLT-3 interface emits less electromagnetic interference and requires less bandwidth than most other binary or ternary interfaces that operate at the same bit rate (see PCM for discussion on bandwidth / quantization tradeoffs), such as Manchester code or Alternate Mark Inversion. MLT-3 cycles sequentially through the voltage levels −1, 0, +1, 0. It moves to the next state to transmit a 1 bit, and stays in the same state to transmit a 0 bit. Similar to simple NRZ encoding, MLT-3 has a coding efficiency of 1 bit/baud, however it requires four transitions (baud) to complete a full cycle (from low-to-middle, middle-to-high, high-to-middle, middle-to-low). Thus, the maximum fundamental frequency is reduced to one fourth of the baud rate. This makes signal transmission more amenable to copper wires. The lack of transition on a 0 bit means that for practical use, the number of consecutive 0 bits in the transmitted data must be bounded; i.e. it must be pre-coded using a run-length limited code. This results in an effective bitrate slightly lower than one bit per baud or four bits per cycle. MLT-3 was first introduced by Crescendo Communications as a coding scheme for FDDI copper interconnect (TP-PMD, aka CDDI). Later, the same technology was used in the 100BASE-TX physical medium dependent sublayer, given the considerable similarities between FDDI and 100BASE-[TF]X physical media attachment layer (section 25.3 of IEEE802.3-2002 specifies that ANSI X3.263:1995 TP-PMD should be consulted, with minor exceptions). Signaling specified by 100BASE-T4 Ethernet, while it has three levels, is not compatible with MLT-3. It uses selective base-2 to base-3 conversion with direct mapping of base-3 digits to line levels (8B6T code). See also 4B5B External links References Line codes
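The encoding rule is easy to state in code: keep a pointer into the cycle −1, 0, +1, 0, advance it on every 1 bit, and stay put on every 0 bit. A small sketch follows; the starting state is an arbitrary choice here.

```python
def mlt3(bits):
    """Map a bit sequence to MLT-3 line levels."""
    cycle = [-1, 0, +1, 0]
    state = 3                          # arbitrary starting point in the cycle
    levels = []
    for bit in bits:
        if bit:                        # a 1 bit moves to the next level in the cycle
            state = (state + 1) % 4
        levels.append(cycle[state])    # a 0 bit repeats the current level
    return levels

# Four consecutive 1 bits complete one full cycle, so the fundamental frequency
# is at most a quarter of the bit rate.
print(mlt3([1, 1, 1, 1, 0, 1, 0, 1]))  # [-1, 0, 1, 0, 0, -1, -1, 0]
```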
https://en.wikipedia.org/wiki/Lychrel%20number
A Lychrel number is a natural number that cannot form a palindrome through the iterative process of repeatedly reversing its digits and adding the resulting numbers. This process is sometimes called the 196-algorithm, after the most famous number associated with the process. In base ten, no Lychrel numbers have been yet proven to exist, but many, including 196, are suspected on heuristic and statistical grounds. The name "Lychrel" was coined by Wade Van Landingham as a rough anagram of "Cheryl", his girlfriend's first name. Reverse-and-add process The reverse-and-add process produces the sum of a number and the number formed by reversing the order of its digits. For example, 56 + 65 = 121. As another example, 125 + 521 = 646. Some numbers become palindromes quickly after repeated reversal and addition, and are therefore not Lychrel numbers. All one-digit and two-digit numbers eventually become palindromes after repeated reversal and addition. About 80% of all numbers under 10,000 resolve into a palindrome in four or fewer steps; about 90% of those resolve in seven steps or fewer. Here are a few examples of non-Lychrel numbers: 56 becomes palindromic after one iteration: 56+65 = 121. 57 becomes palindromic after two iterations: 57+75 = 132, 132+231 = 363. 59 becomes a palindrome after three iterations: 59+95 = 154, 154+451 = 605, 605+506 = 1111 89 takes an unusually large 24 iterations (the most of any number under 10,000 that is known to resolve into a palindrome) to reach the palindrome 8813200023188. 10,911 reaches the palindrome 4668731596684224866951378664 (28 digits) after 55 steps. 1,186,060,307,891,929,990 takes 261 iterations to reach the 119-digit palindrome 44562665878976437622437848976653870388884783662598425855963436955852489526638748888307835667984873422673467987856626544, which was a former world record for the Most Delayed Palindromic Number. It was solved by Jason Doucette's algorithm and program (using Benjamin Despres' reversal-addition code)
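The reverse-and-add process is simple to implement; the sketch below counts the iterations needed to reach a palindrome, with an arbitrary cut-off for Lychrel candidates such as 196.

```python
def reverse_and_add_steps(n, max_iter=1000):
    """Number of reverse-and-add iterations needed to reach a palindrome,
    or None if none is found within max_iter steps (a Lychrel candidate)."""
    for steps in range(1, max_iter + 1):
        n += int(str(n)[::-1])
        if str(n) == str(n)[::-1]:
            return steps
    return None

print(reverse_and_add_steps(56))    # 1   (56 + 65 = 121)
print(reverse_and_add_steps(57))    # 2   (57 -> 132 -> 363)
print(reverse_and_add_steps(89))    # 24
print(reverse_and_add_steps(196))   # None: no palindrome within 1000 iterations
```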
https://en.wikipedia.org/wiki/Pulse-Doppler%20radar
A pulse-Doppler radar is a radar system that determines the range to a target using pulse-timing techniques, and uses the Doppler effect of the returned signal to determine the target object's velocity. It combines the features of pulse radars and continuous-wave radars, which were formerly separate due to the complexity of the electronics. The first operational Pulse Doppler radar was in the CIM-10 Bomarc, an American long range supersonic missile powered by ramjet engines, and which was armed with a W40 nuclear weapon to destroy entire formations of attacking enemy aircraft. Pulse-Doppler systems were first widely used on fighter aircraft starting in the 1960s. Earlier radars had used pulse-timing in order to determine range and the angle of the antenna (or similar means) to determine the bearing. However, this only worked when the radar antenna was not pointed down; in that case the reflection off the ground overwhelmed any returns from other objects. As the ground moves at the same speed but opposite direction of the aircraft, Doppler techniques allow the ground return to be filtered out, revealing aircraft and vehicles. This gives pulse-Doppler radars "look-down/shoot-down" capability. A secondary advantage in military radar is to reduce the transmitted power while achieving acceptable performance for improved safety of stealthy radar. Pulse-Doppler techniques also find widespread use in meteorological radars, allowing the radar to determine wind speed from the velocity of any precipitation in the air. Pulse-Doppler radar is also the basis of synthetic aperture radar used in radar astronomy, remote sensing and mapping. In air traffic control, they are used for discriminating aircraft from clutter. Besides the above conventional surveillance applications, pulse-Doppler radar has been successfully applied in healthcare, such as fall risk assessment and fall detection, for nursing or clinical purposes. History The earliest radar systems failed to operate as ex
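The velocity measurement rests on the standard two-way Doppler relation f_d = 2·v_r/λ (the factor of two arises because both the outgoing and the returning paths change length); this relation and the numbers below are standard illustrative values, not taken from the text above.

```python
c = 3.0e8        # speed of light, m/s
f_tx = 10.0e9    # transmit frequency (X band), Hz -- an illustrative value
v_r = 300.0      # radial velocity of the target, m/s

wavelength = c / f_tx
f_doppler = 2 * v_r / wavelength
print(f"{f_doppler:.0f} Hz")   # 20000 Hz: a small, filterable shift on a 10 GHz carrier
```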
https://en.wikipedia.org/wiki/Sobol%20sequence
Sobol’ sequences (also called LPτ sequences or (t, s) sequences in base 2) are an example of quasi-random low-discrepancy sequences. They were first introduced by the Russian mathematician Ilya M. Sobol’ (Илья Меерович Соболь) in 1967. These sequences use a base of two to form successively finer uniform partitions of the unit interval and then reorder the coordinates in each dimension. Good distributions in the s-dimensional unit hypercube Let Is = [0,1]s be the s-dimensional unit hypercube, and f a real integrable function over Is. The original motivation of Sobol’ was to construct a sequence xn in Is so that and the convergence be as fast as possible. It is more or less clear that for the sum to converge towards the integral, the points xn should fill Is minimizing the holes. Another good property would be that the projections of xn on a lower-dimensional face of Is leave very few holes as well. Hence the homogeneous filling of Is does not qualify because in lower dimensions many points will be at the same place, therefore useless for the integral estimation. These good distributions are called (t,m,s)-nets and (t,s)-sequences in base b. To introduce them, define first an elementary s-interval in base b a subset of Is of the form where aj and dj are non-negative integers, and for all j in {1, ...,s}. Given 2 integers , a (t,m,s)-net in base b is a sequence xn of bm points of Is such that for all elementary interval P in base b of hypervolume λ(P) = bt−m. Given a non-negative integer t, a (t,s)-sequence in base b is an infinite sequence of points xn such that for all integers , the sequence is a (t,m,s)-net in base b. In his article, Sobol’ described Πτ-meshes and LPτ sequences, which are (t,m,s)-nets and (t,s)-sequences in base 2 respectively. The terms (t,m,s)-nets and (t,s)-sequences in base b (also called Niederreiter sequences) were coined in 1988 by Harald Niederreiter. The term Sobol’ sequences was introduced in late English-speaking papers in
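In practice, Sobol’ points are usually taken from a library implementation; the sketch below uses SciPy's quasi-Monte Carlo module (qmc.Sobol, available in SciPy 1.7 and later) to estimate a simple integral over the unit square.

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)   # 2-dimensional Sobol' sequence
points = sampler.random_base2(m=6)         # 2**6 = 64 points in [0, 1)^2

# Quasi-Monte Carlo estimate of the integral of f(x, y) = x*y over the unit
# square; the exact value is 0.25, and the low-discrepancy points get close
# with very few samples.
estimate = (points[:, 0] * points[:, 1]).mean()
print(estimate)
```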
https://en.wikipedia.org/wiki/Ansatz
In physics and mathematics, an ansatz (; , meaning: "initial placement of a tool at a work piece", plural ansätze ; ) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results. Use An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework to the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a "trial answer" and an important technique in solving differential equations). After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find. It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available. Examples Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters. Another example could be the mass, energy, and entropy balance equations that, considered simultaneous for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics. Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equatio
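As a concrete instance of the exponential ansatz for a homogeneous linear differential equation, substituting y = e^(rt) into the illustrative equation y'' − 3y' + 2y = 0 reduces the problem to a polynomial equation in r; a short check with SymPy:

```python
import sympy as sp

t, r = sp.symbols("t r")
y = sp.exp(r * t)                      # the exponential ansatz
ode = y.diff(t, 2) - 3 * y.diff(t) + 2 * y
characteristic = sp.simplify(ode / y)  # exp(r*t) cancels, leaving r**2 - 3*r + 2
print(sp.solve(characteristic, r))     # [1, 2]  ->  general solution C1*e^t + C2*e^(2t)
```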
https://en.wikipedia.org/wiki/Dickey%E2%80%93Wicker%20Amendment
The Dickey–Wicker Amendment is the name of an appropriation bill rider attached to a bill passed by United States Congress in 1995, and signed by former President Bill Clinton, which prohibits the United States Department of Health and Human Services (HHS) from using appropriated funds for the creation of human embryos for research purposes or for research in which human embryos are destroyed. HHS funding includes the funding for the National Institutes of Health (NIH). It is named after Jay Dickey and Roger Wicker, two Republican Representatives. Technically, the Dickey Amendment is a rider to other legislation, which amends the original legislation. The rider receives its name from the name of the Congressman that originally introduced the amendment, Representative Dickey. The Dickey amendment language has been added to each of the Labor, HHS, and Education appropriations acts for fiscal years 1997 through 2009. The original rider can be found in Section 128 of P.L. 104–99. The wording of the rider is generally the same year after year. For fiscal year 2009, the wording in Division F, Section 509 of the Omnibus Appropriations Act, 2009, (enacted March 11, 2009) prohibits HHS, including NIH, from using fiscal year 2009 appropriated funds as follows: SEC. 509. (a) None of the funds made available in this Act may be used for-- (1) the creation of a human embryo or embryos for research purposes; or (2) research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed for research on fetuses in utero under 45 CFR 46.208(a)(2) and Section 498(b) of the Public Health Service Act (42 U.S.C. 289g(b)) (Title 42, Section 289g(b), United States Code). (b) For purposes of this section, the term "human embryo or embryos" includes any organism, not protected as a human subject under 45 CFR 46 (the Human Subject Protection regulations) ... that is derived by fertilization, parthenogenesis, cloni
https://en.wikipedia.org/wiki/Slime%20layer
A slime layer in bacteria is an easily removable (e.g. by centrifugation), unorganized layer of extracellular material that surrounds bacterial cells. Specifically, this consists mostly of exopolysaccharides, glycoproteins, and glycolipids. Therefore, the slime layer is considered a subset of the glycocalyx. Slime layers and capsules are found most commonly in bacteria, but these structures also occur, though rarely, in archaea, and the description of their structure and function applies to those microorganisms as well. Structure Slime layers are amorphous and inconsistent in thickness, being produced in various quantities depending upon the cell type and environment. These layers present themselves as strands hanging extracellularly and forming net-like structures between cells that are 1–4 μm apart. Researchers have suggested that a cell slows formation of the slime layer after around 9 days of growth, perhaps due to slower metabolic activity. A bacterial capsule is similar, but is more rigid than the slime layer. Capsules are more organized and difficult to remove compared to their slime layer counterparts. Another highly organized, but separate, structure is an S-layer. S-layers are structures that integrate themselves into the cell wall and are composed of glycoproteins; these layers can offer the cell rigidity and protection. Because a slime layer is loose and flowing, it does not aid the cell's rigidity. While biofilms can contain slime-layer-producing bacteria, such bacteria are typically not their main constituent. Rather, a biofilm is made up of an array of microorganisms that come together to form a cohesive whole, although homogeneous biofilms can also form. For example, the plaque that forms on the surfaces of teeth is a biofilm formed primarily by Streptococcus mutans, and it causes the slow breakdown of tooth enamel. Cellular function The function of the slime layer is to protect the bacterial cells from environmental dangers
https://en.wikipedia.org/wiki/Pregeometry%20%28model%20theory%29
Pregeometry, and in full combinatorial pregeometry, are essentially synonyms for "matroid". They were introduced by Gian-Carlo Rota with the intention of providing a less "ineffably cacophonous" alternative term. Also, the term combinatorial geometry, sometimes abbreviated to geometry, was intended to replace "simple matroid". These terms are now infrequently used in the study of matroids. It turns out that many fundamental concepts of linear algebra – closure, independence, subspace, basis, dimension – are available in the general framework of pregeometries. In the branch of mathematical logic called model theory, infinite finitary matroids, there called "pregeometries" (and "geometries" if they are simple matroids), are used in the discussion of independence phenomena. The study of how pregeometries, geometries, and abstract closure operators influence the structure of first-order models is called geometric stability theory. Motivation If V is a vector space over some field and A ⊆ V, we define cl(A) to be the set of all linear combinations of vectors from A, also known as the span of A. Then we have A ⊆ cl(A), cl(cl(A)) = cl(A), and A ⊆ B implies cl(A) ⊆ cl(B). The Steinitz exchange lemma is equivalent to the statement: if b ∈ cl(A ∪ {c}) \ cl(A), then c ∈ cl(A ∪ {b}). The linear algebra concepts of independent set, generating set, basis and dimension can all be expressed using the cl-operator alone. A pregeometry is an abstraction of this situation: we start with an arbitrary set S and an arbitrary operator cl which assigns to each subset A of S a subset cl(A) of S, satisfying the properties above. Then we can define the "linear algebra" concepts also in this more general setting. This generalized notion of dimension is very useful in model theory, where in certain situations one can argue as follows: two models with the same cardinality must have the same dimension and two models with the same dimension must be isomorphic. Definitions Pregeometries and geometries A combinatorial pregeometry (also known as a finitary matroid) is a pair (S, cl), where S is a set and cl : P(S) → P(S) (calle
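A small brute-force sketch (my own construction, not from the article) of the motivating example in a finite setting: the span closure operator on the eight-element vector space (GF(2))^3, with an exhaustive check of the exchange property over a few sample subsets.

    # Toy finitary pregeometry: linear-span closure on (GF(2))^3,
    # plus a brute-force check of the Steinitz exchange property.
    from itertools import product

    def add(u, v):
        return tuple((a + b) % 2 for a, b in zip(u, v))

    def cl(A):
        """Closure = set of all GF(2)-linear combinations of vectors in A."""
        span = {(0, 0, 0)}
        changed = True
        while changed:
            changed = False
            for u in list(span):
                for v in A:
                    w = add(u, v)
                    if w not in span:
                        span.add(w)
                        changed = True
        return span

    V = list(product((0, 1), repeat=3))

    # Exchange: if b is in cl(A ∪ {c}) but not in cl(A), then c is in cl(A ∪ {b}).
    for A in [frozenset(s) for s in [(), ((1, 0, 0),), ((1, 0, 0), (0, 1, 0))]]:
        for b in V:
            for c in V:
                if b in cl(A | {c}) and b not in cl(A):
                    assert c in cl(A | {b})
    print("exchange property holds on the sampled subsets")

The same cl operator also satisfies reflexivity, monotonicity, idempotence and finite character, so (V, cl) is a (finite) pregeometry in the sense defined above.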
https://en.wikipedia.org/wiki/Pomology
Pomology (from Latin , "fruit", + , "study") is a branch of botany that studies fruits and their cultivation. Someone who researches and practices the science of pomology is called a pomologist. The term fruticulture (from Latin , "fruit", + , "care") is also used to describe the agricultural practice of growing fruits in orchards. Pomological research is mainly focused on the development, enhancement, cultivation and physiological studies of fruit trees. The goals of fruit tree improvement include enhancement of fruit quality, regulation of production periods, and reduction of production costs. History Middle East In ancient Mesopotamia, pomology was practiced by the Sumerians, who are known to have grown various types of fruit, including dates, grapes, apples, melons, and figs. While the first fruits cultivated by the Egyptians were likely indigenous, such as the palm date and sorghum, more fruits were introduced as other cultural influences were introduced. Grapes and watermelon were found throughout predynastic Egyptian sites, as were the sycamore fig, dom palm and Christ's thorn. The carob, olive, apple and pomegranate were introduced to Egyptians during the New Kingdom. Later, during the Greco-Roman period peaches and pears were also introduced. Europe The ancient Greeks and Romans also had a strong tradition of pomology, and they cultivated a wide range of fruits, including apples, pears, figs, grapes, quinces, citron, strawberries, blackberries, elderberries, currants, damson plums, dates, melons, rose hips and pomegranates. Less common fruits were the more exotic azeroles and medlars. Cherries and apricots, both introduced in the 1st century BC, were popular. Peaches were introduced in the 1st century AD from Persia. Oranges and lemons were known but used more for medicinal purposes than in cookery. The Romans, in particular, were known for their advanced methods of fruit cultivation and storage, and they developed many of the techniques that are sti
https://en.wikipedia.org/wiki/Gauss%E2%80%93Kuzmin%E2%80%93Wirsing%20operator
In mathematics, the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. (This is not the same as the Gauss map in differential geometry.) It is named after Carl Gauss, Rodion Kuzmin, and Eduard Wirsing. It occurs in the study of continued fractions; it is also related to the Riemann zeta function. Relationship to the maps and continued fractions The Gauss map The Gauss function (map) h is : where denotes the floor function. It has an infinite number of jump discontinuities at x = 1/n, for positive integers n. It is hard to approximate it by a single smooth polynomial. Operator on the maps The Gauss–Kuzmin–Wirsing operator acts on functions as Eigenvalues of the operator The first eigenfunction of this operator is which corresponds to an eigenvalue of λ1 = 1. This eigenfunction gives the probability of the occurrence of a given integer in a continued fraction expansion, and is known as the Gauss–Kuzmin distribution. This follows in part because the Gauss map acts as a truncating shift operator for the continued fractions: if is the continued fraction representation of a number 0 < x < 1, then Because is conjugate to a Bernoulli shift, the eigenvalue is simple, and since the operator leaves invariant the Gauss–Kuzmin measure, the operator is ergodic with respect to the measure. This fact allows a short proof of the existence of Khinchin's constant. Additional eigenvalues can be computed numerically; the next eigenvalue is λ2 = −0.3036630029... and its absolute value is known as the Gauss–Kuzmin–Wirsing constant. Analytic forms for additional eigenfunctions are not known. It is not known if the eigenvalues are irrational. Let us arrange the eigenvalues of the Gauss–Kuzmin–Wirsing operator according to an absolute value: It was conjectured in 1995 by Philippe Flajolet and Brigitte Vallée that In 2018, Giedrius Alkauskas gave a convincing argument th
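A short numerical sketch (not from the article; NumPy assumed): the transfer operator acts on a function f as (Lf)(x) = Σ_{n≥1} f(1/(n+x)) / (n+x)^2, and applying a truncated version of this sum to the Gauss–Kuzmin density 1/((1+x) ln 2) confirms, up to truncation error, that it is a fixed point with eigenvalue 1. The truncation length is an arbitrary illustrative choice.

    # Apply the Gauss-Kuzmin-Wirsing transfer operator
    #   (L f)(x) = sum_{n>=1} f(1/(n+x)) / (n+x)^2
    # to the Gauss-Kuzmin density h(x) = 1/((1+x) ln 2) and check (L h)(x) ~= h(x).
    # The tail of the sum decays like 1/n^2, so truncating at 200000 terms
    # leaves an error of roughly 1/200000.
    import numpy as np

    def gkw_apply(f, x, terms=200_000):
        n = np.arange(1, terms + 1, dtype=float)
        return np.sum(f(1.0 / (n + x)) / (n + x) ** 2)

    h = lambda x: 1.0 / ((1.0 + x) * np.log(2.0))

    for x in (0.1, 0.5, 0.9):
        print(x, gkw_apply(h, x), h(x))   # the last two columns agree closely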
https://en.wikipedia.org/wiki/Control%20volume
In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity through which the continuum (a continuous medium such as gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram. Overview Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model. One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system. In continuum mechanics the conservation equations (for instance, the Navier–Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of th
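A minimal worked example (my own numbers, not from the article) of the steady-state mass balance on a control volume with one inlet and one outlet: the mass flow in must equal the mass flow out, i.e. ρ_in·A_in·v_in = ρ_out·A_out·v_out.

    # Steady-state mass balance on a control volume with one inlet and one outlet.
    # Illustrative values: incompressible water flowing through a pipe contraction.
    rho = 1000.0              # kg/m^3, treated as constant (incompressible)
    A_in, v_in = 0.05, 2.0    # inlet area (m^2) and velocity (m/s)
    A_out = 0.02              # outlet area (m^2)

    m_dot = rho * A_in * v_in         # mass flow rate crossing the control surface, kg/s
    v_out = m_dot / (rho * A_out)     # outlet velocity required for mass conservation

    print(m_dot, v_out)               # 100.0 kg/s, 5.0 m/s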
https://en.wikipedia.org/wiki/Stochastic%20differential%20equation
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations. SDEs have a random differential that is in the most basic case random white noise calculated as the derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds. Background Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of Stochastic Differential Equation now known as Bachelier model. Some of these early examples were linear stochastic differential equations, also called 'Langevin' equations after French physicist Langevin, describing the motion of a harmonic oscillator subject to a random force. The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus. Terminology The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a whi
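A minimal simulation sketch (NumPy assumed; the model and parameter values are illustrative choices, not from the article) of one sample path of the classic stock-price SDE, geometric Brownian motion dX_t = μ X_t dt + σ X_t dW_t, using the Euler–Maruyama scheme.

    # Euler-Maruyama simulation of one path of geometric Brownian motion
    #   dX_t = mu * X_t dt + sigma * X_t dW_t
    import numpy as np

    rng = np.random.default_rng(seed=1)
    mu, sigma, x0 = 0.05, 0.20, 1.0
    T, steps = 1.0, 1000
    dt = T / steps

    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))              # Brownian increment ~ N(0, dt)
        x[i + 1] = x[i] + mu * x[i] * dt + sigma * x[i] * dW

    print(x[-1])                                       # one realization of X_T

The scheme simply replaces the stochastic differential dW by a normally distributed increment with variance dt on each step, which is the most basic discretization of the white-noise term described above.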
https://en.wikipedia.org/wiki/Mechanical%20television
Mechanical television or mechanical scan television is an obsolete television system that relies on a mechanical scanning device, such as a rotating disk with holes in it or a rotating mirror drum, to scan the scene and generate the video signal, and a similar mechanical device at the receiver to display the picture. This contrasts with vacuum tube electronic television technology, using electron beam scanning methods, for example in cathode ray tube (CRT) televisions. Subsequently, modern solid-state liquid-crystal displays (LCD) and LED displays are now used to create and display television pictures. Mechanical-scanning methods were used in the earliest experimental television systems in the 1920s and 1930s. One of the first experimental wireless television transmissions was by John Logie Baird on October 2, 1925, in London. By 1928 many radio stations were broadcasting experimental television programs using mechanical systems. However the technology never produced images of sufficient quality to become popular with the public. Mechanical-scan systems were largely superseded by electronic-scan technology in the mid-1930s, which was used in the first commercially successful television broadcasts which began in the late 1930s in Great Britain. In the U.S., experimental stations such as W2XAB in New York City began broadcasting mechanical television programs in 1931 but discontinued operations on February 20, 1933, until returning with an all-electronic system in 1939. A mechanical television receiver was also called a televisor. History Early research The first mechanical raster scanning techniques were developed in the 19th century for facsimile, the transmission of still images by wire. Alexander Bain introduced the facsimile machine in 1843 to 1846. Frederick Bakewell demonstrated a working laboratory version in 1851. The first practical facsimile system, working on telegraph lines, was developed and put into service by Giovanni Caselli from 1856 onward. W
https://en.wikipedia.org/wiki/Quaternary%20ammonium%20cation
In organic chemistry, quaternary ammonium cations, also known as quats, are positively-charged polyatomic ions of the structure , where R is an alkyl group, an aryl group or organyl group. Unlike the ammonium ion () and the primary, secondary, or tertiary ammonium cations, the quaternary ammonium cations are permanently charged, independent of the pH of their solution. Quaternary ammonium salts or quaternary ammonium compounds (called quaternary amines in oilfield parlance) are salts of quaternary ammonium cations. Polyquats are a variety of engineered polymer forms which provide multiple quat molecules within a larger molecule. Quats are used in consumer applications including as antimicrobials (such as detergents and disinfectants), fabric softeners, and hair conditioners. As an antimicrobial, they are able to inactivate enveloped viruses (such as SARS-CoV-2). Quats tend to be gentler on surfaces than bleach-based disinfectants, and are generally fabric-safe. Synthesis Quaternary ammonium compounds are prepared by the alkylation of tertiary amine. Industrial production of commodity quat salts usually involves hydrogenation of fatty nitriles, which can generate primary or secondary amines. These amines are then treated with methyl chloride. The quaternization of alkyl amines by alkyl halides is widely documented. In older literature this is often called a Menshutkin reaction, however modern chemists usually refer to it simply as quaternization. The reaction can be used to produce a compound with unequal alkyl chain lengths; for example when making cationic surfactants one of the alkyl groups on the amine is typically longer than the others. A typical synthesis is for benzalkonium chloride from a long-chain alkyldimethylamine and benzyl chloride: CH3(CH2)_\mathit{n}N(CH3)2{} + ClCH2C6H5 ->{} [CH3(CH2)_\mathit{n}N(CH3)2CH2C6H5]+Cl- Reactions Quaternary ammonium cations are unreactive toward even strong electrophiles, oxidants, and acids. They also are stable t
https://en.wikipedia.org/wiki/Bubble%20ring
A bubble ring, or toroidal bubble, is an underwater vortex ring where an air bubble occupies the core of the vortex, forming a ring shape. The ring of air as well as the nearby water spins poloidally as it travels through the water, much like a flexible bracelet might spin when it is rolled on to a person's arm. The faster the bubble ring spins, the more stable it becomes. The physics of vortex rings are still under active study in fluid dynamics. Devices have been invented which generate bubble vortex rings. Physics As the bubble ring rises, a lift force pointing downward that is generated by the vorticity acts on the bubble in order to counteract the buoyancy force. This reduces the bubble's velocity and increases its diameter. The ring becomes thinner, despite the total volume inside the bubble increasing as the external water pressure decreases. Bubble rings fragment into rings of spherical bubbles when the ring becomes thinner than a few millimetres. This is due to Plateau–Rayleigh instability. When the bubble reaches a certain thickness, surface tension effects distort the bubble's surface pulling it apart into separate bubbles. Circulation of the fluid around the bubble helps to stabilize the bubble for a longer duration, counteracting the effects of Plateau–Rayleigh instability. Below is the equation for Plateau–Rayleigh instability with circulation as a stabilizing term: where is the growth rate, is the wave number, is the radius of the bubble cylinder, is the surface tension, is the circulation, and is the modified Bessel function of the second kind of order . When is positive, the bubble is stable due to circulation and when is negative, surface tension effects destabilize it and break it up. Circulation also has an effect on the velocity and radial expansion of the bubble. Circulation increases the velocity while reducing the rate of radial expansion. Radial expansion however is what diffuses energy by stretching the vortex. Instability happe
https://en.wikipedia.org/wiki/Shear%20wall
In structural engineering, a shear wall is a two-dimensional vertical element of a system that is designed to resist in-plane lateral forces, typically wind and seismic loads. A shear wall resists loads parallel to the plane of the wall. Collectors, also known as drag members, transfer the diaphragm shear to shear walls and other vertical elements of the seismic force resisting system. Shear walls are typically light-framed or braced wooden walls with thin shear-resisting panels on the framing surface, or are reinforced concrete walls, reinforced masonry walls, or steel plates. Plywood is the conventional material used in wood (timber) shear walls, but with advances in technology and modern building methods, other prefabricated options have made it possible to use shear assemblies in narrow walls that fall at either side of an opening. Sheet steel and steel-backed shear panels in place of structural plywood in shear walls have proved to provide stronger seismic resistance. Structural design considerations Loading and failure mechanisms A shear wall is stiffer in its principal axis than it is in the other axis. It is considered a primary structure which provides relatively stiff resistance to vertical and horizontal forces acting in its plane. Under this combined loading condition, a shear wall develops compatible axial, shear, torsional and flexural strains, resulting in a complicated internal stress distribution. In this way, loads are transferred vertically to the building's foundation. Therefore, there are four critical failure mechanisms, as shown in Figure 1. The factors determining the failure mechanism include geometry, loading, material properties, restraint, and construction. Shear walls may also be constructed using light-gauge steel diagonal bracing members tied to collector and anchor points. Slenderness ratio The slenderness ratio of a wall is defined as a function of the effective height divided by either the effective thickness or the r
https://en.wikipedia.org/wiki/MIPS%20Magnum
The MIPS Magnum was a line of computer workstations designed by MIPS Computer Systems, Inc. and based on the MIPS series of RISC microprocessors. The first Magnum was released in March, 1990, and production of various models continued until 1993 when SGI bought MIPS Technologies. SGI cancelled the MIPS Magnum line to promote their own workstations including the entry-level SGI Indy. The early, R3000-based Magnum series ran only RISC/os, a variant of BSD Unix, but the subsequent Magnum workstations based on the Jazz architecture ran both RISC/os and Windows NT. In addition to these proprietary operating systems, both Linux and NetBSD have been ported to the Jazz-based MIPS Magnum machines. Some models of MIPS Magnum were rebadged and sold by Groupe Bull and Olivetti. In addition, headless (i.e., without a framebuffer or video card) versions were marketed as servers under the name "MIPS Millennium". Series Model number information. MIPS Magnum 3000 Alternative model name: MIPS RC3230 Release: March, 1990 Initial price: $9000 USD Bus: TURBOchannel Maximum possible RAM: 128 MB MIPS Magnum R4000 Two subtypes: The R4000 PC-50 and R4000 SC-50 Release: April, 1992 Initial price: $12,000.00 USD Bus: EISA Maximum possible RAM: 256 MB Components Processors The MIPS Magnum 3000 has a 25 or 33 MHz MIPS R3000A microprocessor. The MIPS Magnum R4000 PC-50 has a MIPS R4000PC processor with only 16 kB L1 cache (but no L2 cache), running at an external clock rate of 50 MHz (which was internally doubled in the microprocessor to 100 MHz). The MIPS Magnum R4000 SC-50 is identical to the Magnum R4000PC, but includes one megabyte of secondary cache in addition to the primary cache. Memory For main memory, the MIPS Magnum 3000 accepted 30-pin true-parity, 80ns SIMMs up to a maximum of 128 MB. The MIPS Magnum R4000 accepted eight 72-pin true-parity SIMMs, up to a maximum of 256 MB. SCSI The MIPS Magnum R4000 (both the R4000 PC-50 and R4000 SC-50) includes a single on-board
https://en.wikipedia.org/wiki/Further%20Mathematics
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available Although the subject has about 60% of its cohort obtainin
https://en.wikipedia.org/wiki/NEC%20RISCstation
The NEC RISCstation was a line of computer workstations made by NEC in the mid-1990s, based on MIPS RISC microprocessors and designed to run Microsoft Windows NT. A series of nearly identical machines were also sold by NEC in headless (i.e., no video card or framebuffer) configuration as the RISCserver series, and were intended for use as Windows NT workgroup servers. The RISCstation 2000 was announced in June 1994 by NEC with an availability slated for the end of that summer with the release of Windows NT "Daytona" at a price between US$6000 to US$10000. Historical development The RISCstations were based on a modified Jazz architecture licensed from MIPS Computer Systems, Inc. (and which was originally designed by Microsoft). Although architecturally similar to contemporaneous Intel 80386-based personal computers (including, for example, a PCI bus), the RISCstations were faster than the Pentium-based workstations of the time. Although based on the Jazz design, the RISCstations did not use the G364 framebuffer, instead using a S3 968-based video card or a 3Dlabs GLiNT-based adapter in a PCI slot. Form factor All RISCstations used a standard IBM AT-style tower or minitower case, a motherboard which also met the AT form factor standard, and PCI peripherals (such as the video card) for peripheral expansion. Operating systems Several operating systems supported RISCstations. Like all Jazz-based MIPS computers (such as the MIPS Magnum), the RISCstations ran the ARC console firmware to boot Windows NT in little-endian mode. The MIPS III architecture was capable of either little-endian or big-endian operation. However, Microsoft stopped supporting the MIPS architecture in Windows NT after version 4.0. RISCstations ceased production in 1996. In addition to Windows NT, NEC ported a version of Unix System V to the RISCstation. Although support is lacking from Linux/MIPS for the RISCstation series, they are supported by NetBSD as NetBSD/arc and had been supported by
https://en.wikipedia.org/wiki/Eugenol
Eugenol is an allyl chain-substituted guaiacol, a member of the allylbenzene class of chemical compounds. It is a colorless to pale yellow, aromatic oily liquid extracted from certain essential oils especially from clove, nutmeg, cinnamon, basil and bay leaf. It is present in concentrations of 80–90% in clove bud oil and at 82–88% in clove leaf oil. Eugenol has a pleasant, spicy, clove-like scent. The name is derived from Eugenia caryophyllata, the former Linnean nomenclature term for cloves. The currently accepted name is Syzygium aromaticum. Biosynthesis The biosynthesis of eugenol begins with the amino acid tyrosine. L-tyrosine is converted to p-coumaric acid by the enzyme tyrosine ammonia lyase (TAL). From here, p-coumaric acid is converted to caffeic acid by p-coumarate 3-hydroxylase using oxygen and NADPH. S-Adenosyl methionine (SAM) is then used to methylate caffeic acid, forming ferulic acid, which is in turn converted to feruloyl-CoA by the enzyme 4-hydroxycinnamoyl-CoA ligase (4CL). Next, feruloyl-CoA is reduced to coniferaldehyde by cinnamoyl-CoA reductase (CCR). Coniferaldeyhyde is then further reduced to coniferyl alcohol by cinnamyl-alcohol dehydrogenase (CAD) or sinapyl-alcohol dehydrogenase (SAD). Coniferyl alcohol is then converted to an ester in the presence of the substrate CH3COSCoA, forming coniferyl acetate. Finally, coniferyl acetate is converted to eugenol via the enzyme eugenol synthase 1 and the use of NADPH. Pharmacology Eugenol and thymol possess general anesthetic properties. Like many other anesthetic agents, these 2-alkyl(oxy)phenols act as positive allosteric modulators of the GABAA receptor. Although eugenol and thymol are too toxic and not potent enough to be used clinically, these findings led to the development of 2-substituted phenol anesthetic drugs, including propanidid (later withdrawn) and the widely used propofol. Eugenol and the structurally similar myristicin, have the common property of inhibiting MAO-A and MAO-B in vi
https://en.wikipedia.org/wiki/Kazhdan%27s%20property%20%28T%29
In mathematics, a locally compact topological group G has property (T) if the trivial representation is an isolated point in its unitary dual equipped with the Fell topology. Informally, this means that if G acts unitarily on a Hilbert space and has "almost invariant vectors", then it has a nonzero invariant vector. The formal definition, introduced by David Kazhdan (1967), gives this a precise, quantitative meaning. Although originally defined in terms of irreducible representations, property (T) can often be checked even when there is little or no explicit knowledge of the unitary dual. Property (T) has important applications to group representation theory, lattices in algebraic groups over local fields, ergodic theory, geometric group theory, expanders, operator algebras and the theory of networks. Definitions Let G be a σ-compact, locally compact topological group and π : G → U(H) a unitary representation of G on a (complex) Hilbert space H. If ε > 0 and K is a compact subset of G, then a unit vector ξ in H is called an (ε, K)-invariant vector if ‖π(g)ξ − ξ‖ < ε for all g in K. The following conditions on G are all equivalent to G having property (T) of Kazhdan, and any of them can be used as the definition of property (T). (1) The trivial representation is an isolated point of the unitary dual of G with Fell topology. (2) Any sequence of continuous positive definite functions on G converging to 1 uniformly on compact subsets converges to 1 uniformly on G. (3) Every unitary representation of G that has an (ε, K)-invariant unit vector for any ε > 0 and any compact subset K has a non-zero invariant vector. (4) There exists an ε > 0 and a compact subset K of G such that every unitary representation of G that has an (ε, K)-invariant unit vector has a nonzero invariant vector. (5) Every continuous affine isometric action of G on a real Hilbert space has a fixed point (property (FH)). If H is a closed subgroup of G, the pair (G,H) is said to have relative property (T) of Margulis i
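For compact groups, and finite groups in particular, property (T) holds trivially, which makes the definition easy to illustrate numerically. The sketch below (my own illustration of the definition only, not from the article; NumPy assumed) uses the permutation representation of the cyclic group Z/5Z on R^5: an (ε, K)-invariant unit vector, averaged over the group, yields a nonzero invariant vector, as condition (3) promises.

    # Illustration of the (eps, K)-invariance condition for a finite group:
    # Z/5Z acting on R^5 by cyclically permuting coordinates.
    import numpy as np

    n = 5
    def pi(k, v):                    # unitary representation: cyclic shift by k
        return np.roll(v, k)

    # An arbitrary unit vector that is close to, but not exactly, invariant.
    xi = np.ones(n) + 0.05 * np.array([0.3, -0.1, 0.2, 0.0, -0.4])
    xi = xi / np.linalg.norm(xi)

    K = range(n)                     # the whole (compact) group serves as K
    eps = max(np.linalg.norm(pi(k, xi) - xi) for k in K)
    print("xi is (eps, K)-invariant with eps =", eps)

    avg = sum(pi(k, xi) for k in K) / n          # projection onto invariant vectors
    print("invariant vector norm =", np.linalg.norm(avg))     # nonzero
    print("invariance check:", np.allclose(pi(1, avg), avg))  # True

The interesting content of property (T) is, of course, for non-compact groups such as SL(3, Z), where no such finite averaging is available and the conclusion is far from automatic.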
https://en.wikipedia.org/wiki/MIPS%20RISC/os
RISC/os is a discontinued UNIX operating system developed by MIPS Computer Systems, Inc. from 1985 to 1992, for their computer workstations and servers, including such models as the MIPS M/120 server and MIPS Magnum workstation. It was also known as UMIPS or MIPS OS. RISC/os was based largely on UNIX System V with additions from 4.3BSD UNIX, ported to the MIPS architecture. It was a "dual-universe" operating system, meaning that it had separate, switchable runtime environments providing compatibility with either System V Release 3 or 4.3BSD. MIPS OS was one of the first 32-bit operating systems for RISC-based workstation-class computers. It was also one of the first 64-bit Unix releases for RISC based microprocessors, with the first 64-bit versions appearing in 1990. MIPS OS supported full 32-bit and 64-bit applications simultaneously using the underlying hardware architecture supporting the MIPS-IV instruction set. Later releases added support for System V Release 4 compatibility, R6000 processor support and later symmetric multiprocessing support on the R4400 and R6000 processors. During the early 1990s, several vendors including DEC, Silicon Graphics, and Ardent licensed portions of the software MIPS had written for the RISC/os for their own Unix variants. Evans & Sutherland licensed RISC/os directly for its ESV series workstations. MIPS' influence was most visible as the C compiler and development tools shared by virtually all commercial Unixes for the MIPS processor, the low memory operating system code, and the ROM code for MIPS processors. Because of its early UNIX heritage, RISC/os was limited in comparison to modern UNIX variants for example, even the last releases of RISC/os did not support shared libraries. In July 1992, Silicon Graphics purchased MIPS Computer Systems for $220M. Support for RISC/os was subsequently phased out. See also Timeline of operating systems References Discontinued operating systems MIPS operating systems MIPS Technologi
https://en.wikipedia.org/wiki/ARC%20%28specification%29
Advanced RISC Computing (ARC) is a specification promulgated by a defunct consortium of computer manufacturers (the Advanced Computing Environment project), setting forth a standard MIPS RISC-based computer hardware and firmware environment. The firmware on Alpha machines that are compatible with ARC is known as AlphaBIOS, non-ARC firmware on Alpha is known as SRM. History Although ACE went defunct, and no computer was ever manufactured which fully complied with the ARC standard, the ARC system has a widespread legacy in that all operating systems in the Windows NT family use ARC conventions for naming boot devices. SGI's modified version of the ARC firmware is named ARCS. All SGI computers which run IRIX 6.1 or later, such as the Indy and Octane, boot from an ARCS console, which uses the same drive naming conventions as Windows. Most of the various RISC-based computers designed to run Windows NT have versions of the ARC boot console to boot NT. These include the following: MIPS R4000-based systems such as the MIPS Magnum workstation all Alpha-based machines with a PCI bus designed prior to the end of support for Windows NT Alpha in September 1999 (the Alpha ARC firmware is also known as AlphaBIOS; non-ARC Alphas use SRM console instead) most Windows NT-capable PowerPC computers (such as the IBM RS/6000 40P). It was predicted that Intel IA-32-based computers would adopt the ARC console, although only SGI ever marketed such machines with ARC firmware (namely, the SGI Visual Workstation series, which launched in 1999). Comparison with UEFI Compared to UEFI, the ARC firmware also included support for FAT, boot variables, C-calling interface. It did not include the same level of extensibility as UEFI and the same level of governance like with the UEFI Forum. List of partially ARC compatible computers Products complying (to some degree) with the ARC standard include these: Alpha DEC Multia and AlphaStation/AlphaServer DeskStation Raptor i386 SGI Visual Worksta
https://en.wikipedia.org/wiki/ARCS%20%28computing%29
ARCS is a firmware bootloader (also known as a PROM console) used in most computers produced by SGI since the beginning of the 1990s. The ARCS system is loosely compliant with the Advanced RISC Computing (ARC) standard, promulgated by the Advanced Computing Environment consortium in the early 1990s. In another sense, the ARC standard is based on SGI's ARCS, which was used as a basis for generating the ARC standard itself, although ARC calls for a little-endian system while ARCS system is big-endian on all MIPS-based systems. Despite various inconsistencies between the two, both SGI's ARCS implementations and the ARC standard share many commonalities (such as device naming, calling conventions, etc.). Most of the computers which use the ARCS firmware are based on the MIPS line of microprocessors. The SGI Visual Workstation series, which is based on the Intel Pentium III, also uses ARCS. The Visual Workstation series is the only commercially produced x86-compatible system which used an ARCS firmware, rather than the traditional PC BIOS used in most Intel 386-lineage machines. A list of product lines which use the ARCS console includes: SGI Crimson (IP17) SGI Indigo (R4000/R4400) (IP20) SGI Indigo2 (and Challenge M) (IP22) SGI Indy (and Challenge S) (IP24) SGI Onyx (IP19/IP21/IP25) SGI Indigo2 R8000 (IP26) SGI Indigo2 R10000 (IP28) SGI O2 (IP32) SGI Octane (IP30) SGI Origin 200 (IP27) SGI Origin 2000 (IP27/IP31) SGI Onyx2 (IP27/IP31) SGI Fuel (IP35) SGI Tezro (IP35) SGI Origin 300 (IP35) SGI Origin 350 (IP35) SGI Origin 3000 (IP27/IP35) SGI Onyx 300 (IP35) SGI Onyx 350 (IP35) SGI Onyx 3000 (IP27/IP35) SGI Onyx4 (IP35) SGI Visual Workstation models 320 and 540 (later models were BIOS-based PCs) Boot loaders Advanced RISC Computing
https://en.wikipedia.org/wiki/Modelling%20biological%20systems
Modelling biological systems is a significant task of systems biology and mathematical biology. Computational systems biology aims to develop and use efficient algorithms, data structures, visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks), to both analyze and visualize the complex connections of these cellular processes. An unexpected emergent property of a complex system may be a result of the interplay of the cause-and-effect among simpler, integrated parts (see biological organisation). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart. Standards By far the most widely accepted standard format for storing and exchanging models in the field is the Systems Biology Markup Language (SBML). The SBML.org website includes a guide to many important software packages used in computational systems biology. A large number of models encoded in SBML can be retrieved from BioModels. Other markup languages with different emphases include BioPAX and CellML. Particular tasks Cellular model Creating a cellular model has been a particularly challenging task of systems biology and
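As a toy illustration of the kind of cellular-subsystem model that such tools encode and simulate (my own sketch, not from the article; SciPy assumed, and all rate constants are arbitrary), a two-variable ODE model of constitutive gene expression with mRNA and protein:

    # Toy gene-expression model: mRNA (m) and protein (p) with constant
    # transcription, translation and first-order degradation.
    #   dm/dt = k_tx - d_m * m
    #   dp/dt = k_tl * m - d_p * p
    import numpy as np
    from scipy.integrate import solve_ivp

    k_tx, d_m = 2.0, 0.2     # transcription rate, mRNA degradation rate
    k_tl, d_p = 1.0, 0.05    # translation rate, protein degradation rate

    def rhs(t, y):
        m, p = y
        return [k_tx - d_m * m, k_tl * m - d_p * p]

    sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0],
                    t_eval=np.linspace(0.0, 100.0, 201))
    print(sol.y[0, -1], sol.y[1, -1])   # approaches the steady state m* = 10, p* = 200

Formats such as SBML exist precisely to exchange models of this kind (species, reactions, rate laws) between simulators in a tool-independent way.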
https://en.wikipedia.org/wiki/OP-20-G
OP-20-G or "Office of Chief Of Naval Operations (OPNAV), 20th Division of the Office of Naval Communications, G Section / Communications Security", was the U.S. Navy's signals intelligence and cryptanalysis group during World War II. Its mission was to intercept, decrypt, and analyze naval communications from Japanese, German, and Italian navies. In addition OP-20-G also copied diplomatic messages of many foreign governments. The majority of the section's effort was directed towards Japan and included breaking the early Japanese "Blue" book fleet code. This was made possible by intercept and High Frequency Direction Finder (HFDF) sites in the Pacific, Atlantic, and continental U.S., as well as a Japanese telegraphic code school for radio operators in Washington, D.C. Prewar The Code and Signal Section was formally made a part of the Division of Naval Communications (DNC), as Op-20-G, on July 1, 1922. In January 1924, a 34-year-old U.S. Navy lieutenant named Laurance F. Safford was assigned to expand OP-20-G's domain to radio interception. He worked out of Room 2646, on the top floor of the Navy Department building in Washington, D.C. Japan was of course a prime target for radio interception and cryptanalysis, but there was the problem of finding personnel who could speak Japanese. The Navy had a number of officers who had served in a diplomatic capacity in Japan and could speak Japanese fluently, but there was a shortage of radiotelegraph operators who could read Japanese Wabun code communications sent in kana. Fortunately, a number of US Navy and Marine radiotelegraph operators operating in the Pacific had formed an informal group in 1923 to compare notes on Japanese kana transmissions. Four of these men became instructors in the art of reading kana transmissions when the Navy began conducting classes in the subject in 1928. The classes were conducted by the Room 2426 crew, and the radiotelegraph operators became known as the "On-The-Roof Gang". By June 1940, OP
https://en.wikipedia.org/wiki/Signal%20Intelligence%20Service
The Signal Intelligence Service (SIS) was the United States Army codebreaking division through World War II. It was founded in 1930 to compile codes for the Army. It was renamed the Signal Security Agency in 1943, and in September 1945, became the Army Security Agency. For most of the war it was headquartered at Arlington Hall (former campus of Arlington Hall Junior College for Women), on Arlington Boulevard in Arlington, Virginia, across the Potomac River from Washington (D.C.). During World War II, it became known as the Army Security Agency, and its resources were reassigned to the newly established National Security Agency (NSA). History The Signal Intelligence Service was a part of the U.S. Army Signal Corps for most of World War II. At that time the Signal Corps was a bureau in the Headquarters, Department of the Army, in addition to being a branch of the Army to which personnel were commissioned or appointed. The Signal Corps supplied the Army with communications and photography equipment and services among other things. The Signal Corps also trained personnel and signal units for service with forces in the field. The evolution and activities of the Signal Intelligence Service before and during World War II is discussed in detail in Chapter XI, "Signal, Security, Intelligence," (pp. 327–350) in The Signal Corps: the Outcome, an official history of the Signal Corps. Chapters 2 and 3 (pp. 4–25) in Army Field Manual FM 11-35, 1942, describe the organization of the Signal Intelligence Service in the War Department and in the forces in the field and the functions performed by SIS units. That manual was marked "RESTRICTED" when it was issued. William Friedman began the division with three "junior cryptanalysts" in April 1930. Their names were Frank Rowlett, Abraham Sinkov, and Solomon Kullback. Before this, all three had been mathematics teachers and none had a cryptanalysis background. Friedman was a geneticist who developed his expertise in cryptology at
https://en.wikipedia.org/wiki/Jazz%20%28computer%29
The Jazz computer architecture is a motherboard and chipset design originally developed by Microsoft for use in developing Windows NT. The design was eventually used as the basis for most MIPS-based Windows NT systems. In part because Microsoft intended NT to be portable between various microprocessor architectures, the MIPS RISC architecture was chosen for one of the first development platforms for the NT project in the late 1980s/early 1990s. However, around 1990, the existing MIPS-based systems (such as the TURBOchannel-equipped DECstation or the SGI Indigo) varied drastically from standard Intel personal computers such as the IBM AT—for example, neither used the ISA bus so common in Intel 386-class machines. For those and other reasons, Microsoft decided to design their own MIPS-based hardware platform on which to develop NT, which resulted in the Jazz architecture. Later, Microsoft sold this architecture design to the MIPS Computer Systems, Inc. where it became the MIPS Magnum. The Jazz architecture includes: a MIPS R4000/R4400 or compatible microprocessor an EISA bus a framebuffer for video output (the G364 framebuffer) PS/2 connectors for mouse and keyboard a floppy-disk controller onboard 16-bit sound system onboard National Semiconductor SONIC Ethernet onboard NCR 53C9x SCSI chipset for hard disk and CD-ROM interface standard IBM AT serial and parallel ports IBM AT-style time-of-year clock This design was simple enough and powerful enough that a majority of Windows NT-capable MIPS systems were based on modified versions of the Jazz architecture. A list of systems which more or less were based on Jazz includes: MIPS Magnum (R4000 PC-50 and SC-50 versions) Acer PICA uses S3 videocard Olivetti M700 has different video and sound system NEC RISCstation Jazz with PCI The Jazz systems were designed to partially comply with the Advanced RISC Computing (ARC) standard, and each used the ARC firmware to boot Windows NT. Other operating systems were
https://en.wikipedia.org/wiki/SGI%20Visual%20Workstation
SGI Visual Workstation is a series of workstation computers that are designed and manufactured by SGI. Unlike its other product lines, which used the 64-bit MIPS RISC architecture, the line used Intel Pentium II and III processors and shipped with Windows NT 4.0 or Windows 2000 as its operating system in lieu of IRIX. However, the Visual Workstation 320 and 540 models deviated from the architecture of IBM-compatible PCs by using SGI's ARCS firmware instead of a traditional BIOS, internal components adapted from its MIPS-based products, and other proprietary components that made them incompatible with internal hardware designed for standard PCs and hence unable to run other versions of Microsoft Windows, especially Windows 9x. By contrast, the remaining models in the line are standard PCs, using VIA Technologies chipsets, Nvidia video cards, and standard components. Computer architecture There are two series of the Visual Workstations. All are based on Intel processors; the first series (320 and 540) used SGI's ARCloader PROM and Cobalt video chipset, the remainder are essentially standard PCs. The 320 and 540 use a Unified Memory Architecture (UMA) memory system. This shares the video and system memory and runs them at the same speed, and allows for up to 80 percent of the system ram to be applied to video memory. The allocation is static, however, and is adjusted via a profile. The 320 and 540 also use the onboard Cobalt video adapter, which is SGI's proprietary graphics chipset. The firmware used in these systems is a PROM that enables booting into a graphical subsystem before the OS was loaded. In this regard they resemble the Irix/MIPS line of SGI computers such as the SGI O2. The 320 and 540 also stand out for having FireWire (IEEE 1394) ports, onboard composite/S-video capture, and USB keyboards and mice. They differ from each other in that the 320 is dual Pentium II/III-capable with 1GB maximum system RAM, while the 540 is quad Pentium III Xeon-capable wi
https://en.wikipedia.org/wiki/Class%20of%20service
Class of service (COS or CoS) is a parameter used in data and voice protocols to differentiate the types of payloads contained in the packet being transmitted. The objective of such differentiation is generally associated with assigning priorities to the data payload or access levels to the telephone call. Data services As related to network technology, COS is a 3-bit field that is present in an Ethernet frame header when 802.1Q VLAN tagging is present. The field specifies a priority value between 0 and 7, more commonly known as CS0 through CS7, that can be used by quality of service (QoS) disciplines to differentiate and shape/police network traffic. COS operates only on 802.1Q VLAN Ethernet at the data link layer (layer 2), while other QoS mechanisms (such as DiffServ, also known as DSCP) operate at the IP network layer (layer 3) or use a local QoS tagging system that does not modify the actual packet, such as Cisco's "QoS-Group". Network devices (i.e. routers, switches, etc.) can be configured to use existing COS values on incoming packets from other devices (trust mode), or can rewrite the COS value to something completely different. Most Internet Service Providers do not trust incoming QoS markings from their customers, so COS is generally limited to use within an organization's intranet. Service providers offering private-line WAN services will typically offer services which can utilize COS/QoS. Voice services As related to legacy telephone systems, COS is often used to define the permissions an extension will have on a PBX or Centrex. Certain groups of users may have a need for extended voicemail message retention while another group may need the ability to forward calls to a cell phone, and still others have no need to make calls outside the office. Permissions for a group of extensions can be changed by modifying a COS variable applied to the entire group. COS is also used on trunks to define if they are full-duplex, incoming only, or outgoing only.
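A minimal sketch (not from the article) of where the 3-bit COS value sits in an 802.1Q tag: the 16-bit Tag Control Information (TCI) field packs the priority code point (PCP), the drop-eligible indicator (DEI) and the 12-bit VLAN ID. The helper function names below are hypothetical, for illustration only.

    # 802.1Q Tag Control Information (TCI) layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)
    def pack_tci(pcp: int, dei: int, vlan_id: int) -> int:
        assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 0xFFF
        return (pcp << 13) | (dei << 12) | vlan_id

    def unpack_tci(tci: int):
        return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

    tci = pack_tci(pcp=5, dei=0, vlan_id=100)   # CS5-marked traffic on VLAN 100
    print(hex(tci), unpack_tci(tci))            # 0xa064 (5, 0, 100)

Because the COS value lives only in the VLAN tag, it is lost whenever the tag is removed, which is one reason layer-3 markings such as DSCP are preferred for end-to-end QoS.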
https://en.wikipedia.org/wiki/SGI%20Indigo
The Indigo, introduced as the IRIS Indigo, is a line of workstation computers developed and manufactured by Silicon Graphics, Inc. (SGI). SGI first announced the system in July 1991. The Indigo is considered one of the most capable graphics workstations of its era, and was essentially peerless in the realm of hardware-accelerated three-dimensional graphics rendering. For use as a graphics workstation, the Indigo was equipped with a two-dimensional framebuffer or, for use as a 3D graphics workstation, with the Elan graphics subsystem including one to four Geometry Engines (GEs). SGI sold a server version with no video adapter. The Indigo's design is based on a simple cube motif in indigo hue. Graphics and other peripheral expansions are accomplished via the GIO32 expansion bus. The Indigo was superseded generally by the SGI Indigo2, and in the low-cost market segment by the SGI Indy. Technical specifications The first Indigo model (code-named Hollywood) was introduced on July 22, 1991. It is based on the IP12 processor board, which contains a 32-bit MIPS R3000A microprocessor soldered on the board and proprietary memory slots supporting up to 96 MB of RAM. The later version (code-named Blackjack) is based on the IP20 processor board, which has a removable processor module (PM1 or PM2) containing a 64-bit MIPS R4000 (100 MHz) or R4400 processor (100 MHz or 150 MHz) that implements the MIPS-III instruction set. The IP20 uses standard 72-pin SIMMs with parity, and has 12 SIMM slots for a total of 384 MB of RAM at maximum. A Motorola 56000 DSP is used for Audio IO, giving it 4-channel 16-bit audio. Ethernet is supported on board by the SEEQ 80C03 chipset coupled with the HPC (High-performance Peripheral Controller), which provides the DMA engine. The HPC interfaces primarily between the GIO bus and the Ethernet, SCSI (WD33C93 chipset) and the 56000 DSP. The GIO bus interface is implemented by the PIC (Processor Interface Controller) on IP12 and MC (Memory Contr
https://en.wikipedia.org/wiki/Programmer%27s%20key
The programmer's key, or interrupt button, is a button or switch on Classic Mac OS-era Macintosh systems, which jumps to a machine code monitor. The symbol on the button is ⎉: . On most 68000 family based Macintosh computers, an interrupt request can also be sent by holding down the command key and pressing the power key on the keyboard. This effect is also simulated by the 68000 environment of the Mac OS nanokernel on PowerPC machines and the Classic environment. A plastic insert came with Macintosh 128K, Macintosh 512K, Macintosh Plus, and Macintosh SE computers that could be attached to the exterior of the case and was used to press an interrupt button located on the motherboard. Modern Mac hardware no longer includes the interrupt button, as the Mac OS X operating system has integrated debugging options. In addition, Mac OS X's protected memory blocks direct patching of system memory (in order to better secure the system). See also Interrupt Context switch MacsBug References External links Debugging Interrupts
https://en.wikipedia.org/wiki/NCR%2053C9x
The NCR 53C9x is a family of application-specific integrated circuits (ASIC) produced by the former NCR Corporation and others for implementing the SCSI (Small Computer System Interface) bus protocol in hardware and relieving the host system of the work required to sequence the SCSI bus. The 53C9x was a low-cost solution and was therefore widely adopted by OEMs in various motherboard and peripheral device designs. The original 53C90 lacked direct memory access (DMA) capability, an omission that was addressed in the 53C90A and subsequent versions. The 53C90(A) and later 53C94 supported the ANSI X3.131-1986 SCSI-1 protocol, implementing the eight-bit parallel SCSI bus and eight-bit host data bus transfers. The 53CF94 and 53CF96 added SCSI-2 support and implemented larger transfer sizes per SCSI transaction. Additionally, the 53CF96 could be interfaced to a single-ended bus or a high voltage differential (HVD) bus, the latter of which supported long bus cables. All members of the 53C94/96 type support both eight- and 16-bit host bus transfers via programmed input/output (PIO) and DMA. The QLogic FAS216 and Emulex ESP100 chips are drop-in replacements for the NCR 53C94. The 53C90A and 53C(F)94/96 were also produced under license by Advanced Micro Devices (AMD). A list of systems which included the 53C9x controller includes: 53C94 Sun Microsystems SPARCstations and the SPARCclassic DEC 3000 AXP DECstations and the PMAZ-A TURBOchannel card VAXstation model 60, 4000-m90 MIPS Magnum Power Macintosh G3; often used as a secondary SCSI controller with MESH (Macintosh Enhanced SCSI Hardware) as the primary MacroSystem's Evolution family for Amiga (FAS216) 53C96 Macintosh Quadra 650 Macintosh LC475/Quadra 605/Performa 475 Macintosh Quadra 900 and 950 See also NCR 5380 References SCSI Integrated circuits NCR Corporation products
https://en.wikipedia.org/wiki/Multi-Environment%20Real-Time
Multi-Environment Real-Time (MERT), later renamed UNIX Real-Time (UNIX-RT), is a hybrid time-sharing and real-time operating system developed in the 1970s at Bell Labs for use in embedded minicomputers (especially PDP-11s). A version named Duplex Multi Environment Real Time (DMERT) was the operating system for the AT&T 3B20D telephone switching minicomputer, designed for high availability; DMERT was later renamed Unix RTR (Real-Time Reliable). A generalization of Bell Labs' time-sharing operating system Unix, MERT featured a redesigned, modular kernel that was able to run Unix programs and privileged real-time computing processes. These processes' data structures were isolated from other processes with message passing being the preferred form of interprocess communication (IPC), although shared memory was also implemented. MERT also had a custom file system with special support for large, contiguous, statically sized files, as used in real-time database applications. The design of MERT was influenced by Dijkstra's THE, Hansen's Monitor, and IBM's CP-67. The MERT operating system was a four-layer design, in decreasing order of protection: Kernel: resource allocation of memory, CPU time and interrupts Kernel-mode processes including input/output (I/O) device drivers, file manager, swap manager, root process that connects the file manager to the disk (usually combined with the swap manager) Operating system supervisor User processes The standard supervisor was MERT/UNIX, a Unix emulator with an extended system call interface and shell that enabled the use of MERT's custom IPC mechanisms, although an RSX-11 emulator also existed. Kernel and non-kernel processes One interesting feature that DMERT – UNIX-RTR introduced was the notion of kernel processes. This is connected with its microkernelish architecture roots. In support, there is a separate command (/bin/kpkill) rather than (/bin/kill), that is used to send signals to kernel processes. It is likely there ar
https://en.wikipedia.org/wiki/Strategies%20for%20engineered%20negligible%20senescence
Strategies for engineered negligible senescence (SENS) is a range of proposed regenerative medical therapies, either planned or currently in development, for the periodic repair of all age-related damage to human tissue. These therapies have the ultimate aim of maintaining a state of negligible senescence in patients and postponing age-associated disease. SENS was first defined by British biogerontologist Aubrey de Grey. Many mainstream scientists believe that it is a fringe theory. While some biogerontologists support the SENS program, others contend that the ultimate goals of de Grey's programme are too speculative given the current state of technology. The 31-member Research Advisory Board of de Grey's SENS Research Foundation have signed an endorsement of the plausibility of the SENS approach. Framework The term "negligible senescence" was first used in the early 1990s by professor Caleb Finch to describe organisms such as lobsters and hydras, which do not show symptoms of aging. The term "engineered negligible senescence" first appeared in print in Aubrey de Grey's 1999 book The Mitochondrial Free Radical Theory of Aging. De Grey defined SENS as a "goal-directed rather than curiosity-driven" approach to the science of aging, and "an effort to expand regenerative medicine into the territory of aging". The ultimate objective of SENS is the eventual elimination of age-related diseases and infirmity by repeatedly reducing the state of senescence in the organism. The SENS project consists in implementing a series of periodic medical interventions designed to repair, prevent or render irrelevant all the types of molecular and cellular damage that cause age-related pathology and degeneration, in order to avoid debilitation and death from age-related causes. Strategies As described by SENS, the following table details major ailments and the program's proposed preventative strategies: Scientific reception While some fields mentioned as branches of SENS are suppor
https://en.wikipedia.org/wiki/Ghost%20Light%20%28Doctor%20Who%29
Ghost Light is the second serial of the 26th season of the British science fiction television series Doctor Who, which was first broadcast in three weekly parts on BBC1 from 4 to 18 October 1989. Set in a mansion house in Perivale in 1883, Josiah Smith (Ian Hogg), a cataloguer of life forms from another planet, seeks to assassinate Queen Victoria and take over the British Empire. Plot Thousands of years ago, an alien expedition came to Earth to catalogue life. After completing its task and collecting samples which included Nimrod, a being known as Light, the leader, went into slumber. By 1881, Josiah Smith gained control and kept Light in hibernation and imprisoned the creature known as Control on the ship, which is now the cellar of the house. Smith began evolving into the era's dominant life-form – the Victorian gentleman – and also took over the house. By 1883, Smith, having "evolved" into forms approximating a human and casting off his old husks as an insect would, managed to lure and capture the explorer Redvers Fenn-Cooper, brainwashing him. Utilising Fenn-Cooper's association with Queen Victoria, he plans to get close to her so that he can assassinate her and subsequently take control of the British Empire. The TARDIS arrives at Gabriel Chase. Ace had visited the house in 1983 and had felt an evil presence. The Seventh Doctor's curiosity drives him to seek answers. He encounters Control, which has now taken on human form, and makes a deal with it. The Doctor helps it release Light. Once awake, Light is displeased by all the changes while he was asleep. Smith tries to keep his plan intact, but events are moving beyond his control. As Control tries to "evolve" into a Lady, Ace tries to come to grips with her feelings about the house, revealing that she burned it down when she felt the evil. The Doctor finally convinces Light of the futility of opposing evolution, which causes him to overload and dissipate into the surrounding house. Control's complete evolu
https://en.wikipedia.org/wiki/Davfs2
In computer networking, davfs2 is a Linux tool for connecting to WebDAV shares as though they were local disks. It is an open-source, GPL-licensed file system for mounting WebDAV servers. It uses the FUSE file system API to communicate with the kernel and the neon WebDAV library to communicate with the web server.

Applications
davfs2 is used, for example, with the Apache web server and with Subversion installations.

See also
WebDAV
FUSE

References

External links

Free special-purpose file systems
Userspace file systems
Network file systems
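For orientation, the usual davfs2 workflow is to mount a WebDAV URL onto a local directory and then use it like any other file system. The sketch below simply drives the system's mount command from Python; the server URL and mount point are placeholders, the davfs2 package must be installed, and mounting normally requires root privileges or a matching /etc/fstab entry.

```python
# Illustrative only: mount a WebDAV share via davfs2 from Python.
# The URL and mount point are hypothetical placeholders.
import subprocess
from pathlib import Path

url = "https://dav.example.org/share"      # hypothetical WebDAV endpoint
mountpoint = Path("/mnt/webdav")           # hypothetical local mount point
mountpoint.mkdir(parents=True, exist_ok=True)

# Equivalent to the shell command: mount -t davfs <url> <mountpoint>
subprocess.run(["mount", "-t", "davfs", url, str(mountpoint)], check=True)

# After mounting, the share behaves like a local disk:
print(list(mountpoint.iterdir()))
```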
https://en.wikipedia.org/wiki/Levelling%20refraction
Levelling refraction refers to the systematic refraction effect distorting the results of line levelling over the Earth's surface. In line levelling, short segments of a line are levelled by taking readings through a level from two staffs, one fore and one behind. By chaining together the height differences of these segments, one can compute the total height difference between the end points of a line.

The classical work on levelling refraction is that of T. J. Kukkamäki in 1938–39. His analysis is based upon the understanding that the measurement beams travel within a boundary layer close to the Earth's surface, which behaves differently from the atmosphere at large. When measuring over a tilted surface, the systematic effect accumulates. The Kukkamäki levelling refraction became notorious as the explanation of the "Palmdale Bulge", which geodesists observed in California in the 1970s.

Levelling refraction can be eliminated by either of two techniques:
Measuring the vertical temperature gradient within the atmospheric boundary layer. Typically two temperature-dependent resistors are used, mounted on a staff at two different heights above the ground and connected in a Wheatstone bridge.
Using climatological modelling. Depending on the time of day and year, the geographical location and general weather conditions, levelling observations for which no original temperature gradient measurements were collected can also be approximately corrected.

An alternative, hi-tech approach is dispersometry using two different wavelengths of light. Only recently have blue lasers become readily available, making this a realistic proposition.

References
Kukkamäki, T.J. (1938): Über die nivellitische Refraktion. Publ. 25, Finnish Geodetic Institute, Helsinki.
Kukkamäki, T.J. (1939): Formeln und Tabellen zur Berechnung der nivellitischen Refraktion. Publ. 27, Finnish Geodetic Institute, Helsinki.

Further reading
Charles T. Whalen (1982), Results of Leveling Refraction Tests by
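The gradient-measurement technique lends itself to a small numerical illustration. The sketch below is only a demonstration of the bookkeeping: the sensor heights, the assumption that the per-setup error scales with the temperature gradient and the square of the sight length, and the coefficient K are placeholders chosen for illustration, not Kukkamäki's published formula.

```python
# Illustrative sketch only: estimate a per-setup refraction correction from a
# two-sensor temperature measurement.  Heights, scaling law and coefficient K
# are assumptions for demonstration, not values from the source.

def temperature_gradient(t_low, t_high, z_low=0.5, z_high=2.5):
    """Approximate vertical temperature gradient (K/m) from two sensors."""
    return (t_high - t_low) / (z_high - z_low)

def refraction_correction(t_low, t_high, sight_length, coeff=-1e-5):
    """Placeholder correction (m) for one instrument setup.

    coeff (K) is a hypothetical constant; in practice it would follow from the
    refractive-index model of air and the chosen temperature profile.
    """
    grad = temperature_gradient(t_low, t_high)
    return coeff * grad * sight_length ** 2

# Example: a 50 m sight with the upper sensor 1.2 K cooler than the lower one.
print(refraction_correction(t_low=25.0, t_high=23.8, sight_length=50.0))
```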
https://en.wikipedia.org/wiki/Handle
A handle is a part of, or an attachment to, an object that allows it to be grasped and manipulated by hand. The design of each type of handle involves substantial ergonomic issues, even where these are dealt with intuitively or by following tradition. Handles for tools are an important part of their function, enabling the user to exploit the tools to maximum effect. Package handles allow for convenient carrying of packages.

General design criteria
The three nearly universal requirements of a handle are:
Sufficient strength to support the object, or to otherwise transmit the force involved in the task the handle serves.
Sufficient length to permit the hand or hands gripping it to comfortably exert the force.
Sufficiently small circumference to permit the hand or hands to surround it far enough to grip it as solidly as needed to exert that force.

Specific needs
Other requirements may apply to specific handles:
A sheath or coating on the handle that provides friction against the hand, reducing the gripping force needed to achieve a reliable grip.
Designs such as recessed car-door handles, reducing the chance of accidental operation, or simply the inconvenience of "snagging" the handle.
Sufficient circumference to distribute the force comfortably and safely over the hand. An example where this requirement is almost the sole purpose for a handle's existence is the handle that consists of two pieces: a hollow wooden cylinder about the diameter of a finger and a bit longer than one hand-width, and a stiff wire that passes through the center of the cylinder, has two right angles, and is shaped into a hook at each end. This handle permits comfortable carrying, with otherwise bare hands, of a heavy package suspended on a tight string that passes around the top and bottom of it: the string is strong enough to support the package, but the pressure the string would exert on fingers that grasped it directly would often be unacceptable.
Design to thwart unwanted access, for example, by
https://en.wikipedia.org/wiki/OPeNDAP
OPeNDAP is an acronym for "Open-source Project for a Network Data Access Protocol," an endeavor focused on enhancing the retrieval of remote, structured data through a Web-based architecture and a discipline-neutral Data Access Protocol (DAP). Widely used, especially in Earth science, the protocol is layered on HTTP, and its current specification is DAP4, though the previous DAP2 version remains broadly used.

Developed and advanced (openly and collaboratively) by the non-profit OPeNDAP, Inc., DAP is intended to enable remote, selective data retrieval as an easily invoked Web service. OPeNDAP, Inc. also develops and maintains zero-cost (reference) implementations of the DAP protocol in both server-side and client-side software.

"OPeNDAP" often is used in place of "DAP" to denote the protocol, but it may also refer to an entire DAP-based data-retrieval architecture. Other DAP-centered architectures, such as THREDDS and ERDDAP (for example, the NOAA GEO-IDE UAF ERDDAP), exhibit significant interoperability with one another as well as with systems employing OPeNDAP's own (open-source) servers and software.

A DAP client can be an ordinary browser or even a spreadsheet, though with limited functionality (see OPeNDAP's Web page on Available Client Software). More typically, DAP clients are:
Data-analysis or data-visualization tools (such as MATLAB, IDL, Panoply, GrADS, Integrated Data Viewer, Ferret and ncBrowse) which their authors have adapted to enable DAP-based data input;
Similarly adapted Web applications (such as Dapper Data Viewer, aka DChart);
Similarly adapted end-user programs (in common languages).

Regardless of their types, and whether developed commercially or by an end-user, clients almost universally link to DAP servers through libraries that implement the DAP2 or DAP4 protocol in one language or another. OPeNDAP offers open-source libraries in C++ and Java, but many clients rely on community-developed libraries such as PyDAP or, especially, the NetCDF suite. Developed
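To make the client-library point concrete, here is a minimal sketch of DAP-based access from Python through the netCDF bindings mentioned above. The server URL and variable name are hypothetical placeholders (any DAP-enabled dataset URL would do), and the netCDF4 package must have been built with DAP support.

```python
# Minimal sketch of selective, remote data access over DAP.
# The URL and the variable name "sst" below are hypothetical placeholders.
from netCDF4 import Dataset  # the underlying netCDF-C library speaks DAP

url = "http://example.org/opendap/sst_dataset"   # placeholder DAP endpoint
with Dataset(url) as ds:                  # open the remote dataset lazily
    print(ds.variables.keys())            # inspect what the server offers
    sst = ds.variables["sst"]             # hypothetical variable name
    subset = sst[0, 10:20, 10:20]         # only this slice crosses the network
    print(subset.shape)
```

The point of the protocol shows up in the slicing line: the client requests just the indexed subset, and the server extracts it before sending anything, which is what makes remote access to large datasets practical.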
https://en.wikipedia.org/wiki/Sudoku
Sudoku (originally called Number Place) is a logic-based, combinatorial number-placement puzzle. In classic Sudoku, the objective is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a single solution.

French newspapers featured variations of the Sudoku puzzles in the 19th century, and the puzzle has appeared since 1979 in puzzle books under the name Number Place. However, the modern Sudoku only began to gain widespread popularity in 1986 when it was published by the Japanese puzzle company Nikoli under the name Sudoku, meaning "single number". It first appeared in a U.S. newspaper, and then in The Times (London), in 2004, thanks to the efforts of Wayne Gould, who devised a computer program to rapidly produce unique puzzles.

History

Predecessors
Number puzzles appeared in newspapers in the late 19th century, when French puzzle setters began experimenting with removing numbers from magic squares. Le Siècle, a Paris daily, published a partially completed 9×9 magic square with 3×3 subsquares on November 19, 1892. It was not a Sudoku because it contained double-digit numbers and required arithmetic rather than logic to solve, but it shared key characteristics: each row, column, and subsquare added up to the same number.

On July 6, 1895, Le Siècle's rival, La France, refined the puzzle so that it was almost a modern Sudoku and named it ('diabolical magic square'). It simplified the 9×9 magic square puzzle so that each row, column, and broken diagonal contained only the numbers 1–9, but did not mark the subsquares. Although they were unmarked, each 3×3 subsquare did indeed comprise the numbers 1–9, and the additional constraint on the broken diagonals led to only one solution. These weekly puzzles were a feature of French n
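Since the article states the constraints (each row, column and 3 × 3 box must contain 1–9 exactly once), a brief illustration of how a computer can exploit them follows. This is a generic, minimal backtracking sketch, not Wayne Gould's program or any particular published solver.

```python
# Minimal backtracking Sudoku solver; grid is a 9x9 list of lists, 0 = empty.
def valid(grid, r, c, d):
    """Check whether digit d may be placed at row r, column c."""
    if any(grid[r][j] == d for j in range(9)):          # row constraint
        return False
    if any(grid[i][c] == d for i in range(9)):          # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)                 # top-left of the box
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))     # box constraint

def solve(grid):
    """Fill grid in place; returns True if a solution was found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0                  # undo and backtrack
                return False                            # no digit fits here
    return True                                         # no empty cells left
```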
https://en.wikipedia.org/wiki/Avicide
An avicide is any substance (normally a chemical) used to kill birds.

Commonly used avicides include strychnine (also used as a rodenticide and predacide), DRC-1339 (3-chloro-4-methylaniline hydrochloride, Starlicide) and CPTH (3-chloro-p-toluidine, the free base of Starlicide), Avitrol (4-aminopyridine) and chloralose (also used as a rodenticide). In the past, highly concentrated formulations of parathion in diesel oil were sprayed by aircraft over birds' nesting colonies.

Avicides are banned in many countries because of their ecological impact, which is poorly studied. They are still used in the United States, Canada, Australia and New Zealand. The practice is criticized by animal rights advocates and those who kill birds with guns and traps. Pigeon fanciers sometimes poison problematic birds of prey, even in countries like Russia and Ukraine where avicides are illegal.

See also
Bird kill

References

External links
4-Aminopyridine
Exposure of nontarget birds to DRC-1339 avicide in fall baited sunflower fields
E554-95 Guide for Use and Development of Strychnine as an Avicide (Withdrawn 2000)
DRC-1339 avicide fails to protect ripening sunflowers

Biocides
https://en.wikipedia.org/wiki/STOS%20BASIC
STOS BASIC is a dialect of the BASIC programming language for the Atari ST personal computer. It was designed for creating games, but the set of high-level graphics and sound commands it offers is suitable for developing multimedia software without knowledge of the internals of the Atari ST. STOS BASIC was developed by Jawx (François Lionet and Constantin Sotiropoulos) and published by Mandarin Software (now known as Europress Software).

History
Although the first version of STOS to be released in the UK (version 2.3) was released in late 1988 by Mandarin Software, a version had been released earlier in France. Version 2.3 was bundled with three complete games (Orbit, Zoltar and Bullet Train) and many accessories and utilities (such as sprite and music editors).

STOS was initially implemented as a BASIC interpreter; a compiler was soon released that enabled the user to compile a STOS BASIC program into an executable file, which ran much faster than interpreted code. In order to be compatible with the compiler, STOS needed to be upgraded to version 2.4 (which came with the compiler). STOS 2.4 also fixed a few bugs and had faster floating-point mathematics code, but the floating-point numbers had a smaller range. STOS 2.5 was released to make STOS run on Atari STEs with TOS 1.06 (1.6), and then STOS 2.6 was needed to make STOS run on Atari STEs with TOS 1.62. STOS 2.7 was a compiler-only upgrade that made programs with the STOS tracker extension (used to play MOD music) compile. There was a third-party hack called STOS 2.07 designed to make STOS run on even more TOS versions and behave on the Atari Falcon.

Around 2001, François Lionet released the source code of STOS BASIC via the Clickteam website. On 4 April 2019, François Lionet announced the release of AMOS2 on his website Amos2.tech. AMOS2 replaces STOS and AMOS together, using JavaScript as its code interpreter, making the new development system independent and generally deployed
https://en.wikipedia.org/wiki/Universo%20Online
Universo Online (Portuguese for "Online Universe"), known by the acronym UOL, is a Brazilian web content, products and services company. It belongs to the Grupo Folha enterprise.

In 2012, UOL was the fifth most visited website in Brazil, behind only the Google portals (Google Brasil, Google USA, YouTube) and Facebook. According to Ibope Nielsen Online, UOL is Brazil's largest internet portal, with more than 50 million unique visitors and 6.7 billion page views every month.

Overview
UOL is the world's largest Portuguese-speaking portal, organized into 42 thematic stations with more than 1,000 news sources and 7 million pages. The portal provides website hosting, data storage, publicity dealing, online payments and security systems. It also holds more than 300 thousand online shops, 23 million buyers and 4 million people selling goods and services in its portals.

UOL includes:
UOL Cliques, ads and publicity portal.
Radar de Descontos, group buying portal.
Emprego Certo, jobs portal.
Shopping UOL, online price-comparing tool.
UOL Segurança Online, online safety firm.
Universidade UOL, online education portal.
UOL Revelação Digital, online photo developing portal.
Toda Oferta, buying and selling portal.
UOL Wi-Fi, unlimited wireless broadband Internet access.
PagSeguro, e-commerce tool with which shops and individuals can make and receive online payments.
UOL Mais, portal with unlimited space for videos, photos, audio and texts.
UOL HOST, hosting and cloud computing firm.
UOL Assistência Técnica, technical support services for computers, tablets, and smartphones.
UOL DIVEO, online IT outsourcing firm.
UOL Afiliados, a membership program for subscribers and non-subscribers that remunerates websites and blogs displaying UOL ads; each associate is paid per click received on an ad or per subscription conversion.

History
UOL was established by Grupo Folha on April 28, 1996. After 7 months, UOL joined portal Brasil Online (BOL) from Editora Abril. However Editora Ab
https://en.wikipedia.org/wiki/Windows%20for%20Pen%20Computing
Windows for Pen Computing is a software suite for Windows 3.1x that Microsoft designed to incorporate pen computing capabilities into the Windows operating environment. Windows for Pen Computing was the second major pen computing platform for x86 tablet PCs; GO Corporation released their operating system, PenPoint OS, shortly before Microsoft published Windows for Pen Computing 1.0 in 1992.

The software features of Windows for Pen Computing 1.0 include an on-screen keyboard, a notepad program for writing with the stylus, and a program for training the system to respond accurately to the user's handwriting. Microsoft included Windows for Pen Computing 1.0 in the Windows SDK, and the operating environment was also bundled with compatible devices. Microsoft published Windows 95 in 1995, and later released Pen Services for Windows 95, also known as Windows for Pen Computing 2.0, for this new operating system. Windows XP Tablet PC Edition superseded Windows for Pen Computing in 2002. Subsequent Windows versions, such as Windows Vista and Windows 7, supported pen computing intrinsically.

See also
Windows Ink Workspace

References

External links
The Unknown History of Pen Computing contains a history of pen computing, including touch and gesture technology, from approximately 1917 to 1992.
About Tablet Computing Old and New - an article that mentions Windows Pen in passing
Annotated bibliography of references to handwriting recognition and pen computing
Windows für Pen Computer
Windows for Pen Computer (German link above translated by Google)
Notes on the History of Pen-based Computing (YouTube)

1992 software
Handwriting recognition
Pen Computing
Microsoft Tablet PC
Tablet computers
https://en.wikipedia.org/wiki/Colors%20of%20noise
In audio engineering, electronics, physics, and many other fields, the color of noise or noise spectrum refers to the power spectrum of a noise signal (a signal produced by a stochastic process). Different colors of noise have significantly different properties. For example, as audio signals they will sound different to human ears, and as images they will have a visibly different texture. Therefore, each application typically requires noise of a specific color. This sense of 'color' for noise signals is similar to the concept of timbre in music (which is also called "tone color"; however, the latter is almost always used for sound, and may consider very detailed features of the spectrum).

The practice of naming kinds of noise after colors started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was (incorrectly) assumed to have such a flat power spectrum over the visible range. Other color names, such as pink, red, and blue, were then given to noise with other spectral profiles, often (but not always) in reference to the color of light with similar spectra. Some of those names have standard definitions in certain disciplines, while others are very informal and poorly defined. Many of these definitions assume a signal with components at all frequencies, with a power spectral density per unit of bandwidth proportional to 1/f^β, and hence they are examples of power-law noise. For instance, the spectral density of white noise is flat (β = 0), while flicker or pink noise has β = 1, and Brownian noise has β = 2.

Technical definitions
Various noise models are employed in analysis, many of which fall under the above categories. AR noise or "autoregressive noise" is such a model, and generates simple examples of the above noise types, and more. The Federal Standard 1037C Telecommunications Glossary defines white, pink, blue, and black noise. The color names for thes
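The 1/f^β description translates directly into a simple way to synthesize colored noise: shape the spectrum of white noise by f^(-β/2) and transform back. The sketch below is a generic illustration of that idea, not a procedure taken from the article or from any standard; β = 0, 1 and 2 give white, pink and Brownian-like noise respectively.

```python
import numpy as np

def power_law_noise(n_samples, beta, rng=None):
    """Generate noise whose power spectral density is proportional to 1/f**beta.

    beta = 0 -> white, 1 -> pink (flicker), 2 -> Brownian-like (red) noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)               # flat spectrum of white noise
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                         # avoid dividing by zero at DC
    spectrum *= freqs ** (-beta / 2.0)          # amplitude ~ f^(-beta/2)
    colored = np.fft.irfft(spectrum, n=n_samples)
    return colored / np.std(colored)            # normalize to unit variance

# Example: 10 seconds of pink noise at a 44.1 kHz sample rate.
pink = power_law_noise(10 * 44100, beta=1.0)
```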
https://en.wikipedia.org/wiki/Rocky%27s%20Boots
Rocky's Boots is an educational logic puzzle game by Warren Robinett and Leslie Grimm, published by The Learning Company in 1982. It was released for the Apple II, CoCo, Commodore 64, IBM PC and the IBM PCjr. It was followed by a more difficult sequel, Robot Odyssey. It won Software of the Year awards from Learning Magazine (1983), Parent's Choice magazine (1983), and Infoworld (1982, runner-up), and received the Gold Award (for selling 100,000 copies) from the Software Publishers Association. It was one of the first educational software products for personal computers to successfully use an interactive graphical simulation as a learning environment. Gameplay The object of the beginning part of Rocky's Boots is to use a mechanical boot to kick a series of objects (purple or green squares, diamonds, circles, or crosses) off a conveyor belt; each object will score some number of points, possibly negative. To ensure that the boot only kicks the positive objects, the player must connect a series of logic gates to the boot. The player is represented by an orange square, and picks up devices (the boot, logic gates, clackers, etc.) by moving their square over them and hitting the joystick button. When the boot has kicked all of the positive objects and none of the negative objects (obtaining a score of 24 points), Rocky (a raccoon) will appear and do a beeping dance. Later, the player finds that he can use all of the game's objects, including AND gates, OR gates, NOT gates, and flip-flops, in an open-ended area to design his own logic circuits and "games". The colors of orange and white were used to show the binary logic states of 1 and 0. As the circuits operated, the signals could be seen slowly propagating through the circuits, as if the electricity was liquid orange fire flowing through transparent pipes. Development Rocky's Boots was designed by Warren Robinett and Leslie Grimm. It was conceived as a sequel to Adventure. Robinett experienced constraints due to T
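For readers unfamiliar with the gates the game teaches, the tiny sketch below (not part of the game or its code, and using hypothetical object attributes) shows how AND, OR and NOT combine boolean signals in the same way the game's orange/white states do; a boot wired to kick only certain objects is just such a combination.

```python
# Toy illustration of the logic the game teaches; attributes are hypothetical.
def AND(a, b): return a and b
def OR(a, b): return a or b
def NOT(a): return not a

# Kick an object only if it is a purple cross or a green diamond.
def should_kick(is_purple, is_green, is_cross, is_diamond):
    return OR(AND(is_purple, is_cross), AND(is_green, is_diamond))

print(should_kick(is_purple=True, is_green=False, is_cross=True, is_diamond=False))  # True
print(should_kick(is_purple=True, is_green=False, is_cross=False, is_diamond=True))  # False
```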
https://en.wikipedia.org/wiki/Lambert%20series
In mathematics, a Lambert series, named for Johann Heinrich Lambert, is a series taking the form It can be resumed formally by expanding the denominator: where the coefficients of the new series are given by the Dirichlet convolution of an with the constant function 1(n) = 1: This series may be inverted by means of the Möbius inversion formula, and is an example of a Möbius transform. Examples Since this last sum is a typical number-theoretic sum, almost any natural multiplicative function will be exactly summable when used in a Lambert series. Thus, for example, one has where is the number of positive divisors of the number n. For the higher order sum-of-divisor functions, one has where is any complex number and is the divisor function. In particular, for , the Lambert series one gets is which is (up to the factor of ) the logarithmic derivative of the usual generating function for partition numbers Additional Lambert series related to the previous identity include those for the variants of the Möbius function given below Related Lambert series over the Moebius function include the following identities for any prime : The proof of the first identity above follows from a multi-section (or bisection) identity of these Lambert series generating functions in the following form where we denote to be the Lambert series generating function of the arithmetic function f: The second identity in the previous equations follows from the fact that the coefficients of the left-hand-side sum are given by where the function is the multiplicative identity with respect to the operation of Dirichlet convolution of arithmetic functions. For Euler's totient function : For Von Mangoldt function : For Liouville's function : with the sum on the right similar to the Ramanujan theta function, or Jacobi theta function . Note that Lambert series in which the an are trigonometric functions, for example, an = sin(2n x), can be evaluated by various combinations of
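The displayed formulas in the passage above did not survive extraction. As a reference point, the standard definition and its formal expansion, which the text describes verbally, are restated below; these are only the textbook identities, not a reconstruction of every identity the passage mentions.

```latex
% Standard Lambert series definition and its formal expansion.
S(q) \;=\; \sum_{n=1}^{\infty} a_n \,\frac{q^{n}}{1-q^{n}}
      \;=\; \sum_{m=1}^{\infty} b_m\, q^{m},
\qquad\text{where}\qquad
b_m \;=\; (a * 1)(m) \;=\; \sum_{d \mid m} a_d .

% Example: taking a_n = 1 for all n gives the number-of-divisors function
% d(m) = \sigma_0(m) as the expanded coefficients:
\sum_{n=1}^{\infty} \frac{q^{n}}{1-q^{n}} \;=\; \sum_{m=1}^{\infty} \sigma_0(m)\, q^{m}.
```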