https://en.wikipedia.org/wiki/Network%20block%20device
On Linux, network block device (NBD) is a network protocol that can be used to forward a block device (typically a hard disk or partition) from one machine to a second machine. As an example, a local machine can access a hard disk drive that is attached to another computer. The protocol was originally developed for Linux 2.1.55 and released in 1997. In 2011 the protocol was revised, formally documented, and is now developed as a collaborative open standard. There are several interoperable clients and servers. There are Linux-compatible NBD implementations for FreeBSD and other operating systems. The term 'network block device' is sometimes also used generically. Technically, a network block device is realized by three components: the server part, the client part, and the network between them. On the client machine, where the device node resides, a kernel driver controls the device. Whenever a program tries to access the device, the kernel driver forwards the request (if the client part is not fully implemented in the kernel, this can be done with the help of a userspace program) to the server machine, on which the data resides physically. On the server machine, requests from the client are handled by a userspace program. Network block device servers are typically implemented as a userspace program running on a general-purpose computer. All of the functionality specific to network block device servers can reside in a userspace process because the process communicates with the client via conventional sockets and accesses the storage via a conventional file system interface. The network block device client module is available on Unix-like operating systems, including Linux and Bitrig. Since the server is a userspace program, it can potentially run on every Unix-like platform; for example, NBD's server part has been ported to Solaris. Alternative protocols iSCSI: The "target-utils" iscsi package on many Linux distributions. NVMe-oF: an equivalent mechanism, exposing b
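As a rough illustration of the client/server split described above, here is a minimal Python sketch of the general idea: a userspace server answering block-read requests from a backing file over a TCP socket, and a client helper requesting one block. The framing (an 8-byte block index per request), the port number, and the 512-byte block size are invented for the example; this is not the actual NBD wire protocol or handshake, which in practice is spoken by tools such as nbd-server and the kernel nbd client.

```python
import socket
import struct

# Conceptual sketch only: a toy block server illustrating requests being
# forwarded over a socket and served from an ordinary file. The request
# framing here is made up for illustration and is NOT the real NBD protocol.

BLOCK_SIZE = 512

def serve_blocks(backing_path: str, port: int = 10809) -> None:
    """Serve fixed-size block reads from a backing file over TCP."""
    with socket.create_server(("0.0.0.0", port)) as srv, open(backing_path, "rb") as img:
        conn, _ = srv.accept()
        with conn:
            while True:
                header = conn.recv(8)            # toy request: 8-byte block index
                if len(header) < 8:
                    break
                (block_index,) = struct.unpack("!Q", header)
                img.seek(block_index * BLOCK_SIZE)
                conn.sendall(img.read(BLOCK_SIZE).ljust(BLOCK_SIZE, b"\0"))

def read_block(host: str, block_index: int, port: int = 10809) -> bytes:
    """Client side: ask the server for one block by index."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(struct.pack("!Q", block_index))
        data = b""
        while len(data) < BLOCK_SIZE:
            chunk = conn.recv(BLOCK_SIZE - len(data))
            if not chunk:
                break
            data += chunk
        return data
```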
https://en.wikipedia.org/wiki/Modified%20Harvard%20architecture
A modified Harvard architecture is a variation of the Harvard computer architecture that, unlike the pure Harvard architecture, allows memory that contains instructions to be accessed as data. Most modern computers that are documented as Harvard architecture are, in fact, modified Harvard architecture. Harvard architecture The original Harvard architecture computer, the Harvard Mark I, employed entirely separate memory systems to store instructions and data. The CPU fetched the next instruction and loaded or stored data simultaneously and independently. This is in contrast to a von Neumann architecture computer, in which both instructions and data are stored in the same memory system and (without the complexity of a CPU cache) must be accessed in turn. The physical separation of instruction and data memory is sometimes held to be the distinguishing feature of modern Harvard architecture computers. With microcontrollers (entire computer systems integrated onto single chips), the use of different memory technologies for instructions (e.g. flash memory) and data (typically read/write memory) in von Neumann machines is becoming popular. The true distinction of a Harvard machine is that instruction and data memory occupy different address spaces. In other words, a memory address does not uniquely identify a storage location (as it does in a von Neumann machine); it is also necessary to know the memory space (instruction or data) to which the address belongs. Von Neumann architecture A computer with a von Neumann architecture has the advantage over Harvard machines as described above in that code can also be accessed and treated the same as data, and vice versa. This allows, for example, data to be read from disk storage into memory and then executed as code, or self-optimizing software systems using technologies such as just-in-time compilation to write machine code into their own memory and then later execute it. Another example is self-modifying code, which all
https://en.wikipedia.org/wiki/BlueHat
BlueHat (or Blue Hat or Blue-Hat) is a term used to refer to outside computer security consulting firms that are employed to bug test a system prior to its launch, looking for exploits so they can be closed. In particular, Microsoft uses the term to refer to the computer security professionals it invites to find vulnerabilities in its products, such as Windows. Blue Hat Microsoft Hacker Conference The Blue Hat Microsoft Hacker Conference is an invitation-only conference created by Window Snyder that is intended to open communication between Microsoft engineers and hackers. The event has led to both mutual understanding and the occasional confrontation. Microsoft's developers were visibly uncomfortable when Metasploit was demonstrated. See also Hacker culture Hacker ethic Black hat hacker
https://en.wikipedia.org/wiki/List%20of%20Wenninger%20polyhedron%20models
This is an indexed list of the uniform and stellated polyhedra from the book Polyhedron Models, by Magnus Wenninger. The book was written as a guide book to building polyhedra as physical models. It includes templates of face elements for construction and helpful hints in building, and also brief descriptions on the theory behind these shapes. It contains the 75 nonprismatic uniform polyhedra, as well as 44 stellated forms of the convex regular and quasiregular polyhedra. Models listed here can be cited as "Wenninger Model Number N", or WN for brevity. The polyhedra are grouped in 5 tables: Regular (1–5), Semiregular (6–18), regular star polyhedra (20–22,41), Stellations and compounds (19–66), and uniform star polyhedra (67–119). The four regular star polyhedra are listed twice because they belong to both the uniform polyhedra and stellation groupings. Platonic solids (regular convex polyhedra) W1 to W5 Archimedean solids (Semiregular) W6 to W18 Kepler–Poinsot polyhedra (Regular star polyhedra) W20, W21, W22 and W41 Stellations: models W19 to W66 Stellations of octahedron Stellations of dodecahedron Stellations of icosahedron Stellations of cuboctahedron Stellations of icosidodecahedron Uniform nonconvex solids W67 to W119 See also List of uniform polyhedra The fifty nine icosahedra List of polyhedral stellations
https://en.wikipedia.org/wiki/Refined%20grains
Refined grains have been significantly modified from their natural composition, in contrast to whole grains. The modification process generally involves the mechanical removal of bran and germ, either through grinding or selective sifting. Overview A refined grain is defined as having undergone a process that removes the bran, germ and husk of the grain and leaves the endosperm, or starchy interior. Examples of refined grains include white bread, white flour, corn grits and white rice. Refined grains are milled which gives a finer texture and improved shelf life. Because the outer parts of the grain are removed and used for animal feed and non-food use, refined grains have been described as less sustainable than whole grains. After refinement of grains became prevalent in the early 20th-century, nutritional deficiencies (iron, thiamin, riboflavin and niacin) became more common in the United States. To correct this, the Congress passed the U.S. Enrichment Act of 1942 which requires that iron, niacin, thiamin and riboflavin have to be added to all refined grain products before they are sold. Folate (folic acid) was added in 1996. Refining grain includes mixing, bleaching, and brominating; additionally, folate, thiamin, riboflavin, niacin, and iron are added back in to nutritionally enrich the product. Enriched grains are refined grains that have been fortified with additional nutrients. Whole grains contain more dietary fiber than refined grains. After processing, fiber is not added back to enriched grains. Enriched grains are nutritionally comparable to whole grains but only in regard to their added nutrients. Whole grains contain higher amounts of minerals including chromium, magnesium, selenium, and zinc and vitamins such as Vitamin B6 and Vitamin E. Whole grains also provide phytochemicals which enriched grains lack. In the case of maize, the process of nixtamalization (a chemical form of refinement) yields a considerable improvement in the bioavailability of
https://en.wikipedia.org/wiki/3%20nm%20process
In semiconductor manufacturing, the 3 nm process is the next die shrink after the 5 nanometer MOSFET (metal–oxide–semiconductor field-effect transistor) technology node. South Korean chipmaker Samsung started shipping its 3 nm gate all around (GAA) process, named 3GAA, in mid-2022. On December 29, 2022, Taiwanese chip manufacturer TSMC announced that volume production using its 3 nm semiconductor node termed N3 is under way with good yields. An enhanced 3 nm chip process called N3E may start production in 2023. American manufacturer Intel plans to start 3 nm production in 2023. Samsung's 3 nm process is based on GAAFET (gate-all-around field-effect transistor) technology, a type of multi-gate MOSFET technology, while TSMC's 3 nm process still uses FinFET (fin field-effect transistor) technology, despite TSMC developing GAAFET transistors. Specifically, Samsung plans to use its own variant of GAAFET called MBCFET (multi-bridge channel field-effect transistor). Intel's process dubbed "Intel 3" without the "nm" suffix will use a refined, enhanced and optimized version of FinFET technology compared to its previous process nodes in terms of performance gained per watt, use of EUV lithography, and power and area improvement. The term "3 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers. However, in real world commercial practice, "3 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption. T
https://en.wikipedia.org/wiki/Stratification%20%28mathematics%29
Stratification has several usages in mathematics. In mathematical logic In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form Q1 ∧ ... ∧ Qn ∧ ¬Qn+1 ∧ ... ∧ ¬Qn+m → P is stratified if and only if there is a stratification assignment S that fulfills the following conditions: If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short S(P) ≥ S(Q). If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short S(P) > S(Q). The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories. In a specific set theory In New Foundations (NF) and related set theories, a formula φ in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function σ which sends each variable appearing in φ (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula x ∈ y appearing in φ satisfies σ(x) + 1 = σ(y) and any atomic formula x = y appearing in φ satisfies σ(x) = σ(y). It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract under consideration. A set abstract satisfying this weaker condition is said to be
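The two conditions above translate directly into a check over a rule set. Below is a small Python sketch, assuming a made-up rule representation of (head predicate, positive body predicates, negated body predicates) triples and a candidate assignment S given as a dictionary; it simply verifies S(P) ≥ S(Q) for positive dependencies and S(P) > S(Q) for negated ones.

```python
# Sketch of checking the stratification conditions described above.
# The rule representation and predicate names are assumptions for illustration.

def is_stratified(rules, strat):
    """rules: iterable of (head, positive_body, negated_body) predicate-name triples.
    strat: dict mapping predicate name -> stratification number."""
    for head, positives, negatives in rules:
        for q in positives:
            if not strat[head] >= strat[q]:   # S(P) >= S(Q) for positive occurrences
                return False
        for q in negatives:
            if not strat[head] > strat[q]:    # S(P) >  S(Q) for negated occurrences
                return False
    return True

# Example rules: p(X) :- q(X), not r(X).   q(X) :- base(X).   r(X) :- base(X).
rules = [("p", ["q"], ["r"]), ("q", ["base"], []), ("r", ["base"], [])]
print(is_stratified(rules, {"base": 0, "q": 0, "r": 0, "p": 1}))   # True
print(is_stratified(rules, {"base": 0, "q": 0, "r": 1, "p": 1}))   # False: needs S(p) > S(r)
```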
https://en.wikipedia.org/wiki/Web%20container
A web container (also known as a servlet container; and compare "webcontainer") is the component of a web server that interacts with Jakarta Servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access-rights. A web container handles requests to servlets, Jakarta Server Pages (JSP) files, and other types of files that include server-side code. The Web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks. A web container implements the web component contract of the Jakarta EE architecture. This architecture specifies a runtime environment for additional web components, including security, concurrency, lifecycle management, transaction, deployment, and other services. List of Servlet containers The following is a list of applications which implement the Jakarta Servlet specification from Eclipse Foundation, divided depending on whether they are directly sold or not. Open source Web containers Apache Tomcat (formerly Jakarta Tomcat) is an open source web container available under the Apache Software License. Apache Tomcat 6 and above are operable as general application container (prior versions were web containers only) Apache Geronimo is a full Java EE 6 implementation by Apache Software Foundation. Enhydra, from Lutris Technologies. GlassFish from Eclipse Foundation (an application server, but includes a web container). Jaminid contains a higher abstraction than servlets. Jetty, from the Eclipse Foundation. Also supports SPDY and WebSocket protocols. Payara is another application server, derived from Glassfish. Winstone supports specification v2.5 as of 0.9, has a focus on minimal configuration and the ability to strip the container down to only what you need. Tiny Java Web Server (TJWS) 2.5 , small footprint, modular design. Virgo f
https://en.wikipedia.org/wiki/DirectPlay
DirectPlay is part of Microsoft's DirectX API. It is a network communication library intended for computer game development, although it can be used for other purposes. DirectPlay is a high-level software interface between applications and communication services that allows games to be connected over the Internet, a modem link, or a network. It features a set of tools that allow players to find game sessions and sites to manage the flow of information between hosts and players. It provides a way for applications to communicate with each other, regardless of the underlying online service or protocol. It also resolves many connectivity issues, such as Network Address Translation (NAT). Like the rest of DirectX, DirectPlay runs in COM and is accessed through component object model (COM) interfaces. By default, DirectPlay uses multi-threaded programming techniques and requires careful thought to avoid the usual threading issues. Since DirectX version 9, this issue can be alleviated at the expense of efficiency. Networking model Under the hood, DirectPlay is built on the User Datagram Protocol (UDP) to allow it speedy communication with other DirectPlay applications. It uses TCP and UDP ports 2300 to 2400 and 47624. DirectPlay sits on layers 4 and 5 of the OSI model. On layer 4, DirectPlay can handle the following tasks if requested by the application: Message ordering, which ensures that data arrives in the same order it was sent. Message reliability, which ensures that data is guaranteed to arrive. Message flow control, which ensures that data is only sent at the rate the receiver can receive it. On layer 5, DirectPlay always handles the following tasks: Connection initiation and termination. Interfaces The primary interfaces (methods of access) for DirectPlay are: IDirectPlay8Server, which allows access to server functionality IDirectPlay8Client, which allows access to client functionality IDirectPlay8Peer, which allows access to peer-to-peer functionality Seco
https://en.wikipedia.org/wiki/Complex%20programmable%20logic%20device
A complex programmable logic device (CPLD) is a programmable logic device with complexity between that of PALs and FPGAs, and architectural features of both. The main building block of the CPLD is a macrocell, which contains logic implementing disjunctive normal form expressions and more specialized logic operations. Features Some of the CPLD features are in common with PALs: Non-volatile configuration memory. Unlike many FPGAs, an external configuration ROM isn't required, and the CPLD can function immediately on system start-up. For many legacy CPLD devices, routing constrains most logic blocks to have input and output signals connected to external pins, reducing opportunities for internal state storage and deeply layered logic. This is usually not a factor for larger CPLDs and newer CPLD product families. Other features are in common with FPGAs: Large number of gates available. CPLDs typically have the equivalent of thousands to tens of thousands of logic gates, allowing implementation of moderately complicated data processing devices. PALs typically have a few hundred gate equivalents at most, while FPGAs typically range from tens of thousands to several million. Some provisions for logic more flexible than sum-of-product expressions, including complicated feedback paths between macro cells, and specialized logic for implementing various commonly used functions, such as integer arithmetic. The most noticeable difference between a large CPLD and a small FPGA is the presence of on-chip non-volatile memory in the CPLD, which allows CPLDs to be used for "boot loader" functions, before handing over control to other devices not having their own permanent program storage. A good example is where a CPLD is used to load configuration data for an FPGA from non-volatile memory. Distinctions CPLDs were an evolutionary step from even smaller devices that preceded them, PLAs (first shipped by Signetics), and PALs. These in turn were preceded by standard logic products
https://en.wikipedia.org/wiki/Signal-flow%20graph
A signal-flow graph or signal-flowgraph (SFG), invented by Claude Shannon, but often called a Mason graph after Samuel Jefferson Mason who coined the term, is a specialized flow graph, a directed graph in which nodes represent system variables, and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes. Thus, signal-flow graph theory builds on that of directed graphs (also called digraphs), which includes as well that of oriented graphs. This mathematical theory of digraphs exists, of course, quite apart from its applications. SFGs are most commonly used to represent signal flow in a physical system and its controller(s), forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state-variable filters and some other types of analog filters. In nearly all literature, a signal-flow graph is associated with a set of linear equations. History Wai-Kai Chen wrote: "The concept of a signal-flow graph was originally worked out by Shannon [1942] in dealing with analog computers. The greatest credit for the formulation of signal-flow graphs is normally extended to Mason [1953], [1956]. He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a relatively simple manner. The term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study." Lorens wrote: "Previous to Mason's work, C. E. Shannon worked out a number of the properties of what are now known as flow graphs. Unfortunately, the paper originally had a restricted classification and very few people had access to the material." "The rules for the evaluation of the graph determinant of a Mason Graph were first given and proven by Shannon [1942] using mathematical induction. His work remained essentially unknown even after Mason p
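Since a signal-flow graph stands for a set of linear equations, a small numeric example can show how the overall gain falls out of plain linear algebra. The three-node graph below, with branch gains 2, 3, and a feedback branch of −0.5, is invented for illustration; solving (I − A)x = b·u gives every node signal, and the input-to-output ratio is the same quantity Mason's gain formula would produce for this graph.

```python
import numpy as np

# Hedged sketch: a signal-flow graph as a set of linear node equations.
# A[i, j] is the gain of the branch from node j into node i; the unit input
# drives node 0. Solving (I - A) x = b gives every node signal.

A = np.array([
    [0.0, 0.0, 0.0],    # node 0: input node, no incoming branches
    [2.0, 0.0, -0.5],   # node 1: 2*x0 - 0.5*x2 (feedback branch from node 2)
    [0.0, 3.0, 0.0],    # node 2: 3*x1
])
b = np.array([1.0, 0.0, 0.0])    # unit input injected at node 0

x = np.linalg.solve(np.eye(3) - A, b)
print("gain from input to node 2:", x[2])   # 6 / (1 + 1.5) = 2.4
```

As a cross-check, the single forward path has gain 2·3 = 6 and the single loop has gain 3·(−0.5) = −1.5, so Mason's gain formula gives 6 / (1 + 1.5) = 2.4, agreeing with the solved value.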
https://en.wikipedia.org/wiki/Quinarian%20system
The quinarian system was a method of zoological classification which was popular in the mid 19th century, especially among British naturalists. It was largely developed by the entomologist William Sharp Macleay in 1819. The system was further promoted in the works of Nicholas Aylward Vigors, William John Swainson and Johann Jakob Kaup. Swainson's work on ornithology gave wide publicity to the idea. The system had opponents even before the publication of Charles Darwin's On the Origin of Species (1859), which paved the way for evolutionary trees. Classification approach Quinarianism gets its name from the emphasis on the number five: it proposed that all taxa are divisible into five subgroups, and if fewer than five subgroups were known, quinarians believed that a missing subgroup remained to be found. Presumably this arose as a chance observation of some accidental analogies between different groups, but it was erected into a guiding principle by the quinarians. It became increasingly elaborate, proposing that each group of five classes could be arranged in a circle, with those closer together having greater affinities. Typically they were depicted with relatively advanced groups at the top, and supposedly degenerate forms towards the bottom. Each circle could touch or overlap with adjacent circles; the equivalent overlapping of actual groups in nature was called osculation. Another aspect of the system was the identification of analogies across groups: Quinarianism was not widely popular outside the United Kingdom (some followers like William Hincks persisted in Canada); it became unfashionable by the 1840s, during which time more complex "maps" were made by Hugh Edwin Strickland and Alfred Russel Wallace. Strickland and others specifically rejected the use of relations of "analogy" in constructing natural classifications. These systems were eventually discarded in favour of principles of genuinely natural classification, namely based on evolutionary relations
https://en.wikipedia.org/wiki/Bookmark%20manager
A bookmark manager is any software program or feature designed to store, organize, and display web bookmarks. The bookmarks feature included in each major web browser is a rudimentary bookmark manager. More capable bookmark managers are available online as web apps, mobile apps, or browser extensions, and may display bookmarks as text links or graphical tiles (often depicting icons). Social bookmarking websites are bookmark managers. Start page browser extensions, new tab page browser extensions, and some browser start pages, also have bookmark presentation and organization features, which are typically tile-based. Some more general programs, such as certain note taking apps, have bookmark management functionality built-in. See also Bookmark destinations Deep links Home pages Types of bookmark management Enterprise bookmarking Comparison of enterprise bookmarking platforms Social bookmarking List of social bookmarking websites Other weblink-based systems Search engine Comparison of search engines with social bookmarking systems Search engine results page Web directory Lists of websites
https://en.wikipedia.org/wiki/List%20of%20people%20with%20the%20most%20children
This is a list of mothers said to have given birth to 20 or more children and men said to have fathered more than 25 children. Mothers and couples This section lists mothers who gave birth to at least 20 children. Numbers in bold and italics are likely to be legendary or inexact, some of them having been recorded before the 19th century. Due to the fact that women bear the children and therefore cannot reproduce as often as men, their records are often shared with or exceeded by their partners.

Total children birthed | Mother or couple (if known) | Approximate year of last birth | Notes
69 | Valentina and Feodor Vassilyev | 1765 | A Russian woman named Valentina Vassilyeva and her husband Feodor Vassilyev are alleged to hold the record for the most children a couple has produced. She gave birth to a total of 69 children – sixteen pairs of twins, seven sets of triplets and four sets of quadruplets – between 1725 and 1765, a total of 27 births. 67 of the 69 children were said to have survived infancy. Allegedly Vassilyev also had six sets of twins and two sets of triplets with a second wife, for another 18 children in eight births; he fathered a total of 87 children. The claim is disputed as records at this time were not well kept.
57 | Mr and Ms Kirillov | 1755 | The first wife of peasant Yakov Kirillov from the village of Vvedensky, Russia, gave birth to 57 children in a total of 21 births. She had four sets of quadruplets, seven sets of triplets and ten sets of twins. All of the children were alive in 1755, when Kirillov, aged 60, was presented at court. As with the Vassilyev case, the truth of these claims has not been established, and is highly improbable.
53 | Barbara and Adam Stratzmann | 1498 | It is claimed that Barbara Stratzmann (c. 1448–1503) of Bönnigheim, Germany, gave birth to 53 children (38 sons and 15 daughters) in a total
https://en.wikipedia.org/wiki/Large%20numbers
Large numbers are numbers significantly larger than those typically used in everyday life (for instance in simple counting or in monetary transactions), appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts. Googology is the study of nomenclature and properties of large numbers. In the everyday world Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10⁹, for example, means one billion, or a 1 followed by nine zeros: 1 000 000 000. The reciprocal, 1.0 × 10⁻⁹, means one billionth, or 0.000 000 001. Writing 10⁹ instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is. In addition to scientific (powers of 10) notation, the following examples include (short scale) systematic nomenclature of large numbers. Examples of large numbers describing everyday real-world objects include: The number of cells in the human body (estimated at 3.72 × 10¹³), or 37.2 trillion The number of bits on a computer hard disk (typically about 10¹³ for a 1–2 TB disk), or 10 trillion The number of neuronal connections in the human brain (estimated at 10¹⁴), or 100 trillion The Avogadro constant is the number of “elementary entities” (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 is approximately 6.022 × 10²³, or 602.2 sextillion. The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 10³⁷, or 53±36 undecillion The mass of Earth consists of about 4 × 10⁵¹, or 4 sexdecillion, nucleons The estimated number of atoms in the observable universe (10⁸⁰), or 100 quinvigintillion The lower bound on the game-tree complexity of chess, also known as the “Shannon number” (estim
https://en.wikipedia.org/wiki/Multiscale%20geometric%20analysis
Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis. See also Wavelet Scale space Multi-scale approaches Multiresolution analysis Singular value decomposition Compressed sensing Further reading Signal processing Spatial analysis
https://en.wikipedia.org/wiki/List%20of%20self-intersecting%20polygons
Self-intersecting polygons, crossed polygons, or self-crossing polygons are polygons some of whose edges cross each other. They contrast with simple polygons, whose edges never cross. Some types of self-intersecting polygons are: the crossed quadrilateral, with four edges the antiparallelogram, a crossed quadrilateral with alternate edges of equal length the crossed rectangle, an antiparallelogram whose edges are two opposite sides and the two diagonals of a rectangle, hence having two edges parallel Star polygons pentagram, with five edges Hexagram, with six edges heptagram, with seven edges octagram, with eight edges enneagram or nonagram, with nine edges decagram, with ten edges hendecagram, with eleven edges dodecagram, with twelve edges icositetragram, with twenty four edges 257-gram, with two hundred and fifty seven edges See also Complex polygon Geometric shapes Mathematics-related lists
https://en.wikipedia.org/wiki/Mobile%20phone
A mobile phone (or cellphone) is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area, as opposed to a fixed-location phone (landline phone). The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network (PSTN). Modern mobile telephone services use a cellular network architecture and therefore mobile telephones are called cellphones (or "cell phones") in North America. In addition to telephony, digital mobile phones support a variety of other services, such as text messaging, multimedia messaging, email, Internet access (via LTE, 5G NR or Wi-Fi), short-range wireless communications (infrared, Bluetooth), satellite access (navigation, messaging connectivity), business applications, video games and digital photography. Mobile phones offering only basic capabilities are known as feature phones; mobile phones which offer greatly advanced computing capabilities are referred to as smartphones. The first handheld mobile phone was demonstrated by Martin Cooper of Motorola in New York City on 3 April 1973, using a handset weighing c. 2 kilograms (4.4 lbs). In 1979, Nippon Telegraph and Telephone (NTT) launched the world's first cellular network in Japan. In 1983, the DynaTAC 8000x was the first commercially available handheld mobile phone. From 1983 to 2014, worldwide mobile phone subscriptions grew to over seven billion, enough to provide one for every person on Earth. In the first quarter of 2016, the top smartphone developers worldwide were Samsung, Apple and Huawei; smartphone sales represented 78 percent of total mobile phone sales. For feature phones (slang: "dumbphones"), the top-selling brands were Samsung, Nokia and Alcatel. Mobile phones are considered an important human invention as they have been among the most widely used and sold pieces of consumer technology. The growth in
https://en.wikipedia.org/wiki/Tektronix%20hex%20format
Tektronix hex format (TEK HEX) and Extended Tektronix hex format (EXT TEK HEX or XTEK) / Extended Tektronix Object Format are ASCII-based hexadecimal file formats, created by Tektronix, for conveying binary information for applications like programming microcontrollers, EPROMs, and other kinds of chips. Each line of a Tektronix hex file starts with a slash (/) character, whereas extended Tektronix hex files start with a percent (%) character. Tektronix hex format A line consists of four parts, excluding the initial '/' character: Address — 4 character (2 byte) field containing the address where the data is to be loaded into memory. This limits the address to a maximum value of FFFF₁₆. Byte count — 2 character (1 byte) field containing the length of the data fields. Prefix checksum — 2 character (1 byte) field containing the checksum of the prefix. The prefix checksum is the 8-bit sum of the 4-bit hexadecimal values of the six digits that make up the address and byte count. Data — contains the data to be transferred, followed by a 2 character (1 byte) checksum. The data checksum is the 8-bit sum, modulo 256, of the 4-bit hexadecimal values of the digits that make up the data bytes. Extended Tektronix hex format A line consists of five parts, excluding the initial '%' character: Record Length — 2 character (1 byte) field that specifies the number of characters (not bytes) in the record, excluding the percent sign. Type — 1 character field that specifies whether the record is data (6) or termination (8). (A type 6 record contains data, placed at the address specified. A type 8 termination record: the address field may optionally contain the address of the instruction to which control is passed; there is no data field.) Checksum — 2 hex digits (1 byte), representing the sum of all the nibbles on the line, excluding the checksum itself. Address — 2 to N character field. The first character is how many characters are to follow for this field. The remaining characters contain
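The standard ('/') record layout above is simple enough to parse in a few lines. The following Python sketch decodes one record and verifies both checksums exactly as described (4-bit digit sums, kept to 8 bits); the function name and error handling are our own choices, the byte count is interpreted as the number of data bytes, and real tools may accept variants not covered here.

```python
# Minimal sketch of a parser for a standard ('/') Tektronix hex record,
# based only on the field layout described above.

def parse_tek_hex_line(line: str):
    """Parse one '/'-prefixed Tektronix hex record and verify its checksums."""
    assert line.startswith("/"), "standard Tek hex records start with '/'"
    body = line[1:].strip()

    address = int(body[0:4], 16)        # 4 hex chars: load address
    count = int(body[4:6], 16)          # 2 hex chars: number of data bytes (assumed)
    prefix_csum = int(body[6:8], 16)    # 2 hex chars: checksum of the prefix

    # Prefix checksum: 8-bit sum of the 4-bit values of the six prefix digits.
    if sum(int(c, 16) for c in body[0:6]) & 0xFF != prefix_csum:
        raise ValueError("prefix checksum mismatch")

    data_digits = body[8:8 + 2 * count]
    data_csum = int(body[8 + 2 * count:10 + 2 * count], 16)

    # Data checksum: sum (mod 256) of the 4-bit values of the data digits.
    if sum(int(c, 16) for c in data_digits) & 0xFF != data_csum:
        raise ValueError("data checksum mismatch")

    return address, bytes.fromhex(data_digits)
```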
https://en.wikipedia.org/wiki/Multicast%20router%20discovery
Multicast router discovery (MRD) provides a general mechanism for the discovery of multicast routers on an IP network. For IPv4, the mechanism is based on IGMP. For IPv6 the mechanism is based on MLD. Multicast router discovery is defined by RFC 4286. Computer networking Internet Protocol
https://en.wikipedia.org/wiki/Nigel%20Scrutton
Nigel Shaun Scrutton (born 2 April 1964) is a British biochemist and biotechnology innovator known for his work on enzyme catalysis, biophysics and synthetic biology. He is Director of the UK Future Biomanufacturing Research Hub, Director of the Fine and Speciality Chemicals Synthetic Biology Research Centre (SYNBIOCHEM), and Co-founder, Director and Chief Scientific Officer of the 'fuels-from-biology' company C3 Biotechnologies Ltd. He is Professor of Enzymology and Biophysical Chemistry in the Department of Chemistry at the University of Manchester. He is former Director of the Manchester Institute of Biotechnology (MIB) (2010 to 2020). Early life and education Scrutton was born in Batley, West Riding of Yorkshire and was brought up in Cleckheaton where he went to Whitcliffe Mount School. Scrutton graduated from King's College London with a first class Bachelor of Science degree in Biochemistry in 1985. He was a Benefactors' Scholar at St John's College, Cambridge where he completed his doctoral research (PhD) in 1988 supervised by Richard Perham. He was a Research Fellow of St John's College, Cambridge (1989–92) and a Fellow / Director of Studies at Churchill College, Cambridge (1992–95). He was awarded a Doctor of Science (ScD) degree in 2003 by the University of Cambridge. Career and research Following his PhD, Scrutton was appointed as Lecturer (1995), then Reader (1997) and Professor (1999) at the University of Leicester before being appointed Professor at the University of Manchester in 2005. He has held successive research fellowships over 29 years from the Royal Commission for the Exhibition of 1851 (1851 Research Fellowship), St John's College, Cambridge, the Royal Society (Royal Society University Research Fellow and Royal Society Wolfson Research Merit Award), the Lister Institute of Preventive Medicine, the Biotechnology and Biological Sciences Research Council (BBSRC) and the Engineering and Physical Sciences Research Council (EPSRC). He has been V
https://en.wikipedia.org/wiki/Multidimensional%20signal%20processing
In signal processing, multidimensional signal processing covers all signal processing done using multidimensional signals and systems. While multidimensional signal processing is a subset of signal processing, it is unique in the sense that it deals specifically with data that can only be adequately detailed using more than one dimension. In m-D digital signal processing, useful data is sampled in more than one dimension. Examples of this are image processing and multi-sensor radar detection. Both of these examples use multiple sensors to sample signals and form images based on the manipulation of these multiple signals. Processing in multi-dimension (m-D) requires more complex algorithms, compared to the 1-D case, to handle calculations such as the fast Fourier transform due to more degrees of freedom. In some cases, m-D signals and systems can be simplified into single dimension signal processing methods, if the considered systems are separable. Typically, multidimensional signal processing is directly associated with digital signal processing because its complexity warrants the use of computer modelling and computation. A multidimensional signal is similar to a single dimensional signal as far as manipulations that can be performed, such as sampling, Fourier analysis, and filtering. The actual computations of these manipulations grow with the number of dimensions. Sampling Multidimensional sampling requires different analysis than typical 1-D sampling. Single dimension sampling is executed by selecting points along a continuous line and storing the values of this data stream. In the case of multidimensional sampling, the data is selected utilizing a lattice, which is a "pattern" based on the sampling vectors of the m-D data set. These vectors can be single dimensional or multidimensional depending on the data and the application. Multidimensional sampling is similar to classical sampling as it must adhere to the Nyquist–Shannon sampling theorem. It is affect
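As a concrete (and invented) example of these ideas, the numpy sketch below samples a 2-D plane wave on a rectangular lattice and applies a 2-D FFT; the sampling rate along each axis is chosen above twice the highest spatial frequency present, in line with the Nyquist–Shannon condition mentioned above.

```python
import numpy as np

# Hedged illustration: sample a 2-D signal on a rectangular lattice and take
# its 2-D discrete Fourier transform. The signal and sampling rates are
# invented for the example.

fx, fy = 3.0, 5.0            # spatial frequencies (cycles per unit)
fs = 16.0                    # samples per unit along each axis (> 2 * max(fx, fy))
n = 64

x = np.arange(n) / fs
y = np.arange(n) / fs
X, Y = np.meshgrid(x, y, indexing="ij")
signal = np.cos(2 * np.pi * (fx * X + fy * Y))   # a plane wave sampled on the lattice

spectrum = np.fft.fftshift(np.fft.fft2(signal))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
peak = np.unravel_index(np.argmax(np.abs(spectrum)), spectrum.shape)
print("dominant spatial frequency:", freqs[peak[0]], freqs[peak[1]])   # ≈ ±(3.0, 5.0)
```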
https://en.wikipedia.org/wiki/List%20of%20contributors%20to%20general%20relativity
This is a dynamic list of persons who have made major contributions to the (mainstream) development of general relativity, as acknowledged by standard texts on the subject. Some related lists are mentioned at the bottom of the page. A Peter C. Aichelburg (Aichelburg–Sexl ultraboost, generalized symmetries), Miguel Alcubierre (numerical relativity, Alcubierre drives), Richard L. Arnowitt (ADM formalism), Abhay Ashtekar (Ashtekar variables, dynamical horizons) B Robert M L Baker, Jr. (high-frequency gravitational waves), James M. Bardeen (Bardeen vacuum, black hole mechanics, gauge-invariant linear perturbations of Friedmann-Lemaître cosmologies), Barry Barish (LIGO builder, gravitational-waves observation), Robert Bartnik (existence of ADM mass for asymptotically flat vacuums, quasilocal mass), Jacob Bekenstein (black hole entropy), Vladimir A. Belinsky (BKL conjecture, inverse scattering transform solution generating methods), Peter G. Bergmann (constrained Hamiltonian dynamics), Bruno Bertotti (Bertotti–Robinson electrovacuum), Jiří Bičák (exact solutions of Einstein field equations), Heinz Billing (prototype of laser interferometric gravitational-wave detector), George David Birkhoff (Birkhoff's theorem), Hermann Bondi (gravitational radiation, Bondi radiation chart, Bondi mass–energy–momentum, LTB dust, maverick models), William B. Bonnor (Bonnor beam solution), Robert H. Boyer (Boyer–Lindquist coordinates), Vladimir Braginsky (gravitational-wave detector, quantum nondemolition (QND) measurement) Carl H. Brans (Brans–Dicke theory), Hubert Bray (Riemannian Penrose inequality), Hans Adolph Buchdahl (Buchdahl fluid, Buchdahl theorem), Claudio Bunster (BTZ black hole, Surface terms in Hamiltonian formulation), William L. Burke (Burke potential, textbook) C Bernard Carr (self-similarity hypothesis, primordial black holes), Brandon Carter (no-hair theorem, Carter constant, black-hole mechanics, variational principle for Ernst vacuums),
https://en.wikipedia.org/wiki/ScreenOS
ScreenOS is a real-time embedded operating system for the NetScreen range of hardware firewall devices from Juniper Networks. Features Beside transport level security ScreenOS also integrates these flow management applications: IP gateway VPN management – ICSA-certified IPSec IP packet inspection (low level) for protection against TCP/IP attacks Virtualization for network segmentation Possible NSA backdoor and 2015 "Unauthorized Code" incident In December 2015, Juniper Networks announced that it had found unauthorized code in ScreenOS that had been there since August 2012. The two backdoors it created would allow sophisticated hackers to control the firewall of un-patched Juniper Netscreen products and decrypt network traffic. At least one of the backdoors appeared likely to have been the effort of a governmental interest. There was speculation in the security field about whether it was the NSA. Many in the security industry praised Juniper for being transparent about the breach. WIRED speculated that the lack of details that were disclosed and the intentional use of a random number generator with known security flaws could suggest that it was planted intentionally. NSA and GCHQ A 2011 leaked NSA document says that GCHQ had current exploit capability against the following ScreenOS devices: NS5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. The exploit capabilities seem consistent with the program codenamed FEEDTROUGH. Versions
https://en.wikipedia.org/wiki/Carleman%20linearization
In mathematics, Carleman linearization (or Carleman embedding) is a technique to transform a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system. It was introduced by the Swedish mathematician Torsten Carleman in 1932. Carleman linearization is related to the composition operator and has been widely used in the study of dynamical systems. It has also been used in many applied fields, such as in control theory and in quantum computing. Procedure Consider the following autonomous nonlinear system: ẋ = f(x) + Σⱼ gⱼ(x) dⱼ(t), where x ∈ ℝⁿ denotes the system state vector. Also, f and the gⱼ's are known analytic vector functions, and dⱼ is the j-th element of an unknown disturbance to the system. At the desired nominal point x₀, the nonlinear functions in the above system can be approximated by Taylor expansion in terms of the Kronecker powers x^[k] = x ⊗ x ⊗ ⋯ ⊗ x (k factors), where the coefficient of x^[k] involves the k-th partial derivative of the corresponding function with respect to x at x₀, and ⊗ denotes the Kronecker product. Without loss of generality, we assume that x₀ is at the origin. Applying the Taylor approximation to the system, we obtain a system that is polynomial in the Kronecker powers of the state. Consequently, the following linear system for higher orders of the original states is obtained: the time derivative of each Kronecker power x^[k] is expressed linearly in terms of x^[k] and higher powers. Employing the Kronecker product operator, the approximated system is presented as a single linear state-space system in the stacked vector of Kronecker powers, whose coefficient matrices are defined in (Hashemian and Armaou 2015). See also Carleman matrix Composition operator
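A worked sketch may help make the construction concrete. For the scalar system dx/dt = −x + x², introducing the monomials y_k = x^k gives dy_k/dt = −k·y_k + k·y_{k+1}, an infinite linear system; truncating at order N yields a finite linear approximation. The Python code below (the example system, truncation order, and use of SciPy's solve_ivp are our own choices for illustration, not part of the formulation cited above) compares the truncated linear model against the exact nonlinear solution.

```python
import numpy as np
from scipy.integrate import solve_ivp   # assumes SciPy is available

# Carleman linearization sketch for dx/dt = -x + x**2.
# With y_k = x**k: dy_k/dt = k*x**(k-1)*dx/dt = -k*y_k + k*y_{k+1}.
# Truncating at order N (dropping y_{N+1}) gives a finite linear ODE dy/dt = A y.

def carleman_matrix(N: int) -> np.ndarray:
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = -k          # -k * y_k
        if k < N:
            A[k - 1, k] = k           # +k * y_{k+1}, dropped at the truncation order
    return A

N, x0, T = 8, 0.2, 2.0
A = carleman_matrix(N)
y0 = np.array([x0 ** k for k in range(1, N + 1)])

linear = solve_ivp(lambda t, y: A @ y, (0, T), y0)
exact = solve_ivp(lambda t, x: -x + x ** 2, (0, T), [x0])
print(linear.y[0, -1], exact.y[0, -1])   # first Carleman state closely tracks x(T)
```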
https://en.wikipedia.org/wiki/Water%20activity
Water activity (aw) is the partial vapor pressure of water in a solution divided by the standard state partial vapor pressure of water. In the field of food science, the standard state is most often defined as pure water at the same temperature. Using this particular definition, pure distilled water has a water activity of exactly one. Water activity is the thermodynamic activity of water as solvent and the relative humidity of the surrounding air after equilibration. As temperature increases, aw typically increases, except in some products with crystalline salt or sugar. Water migrates from areas of high aw to areas of low aw. For example, if honey (aw ≈ 0.6) is exposed to humid air (aw ≈ 0.7), the honey absorbs water from the air. If salami (aw ≈ 0.87) is exposed to dry air (aw ≈ 0.5), the salami dries out, which could preserve it or spoil it. Lower aw substances tend to support fewer microorganisms since these get desiccated by the water migration. Formula The definition of aw is aw = p/p₀, where p is the partial water vapor pressure in equilibrium with the solution, and p₀ is the (partial) vapor pressure of pure water at the same temperature. An alternate definition can be aw = γw·xw, where γw is the activity coefficient of water and xw is the mole fraction of water in the aqueous fraction. Relationship to relative humidity: The relative humidity (RH) of air in equilibrium with a sample is also called the Equilibrium Relative Humidity (ERH) and is usually given as a percentage. It is equal to water activity according to ERH (%) = aw × 100. The estimated mold-free shelf life (MFSL) in days at 21 °C depends on water activity according to Uses Water activity is an important characteristic for food product design and food safety. Food product design Food designers use water activity to formulate shelf-stable food. If a product is kept below a certain water activity, then mold growth is inhibited. This results in a longer shelf life. Water activity values can also help limit moisture migration within a food
https://en.wikipedia.org/wiki/A%20New%20Kind%20of%20Science
A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science. Contents Computation and its implications The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. Simple programs The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of a computational system, one very quickly finds instances of great complexity among its simplest cases (after a time series of multiple iterative loops, applying the same simple set of rules on itself, similar to a self-reinforcing cycle using a set of rules). This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in 1 and 2 dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; reversal-addition. For a program to qualify as simple, there are several requirements: Its operation can be completely explained by a simple graphical illustration. It can be completely explained in a few sentences of human language. It can be implemented in a computer language using just a few lines of code. The number of its possible variations is small enough so that all of them can be computed. Generally, simple programs tend to have a very simple abstract framework.
https://en.wikipedia.org/wiki/Wavefront%20coding
In optics and signal processing, wavefront coding refers to the use of a phase modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera. Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field. Encoding The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil so that the same modulation is introduced for all field angles across the field-of-view. This modulation corresponds to a change in the complex argument of the pupil function of such an imaging device, and it can be engineered with different goals in mind: e.g. extending the depth of focus. Linear phase mask Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information. Cubic phase mask Wavefront coding with cubic phase masks works to blur the image uniformly using a cubic-shaped waveplate so that the intermediate image, the optical transfer function, is out of focus by a constant amount. Digital image processing then removes the blur and introduces noise depending upon the physical characteristics of the processor. Dynamic range is sacrificed to extend the depth of field depending upon the type of filter used. It can also correct optical aberrations. The mask was developed by using the ambiguity function and the stationary phase method. History The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. The University filed a patent on the invention. Cathey, Dowski and Merc Mercure founded a company to commercialize the method called CDM-Optics, and licensed the invention from the University. The company was acquired in
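To make the cubic-mask idea concrete, here is a hedged numpy sketch that applies a cubic phase term to a square pupil and computes the resulting point-spread function as the squared magnitude of the pupil's Fourier transform. The grid, aperture shape, and cubic strength alpha are arbitrary choices for illustration, and the sketch omits the defocus sweep, noise modelling, and deconvolution step a real design would include.

```python
import numpy as np

# Hedged sketch: a cubic phase mask applied to a square pupil, and the
# resulting point-spread function via a Fourier transform of the pupil.
# Pupil size, grid, and the cubic strength alpha are arbitrary here.

n = 256
u = np.linspace(-2, 2, n)                        # pupil-plane coordinates (padded grid)
U, V = np.meshgrid(u, u)
aperture = (np.abs(U) <= 1) & (np.abs(V) <= 1)   # normalized square pupil

alpha = 20.0                                     # cubic phase strength (radians)
pupil = aperture * np.exp(1j * alpha * (U ** 3 + V ** 3))

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.sum()
# The PSF is spread out but varies little with defocus; deconvolving the
# captured image with an estimate of this PSF then restores sharpness over
# an extended depth of field.
print(psf.shape, psf.max())
```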
https://en.wikipedia.org/wiki/Taxonomic%20rank
In biology, taxonomic rank is the relative level of a group of organisms (a taxon) in an ancestral or hereditary hierarchy. A common system of biological classification (taxonomy) consists of species, genus, family, order, class, phylum, kingdom, and domain. While older approaches to taxonomic classification were phenomenological, forming groups on the basis of similarities in appearance, organic structure and behaviour, methods based on genetic analysis have opened the road to cladistics. A given rank subsumes less general categories under it, that is, more specific descriptions of life forms. Above it, each rank is classified within more general categories of organisms and groups of organisms related to each other through inheritance of traits or features from common ancestors. The rank of any species and the description of its genus are basic, which means that to identify a particular organism, it is usually not necessary to specify ranks other than these first two. Consider a particular species, the red fox, Vulpes vulpes: the specific name or specific epithet vulpes (small v) identifies a particular species in the genus Vulpes (capital V) which comprises all the "true" foxes. Their close relatives are all in the family Canidae, which includes dogs, wolves, jackals, and all foxes; the next higher major rank, the order Carnivora, includes caniforms (bears, seals, weasels, skunks, raccoons and all those mentioned above), and feliforms (cats, civets, hyenas, mongooses). Carnivorans are one group of the hairy, warm-blooded, nursing members of the class Mammalia, which are classified among animals with backbones in the phylum Chordata, and with them among all animals in the kingdom Animalia. Finally, at the highest rank all of these are grouped together with all other organisms possessing cell nuclei in the domain Eukarya. The International Code of Zoological Nomenclature defines rank as: "The level, for nomenclatural purposes, of a taxon in a taxonomic hierarchy (
https://en.wikipedia.org/wiki/Virtual%20instrumentation
Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments. Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e. g. analog-to-digital converter can act as a hardware complement of a virtual oscilloscope, a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular. Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems. The newly updated
https://en.wikipedia.org/wiki/Superorganism
A superorganism or supraorganism is a group of synergetically interacting organisms of the same species. A community of synergetically interacting organisms of different species is called a holobiont. Concept The term superorganism is used most often to describe a social unit of eusocial animals, where division of labour is highly specialised and where individuals are not able to survive by themselves for extended periods. Ants are the best-known example of such a superorganism. A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective", phenomena being any activity "the hive wants" such as ants collecting food and avoiding predators, or bees choosing a new nest site. In challenging environments, micro organisms collaborate and evolve together to process unlikely sources of nutrients such as methane. This process called syntrophy ("eating together") might be linked to the evolution of eukaryote cells and involved in the emergence or maintenance of life forms in challenging environments on Earth and possibly other planets. Superorganisms tend to exhibit homeostasis, power law scaling, persistent disequilibrium and emergent behaviours. The term was coined in 1789 by James Hutton, the "father of geology", to refer to Earth in the context of geophysiology. The Gaia hypothesis of James Lovelock, and Lynn Margulis as well as the work of Hutton, Vladimir Vernadsky and Guy Murchie, have suggested that the biosphere itself can be considered a superorganism, although this has been disputed. This view relates to systems theory and the dynamics of a complex system. The concept of a superorganism raises the question of what is to be considered an individual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: "planets, unlike animals,
https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Probability theory Random variables are usually written in upper case roman letters: X, Y, etc. Particular realizations of a random variable are written in corresponding lower case letters. For example, x could be a sample corresponding to the random variable X. A cumulative probability is formally written P(X ≤ x) to differentiate the random variable from its realization. The probability is sometimes written ℙ to distinguish it from other functions and measure P so as to avoid having to define "P is a probability", and ℙ(X ∈ A) is short for P({ω ∈ Ω : X(ω) ∈ A}), where Ω is the event space and X is a random variable. Pr(A) notation is used alternatively. P(A ∩ B) or P(A, B) indicates the probability that events A and B both occur. The joint probability distribution of random variables X and Y is denoted as P(X, Y), while joint probability mass function or probability density function as f_{X,Y}(x, y) and joint cumulative distribution function as F_{X,Y}(x, y). P(A ∪ B) indicates the probability of either event A or event B occurring ("or" in this case means one or the other or both). σ-algebras are usually written with uppercase calligraphic letters (e.g. ℱ for the set of sets on which we define the probability P). Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. f(x) or f_X(x). Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. F(x) or F_X(x). Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative: F̄(x) = 1 − F(x), or denoted as S(x). In particular, the pdf of the standard normal distribution is denoted by φ, and its cdf by Φ. Some common operators: E[X]: expected value of X; var(X): variance of X; cov(X, Y): covariance of
https://en.wikipedia.org/wiki/Code%20of%20the%20Quipu
Code of the Quipu is a book on the Inca system of recording numbers and other information by means of a quipu, a system of knotted strings. It was written by mathematician Marcia Ascher and anthropologist Robert Ascher, and published as Code of the Quipu: A Study in Media, Mathematics, and Culture by the University of Michigan Press in 1981. Dover Books republished it with corrections in 1997 as Mathematics of the Incas: Code of the Quipu. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries. Topics The book describes (necessarily by inference, as there is no written record beyond the quipu themselves) the uses of the quipu, for instance in accounting and taxation. Although 400 quipu are known to survive, the book's study is based on a selection of 191 of them, described in a companion databook. It analyzes the mathematical principles behind the use of the quipu, including a decimal form of positional notation, the concept of zero, rational numbers, and arithmetic, and the way the spatial relations between the strings of a quipu recorded hierarchical and categorical information. It argues that beyond its use in recording numbers, the quipu acted as a method for planning for future events, and as a writing system for the Inca, and that it provides a tangible representation of "insistence", the thematic concerns in Inca culture for symmetry and spatial and hierarchical connections. The initial chapters of the book provide an introduction to Inca society and the physical organization of a quipu (involving the colors, size, direction, and hierarchy of its strings), and discussions of repeated themes in Inca society and of the place of the quipu and its makers in that society. Later chapters discuss the mathematical structure of the quipu and of the information it stores, with reference to similarly-structured data in modern society and exercises that ask students to constr
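The decimal positional principle the book describes, in which clusters of knots encode each base-10 digit and an empty position stands for zero, can be sketched in a few lines of code. The snippet below is purely illustrative (the function name and the simplified encoding are assumptions, not a reconstruction of any specific quipu).

```python
def knot_groups(n: int) -> list[int]:
    """Illustrative sketch: split a number into base-10 digits, most
    significant first, the way knot clusters are grouped along a quipu
    cord; a 0 digit corresponds to an empty position on the cord."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 10)
        n //= 10
    return digits[::-1]

# Example: 205 would appear as a cluster of 2 knots, an empty space (zero),
# then a cluster of 5 knots reading down the cord.
print(knot_groups(205))   # [2, 0, 5]
```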
https://en.wikipedia.org/wiki/Postglacial%20vegetation
Postglacial vegetation refers to plants that colonize the newly exposed substrate after a glacial retreat. The term "postglacial" typically refers to processes and events that occur after the departure of glacial ice or glacial climates. Climate Influence Climate change is the main force behind changes in species distribution and abundance. Repeated changes in climate throughout the Quaternary Period are thought to have had a significant impact on the current vegetation species diversity present today. Functional and phylogenetic diversity are considered to be closely related to changing climatic conditions; this indicates that trait differences are extremely important in long-term responses to climate change. During the transition from the last glaciation of the Pleistocene to the Holocene period, climate warming resulted in the expansion of taller plants and larger-seeded plants, which resulted in lower proportions of vegetation regeneration. Hence, low temperatures can be strong environmental filters that prevent tall and large-seeded plants from establishing in postglacial environments. Throughout Europe, vegetation dynamics within the first half of the Holocene appear to have been influenced mainly by climate and the reorganization of atmospheric circulation associated with the disappearance of the North American ice sheet. This is evident in the rapid increase of forestation and changing biomes during the postglacial period between 11,500 and 8,000 years before the present. The vegetation development period of post-glacial landforms on Ellesmere Island, Northern Canada, is assumed to have been at least ca. 20,000 years in duration. This slow progression is mostly due to climatic restrictions such as an estimated annual rainfall amount of only 64 mm and a mean annual temperature of -19.7 degrees Celsius. The length of time of vegetation development observed on Ellesmere Island is evidence that post-glacial vegetation development is much more restricted in the Ar
https://en.wikipedia.org/wiki/Music%20and%20mathematics
Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties". History Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers". From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection. Time, rhythm, and meter Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics. The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3). Musical form Musical
https://en.wikipedia.org/wiki/Mills%27%20constant
In number theory, Mills' constant is defined as the smallest positive real number A such that the floor of the double exponential function, ⌊A^(3^n)⌋, is a prime number for all positive natural numbers n. This constant is named after William Harold Mills who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unproven, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... . Mills primes The primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins 2, 11, 1361, 2521008887, … . If a_i denotes the i-th prime in this sequence, then a_i can be calculated as the smallest prime number larger than a_{i−1}^3. In order to ensure that rounding A^(3^n), for n = 1, 2, 3, …, produces this sequence of primes, it must be the case that a_{i+1} < (a_i + 1)^3. The Hoheisel–Ingham results guarantee that there exists a prime between any two sufficiently large cube numbers, which is sufficient to prove this inequality if we start from a sufficiently large first prime a_1. The Riemann hypothesis implies that there exists a prime between any two consecutive cubes, allowing the sufficiently large condition to be removed, and allowing the sequence of Mills primes to begin at a_1 = 2. For all a > e^(e^33.3), there is at least one prime between a^3 and (a + 1)^3. This upper bound is much too large to be practical, as it is infeasible to check every number below that figure. However, the value of Mills' constant can be verified by calculating the first prime in the sequence that is greater than that figure. As of April 2017, the 11th number in the sequence is the largest one that has been proved prime; it has 20562 digits. The largest known Mills probable prime (under the Riemann hypothesis) is 555,154 digits long. Numerical calculation By calculating the sequence of Mills primes, one can approximate Mills' constant as A ≈ a_n^(1/3^n). Caldwell and Cheng used this method to compute 6850 base 10 digits of Mills
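A minimal Python sketch of the two equivalent descriptions above, using the quoted approximation of A and SymPy's nextprime; only the first few terms can be trusted at this precision, and the script is illustrative rather than a verification of the constant.

```python
# Sketch: generate the first few Mills primes two ways and check they agree.
# A is the truncated approximation quoted above, so only small n are reliable.
from decimal import Decimal, getcontext
from sympy import isprime, nextprime

getcontext().prec = 50
A = Decimal("1.3063778838630806904686144926")

def mills_prime_via_constant(n: int) -> int:
    """floor(A ** (3 ** n)) -- valid only while the precision of A suffices."""
    return int(A ** (3 ** n))

# Recurrence form: a_1 = 2, a_i = smallest prime greater than a_{i-1} cubed.
a = 2
for n in range(1, 5):
    assert isprime(a)
    print(n, a, mills_prime_via_constant(n) == a)
    a = nextprime(a ** 3)
```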
https://en.wikipedia.org/wiki/List%20of%204000-series%20integrated%20circuits
The following is a list of CMOS 4000-series digital logic integrated circuits. In 1968, the original 4000-series was introduced by RCA. Although more recent parts are considerably faster, the 4000 devices operate over a wide power supply range (3V to 18V recommended range for "B" series) and are well suited to unregulated battery powered applications and interfacing with sensitive analogue electronics, where the slower operation may be an EMC advantage. The earlier datasheets included the internal schematics of the gate architectures and a number of novel designs are able to 'mis-use' this additional information to provide semi-analog functions for timing skew and linear signal amplification. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 4000 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers, and not all devices are available from all sources or in all package sizes. Overview Non-exhaustive list of manufacturers which make or have made these kind of ICs. Current manufacturers of these ICs: Nexperia (spinoff from NXP) ON Semiconductor (acquired Motorola & Fairchild Semiconductor) Texas Instruments (acquired National Semiconductor) Former manufacturers of these ICs: Hitachi NXP (acquired Philips Semiconductors) RCA (defunct; first introduced this 4000-series family in 1968) Renesas Electronics (acquired Intersil) ST Microelectronics Toshiba Semiconductor VEB Kombinat Mikroelektronik (defunct; was active in the 1980s) Tesla Piešťany, s.p. (defunct; was active in the 1980s and 1990s) various manufacturers in the former Soviet Union (e.g. Angstrem, Mikron Group, Exiton, Splav, NZPP in Russia; Mezon in Moldavia; Integral in Byelorussia; Oktyabr in Ukraine; Billur in Azerbaijan) Logic gates Since there are numerous 4000-series parts, this section groups related combinational logic pa
https://en.wikipedia.org/wiki/Examples%20of%20Markov%20chains
This article contains examples of Markov chains and Markov processes in action. All examples have a countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space. Discrete-time Board games played with dice A game of snakes and ladders or any other game whose moves are determined entirely by dice is a Markov chain, indeed, an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state, and the next roll of the dice. It doesn't depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states. Random walk Markov chains A center-biased random walk Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities P(move left) = 1/2 + x / (2(c + |x|)) and P(move right) = 1/2 − x / (2(c + |x|)) (where c is a constant greater than 0). For example, if the constant, c, equals 1, the probabilities of a move to the left at positions x = −2, −1, 0, 1, 2 are given by 1/6, 1/4, 1/2, 3/4, 5/6 respectively. The random walk has a centering effect that weakens as c increases. Since the probabilities depend only on the current position (value of x) and not on any prior positions, this biased random walk satisfies the definition of a Markov chain. Gambling Suppose that you start with $10, and you wager $1 on a fair coin toss repeatedly, either indefinitely or until you lose all of your money. If X_n represents the number of dollars you have after n tosses, with X_0 = 10, then the sequence {X_n : n ≥ 0} is a Markov p
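A short simulation makes the centering effect visible. The sketch below is illustrative only; it implements the transition probabilities quoted above and estimates how far the walker strays for different values of c.

```python
import random

def step(x: int, c: float) -> int:
    """One move of the center-biased walk: left with probability
    1/2 + x / (2*(c + |x|)), right otherwise -- depends only on x."""
    p_left = 0.5 + x / (2 * (c + abs(x)))
    return x - 1 if random.random() < p_left else x + 1

def mean_abs_position(c: float, steps: int = 10_000, runs: int = 200) -> float:
    total = 0.0
    for _ in range(runs):
        x = 0
        for _ in range(steps):
            x = step(x, c)
        total += abs(x)
    return total / runs

random.seed(0)
for c in (1, 10, 100):
    # Larger c weakens the pull toward the origin, so |x| drifts further.
    print(f"c = {c:>3}: mean |x| after 10000 steps ~ {mean_abs_position(c):.1f}")
```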
https://en.wikipedia.org/wiki/Multidimensional%20empirical%20mode%20decomposition
In signal processing, multidimensional empirical mode decomposition (multidimensional EMD) is an extension of the one-dimensional (1-D) EMD algorithm to a signal encompassing multiple dimensions. The Hilbert–Huang empirical mode decomposition (EMD) process decomposes a signal into intrinsic mode functions combined with the Hilbert spectral analysis, known as the Hilbert–Huang transform (HHT). The multidimensional EMD extends the 1-D EMD algorithm into multiple-dimensional signals. This decomposition can be applied to image processing, audio signal processing, and various other multidimensional signals. Motivation Multidimensional empirical mode decomposition is a popular method because of its applications in many fields, such as texture analysis, financial applications, image processing, ocean engineering, seismic research, etc. Several methods of empirical mode decomposition have been used to analyze and characterize multidimensional signals. Introduction to empirical mode decomposition (EMD) The empirical mode decomposition (EMD) method can extract global structure and deal with fractal-like signals. The EMD method was developed so that data can be examined in an adaptive time–frequency–amplitude space for nonlinear and non-stationary signals. The EMD method decomposes the input signal into several intrinsic mode functions (IMF) and a residue. The decomposition can be written as X(t) = Σ_{m=1}^{M} IMF_m(t) + Res_M(t), where X(t) is the multi-component signal, IMF_m(t) is the m-th intrinsic mode function, and Res_M(t) represents the residue corresponding to the M intrinsic modes. Ensemble empirical mode decomposition The ensemble mean is an approach to improving the accuracy of measurements. Data is collected by separate observations, each of which contains different noise over an ensemble of universes. To generalize this ensemble idea, noise is introduced to the single data set, X(t), as if separate observations were indeed being made as an analogue to a physical experiment that could be repeated many times. The added w
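The 1-D sifting idea behind this decomposition can be sketched in a few dozen lines. The code below is a simplified illustration under stated assumptions (cubic-spline envelopes, a fixed number of sifting iterations, boundary effects ignored); it is not the reference EMD algorithm, but it shows the defining identity that the signal equals the sum of its IMFs plus the residue.

```python
# Minimal 1-D EMD sketch (illustrative, not the reference algorithm).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_sift=10):
    """Repeatedly subtract the mean of spline envelopes through the local
    maxima and minima, producing one candidate intrinsic mode function."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            return None                      # too few extrema: treat as residue
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0        # remove the local mean
    return h

def emd(x, t, max_imfs=5):
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(residue, t)
        if imf is None:
            break
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residue = emd(x, t)
# The defining identity: the signal equals the sum of its IMFs plus the residue.
print(len(imfs), np.allclose(x, sum(imfs) + residue))
```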
https://en.wikipedia.org/wiki/Reference%20designator
A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board. The reference designator usually consists of one or two letters followed by a number, e.g. R13, C1002. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B. IEEE 315 contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays. History IEEE 200-1975 or "Standard Reference Designations for Electrical and Electronics Parts and Equipments" is a standard that was used to define referencing naming systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s, but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975. This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry. To replace IEEE 200–1975, ASME, a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard, along with IEEE 315–1975, provides the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures. Definition ASME Y14.44-2008 and IEEE 315-1975 define how to reference and annotate components of electronic devices. They break down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcations are called assemblies and always have the Class Letter "A" as a prefix followed by a sequential number starting with 1. Any number of sub-assemblies may be defined until finally reaching the co
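The letter–number–letter structure described above is easy to validate mechanically. The following Python sketch is a hypothetical helper, not part of any standard; it splits a designator such as R17A into its class-letter prefix, sequence number, and optional group suffix.

```python
import re

# Hypothetical parser for the letter(s) + number + optional suffix pattern
# described above, e.g. "R13", "C1002", "R17A".
DESIGNATOR = re.compile(r"^(?P<prefix>[A-Z]{1,2})(?P<number>\d+)(?P<suffix>[A-Z]?)$")

def parse_designator(text: str) -> dict:
    m = DESIGNATOR.match(text.strip().upper())
    if not m:
        raise ValueError(f"not a reference designator: {text!r}")
    return m.groupdict()

for ref in ("R13", "C1002", "R17A", "K9"):
    print(ref, "->", parse_designator(ref))
```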
https://en.wikipedia.org/wiki/IPSANET
IPSANET was a packet switching network written by I. P. Sharp Associates (IPSA). Operation began in May 1976. It initially used the IBM 3705 Communications Controller and Computer Automation LSI-2 computers as nodes. An Intel 80286-based node, called the Beta node, was added in 1987. The original purpose was to connect low-speed dumb terminals to a central time sharing host in Toronto. It was soon modified to allow a terminal to connect to an alternate host running the SHARP APL software under license. Terminals were initially either 2741-type machines based on the 14.8 characters/s IBM Selectric typewriter or 30 characters/s ASCII machines. Link speed was limited to 9600 bit/s until about 1984. Other services including 2780/3780 Bisync support, remote printing, X.25 gateway and SDLC pipe lines were added in the 1978 to 1984 era. There was no general purpose data transport facility until the introduction of Network Shared Variable Processor (NSVP) in 1984. This allowed APL programs running on different hosts to communicate via Shared Variables. The Beta node improved performance and provided new services not tied to APL. An X.25 interface was the most important of these. It allowed connection to a host which was not running SHARP APL. IPSANET allowed for the development of an early yet advanced e-mail service, 666 BOX, which also became a major product for some time, originally hosted on IPSA's system, and later sold to end users to run on their own machines. NSVP allowed these remote e-mail systems to exchange traffic. The network reached its maximum size of about 300 nodes before it was shut down in 1993. External links IPSANET Archives Computer networking Packets (information technology)
https://en.wikipedia.org/wiki/Zero-crossing%20rate
The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive. Its value has been widely used in both speech recognition and music information retrieval, being a key feature to classify percussive sounds. ZCR is defined formally as zcr = (1 / (T − 1)) · Σ_{t=1}^{T−1} 1{s_t · s_{t−1} < 0}, where s is a signal of length T and 1{A} is an indicator function equal to 1 if its argument A is true and 0 otherwise. In some cases only the "positive-going" or "negative-going" crossings are counted, rather than all the crossings, since between a pair of adjacent positive zero-crossings there must be a single negative zero-crossing. For monophonic tonal signals, the zero-crossing rate can be used as a primitive pitch detection algorithm. Zero crossing rates are also used for Voice activity detection (VAD), which determines whether human speech is present in an audio segment or not. See also Zero crossing Digital signal processing
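A direct NumPy transcription of this definition (illustrative; frame-based audio features usually apply it per analysis window rather than over a whole recording):

```python
import numpy as np

def zero_crossing_rate(s: np.ndarray) -> float:
    """Fraction of adjacent sample pairs whose product is negative,
    i.e. the formal definition above."""
    s = np.asarray(s, dtype=float)
    return float(np.mean(s[1:] * s[:-1] < 0))

# A 100 Hz sine sampled at 8 kHz crosses zero 200 times per second,
# so the per-sample rate should be close to 200 / 8000 = 0.025.
t = np.arange(8000) / 8000.0
print(zero_crossing_rate(np.sin(2 * np.pi * 100 * t)))
```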
https://en.wikipedia.org/wiki/Hachimoji%20DNA
Hachimoji DNA (from Japanese hachimoji, "eight letters") is a synthetic nucleic acid analog that uses four synthetic nucleotides in addition to the four present in the natural nucleic acids, DNA and RNA. This leads to four allowed base pairs: two unnatural base pairs formed by the synthetic nucleobases in addition to the two normal pairs. Hachimoji bases have been demonstrated in both DNA and RNA analogs, using deoxyribose and ribose respectively as the backbone sugar. Benefits of such a nucleic acid system may include an enhanced ability to store data, as well as insights into what may be possible in the search for extraterrestrial life. The hachimoji DNA system produced one type of catalytic RNA (ribozyme or aptamer) in vitro. Description Natural DNA is a molecule carrying the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. DNA is a polynucleotide as it is composed of simpler monomeric units called nucleotides; when double-stranded, the two chains coil around each other to form a double helix. In natural DNA, each nucleotide is composed of one of four nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound to each other with hydrogen bonds, according to base pairing rules (A with T and C with G), to make double-stranded DNA. Hachimoji DNA is similar to natural DNA but differs in the number, and type, of nucleobases. Unn
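The expanded base-pairing rules can be sketched as a simple lookup table. In the illustration below, the natural pairs A–T and C–G and the two synthetic pairs S–B and Z–P follow the published hachimoji pairing scheme; the helper function itself is just an example, not laboratory software.

```python
# Illustrative sketch of hachimoji base pairing: the natural pairs A-T and
# C-G plus the two synthetic pairs S-B and Z-P.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C",
         "S": "B", "B": "S", "Z": "P", "P": "Z"}

def complement(strand: str) -> str:
    """Return the base-paired partner strand (not reversed)."""
    return "".join(PAIRS[base] for base in strand.upper())

print(complement("ACGTSBZP"))   # TGCABSPZ
```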
https://en.wikipedia.org/wiki/Addition
Addition (usually signified by the plus symbol +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. The example in the adjacent image shows two columns of apples, one with three apples and the other with two, totaling five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5 (that is, "3 plus 2 is equal to 5"). Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces and subgroups. Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see Successor function). Addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks to do. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. Notation and terminology Addition is written using the plus sign "+" between the terms;
https://en.wikipedia.org/wiki/Glugging
Glugging (also referred to as "the glug-glug process") is the physical phenomenon which occurs when a liquid is poured rapidly from a vessel with a narrow opening, such as a bottle. It is a facet of fluid dynamics. As liquid is poured from a bottle, the air pressure in the bottle is lowered, and air at higher pressure from outside the bottle is forced into the bottle, in the form of a bubble, impeding the flow of liquid. Once the bubble enters, more liquid escapes, and the process is repeated. The reciprocal action of glugging creates a rhythmic sound. The English word "glug" is onomatopoeic, describing this sound. Onomatopoeias in other languages include (German). Academic papers have been written about the physics of glugging, and about the impact of glugging sounds on consumers' perception of products such as wine. Research into glugging has been done using high-speed photography. Factors which affect glugging are the viscosity of the liquid, its carbonation, the size and shape of the container's neck and its opening (collectively referred to as "bottle geometry"), the angle at which the container is held, and the ratio of air to liquid in the bottle (which means that the rate and the sound of the glugging changes as the bottle empties).
https://en.wikipedia.org/wiki/SeaSeep
SeaSeep is a combination of 2D seismic data (a group of seismic lines acquired individually, as opposed to multiple closely spaced lines), high-resolution multibeam sonar, which is an advanced form of side-scan sonar, navigated piston coring (one of the more common sea floor sampling methods), heat flow sampling (which serves a critical purpose in oil exploration and production) and possibly gravity and magnetic data (refer to Dick Gibson's Primer on Gravity and Magnetics). The term SeaSeep originally belonged to Black Gold Energy LLC and refers to a dataset that combines all of the available data into one integrated package that can be used in hydrocarbon exploration. With the acquisition of Black Gold Energy LLC by Niko Resources Ltd. in December 2009, the term now belongs to Niko Resources. The concept of a SeaSeep dataset is the modern-day offshore derivative of how many oil fields were found in the late 19th and early 20th century: by finding a large anticline structure with an associated oil seep. In the United States, many of the first commercial fields in California were found using this method including the Newhall Field discovered in 1876 and the Kern River Field discovered in 1899. Seeps have also been used to find offshore fields including the Cantarell Field in Mexico in 1976; the largest oil field in Mexico and one of the largest in the world. The field is named after a fisherman, Rudesindo Cantarell, who complained to PEMEX about his fishing nets being stained by oil seeps in the Bay of Campeche. The biological and geochemical manifestations of seepage lead to distinct bathymetrical features including positive relief mounds, pinnacles, mud volcanoes and negative relief pockmarks. These features can be detected by multibeam sonar and then sampled by navigated piston coring. Spec and proprietary multibeam seep mapping and core geochemistry by Texas A&M University's Geochemical & Environmental Research Group and later TDI Brooks
https://en.wikipedia.org/wiki/Out-of-band%20data
In computer networking, out-of-band data is the data transferred through a stream that is independent from the main in-band data stream. An out-of-band data mechanism provides a conceptually independent channel, which allows any data sent via that mechanism to be kept separate from in-band data. The out-of-band data mechanism should be provided as an inherent characteristic of the data channel and transmission protocol, rather than requiring a separate channel and endpoints to be established. The term "out-of-band data" probably derives from out-of-band signaling, as used in the telecommunications industry. Example case Consider a networking application that tunnels data from a remote data source to a remote destination. The data being tunneled may consist of any bit patterns. The sending end of the tunnel may at times have conditions that it needs to notify the receiving end about. However, it cannot simply insert a message to the receiving end because that end will not be able to distinguish the message from data sent by the data source. By using an out-of-band mechanism, the sending end can send the message to the receiving end out of band. The receiving end will be notified in some fashion of the arrival of out-of-band data, and it can read the out of band data and know that this is a message intended for it from the sending end, independent of the data from the data source. Implementations It is possible to implement out-of-band data transmission using a physically separate channel, but most commonly out-of-band data is a feature provided by a transmission protocol using the same channel as normal data. A typical protocol might divide the data to be transmitted into blocks, with each block having a header word that identifies the type of data being sent, and a count of the data bytes or words to be sent in the block. The header will identify the data as being in-band or out-of-band, along with other identification and routing information. At the rece
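As a concrete sketch of the "typed block" scheme described above: each block carries a type flag that distinguishes in-band payload from out-of-band messages, plus a length field. The header layout below is invented purely for illustration and is not taken from any particular protocol.

```python
import struct

# Invented example framing: 1-byte type (0 = in-band, 1 = out-of-band)
# followed by a 2-byte big-endian payload length, then the payload itself.
HEADER = struct.Struct("!BH")
IN_BAND, OUT_OF_BAND = 0, 1

def pack_block(kind: int, payload: bytes) -> bytes:
    return HEADER.pack(kind, len(payload)) + payload

def unpack_blocks(stream: bytes):
    offset = 0
    while offset < len(stream):
        kind, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        yield kind, stream[offset:offset + length]
        offset += length

wire = (pack_block(IN_BAND, b"tunnelled application data")
        + pack_block(OUT_OF_BAND, b"sender: pausing for 5 seconds")
        + pack_block(IN_BAND, b"more application data"))

for kind, payload in unpack_blocks(wire):
    label = "OOB " if kind == OUT_OF_BAND else "data"
    print(label, payload)
```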
https://en.wikipedia.org/wiki/Niven%27s%20constant
In number theory, Niven's constant, named after Ivan Niven, is the largest exponent appearing in the prime factorization of any natural number n "on average". More precisely, if we define H(1) = 1 and H(n) = the largest exponent appearing in the unique prime factorization of a natural number n > 1, then Niven's constant is given by C = lim_{n→∞} (1/n) Σ_{j=1}^{n} H(j) = 1 + Σ_{j=2}^{∞} (1 − 1/ζ(j)) ≈ 1.705211, where ζ is the Riemann zeta function. In the same paper Niven also proved that Σ_{j=1}^{n} h(j) = n + c·√n + o(√n), where h(1) = 1, h(n) = the smallest exponent appearing in the unique prime factorization of each natural number n > 1, o is little o notation, and the constant c is given by c = ζ(3/2)/ζ(3), and consequently that (1/n) Σ_{j=1}^{n} h(j) → 1 as n → ∞.
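A quick numerical check of the series above; the sketch uses mpmath, and the truncation point of the sum is arbitrary (the terms 1 − 1/ζ(j) decay roughly like 2^(−j)).

```python
# Sketch: approximate Niven's constant C = 1 + sum_{j>=2} (1 - 1/zeta(j)).
from mpmath import mp, zeta

mp.dps = 30                           # working precision (decimal digits)
C = mp.mpf(1) + sum(1 - 1 / zeta(j) for j in range(2, 200))
print(C)                              # approximately 1.705211...

# The constant c = zeta(3/2) / zeta(3) for the smallest-exponent result.
c = zeta(mp.mpf(3) / 2) / zeta(3)
print(c)
```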
https://en.wikipedia.org/wiki/Hanany%E2%80%93Witten%20transition
In theoretical physics, the Hanany–Witten transition, also called the Hanany–Witten effect, refers to any process in a superstring theory in which two p-branes cross, resulting in the creation or destruction of a third p-brane. A special case of this process was first discovered by Amihay Hanany and Edward Witten in 1996. All other known cases of Hanany–Witten transitions are related to the original case via combinations of S-dualities and T-dualities. The effect extends to strings as well: when two strings cross, a third string can be created or destroyed. The original effect The original Hanany–Witten transition was discovered in type IIB superstring theory in flat, 10-dimensional Minkowski space. They considered a configuration of NS5-branes, D5-branes and D3-branes which today is called a Hanany–Witten brane cartoon. They demonstrated that a subsector of the corresponding open string theory is described by a 3-dimensional Yang–Mills gauge theory. However, they found that the string theory space of solutions, called the moduli space, only agreed with the known Yang–Mills moduli space if whenever an NS5-brane and a D5-brane cross, a D3-brane stretched between them is created or destroyed. They also presented various other arguments in support of their effect, such as a derivation from the worldvolume Wess–Zumino terms. This proof uses the fact that the flux from each brane renders the action of the other brane ill-defined if one does not include the D3-brane. The S-rule Furthermore, they discovered the S-rule, which states that in a supersymmetric configuration the number of D3-branes stretched between a D5-brane and an NS5-brane may only be equal to 0 or 1. Then the Hanany–Witten effect implies that after the D5-brane and the NS5-brane cross, if there was a single D3-brane stretched between them it will be destroyed, and if there was not one then one will be created. In other words, there cannot be more than one D3 brane that stre
https://en.wikipedia.org/wiki/Network%20security
Network security consists of the policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as those within a company, or open to public access. Network security is used in organizations, enterprises, and other types of institutions. As its name suggests, it secures the network and protects and oversees the operations being carried out on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password. Network security concept Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan). Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevent
https://en.wikipedia.org/wiki/Unwired%20enterprise
An unwired enterprise is an organization that extends and supports the use of traditional thick client enterprise applications to a variety of mobile devices and their users throughout the organization. The abiding characteristic is seamless universal mobile access to critical applications and business data. Use By supporting mobile clients alongside more traditional desktop and laptop clients, an unwired enterprise attempts to increase productivity rates and speed the pace of many common business processes through anytime/anywhere accessibility. Furthermore, it is believed that supporting mobile access to enterprise applications can help facilitate cogent decision making by pulling business data in real time from server systems and making it available to the mobile workforce at the decision point. Even though the wireless network is quite ubiquitous, this type of client application requires built-in procedures to deal with any network unavailability seamlessly, without interfering with application core functionality. Pervasive broadband, simplified wireless integration and a common management system are technology trends driving more organizations toward an unwired enterprise due to lowering complexity and greater ease of use. Unwired enterprises may include office environments in which workers are untethered from traditional desktop clients and conduct all business and communication from a wide variety of wireless devices. In the unwired enterprise, client platform and operating system are deemphasized as focus shifts away from platform homogeneity to fluid and expedient data exchange and technology agnosticism. Open standards industry initiatives such as the Open Handset Alliance are designed to help mobile technology vendors deliver on this promise.
https://en.wikipedia.org/wiki/Shadow%20square
The shadow square, also known as an altitude scale, was an instrument used to determine the linear height of an object, in conjunction with the alidade, for angular observations. An early example was described in an Arabic treatise likely dating to 9th or 10th-century Baghdad. Shadow squares are often found on the backs of astrolabes. Uses The main use of a shadow square is to measure the linear height of an object using its shadow. It does so by simulating the ratio between an object, generally a gnomon, and its shadow. If the sun's ray is between 0 degrees and 45 degrees the umbra versa (vertical axis) is used, between 45 degrees and 90 degrees the umbra recta (horizontal axis) is used, and when the sun's ray is at 45 degrees its shadow falls exactly on the umbra media (y = x). It was used during the time of medieval astronomy to determine the height of, and to track the movement of, celestial bodies such as the sun when more advanced measurement methods were not available. These methods can still be used today to determine the altitude, with reference to the horizon, of any visible celestial body. Gnomon A gnomon is commonly used along with a shadow box. A gnomon is a stick placed vertically in a sunny place so that it casts a shadow that can be measured. By studying the shadow of the gnomon you can learn a great deal about the motion of the sun. Gnomons were most likely independently discovered by many ancient civilizations, but it is known that they were used in the 5th century BC in Greece, most likely for the measurement of the winter and summer solstices. Herodotus, in his Histories written around 450 B.C., says that the Greeks learned the use of the gnomon from the Babylonians. Examples If your shadow is 4 feet long, measured in your own feet, then what is the altitude of the sun? This problem can be solved through the use of the shadow box. The shadow box is divided in half: one half is calibrated by sixes, the other by tens. Because it is a shadow cast by the
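The underlying trigonometry is simple enough to state in a few lines of Python (an illustrative calculation, not a description of any historical instrument): the altitude of the sun is the arctangent of the gnomon's height divided by the shadow's length.

```python
import math

def sun_altitude_deg(gnomon_height: float, shadow_length: float) -> float:
    """Altitude of the sun above the horizon, in degrees, from the length
    of the shadow cast by a vertical gnomon of known height."""
    return math.degrees(math.atan2(gnomon_height, shadow_length))

# A person roughly 6 "own feet" tall casting a 4-foot shadow (the worked
# example above): altitude is about 56.3 degrees.
print(sun_altitude_deg(6.0, 4.0))

# At 45 degrees the shadow equals the gnomon's height (the umbra media case).
print(sun_altitude_deg(1.0, 1.0))
```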
https://en.wikipedia.org/wiki/DAvE%20%28Infineon%29
Digital Application Virtual Engineer (DAVE) is a C/C++-language software development and code generation tool for microcontroller applications. DAVE is a standalone system with automatic code generation modules. It is suited for the development of software drivers for Infineon microcontrollers and aids the developer with automatically created C-level templates and user-desired functionalities. The latest releases of DAVE include all required parts to develop code, compile and debug on the target for free (based on the ARM GCC tool suite). Together with several low-cost development boards, one can get involved in microcontroller design very easily. This also makes Infineon microcontroller products more useful to small companies and to home-use or DIY projects, similar to the established products of Atmel (AVR, SAM) and Microchip (PIC, PIC32) to name a few. DAVE was developed by Infineon Technologies. Therefore, the automatic code generator supports only Infineon microcontrollers. The user also has to get used to the concept of the Eclipse IDE. The generated code can also be used on other (often non-free) development environments from Keil, Tasking, and so on. Latest version 4 (beta) for ARM-based 32-bit Infineon processors The successor of the Eclipse-based development environment for C/C++ and/or GUI-based development using "Apps". It generates code for the latest XMC1xxx and XMC4xxx microcontrollers using Cortex-M processors. The code generation part is significantly improved. Besides the free DAVE development software, the DAVE SDK is a free development environment for creating one's own "Apps" for DAVE. Details (downloads, getting started, tutorials, etc.) can be found on the website. After starting DAVE, an Eclipse environment appears. In the project browser, a standard C/C++ or a DAVE project can be set up by selecting one of the available processors of Infineon. The latter project setup allows the configuration of the selected MCU using a GUI-bas
https://en.wikipedia.org/wiki/List%20of%20tessellations
See also Uniform tiling Convex uniform honeycombs List of k-uniform tilings List of Euclidean uniform tilings Uniform tilings in hyperbolic plane Mathematics-related lists
https://en.wikipedia.org/wiki/Triple%20correlation
The triple correlation of an ordinary function on the real line is the integral of the product of that function with two independently shifted copies of itself: ∫ f*(x) · f(x + s₁) · f(x + s₂) dx, integrated over the real line. The Fourier transform of the triple correlation is the bispectrum. The triple correlation extends the concept of autocorrelation, which correlates a function with a single shifted copy of itself and thereby enhances its latent periodicities. History The theory of the triple correlation was first investigated by statisticians examining the cumulant structure of non-Gaussian random processes. It was also independently studied by physicists as a tool for spectroscopy of laser beams. Hideya Gamo in 1963 described an apparatus for measuring the triple correlation of a laser beam, and also showed how phase information can be recovered from the real part of the bispectrum—up to sign reversal and linear offset. However, Gamo's method implicitly requires the Fourier transform to never be zero at any frequency. This requirement was relaxed, and the class of functions which are known to be uniquely identified by their triple (and higher-order) correlations was considerably expanded, by the study of Yellott and Iverson (1992). Yellott & Iverson also pointed out the connection between triple correlations and the visual texture discrimination theory proposed by Bela Julesz. Applications Triple correlation methods are frequently used in signal processing for treating signals that are corrupted by additive white Gaussian noise; in particular, triple correlation techniques are suitable when multiple observations of the signal are available and the signal may be translating in between the observations, e.g., a sequence of images of an object translating on a noisy background. What makes the triple correlation particularly useful for such tasks are three properties: (1) it is invariant under translation of the underlying signal; (2) it is unbiased in additive Gaussian noise; and (3) it retains nearly all of the relevant
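For discrete, periodic signals the integral becomes a sum over all sample positions, which can be written down directly. The sketch below is a brute-force illustration (FFT-based bispectrum estimators are used in practice) and also demonstrates the translation-invariance property mentioned above.

```python
import numpy as np

def triple_correlation(f: np.ndarray) -> np.ndarray:
    """Brute-force discrete triple correlation of a periodic real signal:
    T[s1, s2] = sum_x f(x) * f(x + s1) * f(x + s2), with circular shifts."""
    n = len(f)
    T = np.empty((n, n))
    for s1 in range(n):
        for s2 in range(n):
            T[s1, s2] = np.sum(f * np.roll(f, -s1) * np.roll(f, -s2))
    return T

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
T = triple_correlation(f)

# Translation invariance: circularly shifting the signal leaves T unchanged.
T_shifted = triple_correlation(np.roll(f, 5))
print(np.allclose(T, T_shifted))   # True
```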
https://en.wikipedia.org/wiki/List%20of%20partition%20topics
Generally, a partition is a division of a whole into non-overlapping parts. Among the kinds of partitions considered in mathematics are partition of a set or an ordered partition of a set, partition of a graph, partition of an integer, partition of an interval, partition of unity, partition of a matrix; see block matrix, and partition of the sum of squares in statistics problems, especially in the analysis of variance, quotition and partition, two ways of viewing the operation of division of integers. Integer partitions Composition (number theory) Ewens's sampling formula Ferrers graph Glaisher's theorem Landau's function Partition function (number theory) Pentagonal number theorem Plane partition Quotition and partition Rank of a partition Crank of a partition Solid partition Young tableau Young's lattice Set partitions Bell number Bell polynomials Dobinski's formula Cumulant Data clustering Equivalence relation Exact cover Knuth's Algorithm X Dancing Links Exponential formula Faà di Bruno's formula Feshbach–Fano partitioning Foliation Frequency partition Graph partition Kernel of a function Lamination (topology) Matroid partitioning Multipartition Multiplicative partition Noncrossing partition Ordered partition of a set Partition calculus Partition function (quantum field theory) Partition function (statistical mechanics) Derivation of the partition function Partition of an interval Partition of a set Ordered partition Partition refinement Disjoint-set data structure Partition problem 3-partition problem Partition topology Quotition and partition Recursive partitioning Stirling number Stirling transform Stratification (mathematics) Tverberg partition Twelvefold way In probability and stochastic processes Chinese restaurant process Dobinski's formula Ewens's sampling formula Law of tota
https://en.wikipedia.org/wiki/Chronobiology
Chronobiology is a field of biology that examines timing processes, including periodic (cyclic) phenomena in living organisms, such as their adaptation to solar- and lunar-related rhythms. These cycles are known as biological rhythms. Chronobiology comes from the ancient Greek χρόνος (chrónos, meaning "time"), and biology, which pertains to the study, or science, of life. The related terms chronomics and chronome have been used in some cases to describe either the molecular mechanisms involved in chronobiological phenomena or the more quantitative aspects of chronobiology, particularly where comparison of cycles between organisms is required. Chronobiological studies include but are not limited to comparative anatomy, physiology, genetics, molecular biology and behavior of organisms related to their biological rhythms. Other aspects include epigenetics, development, reproduction, ecology and evolution. The subject Chronobiology studies variations of the timing and duration of biological activity in living organisms which occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms). The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks. The circadian rhythm can further be broken down into routine cycles during the 24-hour day: Diurnal, which describes organisms active during daytime Nocturnal, which describes organisms active in the night Crepuscular, which describes animals primarily ac
https://en.wikipedia.org/wiki/Jetronic
Jetronic is a trade name of a manifold injection technology for automotive petrol engines, developed and marketed by Robert Bosch GmbH from the 1960s onwards. Bosch licensed the concept to many automobile manufacturers. There are several variations of the technology offering technological development and refinement. D-Jetronic (1967–1979) Analogue fuel injection; 'D' is from the German Druck, meaning pressure. Inlet manifold vacuum is measured using a pressure sensor located in, or connected to, the intake manifold, in order to calculate the duration of fuel injection pulses. Originally, this system was called Jetronic, but the name D-Jetronic was later created as a retronym to distinguish it from subsequent Jetronic iterations. D-Jetronic was essentially a further refinement of the Electrojector fuel delivery system developed by the Bendix Corporation in the late 1950s. Rather than choosing to eradicate the various reliability issues with the Electrojector system, Bendix instead licensed the design to Bosch. With the role of the Bendix system being largely forgotten, D-Jetronic became known as the first widely successful precursor of modern electronic common rail systems; it had constant pressure fuel delivery to the injectors and pulsed injections, albeit grouped (2 groups of injectors pulsed together) rather than sequential (individual injector pulses) as on later systems. As in the Electrojector system, D-Jetronic used analogue circuitry, with no microprocessor or digital logic; the ECU used about 25 transistors to perform all of the processing. Two important factors that had led to the ultimate failure of the Electrojector system were superseded: the use of paper-wrapped capacitors unsuited to heat-cycling, and the use of amplitude-modulated (TV/ham radio) signals to control the injectors. The still present lack of processing power and the unavailability of solid-state sensors meant that the vacuum sensor was a rather expensive precision instrument, rather like a barometer, with brass bello
https://en.wikipedia.org/wiki/Straightedge
A straightedge or straight edge is a tool used for drawing straight lines, or checking their straightness. If it has equally spaced markings along its length, it is usually called a ruler. Straightedges are used in the automotive service and machining industry to check the flatness of machined mating surfaces. They are also used in the decorating industry for cutting and hanging wallpaper. True straightness can in some cases be checked by using a laser line level as an optical straightedge: it can illuminate an accurately straight line on a flat surface such as the edge of a plank or shelf. A pair of straightedges called winding sticks are used in woodworking to make warping easier to perceive in pieces of wood. Three straight edges can be used to test and calibrate themselves to a certain extent, however this procedure does not control twist. For accurate calibration of a straight edge, a surface plate must be used. Compass-and-straightedge construction An idealized straightedge is used in compass-and-straightedge constructions in plane geometry. It may be used: Given two points, to draw the line connecting them Given a point and a circle, to draw either tangent Given two circles, to draw any of their common tangents Or any of the other numerous geometric constructions The idealized straightedge is: Infinitely long Infinitesimally thin (i.e. point width) Always assumed to be without graduations or marks, or the ability to mark Able to be aligned to two points with infinite precision to draw a line through them It may not be marked or used together with the compass so as to transfer the length of one segment to another. It is possible to do all compass and straightedge constructions without the straightedge. That is, it is possible, using only a compass, to find the intersection of two lines given two points on each, and to find the tangent points to circles. It is not, however, possible to do all constructions using only a straightedge. It is pos
https://en.wikipedia.org/wiki/Isothermal%20microcalorimetry
Isothermal microcalorimetry (IMC) is a laboratory method for real-time monitoring and dynamic analysis of chemical, physical and biological processes. Over a period of hours or days, IMC determines the onset, rate, extent and energetics of such processes for specimens in small ampoules (e.g. 3–20 ml) at a constant set temperature (c. 15 °C–150 °C). IMC accomplishes this dynamic analysis by measuring and recording vs. elapsed time the net rate of heat flow (μJ/s = μW) to or from the specimen ampoule, and the cumulative amount of heat (J) consumed or produced. IMC is a powerful and versatile analytical tool for four closely related reasons: All chemical and physical processes are either exothermic or endothermic—produce or consume heat. The rate of heat flow is proportional to the rate of the process taking place. IMC is sensitive enough to detect and follow either slow processes (reactions proceeding at a few % per year) in a few grams of material, or processes which generate minuscule amounts of heat (e.g. metabolism of a few thousand living cells). IMC instruments generally have a huge dynamic range—heat flows as low as ca. 1 μW and as high as ca. 50,000 μW can be measured by the same instrument. The IMC method of studying rates of processes is thus broadly applicable, provides real-time continuous data, and is sensitive. The measurement is simple to make, takes place unattended and is non-interfering (e.g. no fluorescent or radioactive markers are needed). However, there are two main caveats that must be heeded in use of IMC: Missed data: If externally prepared specimen ampoules are used, it takes ca. 40 minutes to slowly introduce an ampoule into the instrument without significant disturbance of the set temperature in the measurement module. Thus any processes taking place during this time are not monitored. Extraneous data: IMC records the aggregate net heat flow produced or consumed by all processes taking place within an ampoule. Therefore, in order
https://en.wikipedia.org/wiki/SolidRun
SolidRun is an Israeli company producing embedded systems components, mainly mini computers, single-board computers and computer-on-module devices. It is especially known for the CuBox family of mini-computers, and for producing motherboards and processing components such as the HummingBoard motherboard. Situated in Acre, Israel, SolidRun develops and manufactures products aimed both at the private entertainment sector and at companies developing processor-based products, notably components of "Internet of Things" technology systems. Within the scope of the IoT technology, SolidRun's mini computers are aimed to cover the intermediate sphere, between sensors and user devices, and between the larger network or Cloud framework. Within such a network, mini computers or system-on-module devices act as mediators gathering and processing information from sensors or user devices and communicating with the network - this is also known as Edge computing. History SolidRun was founded in 2010 by co-founders Rabeeh Khoury (formerly an engineer at Marvell Technology Group) and Kossay Omary. The goal of SolidRun has been to develop, produce and market components aimed for integration with IoT systems. The company today is situated in Acre in the Northern District of Israel, and headed by Dr. Atai Ziv (CEO). The major product development line aimed at the consumer market is the CuBox family of mini-computers, the first of which was announced in December 2011, followed by the development of the CuBox-i series, announced in November 2013. The most recent addition to the CuBox line has been the CuBoxTV (announced in December 2014), which has been marketed primarily for the home entertainment market. A further primary product developed by SolidRun is the Hummingboard, an uncased single-board computer, marketed to developers as an integrated processing component. SolidRun develops all of its products using Open-source software (such as Linux and OpenELEC), identifying itself a
https://en.wikipedia.org/wiki/Cryptomorphism
In mathematics, two objects, especially systems of axioms or semantics for them, are called cryptomorphic if they are equivalent but not obviously equivalent. In particular, two definitions or axiomatizations of the same object are "cryptomorphic" if it is not obvious that they define the same object. Examples of cryptomorphic definitions abound in matroid theory and others can be found elsewhere, e.g., in group theory the definition of a group by a single operation of division, which is not obviously equivalent to the usual three "operations" of identity element, inverse, and multiplication. This word is a play on the many morphisms in mathematics, but "cryptomorphism" is only very distantly related to "isomorphism", "homomorphism", or "morphisms". The equivalence may in a cryptomorphism, if it is not actual identity, be informal, or may be formalized in terms of a bijection or equivalence of categories between the mathematical objects defined by the two cryptomorphic axiom systems. Etymology The word was coined by Garrett Birkhoff before 1967, for use in the third edition of his book Lattice Theory. Birkhoff did not give it a formal definition, though others working in the field have made some attempts since. Use in matroid theory Its informal sense was popularized (and greatly expanded in scope) by Gian-Carlo Rota in the context of matroid theory: there are dozens of equivalent axiomatic approaches to matroids, but two different systems of axioms often look very different. In his 1997 book Indiscrete Thoughts, Rota describes the situation as follows: Though there are many cryptomorphic concepts in mathematics outside of matroid theory and universal algebra, the word has not caught on among mathematicians generally. It is, however, in fairly wide use among researchers in matroid theory. See also Combinatorial class, an equivalence among combinatorial enumeration problems hinting at the existence of a cryptomorphism
https://en.wikipedia.org/wiki/Magic%20angle
The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is a root of a second-order Legendre polynomial, P₂(cos θ) = (3 cos²θ − 1)/2, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect. Mathematical definition The magic angle θm is θm = arccos(1/√3) = arctan(√2) ≈ 54.7356°, where arccos and arctan are the inverse cosine and tangent functions respectively. θm is the angle between the space diagonal of a cube and any of its three connecting edges, see image. Another representation of the magic angle is half of the opening angle formed when a cube is rotated from its space diagonal axis, which may be represented as arccos(−1/3) or 2 arctan(√2) radians ≈ 109.4712°. This double magic angle is directly related to tetrahedral molecular geometry and is the angle between two vertices and the exact center of a tetrahedron (i.e., the edge central angle also known as the tetrahedral angle). Magic angle and nuclear magnetic resonance In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling, chemical shift anisotropy (CSA), and first-order quadrupolar coupling, depend on the orientation of the interaction tensor with the external magnetic field. By spinning the sample around a given axis, their average angular dependence becomes ⟨3 cos²θ − 1⟩ = ½ (3 cos²θr − 1)(3 cos²β − 1), where θ is the angle between the principal axis of the interaction and the magnetic field, θr is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and principal axis of the interaction. For dipolar couplings, the principal axis corresponds to the internucl
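A quick numerical check of these closed forms (purely illustrative):

```python
import math

# The magic angle from both expressions given above, plus the double angle.
theta_m = math.degrees(math.acos(1 / math.sqrt(3)))
print(theta_m)                                   # 54.7356...
print(math.degrees(math.atan(math.sqrt(2))))     # same value
print(math.degrees(math.acos(-1 / 3)))           # 109.4712..., the tetrahedral angle

# P2(cos(theta)) vanishes at the magic angle.
print(0.5 * (3 * math.cos(math.radians(theta_m)) ** 2 - 1))   # ~0.0
```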
https://en.wikipedia.org/wiki/Colony%20picker
A colony picker is an instrument used to automatically identify microbial colonies growing on a solid medium, pick them and duplicate them either onto solid or liquid media. It is used in research laboratories as well as in industrial environments such as food testing and in microbiological cultures. Uses In food safety and in clinical diagnosis colony picking is used to isolate individual colonies for identification. Colony pickers automate this procedure, saving costs and personnel and reducing human error. In the drug discovery process they are used for screening purposes by picking thousands of microbial colonies and transferring them for further testing. Other uses include cloning procedures and DNA sequencing. as add-on Colony pickers are sold either as stand-alone instruments or as add-ons to liquid handling robots, using the robot as the actuator and adding a camera and image analysis capabilities. This strategy lowers the price of the system considerably and adds reusability as the robot can still be used for other purposes.
https://en.wikipedia.org/wiki/C-slowing
C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same. Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency. Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs. See also Pipelining Barrel processor Resources PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures Simple Symmetric Multithreading in Xilinx FPGAs Post Placement C-Slow Retiming for Xilinx Virtex (.ppt) Post Placement C-Slow Retiming for Xilinx Virtex (.pdf) Exploration of RaPiD-style Pipelined FPGA Interconnects Time and Area Efficient Pattern Matching on FPGAs Gate arrays
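To make the register-replacement idea concrete, here is a small, purely illustrative software model (not hardware code, and not tied to any particular FPGA flow): a one-register accumulator is C-slowed by giving its feedback path C registers, and C independent input streams, interleaved one sample per clock, each see their own running sum.

```python
def interleave(streams):
    """Round-robin the input streams, one sample per clock cycle."""
    return [x for group in zip(*streams) for x in group]

def c_slow_accumulator(streams):
    """Software model of a C-slowed accumulator (one adder + feedback register).
    The single feedback register is replaced by C registers in series, so C
    independent threads time-share the same combinational logic."""
    C = len(streams)
    registers = [0] * C                      # the C feedback registers
    outputs = [[] for _ in range(C)]
    for cycle, x in enumerate(interleave(streams)):
        thread = cycle % C                   # which thread owns this clock cycle
        acc = registers[-1] + x              # combinational logic (the adder)
        registers = [acc] + registers[:-1]   # shift the value down the register chain
        outputs[thread].append(acc)
    return outputs

# Two independent running sums computed on the same modelled hardware:
print(c_slow_accumulator([[1, 2, 3], [10, 20, 30]]))
# -> [[1, 3, 6], [10, 30, 60]]
```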
https://en.wikipedia.org/wiki/List%20of%20long%20mathematical%20proofs
This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable. The longest mathematical proof, measured by number of published journal pages, is the classification of finite simple groups, with well over 10000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full. Long proofs The length of unusually long proofs has increased with time. As a rough rule of thumb, 100 pages in 1900, or 200 pages in 1950, or 500 pages in 2000 is unusually long for a proof. 1799 The Abel–Ruffini theorem was nearly proved by Paolo Ruffini, but his proof, spanning 500 pages, was mostly ignored and later, in 1824, Niels Henrik Abel published a proof that required just six pages. 1890 Killing's classification of simple complex Lie algebras, including his discovery of the exceptional Lie algebras, took 180 pages in 4 papers. 1894 The ruler-and-compass construction of a polygon of 65537 sides by Johann Gustav Hermes took over 200 pages. 1905 Emanuel Lasker's original proof of the Lasker–Noether theorem took 98 pages, but has since been simplified: modern proofs are less than a page long. 1963 The odd order theorem by Feit and Thompson was 255 pages long, which at the time was over 10 times as long as what had previously been considered a long paper in group theory. 1964 Resolution of singularities. Hironaka's original proof was 216 pages long; it has since been simplified considerably down to about 10 or 20 pages. 1966 Abhyankar's proof of resolution of singularities for 3-folds in characteristic greater than 6 covered about 500 pages in several papers. In 2009, Cutkosky simplified this to about 40 pages. 1966 Discrete series representations of Lie groups. Harish-Chandra's construction of these involved a long series of papers totaling around 500 pages. His later work on the Plancherel theorem for semisimple groups added a
https://en.wikipedia.org/wiki/Radio-frequency%20engineering
Radio-frequency (RF) engineering is a subset of electronic engineering involving the application of transmission line, waveguide, antenna and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz. It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, WiFi, and two-way radios. RF engineering is a highly specialized field that typically includes the following areas of expertise: Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna. Design of coupling and transmission line structures to transport RF energy without radiation. Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices. Verification and measurement of performance of radio frequency devices and systems. To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design. Radio electronics Radio electronics is concerned with electronic circuits which receive or transmit radio signals. Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit. List of radio electronics topics: RF oscillators: Phase-locked loop, voltage-controlled oscillator Tr
https://en.wikipedia.org/wiki/Multiplication%20theorem
In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises. Finite characteristic The multiplication theorem takes two common forms. In the first case, a finite number of terms are added or multiplied to give the relation. In the second case, an infinite number of terms are added or multiplied. The finite form typically occurs only for the gamma and related functions, for which the identity follows from a p-adic relation over a finite field. For example, the multiplication theorem for the gamma function follows from the Chowla–Selberg formula, which follows from the theory of complex multiplication. The infinite sums are much more common, and follow from characteristic zero relations on the hypergeometric series. The following tabulates the various appearances of the multiplication theorem for finite characteristic; the characteristic zero relations are given further down. In all cases, n and k are non-negative integers. For the special case of n = 2, the theorem is commonly referred to as the duplication formula. Gamma function–Legendre formula The duplication formula and the multiplication theorem for the gamma function are the prototypical examples. The duplication formula for the gamma function is Γ(z) Γ(z + 1/2) = 2^(1−2z) √π Γ(2z). It is also called the Legendre duplication formula or Legendre relation, in honor of Adrien-Marie Legendre. The multiplication theorem is Γ(z) Γ(z + 1/k) Γ(z + 2/k) ⋯ Γ(z + (k−1)/k) = (2π)^((k−1)/2) k^(1/2 − kz) Γ(kz) for integer k ≥ 1, and is sometimes called Gauss's multiplication formula, in honour of Carl Friedrich Gauss. The multiplication theorem for the gamma functions can be understood to be a special case, for the trivial Dirichlet charac
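A numerical spot-check of the duplication and Gauss multiplication formulas quoted above (a minimal sketch using only the Python standard library; the function names are arbitrary):

```python
import math

def gauss_multiplication_lhs(z, k):
    """Product Γ(z) Γ(z + 1/k) ... Γ(z + (k-1)/k)."""
    prod = 1.0
    for n in range(k):
        prod *= math.gamma(z + n / k)
    return prod

def gauss_multiplication_rhs(z, k):
    """(2π)^((k-1)/2) * k^(1/2 - k z) * Γ(k z)."""
    return (2 * math.pi) ** ((k - 1) / 2) * k ** (0.5 - k * z) * math.gamma(k * z)

z = 0.37
for k in (2, 3, 5):   # k = 2 is the Legendre duplication formula
    print(k, gauss_multiplication_lhs(z, k), gauss_multiplication_rhs(z, k))
# the two columns agree to floating-point precision
```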
https://en.wikipedia.org/wiki/Integrated%20stress%20response
The integrated stress response is a cellular stress response conserved in eukaryotic cells that downregulates protein synthesis and upregulates specific genes in response to internal or environmental stresses. Background The integrated stress response can be triggered within a cell due to either extrinsic or intrinsic conditions. Extrinsic factors include hypoxia, amino acid deprivation, glucose deprivation, viral infection and presence of oxidants. The main intrinsic factor is endoplasmic reticulum stress due to the accumulation of unfolded proteins. It has also been observed that the integrated stress response may trigger due to oncogene activation. The integrated stress response will either cause the expression of genes that fix the damage in the cell due to the stressful conditions, or it will cause a cascade of events leading to apoptosis, which occurs when the cell cannot be brought back into homeostasis. eIF2 protein complex Stress signals can cause protein kinases, known as EIF-2 kinases, to phosphorylate the α subunit of a protein complex called translation initiation factor 2 (eIF2), resulting in the gene ATF4 being turned on, which will further affect gene expression. eIF2 consists of three subunits: eIF2α, eIF2β and eIF2γ. eIF2α contains two binding sites, one for phosphorylation and one for RNA binding. The kinases work to phosphorylate serine 51 on the α subunit, which is a reversible action. In a cell experiencing normal conditions, eIF2 aids in the initiation of mRNA translation and recognizing the AUG start codon. However, once eIF2α is phosphorylated, the complex’s activity reduces, causing reduction in translation initiation and protein synthesis, while promoting expression of the ATF4 gene. Protein kinases There are four known mammalian protein kinases that phosphorylate eIF2α, including PKR-like ER kinase (PERK, EIF2AK3), heme-regulated eIF2α kinase (HRI, EIF2AK1), general control non-depressible 2 (GCN2, EIF2AK4) and double stranded RNA
https://en.wikipedia.org/wiki/CHMOS
CHMOS refers to one of a series of Intel CMOS processes developed from their HMOS process. CHMOS stands for "complementary high-performance metal-oxide-silicon". It was first developed in 1981. CHMOS was used in the Intel 80C51BH, a new version of their standard MCS-51 microcontroller. The process was also used in later versions of the Intel 8086 and in the 80C88, a fully static version of the Intel 8088. The Intel 80386 was made in 1.5 µm CHMOS III, and later in 1.0 µm CHMOS IV. CHMOS III used 1.5 micron lithography, p-well processing, n-well processing, and two layers of metal. CHMOS III-E was used for the 12.5 MHz Intel 80C186 microprocessor. This technology uses a 1 µm process for the EPROM. CHMOS IV (H stands for High Speed) used 1.0 µm lithography. Many versions of the Intel 80486 were made in 1.0 µm CHMOS IV. Intel used this technology in the 80C186EB and 80C188EB embedded processors. CHMOS V used 0.8 µm lithography and 3 metal layers, and was used in later versions of the 80386, 80486, and i860. See also Depletion-load NMOS logic#Further development
https://en.wikipedia.org/wiki/Josiah%20Willard%20Gibbs%20Lectureship
The Josiah Willard Gibbs Lectureship (also called the Gibbs Lecture) of the American Mathematical Society is an annually awarded mathematical prize, named in honor of Josiah Willard Gibbs. The prize is intended not only for mathematicians, but also for physicists, chemists, biologists, physicians, and other scientists who have made important applications of mathematics. The purpose of the prize is to recognize outstanding achievement in applied mathematics and "to enable the public and the academic community to become aware of the contribution that mathematics is making to present-day thinking and to modern civilization." The prize winner gives a lecture, which is subsequently published in the Bulletin of the American Mathematical Society. Prize winners See also Colloquium Lectures (AMS) List of mathematics awards
https://en.wikipedia.org/wiki/Glossary%20of%20electrical%20and%20electronics%20engineering
This glossary of electrical and electronics engineering is a list of definitions of terms and concepts related specifically to electrical engineering and electronics engineering. For terms related to engineering in general, see Glossary of engineering. See also Glossary of engineering Glossary of civil engineering Glossary of mechanical engineering Glossary of structural engineering
https://en.wikipedia.org/wiki/Uniform%20tilings%20in%20hyperbolic%20plane
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry. Uniform tilings can be identified by their vertex configuration, a sequence of numbers representing the number of sides of the polygons around each vertex. For example, 7.7.7 represents the heptagonal tiling which has 3 heptagons around each vertex. It is also regular since all the polygons are the same size, so it can also be given the Schläfli symbol {7,3}. Uniform tilings may be regular (if also face- and edge-transitive), quasi-regular (if edge-transitive but not face-transitive) or semi-regular (if neither edge- nor face-transitive). For right triangles (p q 2), there are two regular tilings, represented by Schläfli symbol {p,q} and {q,p}. Wythoff construction There are an infinite number of uniform tilings based on the Schwarz triangles (p q r) where 1/p + 1/q + 1/r < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle – the symmetry group is a hyperbolic triangle group. Each symmetry family contains 7 uniform tilings, defined by a Wythoff symbol or Coxeter-Dynkin diagram, 7 representing combinations of 3 active mirrors. An 8th represents an alternation operation, deleting alternate vertices from the highest form with all mirrors active. Families with r = 2 contain regular hyperbolic tilings, defined by a Coxeter group such as [7,3], [8,3], [9,3], ... [5,4], [6,4], .... Hyperbolic families with r = 3 or higher are given by (p q r) and include (4 3 3), (5 3 3), (6 3 3) ... (4 4 3), (5 4 3), ... (4 4 4).... Hyperbolic triangles (p q r) define compact uniform hyperbolic til
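The inequality 1/p + 1/q + 1/r < 1 that singles out hyperbolic Schwarz triangles is easy to test mechanically; the following sketch (illustrative only) classifies a few triples:

```python
from fractions import Fraction

def triangle_type(p, q, r):
    """Classify the Schwarz triangle (p q r) by the angle-sum criterion."""
    s = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
    if s > 1:
        return "spherical"
    if s == 1:
        return "Euclidean"
    return "hyperbolic"   # 1/p + 1/q + 1/r < 1: infinitely many of these

for tri in [(2, 3, 5), (2, 4, 4), (2, 3, 7), (4, 3, 3), (4, 4, 4)]:
    print(tri, triangle_type(*tri))
# (2, 3, 5) spherical, (2, 4, 4) Euclidean, (2, 3, 7) hyperbolic, ...
```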
https://en.wikipedia.org/wiki/Titanium%20oxide
Titanium oxide may refer to: Titanium dioxide (titanium(IV) oxide), TiO2 Titanium(II) oxide (titanium monoxide), TiO, a non-stoichiometric oxide Titanium(III) oxide (dititanium trioxide), Ti2O3 Ti3O Ti2O δ-TiOx (x= 0.68–0.75) TinO2n−1 where n ranges from 3–9 inclusive, e.g. Ti3O5, Ti4O7, etc. Reduced titanium oxides A common reduced titanium oxide is TiO, also known as titanium monoxide. It can be prepared from titanium dioxide and titanium metal at 1500 °C. Ti3O5, Ti4O7, and Ti5O9 are non-stoichiometric oxides. These compounds are typically formed at high temperatures in the presence of excess oxygen. As a result, they exhibit unique structural and electronic properties, and have been studied for their potential use in various applications, including in gas sensors, lithium-ion batteries, and photocatalysis.
https://en.wikipedia.org/wiki/Frame-dragging
Frame-dragging is an effect on spacetime, predicted by Albert Einstein's general theory of relativity, that is due to non-static stationary distributions of mass–energy. A stationary field is one that is in a steady state, but the masses causing that field may be non-static ⁠— rotating, for instance. More generally, the subject that deals with the effects caused by mass–energy currents is known as gravitoelectromagnetism, which is analogous to the magnetism of classical electromagnetism. The first frame-dragging effect was derived in 1918, in the framework of general relativity, by the Austrian physicists Josef Lense and Hans Thirring, and is also known as the Lense–Thirring effect. They predicted that the rotation of a massive object would distort the spacetime metric, making the orbit of a nearby test particle precess. This does not happen in Newtonian mechanics for which the gravitational field of a body depends only on its mass, not on its rotation. The Lense–Thirring effect is very small – about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive. In 2015, new general-relativistic extensions of Newtonian rotation laws were formulated to describe geometric dragging of frames which incorporates a newly discovered antidragging effect. Effects Rotational frame-dragging (the Lense–Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under the Lense–Thirring effect, the frame of reference in which a clock ticks the fastest is one which is revolving around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move past the massive object faster than light moving against the rotation, as seen by a distant observer. It is now the best known frame-dragging effect, partly thanks to the Gravity Probe B experiment. Qualitatively, frame-d
https://en.wikipedia.org/wiki/Coefficient
In mathematics, a coefficient is a multiplicative factor involved in some term of a polynomial, a series, or an expression. It may be a number (dimensionless), in which case it is known as a numerical factor. It may also be a constant with units of measurement, in which case it is known as a constant multiplier. In general, coefficients may be any expression (including variables such as a, b and c). When the combination of variables and constants is not necessarily involved in a product, it may be called a parameter. For example, the polynomial 2x² − x + 3 has coefficients 2, −1, and 3, and the powers of the variable x in the polynomial ax² + bx + c have coefficient parameters a, b, and c. The constant coefficient, also known as the constant term or simply the constant, is the quantity not attached to variables in an expression. For example, the constant coefficients of the expressions above are the number 3 and the parameter c, respectively. The coefficient attached to the highest degree of the variable in a polynomial is referred to as the leading coefficient. For example, in the expressions above, the leading coefficients are 2 and a, respectively. In the context of differential equations, an equation can often be written as equating to zero a polynomial in the unknown functions and their derivatives. In this case, the coefficients of the differential equation are the coefficients of this polynomial, and are generally non-constant functions. A coefficient is a constant coefficient when it is a constant function. For avoiding confusion, the coefficient that is not attached to unknown functions and their derivatives is generally called the constant term rather than the constant coefficient. In particular, in a linear differential equation with constant coefficients, the constant term is generally not supposed to be a constant function. Terminology and definition In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or any expression. For example, in the polynomial with variables an
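A short illustration of coefficients, leading coefficient and constant term using a computer algebra system (a sketch; SymPy is simply a convenient choice and is not implied by the article):

```python
from sympy import symbols, Poly

x, a, b, c = symbols('x a b c')

p = Poly(2*x**2 - x + 3, x)
print(p.all_coeffs())   # [2, -1, 3] -> the coefficients 2, -1 and 3
print(p.LC())           # 2 -> the leading coefficient

q = Poly(a*x**2 + b*x + c, x)
print(q.all_coeffs())   # [a, b, c] -> coefficient parameters
print(q.nth(0))         # c -> the constant term
```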
https://en.wikipedia.org/wiki/Waru%20Waru
Waru Waru is an Aymara term for the agricultural technique developed by pre-Hispanic people in the Andes region of South America from Ecuador to Bolivia; this regional agricultural technique is also referred to as camellones in Spanish. Functionally similar agricultural techniques have been developed in other parts of the world, all of which fall under the broad category of raised field agriculture. This type of altiplano field agriculture consists of parallel canals alternated by raised planting beds, which would be strategically located on floodplains or near a water source so that the fields could be properly irrigated. These flooded fields were composed of soil that was rich in nutrients due to the presence of aquatic plants and other organic materials. Through the process of mounding up this soil to create planting beds, natural, recyclable fertilizer was made available in a region where nitrogen-rich soils were rare. By trapping solar radiation during the day, this raised field agricultural method also protected crops from freezing overnight. These raised planting beds were irrigated very efficiently by the adjacent canals which extended the growing season significantly, allowing for more food yield. Waru Waru were able to yield larger amounts of food than previous agricultural methods due to the overall efficiency of the system. This technique is dated to around 300 B.C., and is most commonly associated with the Tiwanaku culture of the Lake Titicaca region in southern Bolivia, who used this method to grow crops like potatoes and quinoa. This type of agriculture also created artificial ecosystems, which attracted other food sources such as fish and lake birds. Past cultures in the Lake Titicaca region likely utilized these additional resources as a subsistence method. It combines raised beds with irrigation channels to prevent damage by soil erosion during floods. These fields ensure both collecting of water (either fluvial water, rainwater or phreatic
https://en.wikipedia.org/wiki/Ichnotaxon
An ichnotaxon (plural ichnotaxa) is "a taxon based on the fossilized work of an organism", i.e. the non-human equivalent of an artifact. Ichnotaxa comes from the Greek ichnos, meaning track, and taxis, meaning ordering. Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils, more commonly known as trace fossils. They are assigned genus and species ranks by ichnologists, much like organisms in Linnaean taxonomy. These are known as ichnogenera and ichnospecies, respectively. "Ichnogenus" and "ichnospecies" are commonly abbreviated as "igen." and "isp.". The binomial names of ichnospecies and their genera are to be written in italics. Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial. Naming Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae. Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus. History Due to trace fossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa. In 1961, the International Commission on Zoological Nomenclature ruled that most trace fossil taxa named after 1930 would be no longer available. See also Bird ichnology Trace fossil classification Glossary of scientific naming
https://en.wikipedia.org/wiki/Recurrence%20period%20density%20entropy
Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness of a signal. Overview Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signals. The RPDE value is a scalar in the range zero to one. For purely periodic signals, H_norm = 0, whereas for purely i.i.d., uniform white noise, H_norm ≈ 1. Method description The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions to Takens' embedding theorems, can be carried out by forming time-delayed vectors X_n = [x_n, x_{n+τ}, x_{n+2τ}, …, x_{n+(M−1)τ}] for each value x_n in the time series, where M is the embedding dimension, and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point in the phase space, an ε-neighbourhood (an M-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density, H_norm = −(ln T_max)⁻¹ Σ_{T=1}^{T_max} P(T) ln P(T), is the RPDE value, where T_max is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE i
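The recipe just described can be sketched in a few dozen lines. The following is an illustrative simplification (it records only the first return per starting point, caps periods at T_max, and uses arbitrary default parameters), not the reference RPDE implementation:

```python
import numpy as np

def rpde(x, M=3, tau=2, eps=0.2, t_max=1000):
    """Illustrative sketch of the RPDE recipe: embed, collect recurrence
    periods into a histogram, normalise, return the normalised entropy."""
    x = np.asarray(x, dtype=float)
    # 1. Time-delay embedding: X_n = [x_n, x_{n+tau}, ..., x_{n+(M-1)tau}]
    N = len(x) - (M - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + N] for i in range(M)])
    # 2. Recurrence periods: time taken to leave and then re-enter the
    #    eps-ball around each embedded point.
    periods = []
    for i in range(N):
        inside = np.linalg.norm(emb - emb[i], axis=1) < eps
        j = i + 1
        while j < N and inside[j]:        # wait until the orbit leaves the ball
            j += 1
        while j < N and not inside[j]:    # ...and then returns to it
            j += 1
        if j < N:
            periods.append(min(j - i, t_max))
    if not periods:
        return 0.0
    # 3. Histogram of recurrence periods, normalised to a density P(T)
    hist = np.bincount(periods, minlength=t_max + 1)[1:]
    P = hist / hist.sum()
    # 4. Normalised entropy: -(ln T_max)^-1 * sum P(T) ln P(T)
    nz = P[P > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(t_max))

t = np.arange(4000)
print(rpde(np.sin(2 * np.pi * t / 25)))        # periodic: value near 0
print(rpde(np.random.uniform(-1, 1, 4000)))    # white noise: much closer to 1
```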
https://en.wikipedia.org/wiki/Carl%20Theodore%20Heisel
Carl Theodore Heisel (1852–1937) was a mathematical crank who wrote several books in the 1930s challenging accepted mathematical truths. Among his claims is that he found a way to square the circle. He is credited with 24 works in 62 publications. Heisel did not charge money for his books; he gave thousands of them away for free. Because of this, they are available at many libraries and universities. Heisel's books have historic and monetary value. Paul Halmos referred to one of Heisel's works as a "classic crank book." Selected works
https://en.wikipedia.org/wiki/Index%20of%20cryptography%20articles
Articles related to cryptography include: A A5/1 • A5/2 • ABA digital signature guidelines • ABC (stream cipher) • Abraham Sinkov • Acoustic cryptanalysis • Adaptive chosen-ciphertext attack • Adaptive chosen plaintext and chosen ciphertext attack • Advantage (cryptography) • ADFGVX cipher • Adi Shamir • Advanced Access Content System • Advanced Encryption Standard • Advanced Encryption Standard process • Adversary • AEAD block cipher modes of operation • Affine cipher • Agnes Meyer Driscoll • AKA (security) • Akelarre (cipher) • Alan Turing • Alastair Denniston • Al Bhed language • Alex Biryukov • Alfred Menezes • Algebraic Eraser • Algorithmically random sequence • Alice and Bob • All-or-nothing transform • Alphabetum Kaldeorum • Alternating step generator • American Cryptogram Association • AN/CYZ-10 • Anonymous publication • Anonymous remailer • Antoni Palluth • Anubis (cipher) • Argon2 • ARIA (cipher) • Arlington Hall • Arne Beurling • Arnold Cipher • Array controller based encryption • Arthur Scherbius • Arvid Gerhard Damm • Asiacrypt • Atbash • Attribute-based encryption • Attack model • Auguste Kerckhoffs • Authenticated encryption • Authentication • Authorization certificate • Autokey cipher • Avalanche effect B B-Dienst • Babington Plot • Baby-step giant-step • Bacon's cipher • Banburismus • Bart Preneel • BaseKing • BassOmatic • BATON • BB84 • Beale ciphers • BEAR and LION ciphers • Beaufort cipher • Beaumanor Hall • Bent function • Berlekamp–Massey algorithm • Bernstein v. United States • BestCrypt • Biclique attack • BID/60 • BID 770 • Bifid cipher • Bill Weisband • Binary Goppa code • Biometric word list • Birthday attack • Bit-flipping attack • BitTorrent protocol encryption • Biuro Szyfrów • Black Chamber • Blaise de Vigenère • Bletchley Park • Blind credential • Blinding (cryp
https://en.wikipedia.org/wiki/Coremark
CoreMark is a benchmark that measures the performance of central processing units (CPU) used in embedded systems. It was developed in 2009 by Shay Gal-On at EEMBC and is intended to become an industry standard, replacing the Dhrystone benchmark. The code is written in C and contains implementations of the following algorithms: list processing (find and sort), matrix manipulation (common matrix operations), state machine (determine if an input stream contains valid numbers), and CRC. The code is under the Apache License 2.0 and is free of cost to use, but ownership is retained by the Consortium and publication of modified versions under the CoreMark name prohibited. Issues addressed by CoreMark The CRC algorithm serves a dual function; it provides a workload commonly seen in embedded applications and ensures correct operation of the CoreMark benchmark, essentially providing a self-checking mechanism. Specifically, to verify correct operation, a 16-bit CRC is performed on the data contained in elements of the linked list. To ensure compilers cannot pre-compute the results at compile time every operation in the benchmark derives a value that is not available at compile time. Furthermore, all code used within the timed portion of the benchmark is part of the benchmark itself (no library calls). CoreMark versus Dhrystone CoreMark draws on the strengths that made Dhrystone so resilient - it is small, portable, easy to understand, free, and displays a single number benchmark score. Unlike Dhrystone, CoreMark has specific run and reporting rules, and was designed to avoid the well understood issues that have been cited with Dhrystone. Major portions of Dhrystone are susceptible to a compiler’s ability to optimize the work away; thus it is more a compiler benchmark than a hardware benchmark. This also makes it very difficult to compare results when different compilers/flags are used. Library calls are made within the timed portion of Dhrystone. Typically, those library
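The self-checking idea can be pictured with a toy example: fold the data held in the list elements into a 16-bit CRC seeded with a run-time value, then compare against a known-good result. The snippet below is only a schematic Python illustration (CoreMark itself is written in C and its actual CRC routine is not reproduced here); the CCITT polynomial is an assumption made for the example.

```python
def crc16_update(crc, byte, poly=0x1021):
    """Fold one byte into a 16-bit CRC (CCITT-style polynomial assumed)."""
    crc ^= byte << 8
    for _ in range(8):
        if crc & 0x8000:
            crc = ((crc << 1) ^ poly) & 0xFFFF
        else:
            crc = (crc << 1) & 0xFFFF
    return crc

def crc16_of_list(values, seed):
    """CRC over the data held in a list, seeded with a run-time value so the
    result cannot be pre-computed ahead of time."""
    crc = seed
    for v in values:
        crc = crc16_update(crc, v & 0xFF)          # low byte
        crc = crc16_update(crc, (v >> 8) & 0xFF)   # high byte
    return crc

linked_list_data = [3, 141, 59, 265, 358]   # stand-in for the list elements
print(hex(crc16_of_list(linked_list_data, seed=0x0000)))
# A benchmark would compare this against the expected, known-good CRC.
```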
https://en.wikipedia.org/wiki/Sensing%20of%20phage-triggered%20ion%20cascades
Sensing of phage-triggered ion cascades (SEPTIC) is a prompt bacterium identification method based on fluctuation-enhanced sensing in fluid medium. The advantages of SEPTIC are the specificity and speed (needs only a few minutes) offered by the characteristics of phage infection, the sensitivity due to fluctuation-enhanced sensing, and durability originating from the robustness of phages. An idealized SEPTIC device may be as small as a pen and may be able to identify a library of different bacteria within a few minutes' measurement window. The mechanism SEPTIC utilizes bacteriophages as indicators to trigger an ionic response by the bacteria during phage infection. Microscopic metal electrodes detect the random fluctuations of the electrochemical potential due to the stochastic fluctuations of the ionic concentration gradient caused by the phage infection of bacteria. The electrode pair in the electrolyte with different local ion concentrations in the vicinity of the electrodes forms an electrochemical cell that produces a voltage depending on the instantaneous ratio of local concentrations. While the concentrations are fluctuating, an alternating random voltage difference will appear between the electrodes. According to the experimental studies, whenever there is an ongoing phage infection, the power density spectrum of the measured electronic noise takes on a characteristically different shape, while without phage infection it is a 1/f noise spectrum. In order to have a high sensitivity, a DC electrical field attracts the infected bacteria (which are charged due to ion imbalance) to the electrode with the relevant polarization.
https://en.wikipedia.org/wiki/Exterior%20calculus%20identities
This article summarizes several identities in exterior calculus. Notation The following summarizes short definitions and notations that are used in this article. Manifold M, N are n-dimensional smooth manifolds, where n ∈ ℕ. That is, differentiable manifolds that can be differentiated enough times for the purposes on this page. p ∈ M, q ∈ N denote one point on each of the manifolds. The boundary of a manifold M is a manifold ∂M, which has dimension n − 1. An orientation on M induces an orientation on ∂M. We usually denote a submanifold by Σ ⊂ M. Tangent and cotangent bundles TM, T*M denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold M. T_pM, T_qN denote the tangent spaces of M, N at the points p, q, respectively. T*_pM denotes the cotangent space of M at the point p. Sections of the tangent bundle, also known as vector fields, are typically denoted as X, Y, Z ∈ Γ(TM) such that at a point p ∈ M we have X|_p, Y|_p, Z|_p ∈ T_pM. Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as α, β ∈ Γ(T*M) such that at a point p ∈ M we have α|_p, β|_p ∈ T*_pM. An alternative notation for Γ(T*M) is Ω^1(M). Differential k-forms Differential k-forms, which we refer to simply as k-forms here, are differential forms defined on TM. We denote the set of all k-forms as Ω^k(M). For 0 ≤ k, l, m ≤ n we usually write α ∈ Ω^k(M), β ∈ Ω^l(M), γ ∈ Ω^m(M). 0-forms are just scalar functions on M. 1 ∈ Ω^0(M) denotes the constant 0-form equal to 1 everywhere. Omitted elements of a sequence When we are given (k + 1) inputs X_0, …, X_k and a k-form α we denote omission of the ith entry by writing α(X_0, …, X̂_i, …, X_k). Exterior product The exterior product is also known as the wedge product. It is denoted by ∧. The exterior product of a k-form α and an l-form β produce a (k + l)-form α ∧ β. It can be written using the set S(k, l) of all permutations σ of {1, …, k + l} such that σ(1) < ⋯ < σ(k) and σ(k + 1) < ⋯ < σ(k + l) as (α ∧ β)(X_1, …, X_{k+l}) = Σ_{σ ∈ S(k,l)} sign(σ) α(X_{σ(1)}, …, X_{σ(k)}) β(X_{σ(k+1)}, …, X_{σ(k+l)}). Directional derivative The directional derivative of a 0-form f along a section X ∈ Γ(TM) is a 0-form denoted ∂_X f. Exterior derivative The exterior derivative d_k : Ω^k(M) → Ω^{k+1}(M) is defined for all 0 ≤ k ≤ n. We generally omit the subscript when it is clear from the context. For a 0-form f we have d_0 f ∈ Ω^1(M) as the 1-form that gives the directi
https://en.wikipedia.org/wiki/Terminal%20%28electronics%29
A terminal is the point at which a conductor from a component, device or network comes to an end. Terminal may also refer to an electrical connector at this endpoint, acting as the reusable interface to a conductor and creating a point where external circuits can be connected. A terminal may simply be the end of a wire or it may be fitted with a connector or fastener. In network analysis, terminal means a point at which connections can be made to a network in theory and does not necessarily refer to any physical object. In this context, especially in older documents, it is sometimes called a pole. On circuit diagrams, terminals for external connections are denoted by empty circles. They are distinguished from nodes or junctions which are entirely internal to the circuit, and are denoted by solid circles. All electrochemical cells have two terminals (electrodes) which are referred to as the anode and cathode or positive (+) and negative (-). On many dry batteries, the positive terminal (cathode) is a protruding metal cap and the negative terminal (anode) is a flat metal disc . In a galvanic cell such as a common AA battery, electrons flow from the negative terminal to the positive terminal, while the conventional current is opposite to this. Types of terminals Connectors Line splices Terminal strip, also known as a tag board or tag strip Solder cups or buckets Wire wrap connections (wire to board) Crimp terminals (ring, spade, fork, bullet, blade) Turret terminals for surface-mount circuits Crocodile clips Screw terminals and terminal blocks Wire nuts, a type of twist-on wire connector Leads on electronic components Battery terminals, often using screws or springs Electrical polarity See also Electrical connector - many terminals fall under this category Electrical termination - a method of signal conditioning
https://en.wikipedia.org/wiki/Structured%20ASIC%20platform
Structured ASIC is an intermediate technology between ASIC and FPGA, offering high performance, a characteristic of ASIC, and low NRE cost, a characteristic of FPGA. Using Structured ASIC allows products to be introduced quickly to market, to have lower cost and to be designed with ease. In a FPGA, interconnects and logic blocks are programmable after fabrication, offering high flexibility of design and ease of debugging in prototyping. However, the capability of FPGAs to implement large circuits is limited, in both size and speed, due to complexity in programmable routing, and significant space occupied by programming elements, e.g. SRAMs, MUXes. On the other hand, ASIC design flow is expensive. Every different design needs a complete different set of masks. The Structured ASIC is a solution between these two. It has basically the same structure as a FPGA, but being mask-programmable instead of field-programmable, by configuring one or several via layers between metal layers. Every SRAM configuration bit can be replaced by a choice of putting a via or not between metal contacts. A number of commercial vendors have introduced structured ASIC products. They have a wide range of configurability, from a single via layer to 6 metal and 6 via layers. Altera's Hardcopy-II, eASIC's Nextreme are examples of commercial structured ASICs. See also Gate array Altera Corp - "HardCopy II Structured ASICs" eASIC Corp - "Nextreme Structured ASIC"
https://en.wikipedia.org/wiki/Fluctuation-enhanced%20sensing
Fluctuation-enhanced sensing (FES) is a specific type of chemical or biological sensing where the stochastic component, noise, of the sensor signal is analyzed. The stages following the sensor in a FES system typically contain filters and preamplifier(s) to extract and amplify the stochastic signal components, which are usually microscopic temporal fluctuations that are orders of magnitude weaker than the sensor signal. Then selected statistical properties of the amplified noise are analyzed, and a corresponding pattern is generated as the stochastic fingerprint of the sensed agent. Often the power density spectrum of the stochastic signal is used as output pattern however FES has been proven effective with more advanced methods, too, such as higher-order statistics. History During the 1990s, several authors (for example, Bruno Neri and coworkers, Peter Gottwald and Bela Szentpali) had proposed using the spectrum of measured noise to obtain information about ambient chemical conditions. However, the first systematic proposal for a generic electronic nose utilizing chemical sensors in FES mode, and the related mathematical analysis with experimental demonstration, were carried out only in 1999 by Laszlo B. Kish, Robert Vajtai and C.G. Granqvist at Uppsala University. The name "fluctuation-enhanced sensing" was created by John Audia (United States Navy), in 2001, after learning about the published scheme. In 2003, Alexander Vidybida from Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine has proven mathematically that adsorption–desorption fluctuations during odor primary reception can be used for improving selectivity. During the years, FES has been developed and demonstrated in many studies with various types of sensors and agents in chemical and biological systems. Bacteria have also been detected and identified by FES, either by their odor in air, or by the "SEPTIC" method in liquid phase. In the period of 2006–2009 Sig
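As a purely illustrative picture of such a signal chain, the sketch below band-pass filters a sensor signal to isolate its fluctuations, estimates the power density spectrum with Welch's method, and treats the log-spectrum as a fingerprint matched against a small library; the filter band, the parameters and the nearest-neighbour matching are assumptions for the example, not part of any published FES protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def fluctuation_fingerprint(signal, fs, band=(1.0, 500.0), nperseg=1024):
    """Isolate the stochastic component and return its log power density spectrum."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    noise = filtfilt(b, a, signal)                 # keep only the fluctuations
    freqs, psd = welch(noise, fs=fs, nperseg=nperseg)
    return freqs, np.log10(psd + 1e-30)            # log spectrum as the output pattern

def identify(pattern, library):
    """Pick the closest stored fingerprint (Euclidean distance between log spectra)."""
    return min(library, key=lambda name: np.linalg.norm(library[name] - pattern))

# Usage sketch: build fingerprints for known agents, then match an unknown sample.
fs = 10_000.0
unknown = np.random.randn(int(2 * fs))             # placeholder sensor record
_, pattern = fluctuation_fingerprint(unknown, fs)
```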
https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz, and depending on context, may specifically refer to passband bandwidth or baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practica
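A tiny worked example of that point (illustrative only): the same 3 kHz of absolute bandwidth is a far smaller fraction of the band's centre frequency once the signal is modulated onto a high-frequency carrier.

```python
def fractional_bandwidth(f_low, f_high):
    """Absolute bandwidth divided by the band's centre frequency."""
    return (f_high - f_low) / ((f_high + f_low) / 2)

# A 3 kHz-wide channel near baseband vs. the same width around a 100 MHz carrier:
print(fractional_bandwidth(300, 3_300))                  # ~1.67: relatively very wide
print(fractional_bandwidth(100_000_000, 100_003_000))    # ~3e-5: a tiny fraction
```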
https://en.wikipedia.org/wiki/Asano%20contraction
In complex analysis, a discipline in mathematics, and in statistical physics, the Asano contraction or Asano–Ruelle contraction is a transformation on a separately affine multivariate polynomial. It was first presented in 1970 by Taro Asano to prove the Lee–Yang theorem in the Heisenberg spin model case. This also yielded a simple proof of the Lee–Yang theorem in the Ising model. David Ruelle proved a general theorem relating the location of the roots of a contracted polynomial to that of the original. Asano contractions have also been used to study polynomials in graph theory. Definition Let Φ(z_1, …, z_n) be a polynomial which, when viewed as a function of only one of these variables, is an affine function. Such functions are called separately affine. For example, a + bz_1 + cz_2 + dz_1z_2 is the general form of a separately affine function in two variables. Any separately affine function can be written in terms of any two of its variables as Φ = a + bz_i + cz_j + dz_iz_j. The Asano contraction, which identifies the pair (z_i, z_j) with a single new variable z, sends Φ to Φ̃ = a + dz. Location of zeroes Asano contractions are often used in the context of theorems about the location of roots. Asano originally used them because they preserve the property of having no roots when all the variables have magnitude greater than 1. Ruelle provided a more general relationship which allowed the contractions to be used in more applications. He showed that if there are closed sets K_1, …, K_n not containing 0 such that Φ cannot vanish unless z_i ∈ K_i for some index i, then the contracted polynomial can only vanish if z_k ∈ K_k for some index k ≠ i, j or z ∈ −K_iK_j, where −K_iK_j = {−ab : a ∈ K_i, b ∈ K_j}. Ruelle and others have used this theorem to relate the zeroes of the partition function to zeroes of the partition function of its subsystems. Use Asano contractions can be used in statistical physics to gain information about a system from its subsystems. For example, suppose we have a system with a finite set of particles with magnetic spin either 1 or -1. For each site, we have a complex variable. Then we can define a separately affine polynomial where , and is the energy of the state where only the sites in have
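A small symbolic illustration of the definition (a sketch assuming the standard contraction a + b·z_i + c·z_j + d·z_i·z_j ↦ a + d·z; the function name is made up, and SymPy is just a convenient tool):

```python
from sympy import symbols, expand, Poly

def asano_contract(phi, zi, zj, z):
    """Contract the pair (zi, zj) of a separately affine polynomial to z:
    a + b*zi + c*zj + d*zi*zj  ->  a + d*z."""
    p = Poly(expand(phi), zi, zj)
    a = p.nth(0, 0)       # constant term
    d = p.nth(1, 1)       # coefficient of zi*zj
    return a + d * z

z1, z2, w = symbols('z1 z2 w')
phi = 3 + 2*z1 - 5*z2 + 7*z1*z2
print(asano_contract(phi, z1, z2, w))   # 7*w + 3
```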
https://en.wikipedia.org/wiki/Food%20studies
Food studies is the critical examination of food and its contexts within science, art, history, society, and other fields. It is distinctive from other food-related areas of study such as nutrition, agriculture, gastronomy, and culinary arts in that it tends to look beyond the consumption, production, and aesthetic appreciation of food and tries to illuminate food as it relates to a vast number of academic fields. It is thus a field that involves and attracts philosophers, historians, scientists, literary scholars, sociologists, art historians, anthropologists, and others. State of the field This is an interdisciplinary and emerging field, and as such there is a substantial crossover between academic and popular work. Practitioners reference best-selling authors, such as the journalist Michael Pollan, as well as scholars, such as the historian Warren Belasco and the anthropologist Sidney Mintz. While this makes the discipline somewhat volatile, it also makes it interesting and engaging. The journalist Paul Levy has noted, for example, that "Food studies is a subject so much in its infancy that it would be foolish to try to define it or in any way circumscribe it, because the topic, discipline or method you rule out today might be tomorrow's big thing." Research questions Qualitative research questions include: What impact does food have on the environment? What are the ethics of eating? How does food contribute to systems of oppression? How are foods symbolic markers of identity? At the same time practitioners may ask seemingly basic questions that are nonetheless fundamental to human existence. Who chooses what we eat and why? How are foods traditionally prepared—and where is the boundary between authentic culinary heritage and invented traditions? How is food integrated into classrooms? There are also questions of the spatialization of foodways and the relationship to place. This has led to the development of the concept of "foodscape" – introduced i
https://en.wikipedia.org/wiki/Pairwise%20error%20probability
Pairwise error probability is the error probability that for a transmitted signal (X) its corresponding but distorted version (X̂) will be received. This type of probability is called "pair-wise error probability" because the probability exists with a pair of signal vectors in a signal constellation. It is mainly used in communication systems. Expansion of the definition In general, the received signal is a distorted version of the transmitted signal. Thus, we introduce the symbol error probability, which is the probability P(e) that the demodulator will make a wrong estimation of the transmitted symbol based on the received symbol, which is defined as follows: P(e) = (1/M) Σ_X P(error | X transmitted), where M is the size of the signal constellation. The pairwise error probability P(X → X̂) is defined as the probability that, when X is transmitted, X̂ is received. P(error | X) can be expressed as the probability that at least one X̂ ≠ X is closer than X to the received vector Y. Using the upper bound to the probability of a union of events, it can be written: P(error | X) ≤ Σ_{X̂ ≠ X} P(X → X̂). Finally: P(e) ≤ (1/M) Σ_X Σ_{X̂ ≠ X} P(X → X̂). Closed form computation For the simple case of the additive white Gaussian noise (AWGN) channel: Y = X + Z, where the components of the noise Z are independent Gaussian random variables with mean 0 and variance N_0/2. The PEP can be computed in closed form as follows: P(X → X̂) = P(‖Y − X̂‖² ≤ ‖Y − X‖² | X) = P((X̂ − X)ᵀZ ≥ ‖X̂ − X‖²/2). (X̂ − X)ᵀZ is a Gaussian random variable with mean 0 and variance N_0‖X̂ − X‖²/2. For a zero mean, variance σ² Gaussian random variable: P(Z ≥ x) = Q(x/σ). Hence, P(X → X̂) = Q(‖X̂ − X‖ / √(2N_0)). See also Signal processing Telecommunication Electrical engineering Random variable
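A Monte-Carlo sanity check of the closed form above (an illustrative sketch; the two signal vectors and the noise level are arbitrary):

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_pep(x, x_hat, n0, trials=200_000):
    """Estimate P(X -> X_hat): transmit x over AWGN and count how often the
    received vector lands closer to x_hat than to x."""
    sigma = math.sqrt(n0 / 2)                      # per-dimension noise std
    errors = 0
    for _ in range(trials):
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        d_x = sum((yi - xi) ** 2 for yi, xi in zip(y, x))
        d_xh = sum((yi - xj) ** 2 for yi, xj in zip(y, x_hat))
        errors += d_xh <= d_x
    return errors / trials

x, x_hat, n0 = [1.0, 1.0], [-1.0, 1.0], 1.0
d = math.dist(x, x_hat)
print(simulate_pep(x, x_hat, n0))            # empirical PEP
print(q_function(d / math.sqrt(2 * n0)))     # Q(d / sqrt(2 N0)), ~ the same value
```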
https://en.wikipedia.org/wiki/List%20of%20Feynman%20diagrams
This is a list of common Feynman diagrams. Particle physics Physics-related lists
https://en.wikipedia.org/wiki/List%20of%20PowerPC-based%20game%20consoles
There are several ways in which game consoles can be categorized. One is by its console generation, and another is by its computer architecture. Game consoles have long used specialized and customized computer hardware with the base in some standardized processor instruction set architecture. In this case, it is PowerPC and Power ISA, processor architectures initially developed in the early 1990s by the AIM alliance, i.e. Apple, IBM, and Motorola. Even though these consoles share much in regard to instruction set architecture, game consoles are still highly specialized computers so it is not common for games to be readily portable or compatible between devices. Only Nintendo has kept a level of portability between their consoles, and even there it is not universal. The first devices used standard processors, but later consoles used bespoke processors with special features, primarily developed by or in cooperation with IBM for the explicit purpose of being in a game console. In this regard, these computers can be considered "embedded". All three major consoles of the seventh generation were PowerPC based. As of 2019, no PowerPC-based game consoles are currently in production. The most recent release, Nintendo's Wii U, has since been discontinued and succeeded by the Nintendo Switch (which uses a Nvidia Tegra ARM processor). The PlayStation 3, the last PowerPC-based game console to remain in production, was discontinued in 2017. List See also PowerPC applications List of PowerPC processors
https://en.wikipedia.org/wiki/Perennation
In botany, perennation is the ability of organisms, particularly plants, to survive from one germinating season to another, especially under unfavourable conditions such as drought or winter cold. It typically involves development of a perennating organ, which stores enough nutrients to sustain the organism during the unfavourable season, and develops into one or more new plants the following year. Common forms of perennating organs are storage organs (e.g. tubers, rhizomes and corm), and buds. Perennation is closely related with vegetative reproduction, as the organisms commonly use the same organs for both survival and reproduction. See also Overwintering Plant pathology Sclerotium Turion (botany)
https://en.wikipedia.org/wiki/Process%20map
Process map is a global-system process model that is used to outline the processes that make up the business system and how they interact with each other. Process map shows the processes as objects, which means it is a static and non-algorithmic view of the processes. It should be differentiated from a detailed process model, which shows a dynamic and algorithmic view of the processes, usually known as a process flow diagram. There are different notation standards that can be used for modelling process maps, but the most notable ones are TOGAF Event Diagram, Eriksson-Penker notation, and ARIS Value Added Chain. Global process models Global characteristics of the business system are captured by global or system models. Global process models are presented using different methodologies and sometimes under different names. Most notably, they are named process map in Visual Paradigm and MMABP, value-added chain in ARIS, and process diagram in Eriksson-Penker notation – which can easily lead to the confusion with process flow (detailed process model). Global models are mainly object-oriented and present a static view of the business system, they do not describe dynamic aspects of processes. A process map shows the presence of processes and their mutual relationships. The requirement for the global perspective of the system as a supplementary to the internal process logic description results from the necessity of taking into consideration not only the internal process logic but also its significant surroundings. The algorithmic process model cannot take the place of this perspective since it represents the system model of the process. The detailed process model and the global process model represent different perspectives on the same business system, so these models must be mutually consistent. A macro process map represents the major processes required to deliver a product or service to the customer. These macro process maps can be further detailed in sub-diagrams. It
https://en.wikipedia.org/wiki/Animal
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
https://en.wikipedia.org/wiki/Field%20metabolic%20rate
Field metabolic rate (FMR) refers to a measurement of the metabolic rate of a free-living animal. Method Measurement of the field metabolic rate is made using the doubly labeled water method, although alternative techniques, such as monitoring heart rates, can also be used. The advantages and disadvantages of the alternative approaches have been reviewed by Butler et al. Several summary reviews have been published.
https://en.wikipedia.org/wiki/General-Purpose%20Serial%20Interface
General-Purpose Serial Interface, also known as GPSI, 7-wire interface, or 7WS, is a 7 wire communications interface. It is used as an interface between Ethernet MAC and PHY blocks. Data is received and transmitted using separate data paths (TXD, RXD) and separate data clocks (TXCLK, RXCLK). Other signals consist of transmit enable (TXEN), receive carrier sense (CRS), and collision (COL). See also Media-independent interface (MII)
https://en.wikipedia.org/wiki/Left%20and%20right%20%28algebra%29
In algebra, the terms left and right denote the order of a binary operation (usually, but not always, called "multiplication") in non-commutative algebraic structures. A binary operation ∗ is usually written in the infix form: s ∗ t. The argument s is placed on the left side, and the argument t is on the right side. Even if the symbol of the operation is omitted, the order of s and t does matter (unless ∗ is commutative). A two-sided property is fulfilled on both sides. A one-sided property is related to one (unspecified) of two sides. Although the terms are similar, left–right distinction in algebraic parlance is not related either to left and right limits in calculus, or to left and right in geometry. Binary operation as an operator A binary operation ∗ may be considered as a family of unary operators through currying: R_t(s) = s ∗ t, depending on t as a parameter – this is the family of right operations. Similarly, L_s(t) = s ∗ t defines the family of left operations parametrized with s. If for some e, the left operation L_e is the identity operation, then e is called a left identity. Similarly, if R_e is the identity operation, then e is a right identity. In ring theory, a subring which is invariant under any left multiplication in a ring is called a left ideal. Similarly, a right multiplication-invariant subring is a right ideal. Left and right modules Over non-commutative rings, the left–right distinction is applied to modules, namely to specify the side where a scalar (module element) appears in the scalar multiplication. The distinction is not purely syntactical because one gets two different associativity rules (the lowest row in the table) which link multiplication in a module with multiplication in a ring. A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them. Other examples Left eigenvectors Left and right group actions In category theory In category theory the usage of "left" and "right" has some algebraic resemblanc
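The currying viewpoint is easy to mirror in code (an illustrative sketch, not taken from the article): fixing one argument of a binary operation yields the families of left and right unary operators, and an identity element is one whose operator acts as the identity map.

```python
def right_op(star, t):
    """R_t : s -> s * t, the right operation with parameter t."""
    return lambda s: star(s, t)

def left_op(star, s):
    """L_s : t -> s * t, the left operation with parameter s."""
    return lambda t: star(s, t)

def is_left_identity(star, e, elements):
    """e is a left identity if L_e acts as the identity on every element."""
    return all(left_op(star, e)(t) == t for t in elements)

# String concatenation is a non-commutative binary operation; "" is an identity.
concat = lambda s, t: s + t
elements = ["", "a", "b", "ab", "ba"]
print(is_left_identity(concat, "", elements))                   # True
print(right_op(concat, "b")("a"), left_op(concat, "b")("a"))    # ab ba
```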
https://en.wikipedia.org/wiki/Video%20line%20selector
A video line selector is an electronic circuit or device for picking a line from an analog video signal. The input of the circuit is connected to an analog video source, and the output triggers an oscilloscope, so that the selected line can be displayed on the oscilloscope or similar device. Properties Video line selectors are either circuits or units built into other devices, fitted to the demands of that unit, or separate devices for use in workshops, production and laboratories. They contain analog and digital circuits and an internal or external DC power supply. There is a video signal input, sometimes an output to prevent reflections of the video signal (which would cause shadows in the video picture), and a trigger output. There is also an input or adjustment for the line number(s) to be picked out and, as an option, an automatic or manual setting to fit other video standards and non-interlaced video. Video line selectors do not need the whole picture signal; only the synchronisation signals are needed. Sometimes only inputs for H- and V-sync are provided. Setup The video signal input is 75 Ω terminated or connected to the video output for a monitor. The amplified video signal is connected to the inputs of the H- and V-sync detector circuits. The H-sync detector outputs the horizontal synchronisation pulse filtered from the video signal. This is the line synchronisation and keeps the lines aligned. The V-sync detector filters the vertical synchronisation and makes the picture fit the same position on the screen as the previous one. Both synchronisation output pulses are fed to a digital synchronous counter. The V-sync resets the counter and the H-sync pulses are counted, so on every frame the counter is reset and the lines are counted. Most often interlaced video is used, splitting a picture into the odd-numbered lines, followed by the even-numbered lines, each forming a half picture (see deinterlacing). Interlaced video requires a V-sync detector which detects first a second