https://en.wikipedia.org/wiki/SQL%20Server%20Notification%20Services
SQL Server Notification Services is a platform developed by Microsoft for the development and deployment of notification applications based on SQL Server technology and the Microsoft .NET Framework. Notification Services offers a scalable server engine on which to run notification applications, with multi-server capability providing flexibility and scalability for deploying applications. Notification Services was designed to ease the pain of developing and deploying notification applications that generate personalized, timely information to subscribers. Designing, coding, and testing all of the components that make up a robust notification application, such as notification scheduling, failure detection, retry logic, time zone management, notification grouping, and queue management, can otherwise make adding notification capability to software applications a daunting task. Background Over the years the term "notification applications" has been superseded by the term complex event processing (CEP). The idea is that the user defines a set of rules (or queries) in advance, and data is then pushed through those rules. Should the data fit the criteria of any rule, some action is fired. For example, a rule may state "if car speed through sensor is above 100 km/h, take photo and record"; all other data is discarded. This approach is much faster than the traditional OLTP design of inserting the row(s) into the database while constantly polling the data to see whether something relevant has happened. It is especially suited to situations with high-speed inputs and a fixed set of fairly simple queries, where not all the data may need to be kept. For example, some industries measure the voltage, current, and other attributes of hundreds of electric motors in their conveyor belts, 100 times each second. Each measurement is compared to its average, and plant operators are alerted should a sudden change occur. Release history SQL Server Notification Services was one of the many component
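The rule-and-push model described above can be sketched in a few lines. The following Python fragment is purely illustrative of the CEP idea (the reading format, sensor id, and threshold are invented for the example; it is not the Notification Services API):

```python
# Rules are defined up front; every reading is pushed through them.
# Matching readings fire an action; everything else is discarded.
rules = [
    (lambda r: r["kind"] == "speed" and r["value"] > 100,   # predicate
     lambda r: print(f"ALERT: {r['value']} km/h at sensor {r['sensor']}")),
]

def push(reading):
    for predicate, action in rules:
        if predicate(reading):
            action(reading)   # fire; non-matching data is simply dropped

push({"kind": "speed", "sensor": 7, "value": 112})  # fires the alert
push({"kind": "speed", "sensor": 7, "value": 64})   # discarded
```

This inverts the polling design criticized above: the query sits in memory and the data flows past it, rather than the data sitting in a table with the query running repeatedly.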
https://en.wikipedia.org/wiki/Parks%E2%80%93McClellan%20filter%20design%20algorithm
The Parks–McClellan algorithm, published by James McClellan and Thomas Parks in 1972, is an iterative algorithm for finding the optimal Chebyshev finite impulse response (FIR) filter. The Parks–McClellan algorithm is utilized to design and implement efficient and optimal FIR filters. It uses an indirect method for finding the optimal filter coefficients. The goal of the algorithm is to minimize the error in the pass and stop bands by utilizing the Chebyshev approximation. The Parks–McClellan algorithm is a variation of the Remez exchange algorithm, with the change that it is specifically designed for FIR filters. It has become a standard method for FIR filter design. History of optimal FIR filter design In the 1960s, researchers within the field of analog filter design were using the Chebyshev approximation for filter design. During this time, it was well known that the best filters contain an equiripple characteristic in their frequency response magnitude and the elliptic filter (or Cauer filter) was optimal with regards to the Chebyshev approximation. When the digital filter revolution began in the 1960s, researchers used a bilinear transform to produce infinite impulse response (IIR) digital elliptic filters. They also recognized the potential for designing FIR filters to accomplish the same filtering task and soon the search was on for the optimal FIR filter using the Chebyshev approximation. It was well known in both mathematics and engineering that the optimal response would exhibit an equiripple behavior and that the number of ripples could be counted using the Chebyshev approximation. Several attempts to produce a design program for the optimal Chebyshev FIR filter were undertaken in the period between 1962 and 1971. Despite the numerous attempts, most did not succeed, usually due to problems in the algorithmic implementation or problem formulation. Otto Herrmann, for example, proposed a method for designing equiripple filters with restricted band edges.
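As a hedged, runnable illustration: SciPy's signal.remez function implements the Parks–McClellan algorithm, so an optimal Chebyshev (equiripple) FIR design takes only a few lines. The filter length and band edges below are arbitrary choices for the example:

```python
# Design a 73-tap linear-phase lowpass FIR filter with the Parks-McClellan
# algorithm: passband up to 0.2 and stopband from 0.3 (normalized, fs = 1).
import numpy as np
from scipy import signal

taps = signal.remez(numtaps=73,
                    bands=[0.0, 0.2, 0.3, 0.5],  # band edges, in units of fs
                    desired=[1.0, 0.0],          # gain in passband / stopband
                    fs=1.0)

# The frequency response exhibits the equiripple behaviour discussed above.
w, h = signal.freqz(taps, worN=2048)
stop = np.abs(h[w / np.pi >= 2 * 0.3])           # samples in the stopband
print("max stopband gain (dB):", 20 * np.log10(stop.max()))
```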
https://en.wikipedia.org/wiki/Remez%20algorithm
The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm L∞ sense. It is sometimes referred to as Remes algorithm or Reme algorithm. A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order n in the space of real continuous functions on an interval, C[a, b]. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is characterized by the equioscillation theorem. Procedure The Remez algorithm starts with the function $f$ to be approximated and a set $X$ of $n+2$ sample points $x_1, x_2, \ldots, x_{n+2}$ in the approximation interval, usually the extrema of the Chebyshev polynomial linearly mapped to the interval. The steps are: Solve the linear system of equations $b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E = f(x_i)$ (where $i = 1, 2, \ldots, n+2$), for the unknowns $b_0, b_1, \ldots, b_n$ and $E$. Use the $b_i$ as coefficients to form a polynomial $P_n$. Find the set $M$ of points of local maximum error $|P_n(x) - f(x)|$. If the errors at every $m \in M$ are of equal magnitude and alternate in sign, then $P_n$ is the minimax approximation polynomial. If not, replace $X$ with $M$ and repeat the steps above. The result is called the polynomial of best approximation or the minimax approximation algorithm. A review of technicalities in implementing the Remez algorithm is given by W. Fraser. Choice of initialization The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function f by the Lagrange interpolant Ln(f), it can be shown that this initial approximation is bounded by $\lVert f - L_n(f)\rVert_\infty \le (1 + \lVert L_n\rVert)\,\inf_{p \in P_n} \lVert f - p\rVert$ with the norm or Lebesgue constant of the Lagrange interpolation operator Ln of the nodes (t1, ..., tn + 1) being $\lVert L_n\rVert = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x)$, T being the zeros of the Chebyshev p
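The listed steps translate almost directly into code. The following NumPy sketch follows them under simplifying assumptions (a dense-grid extremum search, a fixed iteration count, and no safeguards for degenerate references); it illustrates the exchange idea and is not a production implementation:

```python
import numpy as np

def remez(f, a, b, n, iters=10):
    # Initialization: Chebyshev extrema linearly mapped to [a, b].
    k = np.arange(n + 2)
    x = np.sort((a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / (n + 1)))
    E = 0.0
    for _ in range(iters):
        # Step 1: solve b0 + b1*x_i + ... + bn*x_i^n + (-1)^i E = f(x_i)
        # for the coefficients b0..bn and the levelled error E.
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        *coef, E = np.linalg.solve(A, f(x))
        # Steps 2-3: form P_n and locate the extrema of the error P_n - f,
        # one per run of the dense grid between sign changes of the error.
        grid = np.linspace(a, b, 4001)
        err = np.polyval(coef[::-1], grid) - f(grid)
        segments = np.split(np.arange(grid.size),
                            np.where(np.diff(np.sign(err)) != 0)[0] + 1)
        x_new = np.array([grid[s[np.argmax(np.abs(err[s]))]] for s in segments])
        # Step 4: exchange the reference set and repeat; stop if we did not
        # recover exactly n+2 extrema (a robust code would handle this).
        if x_new.size != n + 2:
            break
        x = x_new
    return np.array(coef), abs(E)

coef, E = remez(np.exp, -1.0, 1.0, 5)
print("levelled error:", E)   # max deviation of the degree-5 minimax fit
```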
https://en.wikipedia.org/wiki/Systems%20modeling%20language
The systems modeling language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities. Contrast with UML SysML offers several systems engineering specific improvements over UML, which was developed as a software modeling language. These improvements include the following: SysML's diagrams express systems engineering concepts better, owing to the removal of UML's software-centric restrictions, and add two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. As a consequence of these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities. SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs. SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis. SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE
https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20theorem
In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found. The theorem also generalizes to the q-gamma function. Statement of the theorem For every $n \in \mathbb{N}_0$, there is no non-zero polynomial $P \in \mathbb{C}[x; y_0, y_1, \ldots, y_n]$ such that $P\big(x; \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\big) \equiv 0$, where $\Gamma$ is the gamma function. For example, define $P \in \mathbb{C}[x; y_0, y_1, y_2]$ by $P = x^2 y_2 + x y_1 + (x^2 - \nu^2) y_0$. Then the equation $P\big(x; f(x), f'(x), f''(x)\big) = x^2 f''(x) + x f'(x) + (x^2 - \nu^2) f(x) = 0$ is called an algebraic differential equation, which, in this case, has the solutions $f = J_\nu$ and $f = Y_\nu$, the Bessel functions of the first and second kind respectively. Hence, we say that $J_\nu$ and $Y_\nu$ are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. Hölder's theorem simply states that the gamma function, $\Gamma$, is not differentially algebraic and is therefore transcendentally transcendental. Proof Let $P \in \mathbb{C}[x; y_0, y_1, \ldots, y_n]$ and assume that a non-zero polynomial $P$ exists such that $P\big(x; \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\big) \equiv 0$. As a non-zero polynomial in $\mathbb{C}[x]$ can never give rise to the zero function on any non-empty open domain of $\mathbb{R}$ (by the fundamental theorem of algebra), we may suppose, without loss of generality, that $P$ contains a monomial term having a non-zero power of one of the indeterminates $y_0, y_1, \ldots, y_n$. Assume also that $P$ has the lowest possible overall degree with respect to the lexicographic ordering $y_0 < y_1 < \cdots < y_n$. For example, $\deg\big(y_0^2 y_1\big) < \deg\big(y_0 y_1^2\big)$, because the highest power of $y_1$ in any monomial term of the first polynomial is smaller than that of the second polynomial. Next, observe that for all $x$ we have: $\Gamma(x+1) = x\,\Gamma(x)$, $\Gamma'(x+1) = x\,\Gamma'(x) + \Gamma(x)$, $\ldots$, $\Gamma^{(n)}(x+1) = x\,\Gamma^{(n)}(x) + n\,\Gamma^{(n-1)}(x)$. If we define a second polynomial $Q$ by the transformation $Q(x; y_0, y_1, \ldots, y_n) = P(x+1; x y_0, x y_1 + y_0, \ldots, x y_n + n y_{n-1})$, then we obtain the following algebraic differential equation for $\Gamma$: $Q\big(x; \Gamma(x), \ldots, \Gamma^{(n)}(x)\big) \equiv 0$. Furthermore, if $x^h y_0^{h_0} y_1^{h_1} \cdots y_n^{h_n}$ is the highest-degree monomial term in $P$, then the highest-degree monomial term i
https://en.wikipedia.org/wiki/Euglena%20gracilis
Euglena gracilis is a freshwater species of single-celled alga in the genus Euglena. It has secondary chloroplasts, and is a mixotroph able to feed by photosynthesis or phagocytosis. It has a highly flexible cell surface, allowing it to change shape from a thin cell up to 100 µm long to a sphere of approximately 20 µm. Each cell has two flagella, only one of which emerges from the flagellar pocket (reservoir) in the anterior of the cell, and can move by swimming, or by so-called "euglenoid" movement across surfaces. E. gracilis has been used extensively in the laboratory as a model organism, particularly for studying cell biology and biochemistry. Its uses range across complex biological processes such as photosynthesis, photoreception and phototactic responses, as well as nutritional studies and the relationship of molecular structure to the biological function of subcellular particles from a fundamental point of view. Euglena gracilis is the most studied member of the Euglenaceae. In recent years, Euglena has been found to be an excellent tool for investigations of fundamental biology and even as an aid in clinical diagnosis, since it is used to measure vitamin B12 by bioassay. Euglena has several chloroplasts surrounded by three membranes and with pyrenoids. These chloroplasts are of green algal origin. The main storage product is paramylon, a β-1,3 polymer of glucose stored in the form of granules in the cytoplasm. A red eyespot (stigma) is located near the base of the reservoir; it filters the light and focuses it on the paraflagellar body, and is involved in the phototaxis of this alga. Taxonomy A morphological and molecular study of the Euglenozoa put E. gracilis in close kinship with the species Khawkinea quartana, with Peranema trichophorum basal to both, although a later molecular analysis showed that E. gracilis was, in fact, more closely related to Astasia longa than to certain other species recognized as Euglena. The transcriptome of E. gracilis wa
https://en.wikipedia.org/wiki/AMX192
AMX192 (often referred to simply as AMX) is an analog lighting communications protocol used to control stage lighting. It was developed by Strand Century in the late 1970s. Originally, AMX192 was only capable of controlling 192 discrete channels of lighting. Later, multiple AMX192 streams were supported by some lighting desks. AMX192 has mostly been replaced by DMX, and is typically only found in legacy hardware. History The name AMX192 is derived from analog multiplexing and the maximum number of controllable lighting channels (192). AMX was developed to address a significant problem in controlling dimmers. For many years, in order to send a control signal from a lighting control unit to the dimmer units, the only method available was to provide a dedicated wire from the control unit to each dimmer (analogue control), where the voltage present on the wire was varied by the control unit to set the output level of the dimmer. In the late 1970s, the AMX192 serial analogue multiplexing standard was developed in the US, permitting one cable to control several dimmers. At about the same time, D54 was developed in the United Kingdom; it differed from AMX192 in that it used an embedded clocking scheme. AMX192 used a separate differential clock with a driver circuit similar to RS-485, but current limited on each leg with 100Ω resistors. See also Dimmer Lighting control console Lighting control system DMX512 RDM External links Strand Lighting Corporate University of Exeter - Strand Archive Stage lighting Network protocols
https://en.wikipedia.org/wiki/Internetowa%20encyklopedia%20PWN
Internetowa encyklopedia PWN (Polish for Internet PWN Encyclopedia) is a free online Polish-language encyclopedia published by Wydawnictwo Naukowe PWN. It contains some 80,000 entries and 5,000 illustrations. External links Internetowa encyklopedia PWN Online encyclopedias Polish online encyclopedias Polish Scientific Publishers PWN books
https://en.wikipedia.org/wiki/Perpetual%20access
Perpetual access is continuing access to licensed electronic material after it is no longer covered by an active paid subscription, whether through library or publisher action. In many cases, the two parties involved in the license agree that it is necessary for the licensee to retain access to these materials after the license has lapsed. Other terms for perpetual access or closely related ideas are 'post-cancellation access' and 'continuing access.' In the licensing of software products, a perpetual license means that a software application is sold on a one-time basis and the licensee can then use a copy of the software forever. The license holder has indefinite access to a specific version of a software program by paying for it only once. Perpetual access is a term that is used within the library community to describe the ability to retain access to electronic journals after the contractual agreement for these materials has passed. Typically when a library licenses access to an electronic journal, the journal's content remains in the possession of the licensor. The library often purchases the rights to all back issues as well as new issues. When the license expires, access to all the journal's contents is lost. In a typical print model, the library purchases the journals and retains them for the duration of the contract but also after the contract expires. In order to retain access to journals that were released during the term of a license for digital electronic journals, the library must obtain perpetual access rights. The ability to maintain perpetual access can be seen in the shift from print to electronic material, as apparent in both user demand and advantages of non-print material. Electronic materials rely on a relationship between library and publisher, with a distinct dynamic over the publisher's control of the licensed material. This in turn causes issues when the paid-for subscription with a publisher ends and the use of
https://en.wikipedia.org/wiki/Directory%20harvest%20attack
A directory harvest attack (DHA) is a technique used by spammers in an attempt to find valid/existent e-mail addresses at a domain by using brute force. The attack is usually carried out by way of a standard dictionary attack, where valid e-mail addresses are found by brute force guessing valid e-mail addresses at a domain using different permutations of common usernames. These attacks are more effective for finding e-mail addresses of companies since they are likely to have a standard format for official e-mail aliases (e.g. jdoe@example.domain, johnd@example.domain, or johndoe@example.domain). There are two main techniques for generating the addresses that a DHA targets. In the first, the spammer creates a list of all possible combinations of letters and numbers up to a maximum length and then appends the domain name. This would be described as a standard brute force attack. This technique would be impractical for usernames longer than 5-7 characters. For example, one would have to try 36^8 (nearly 3 trillion) e-mail addresses to exhaust all 8-character sequences. The other, more targeted technique, is to create a list that combines common first names, surnames, and initials (as in the example above). This would be considered a standard dictionary attack when guessing usernames for e-mail addresses. The success of a directory harvest attack relies on the recipient e-mail server rejecting e-mail sent to invalid recipient e-mail addresses during the Simple Mail Transfer Protocol (SMTP) session. Any addresses to which email is accepted are considered valid and are added to the spammer's list (which is commonly sold between spammers). Although the attack could also rely on Delivery Status Notifications (DSNs) being sent to the sender address to notify of delivery failures, directory harvest attacks are unlikely to use a valid sender e-mail address. The actual e-mail message generated to the recipient addresses will usually be a short random phrase such as "hello", s
https://en.wikipedia.org/wiki/Stain
A stain is a discoloration that can be clearly distinguished from the surface, material, or medium it is found upon. Stains are caused by the chemical or physical interaction of two dissimilar materials. Accidental staining may make materials appear used, degraded or permanently unclean. Intentional staining is used in biochemical research and for artistic effect, such as wood staining, rust staining and stained glass. Types There can be intentional stains (such as wood stains or paint), indicative stains (such as food coloring dye, or adding a substance to make bacteria visible under a microscope), natural stains (such as rust on iron or a patina on bronze), and accidental stains such as ketchup and synthetic oil on clothing. Different types of material can be stained by different substances, and stain resistance is an important characteristic in modern textile engineering. Formation The primary method of stain formation is surface stains, where the staining substance is spilled out onto the surface or material and is trapped in the fibers, pores, indentations, or other capillary structures on the surface. The material that is trapped coats the underlying material, and the stain reflects back light according to its own color. Applying paint, spilled food, and wood stains are of this nature. A secondary method of stain involves a chemical or molecular reaction between the material and the staining material. Many types of natural stains fall into this category. Finally, there can also be molecular attraction between the material and the staining material, with the staining substance held in place, for example by a covalent bond, and showing the color of the bound substance. Properties In many cases, stains are affected by heat and may become reactive enough to bond with the underlying material. Applied heat, such as from ironing, dry cleaning or sunlight, can cause a chemical reaction on an otherwise removable stain, turning it into a chemical compound that can be much harder to remove. Removal Various laundry techniques exist to attempt t
https://en.wikipedia.org/wiki/Wetted%20perimeter
The wetted perimeter is the perimeter of the cross sectional area that is "wet": the length of the line of intersection of the channel's wetted surface with a cross sectional plane normal to the flow direction. The term wetted perimeter is common in civil engineering, environmental engineering, hydrology, geomorphology, and heat transfer applications; it is associated with the hydraulic diameter or hydraulic radius. Engineers commonly cite the cross sectional area of a river. The wetted perimeter can be defined mathematically as $P = \sum_{i=0}^{N} l_i$, where $l_i$ is the length of each surface in contact with the aqueous body. In open channel flow, the wetted perimeter is defined as the surface of the channel bottom and sides in direct contact with the aqueous body. Friction losses typically increase with an increasing wetted perimeter, resulting in a decrease in head. In a practical experiment, one is able to measure the wetted perimeter with a tape measure weighted down to the river bed to get a more accurate measurement. When a channel is much wider than it is deep, the wetted perimeter approximates the channel width. See also Hydrological transport model Manning formula Hydraulic radius References Earth sciences Environmental engineering Environmental science Fluid dynamics Geomorphology Hydraulic engineering Hydrology Length
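As a worked illustration (standard open-channel hydraulics, not taken from the article), consider a rectangular channel of width $b$ flowing at depth $y$. The wetted surfaces are the bottom and the two sides, so $P = b + 2y$ and the hydraulic radius is $R_h = A/P = by/(b + 2y)$. For $b \gg y$ the $2y$ term becomes negligible and $P \approx b$, which is exactly the wide-channel approximation mentioned above.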
https://en.wikipedia.org/wiki/List%20mining
List mining can be defined as the use, for purposes of scientific research, of messages sent to Internet-based electronic mailing lists. List mining raises novel issues in Internet research ethics. These ethical issues are especially important for health related lists. Some questions that need to be considered by a Research Ethics Committee (or an Institutional Review Board) when reviewing research proposals that involve list mining include these: Are participants in mailing lists "research subjects"? Should those participants in a health related electronic mailing list who were the original sources of messages sent to such lists be regarded as "research subjects"? If so, then several ethical issues need to be considered. These include those pertaining to privacy, informed consent, whether the research is intrusive and has potential for harm, and whether the list should be perceived as "private" or "public" space. Are participants in mailing lists "published authors"? Should those who were the sources of messages sent to such lists be regarded as "published authors"? Or, perhaps, as "amateur authors"? If so, there are issues of copyright and proper attribution to be considered if messages sent to such lists are cited verbatim. Even short excerpts from such messages raise such issues. Are participants in mailing lists "members of a community"? Participants on mailing lists such as electronic support groups may regard themselves as members of an online "community". Are they? To provide an answer to this question, characteristics of various types of communities need to be defined and considered. For example, if one defining characteristic of a community is "self-identification as community", then virtual groups often have this characteristic. However, if "geographic localization" or "legitimate political authority" are considered to be other defining characteristics of a community, then virtual groups rarely or never possess this characteristic. Of particular imp
https://en.wikipedia.org/wiki/TestDisk
TestDisk is a free and open-source data recovery utility that helps users recover lost partitions or repair corrupted filesystems. TestDisk can collect detailed information about a corrupted drive, which can then be sent to a technician for further analysis. TestDisk supports DOS, Microsoft Windows (e.g. NT 4.0, 2000, XP, Server 2003, Server 2008, Vista, Windows 7, Windows 8.1, Windows 10), Linux, FreeBSD, NetBSD, OpenBSD, SunOS, and macOS. TestDisk handles non-partitioned and partitioned media. In particular, it recognizes the GUID Partition Table (GPT), Apple partition map, PC/Intel BIOS partition tables, Sun Solaris slice and Xbox fixed partitioning scheme. TestDisk uses a command line user interface. TestDisk can recover deleted files with 97% accuracy. Features TestDisk can recover deleted partitions, rebuild partition tables or rewrite the master boot record (MBR). Partition recovery TestDisk retrieves the LBA size and CHS geometry of attached data storage devices (e.g. hard disks, memory cards, USB flash drives, and virtual disk images) from the BIOS or the operating system. The geometry information is required for a successful recovery. TestDisk reads sectors on the storage device to determine if the partition table or filesystem on it requires repair (see next section). TestDisk is able to recognize the following partition table formats: Apple partition map GUID Partition Table Humax PC/Intel Partition Table (master boot record) Sun Solaris slice Xbox fixed partitioning scheme Non-partitioned media TestDisk can perform deeper checks to locate partitions that have been deleted from the partition table. However, it is up to the user to look over the list of possible partitions found by TestDisk and to select those that they wish to recover. After partitions are located, TestDisk can rebuild the partition table and rewrite the MBR. Filesystem repair TestDisk can deal with some specific logical filesystem corruption. File recovery When a file is
https://en.wikipedia.org/wiki/SerDes
A Serializer/Deserializer (SerDes) is a pair of functional blocks commonly used in high speed communications to compensate for limited input/output. These blocks convert data between serial data and parallel interfaces in each direction. The term "SerDes" generically refers to interfaces used in various technologies and applications. The primary use of a SerDes is to provide data transmission over a single line or a differential pair in order to minimize the number of I/O pins and interconnects. Generic function The basic SerDes function is made up of two functional blocks: the Parallel In Serial Out (PISO) block (aka Parallel-to-Serial converter) and the Serial In Parallel Out (SIPO) block (aka Serial-to-Parallel converter). There are four different SerDes architectures: (1) parallel clock SerDes, (2) embedded clock SerDes, (3) 8b/10b SerDes, (4) bit interleaved SerDes. The PISO (Parallel Input, Serial Output) block typically has a parallel clock input, a set of data input lines, and input data latches. It may use an internal or external phase-locked loop (PLL) to multiply the incoming parallel clock up to the serial frequency. The simplest form of the PISO has a single shift register that receives the parallel data once per parallel clock, and shifts it out at the higher serial clock rate. Implementations may also make use of a double-buffered register to avoid metastability when transferring data between clock domains. The SIPO (Serial Input, Parallel Output) block typically has a receive clock output, a set of data output lines and output data latches. The receive clock may have been recovered from the data by the serial clock recovery technique. However, SerDes which do not transmit a clock use a reference clock to lock the PLL to the correct Tx frequency, avoiding low harmonic frequencies present in the data stream. The SIPO block then divides the incoming clock down to the parallel rate. Implementations typically have two registers connected as a double buff
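A minimal bit-level model of the two halves may help. The sketch below assumes LSB-first transmission and an 8-bit parallel word; a real SerDes adds serial clocking, line encoding (e.g. 8b/10b), and clock recovery, none of which is modelled here:

```python
def piso(word, width=8):
    """Parallel In, Serial Out: latch a word, shift it out one bit at a time."""
    return [(word >> i) & 1 for i in range(width)]

def sipo(bits):
    """Serial In, Parallel Out: shift bits in, present them as one word."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

# Round trip of one byte through the "serial link".
assert sipo(piso(0xA5)) == 0xA5
```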
https://en.wikipedia.org/wiki/Encyclopedia%20of%20Earth
The Encyclopedia of Earth (abbreviated EoE) is an electronic reference about the Earth, its natural environments, and their interaction with society. The Encyclopedia is described as a free, fully searchable collection of articles written by scholars, professionals, educators, and other approved experts, who collaborate and review each other's work. The articles are written in non-technical language and are intended to be useful to students, educators, scholars, and professionals, as well as to the general public. The authors, editors, and even copy editors are attributed on the articles with links to biographical pages on those individuals. The Encyclopedia of Earth is a component of the larger Earth Portal (part of the Digital Universe project), which is a constellation of subject-specific information portals that contain news services, structured metadata, a federated environmental search engine, and other information resources. The technology platform for the Encyclopedia of Earth is a modified version of MediaWiki, which is closed to all but approved users. Once an article is reviewed and approved it is published to a public site. The EoE was launched in September 2006 with about 360 articles, and as of November 30, 2010 had 7,678 articles. Authoring and publishing process Contributors to the Encyclopedia of Earth include scientists, educators, and professionals within the environmental field. Contributors are vetted by the Environmental Information Coalition (EIC) Stewardship Committee, the governing body of the Encyclopedia of Earth, before they are given access to the authors' wiki. Within the wiki, they operate under their real names and are given attribution for the published articles. Articles are written, edited, and published in a two-step process: Content for the Encyclopedia is created, maintained, and governed by a group of experts via a restricted-access wiki that uses a modified version of MediaWiki. Upon completion, content is rev
https://en.wikipedia.org/wiki/.NET%20Remoting
.NET Remoting is a Microsoft application programming interface (API) for interprocess communication released in 2002 with the 1.0 version of .NET Framework. It is one in a series of Microsoft technologies that began in 1990 with the first version of Object Linking and Embedding (OLE) for 16-bit Windows. Intermediate steps in the development of these technologies were Component Object Model (COM), released in 1993 and updated in 1995 as COM-95, Distributed Component Object Model (DCOM), released in 1997 (and renamed ActiveX), and COM+ with its Microsoft Transaction Server (MTS), released in 2000. It is now superseded by Windows Communication Foundation (WCF), which is part of the .NET Framework 3.0. Like its family members and similar technologies such as Common Object Request Broker Architecture (CORBA) and Java's remote method invocation (RMI), .NET Remoting is complex, yet its essence is straightforward. With the assistance of operating system and network agents, a client process sends a message to a server process and receives a reply. Overview .NET Remoting allows an application to make an object (termed a remotable object) available across remoting boundaries, which include different appdomains, processes or even different computers connected by a network. The .NET Remoting runtime hosts the listener for requests to the object in the appdomain of the server application. On the client end, any requests to the remotable object are proxied by the .NET Remoting runtime over Channel objects, which encapsulate the actual transport mode, including TCP streams, HTTP streams and named pipes. As a result, by instantiating proper Channel objects, a .NET Remoting application can be made to support different communication protocols without recompiling the application. The runtime itself manages the act of serialization and marshalling of objects across the client and server appdomains. .NET Remoting makes a reference to a remotable object available to a client application,
https://en.wikipedia.org/wiki/Intertubercular%20plane
A lower transverse plane midway between the upper transverse plane and the upper border of the pubic symphysis is termed the intertubercular plane (or transtubercular plane), since it practically corresponds to the plane passing through the iliac tubercles; behind, its plane cuts the body of the fifth lumbar vertebra. Additional images See also Transpyloric plane References External links http://medical-dictionary.thefreedictionary.com/_/viewer.aspx?path=dorland&name=plane(2).jpg Anatomical planes
https://en.wikipedia.org/wiki/Transpyloric%20plane
The transpyloric plane, also known as Addison's plane, is an imaginary horizontal plane, located halfway between the suprasternal notch of the manubrium and the upper border of the symphysis pubis at the level of the first lumbar vertebra, L1. It lies roughly a hand's breadth beneath the xiphisternum, or midway between the xiphisternum and the umbilicus. The plane in most cases cuts through the pylorus of the stomach, the tips of the ninth costal cartilages and the lower border of the first lumbar vertebra. Structures crossed The transpyloric plane is clinically notable because it passes through several important abdominal structures. It also divides the supracolic and infracolic compartments, with the liver, spleen and gastric fundus above it and the small intestine and colon below it. Lumbar vertebra and spinal cord The first lumbar vertebra lies at the level of the transpyloric plane. Although the conus medullaris, the end of the spinal cord, is generally understood to terminate at the level of the transpyloric plane, there is significant variability. Up to 40% of people have spinal cords ending below the transpyloric plane. Stomach The transpyloric plane passes through the pylorus of the stomach, even though the pylorus is suspended by the lesser and greater omenta and is relatively mobile. Duodenum The horizontal part of the duodenum slopes upwards to the left of the vertical midline, following which the vertical ascending part of the duodenum reaches the transpyloric plane. It ends in the duodenojejunal junction, which lies approximately 2.5 cm to the left of the midline and just below the transpyloric plane. Pancreas The neck of the pancreas lies on the transpyloric plane, whilst the body and tail are to the left and above it. Gallbladder The fundus of the gallbladder projects from the liver's inferior border at the intersection of the transpyloric plane and the right lateral midline. Kidneys Despite the right kidney lying 1 cm lower than the left (right just below and
https://en.wikipedia.org/wiki/Henri%20Poincar%C3%A9%20Prize
The Henri Poincaré Prize has been awarded every three years since 1997 for exceptional achievements in mathematical physics and foundational contributions leading to new developments in the field. The prize is sponsored by the Daniel Iagolnitzer Foundation and is awarded to approximately three scientists at the International Congress on Mathematical Physics. The prize was also established to support promising young researchers who have already made outstanding contributions in mathematical physics. Prize recipients See also Henri Poincaré List of physics awards List of mathematics awards References External links Webpage of the prize Daniel Iagolnitzer Foundation Physics awards Research awards Mathematical physics Triennial events
https://en.wikipedia.org/wiki/History%20of%20machine%20translation
Machine translation is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another. In the 1950s, machine translation became a reality in research, although references to the subject can be found as early as the 17th century. The Georgetown experiment, which involved successful fully automatic translation of more than sixty Russian sentences into English in 1954, was one of the earliest recorded projects. Researchers of the Georgetown experiment asserted their belief that machine translation would be a solved problem within three to five years. In the Soviet Union, similar experiments were performed shortly after. The success of the experiment ushered in an era of significant funding for machine translation research in the United States. Progress was much slower than expected; in 1966, the ALPAC report found that ten years of research had not fulfilled the expectations of the Georgetown experiment, and resulted in dramatically reduced funding. Interest grew in statistical models for machine translation, which became more common and also less expensive in the 1980s as available computational power increased. Although there exists no autonomous system of "fully automatic high quality translation of unrestricted text," there are many programs now available that are capable of providing useful output within strict constraints. Several of these programs are available online, such as Google Translate and the SYSTRAN system that powers AltaVista's BabelFish (which was replaced by Microsoft Bing Translator in May 2012). The beginning The origins of machine translation can be traced back to the work of Al-Kindi, a 9th-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation. The idea of machine translation late
https://en.wikipedia.org/wiki/Jawed%20Siddiqi
Jawed Siddiqi FBCS is a Pakistani British computer scientist and software engineer. He is professor emeritus of software engineering at Sheffield Hallam University, England, and president of the NCUP, the National Council of University Professors, in the UK. Education and academic career Siddiqi received a BSc degree in mathematics from the University of London, followed by an MSc and PhD in computer science at the University of Aston, Birmingham. During 1991–1993, he was a visiting researcher at the Centre for Requirements and Foundation at the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science), working with Professor Joseph Goguen in the area of requirements engineering. Siddiqi has been involved with the BCS Formal Aspects of Computing Science (FACS) Specialist Group for many years, and is currently chair of the group. Siddiqi is also an executive member of the IEEE Technical Council on Software Engineering (TCSE). Siddiqi is a British computer scientist, fellow of the British Computer Society, a member of the IEEE, and a member of the ACM. He is a co-editor of Formal Methods: State of the Art and New Directions. Fighting racism Siddiqi has for three decades been involved in countering racism and fighting for social justice. He was a founding member and chair of the North Staffordshire Racial Equality Council, executive member of the West Midlands Regional Board for the Commission for Racial Equality, secretary of the Black Justice Project and chair of the Sheffield Racial Harassment Project. He has written about and been invited to speak on countering racism, particularly structural racism. He is the vice chair of The Monitoring Group (TMG), which works with all sections of the black and Asian communities that are facing hostility, abuse and violence from racists. It has been involved in several high-profile cases: the Stephen Lawrence family, Sarfraz Najeib family and Zahid Mubarek family. Public service Siddiqi was
https://en.wikipedia.org/wiki/Ampliphase
Ampliphase is the brand name of an amplitude modulation system achieved by summing phase-modulated carriers. This modulation and amplifier technology family was originally marketed by RCA for AM broadcast transmitters. The Ampliphase system was not developed by RCA; it came to RCA from McClatchy Broadcasting via patent acquisition. The design was originally proposed by H. Chireix in 1935, who termed it "Outphasing". He sold the patent to McClatchy Broadcasting, which later sold it to RCA. RCA turned "Outphasing" transmitters into a mass-produced product. RCA's first transmitters using this modulation system were at the 50,000-watt level, but lower-power transmitters, such as 10 kW and 5 kW models, were made later. McClatchy Broadcasting was a former group owner of AM, FM and TV stations as well as a California publisher of newspapers. McClatchy Broadcasting should not be confused with the present-day McClatchey Broadcasting LLC, a different corporate entity. Only one known transmitter of this type is still in use. KFBK in California maintains an RCA BTA-50H (the "last gasp" of the Ampliphase concept) as an auxiliary transmitter. Radio Caroline has a working RCA BTA-50H on display aboard its radio ship Ross Revenge; however, this transmitter has fallen out of use and is unlikely to be put back on the air since the Ross Revenge currently broadcasts via relay to a more efficient land-based transmitter. How it works The system takes a carrier signal and splits it into two identical signals. The signals are first phase-shifted 135 degrees from each other (to provide a base power output with zero modulation from the transmitter). Each signal is then phase modulated by the audio signal: one signal is positively phase modulated while the other is negatively phase modulated. The two signals are then amplified to a desired power. Finally, the two signals are summed in the final output filter stage of the tran
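A worked identity (textbook trigonometry, consistent with but not quoted from the description above) shows why opposite phase modulation of two equal carriers yields amplitude modulation. With the two signals phase-shifted by $\pm\phi$ about the carrier: $A\cos(\omega t + \phi) + A\cos(\omega t - \phi) = 2A\cos(\phi)\cos(\omega t)$, so the summed envelope $2A\cos\phi$ rises and falls as the audio varies $\phi$, while the carrier frequency is unchanged. The 135-degree resting separation corresponds to $\phi = 67.5^\circ$, setting the unmodulated carrier at $2A\cos 67.5^\circ \approx 0.77A$.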
https://en.wikipedia.org/wiki/Flash%20ADC
A flash ADC (also known as a direct-conversion ADC) is a type of analog-to-digital converter that uses a linear voltage ladder with a comparator at each "rung" of the ladder to compare the input voltage to successive reference voltages. Often these reference ladders are constructed of many resistors; however, modern implementations show that capacitive voltage division is also possible. The output of these comparators is generally fed into a digital encoder, which converts the inputs into a binary value (the collected outputs from the comparators can be thought of as a unary value). Benefits and drawbacks Flash converters are high-speed compared to many other ADCs, which usually narrow in on the "correct" answer over a series of stages. However, compared to these, a flash converter is also quite simple and, apart from the analog comparators, only requires logic for the final conversion to binary. For best accuracy, a track-and-hold circuit is often inserted in front of the ADC input. This is needed for many ADC types (like the successive approximation ADC), but for flash ADCs there is no real need for this, because the comparators are the sampling devices. A flash converter requires many comparators compared to other ADCs, especially as the precision increases. For example, a flash converter requires 2^n − 1 comparators for an n-bit conversion. The size, power consumption, and cost of all those comparators make flash converters generally impractical for precisions much greater than 8 bits (255 comparators). In place of these comparators, most other ADCs substitute more complex logic and/or analog circuitry that can be scaled more easily for increased precision. Implementation Flash ADCs have been implemented in many technologies, varying from silicon-based bipolar (BJT) and complementary metal–oxide–semiconductor (CMOS) technologies to rarely used III-V technologies. This type of ADC is often used as a first medium-sized analog circuit verification. The earliest implementations c
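The ladder-plus-encoder structure is easy to model. The following sketch assumes ideal comparators and equally spaced reference taps between 0 and vref; the thermometer-to-binary encoder is reduced to counting ones:

```python
def flash_adc(vin, vref=1.0, n=3):
    # One reference tap per ladder "rung": 2**n - 1 of them.
    refs = [vref * (i + 1) / 2 ** n for i in range(2 ** n - 1)]
    thermometer = [vin > r for r in refs]   # one comparator per tap
    return sum(thermometer)                 # encoder: count the ones

print(flash_adc(0.40))  # -> 3 for a 3-bit converter with vref = 1.0
```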
https://en.wikipedia.org/wiki/Voltage%20ladder
A voltage ladder is a simple electronic circuit consisting of several resistors connected in series with a voltage placed across the entire resistor network, a generalisation of a two-resistor voltage divider. Connections to the nodes provide access to the voltages available. Voltage ladders are useful for providing a set of successive voltage references, for instance for a flash analog-to-digital converter. Operation A voltage drop occurs across each resistor in the network, causing each successive "rung" of the ladder (each node of the circuit) to have a higher voltage than the previous one. Since the ladder is a series circuit, the current is the same throughout, and is given by the total voltage divided by the total series resistance (V/Req). The voltage drop across any one resistor is I×Rn, where I is the current calculated above, and Rn is the resistance of the resistor in question. The voltage referenced to ground at any node is simply the sum of the voltages dropped by each resistor between that node and ground. Alternatively, node voltages can be calculated using voltage division: the voltage drop across any resistor is V×Rn/Req, where V is the total voltage, Req is the total (equivalent) resistance, and Rn is the resistance of the resistor in question. The voltage of a node referenced to ground is the sum of the drops across all the resistors between it and ground, but it is now easier to consider all these resistors as a single equivalent resistance RT, which is simply the sum of all the resistances between the node and ground, so the node voltage is given by V×RT/Req. References Analog circuits
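A short sketch of the second method described above (node voltage = V×RT/Req), with the resistor values invented for the example:

```python
def ladder_nodes(v_total, resistors):
    """Resistors listed from the ground end upward; returns the node voltages."""
    r_eq = sum(resistors)                  # total series resistance
    voltages, r_t = [], 0.0
    for r in resistors:
        r_t += r                           # resistance from this node to ground
        voltages.append(v_total * r_t / r_eq)
    return voltages

print(ladder_nodes(5.0, [1000, 1000, 1000, 1000]))  # [1.25, 2.5, 3.75, 5.0]
```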
https://en.wikipedia.org/wiki/MSN%20Chat
MSN Chat was the Microsoft Network version of IRCX (Internet Relay Chat extensions by Microsoft), which replaced Microsoft Chat, a set of Exchange-based IRCX servers first available in the Microsoft Comic Chat client, although Comic Chat was not required to connect. History Client Compatibility According to the MSN Chat website, the following were required to use the MSN Chat Service: Windows 95 or later Internet Explorer 4.0 or later OR; Netscape Navigator 4.x The Microsoft Network Chat Control was developed as an ActiveX Component Object Model (COM) Object. ActiveX, being a Microsoft technology, provided limited compatibility with other products. The other major platforms besides Internet Explorer on which MSN Chat was supported were Netscape Navigator and MSNTV (formerly known as WebTV). To ensure the MSN Chat network was only being connected to by authorized clients, Microsoft created and implemented a SASL-based Security Service Provider authentication package known as GateKeeper. This used a randomized session key to authorize users not using the Microsoft Passport (now Microsoft account) system. Microsoft used another SSP known as GateKeeperPassport, which worked by the same method but required certain attributes related to the user's account. Defeating the "Authentication Challenge" There have been various methods through the use of mIRC to access the MSN Chat Network. Most of the methods were through the use of the MSN Chat Control itself, yet others were more complicated. In the beginning, shortly after the move from Microsoft Chat, the MSN Chat Network could be directly connected to through any IRC client to irc.msn.com on port 6667. Perhaps because of abuse or other factors, such as the desire to authenticate users based on their Microsoft Passport, Microsoft implemented GateKeeper and GateKeeperPassport, and integrated both into their chat control. The weakness of GateKeeper and the fact the early MSN Chat Controls (1.0−3.0) had public functions
https://en.wikipedia.org/wiki/Causal%20consistency
Causal consistency is one of the major memory consistency models. In concurrent programming, where concurrent processes are accessing a shared memory, a consistency model restricts which accesses are legal. This is useful for defining correct data structures in distributed shared memory or distributed transactions. Causal consistency is "available under partition", meaning that a process can read and write the memory (memory is available) even while there is no functioning network connection (network is partitioned) between processes; it is an asynchronous model. This contrasts with strong consistency models, such as sequential consistency or linearizability, which cannot be both safe and live under partition and are slow to respond because they require synchronisation. Causal consistency was proposed in the 1990s as a weaker consistency model for shared memory models. Causal consistency is closely related to the concept of causal broadcast in communication protocols. In these models, a distributed execution is represented as a partial order, based on Lamport's happened-before concept of potential causality. Causal consistency is a useful consistency model because it matches programmers' intuitions about time, is more available than strong consistency models, yet provides more useful guarantees than eventual consistency. For instance, in distributed databases, causal consistency supports the ordering of operations, in contrast to eventual consistency. Also, causal consistency helps with the development of abstract data types such as queues or counters. Since time and ordering are so fundamental to our intuition, it is hard to reason about a system that does not enforce causal consistency. However, many distributed databases lack this guarantee, even ones that provide serialisability. Spanner does guarantee causal consistency, but it also forces strong consistency, thus eschewing availability under partition. More available databases that ensure causal consistenc
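The happened-before partial order that causal consistency enforces is commonly tracked with vector clocks. The sketch below is a minimal illustration of that comparison only (real causally consistent systems attach such clocks to updates and delay applying an update until its causal predecessors have been applied):

```python
def happened_before(vc_a, vc_b):
    """True if the event stamped vc_a causally precedes the one stamped vc_b."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

# Process 0 writes (event a); process 1 sees that write, then writes (event b).
a = [1, 0]   # P0's clock after its write
b = [1, 1]   # P1's clock after its dependent write
print(happened_before(a, b))  # True: b causally depends on a
print(happened_before(b, a))  # False
```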
https://en.wikipedia.org/wiki/Log%20analysis
In computer log management and intelligence, log analysis (or system and network log analysis) is an art and science seeking to make sense of computer-generated records (also called log or audit trail records). The process of creating such records is called data logging. Typical reasons why people perform log analysis are: Compliance with security policies Compliance with audit or regulation System troubleshooting Forensics (during investigations or in response to a subpoena) Security incident response Understanding online user behavior Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk or directed as a network stream to a log collector. Log messages must usually be interpreted with respect to the internal state of their source (e.g., an application), and they announce security-relevant or operations-relevant events (e.g., a user login, or a system error). Logs are often created by software developers to aid in the debugging of the operation of an application or understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application or vendor-specific. The terminology may also vary; for example, the authentication of a user to an application may be described as a log in, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration to make useful comparisons to messages from different log sources. Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages to understand the complete domain from which the messages must be interpreted. A log analyst may map varying terminology from different log sources into a uni
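One of the chores named above, mapping varying vendor terminology onto uniform terms, is often a small normalization pass. The patterns and canonical names below are invented for the example:

```python
import re

# Map the many vendor spellings of the same event onto one canonical term.
CANONICAL = [
    (re.compile(r"\b(log[ -]?in|logon|user connection|authenticated)\b", re.I),
     "AUTH_SUCCESS"),
    (re.compile(r"\b(log[ -]?out|logoff|disconnected?)\b", re.I),
     "SESSION_END"),
]

def normalize(line):
    for pattern, term in CANONICAL:
        if pattern.search(line):
            return term
    return "UNCLASSIFIED"

print(normalize("2024-05-01 user jdoe logon from 10.0.0.7"))  # AUTH_SUCCESS
```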
https://en.wikipedia.org/wiki/Photoionization%20detector
A photoionization detector or PID is a type of gas detector. Typical photoionization detectors measure volatile organic compounds and other gases in concentrations from sub-parts-per-billion to 10 000 parts per million (ppm). The photoionization detector is an efficient and inexpensive detector for many gas and vapor analytes. PIDs produce instantaneous readings, operate continuously, and are commonly used as detectors for gas chromatography or as hand-held portable instruments. Hand-held, battery-operated versions are widely used in military, industrial, and confined working facilities for health and safety. Their primary use is for monitoring possible worker exposure to volatile organic compounds (VOCs) such as solvents, fuels, degreasers, plastics & their precursors, heat transfer fluids, lubricants, etc. during manufacturing processes and waste handling. Portable PIDs are used for monitoring: Industrial hygiene and safety Environmental contamination and remediation Hazardous materials handling Ammonia detection Lower explosive limit measurements Arson investigation Indoor air quality Cleanroom facility maintenance Principle In a photoionization detector, high-energy photons, typically in the vacuum ultraviolet (VUV) range, ionize molecules into positively charged ions. As compounds enter the detector they are bombarded by high-energy UV photons and are ionized when they absorb the UV light, resulting in ejection of electrons and the formation of positively charged ions. The ions produce an electric current, which is the signal output of the detector. The greater the concentration of the component, the more ions are produced, and the greater the current. The current is amplified and displayed on an ammeter or digital concentration display. The ions can undergo numerous reactions including reaction with oxygen or water vapor, rearrangement, and fragmentation. A few of them may recapture an electron within the detector to reform their original molecules; h
https://en.wikipedia.org/wiki/Bureau%20of%20Ships
The United States Navy's Bureau of Ships (BuShips) was established by Congress on 20 June 1940, by a law which consolidated the functions of the Bureau of Construction and Repair (BuC&R) and the Bureau of Engineering (BuEng). The new bureau was to be headed by a chief and deputy-chief, one selected from the Engineering Corps (Marine Engineer) and the other from the Construction Corps (Naval Architect). The chief of the former Bureau of Engineering, Rear Admiral Samuel M. "Mike" Robinson, was named BuShips' first chief, while the former chief of the Bureau of Construction & Repair, Rear Admiral Alexander H. Van Keuren, was named as BuShips' first Deputy-Chief. The bureau's responsibilities included supervising the design, construction, conversion, procurement, maintenance, and repair of ships and other craft for the Navy; managing shipyards, repair facilities, laboratories, and shore stations; developing specifications for fuels and lubricants; and conducting salvage operations. BuShips was abolished by DOD Order of 9 March 1966, as part of the general overhaul of the Navy's bureau system of material support. BuShips was succeeded by the Naval Ship Systems Command (NAVSHIPS), known as the Naval Sea Systems Command or NAVSEA since 1974. Origins The Bureau of Ships had its origins when USS Anderson, first of the Sims-class destroyers to be delivered, was found to be heavier than designed and dangerously top-heavy in early 1939. It was determined that an underestimate by BuEng of the weight of a new machinery design was responsible, and that BuC&R did not have sufficient authority to detect or correct the error during the design process. Initially, Acting Secretary of the Navy Charles Edison proposed consolidation of the design divisions of the two bureaus. When the bureau chiefs could not agree on how to do this, he replaced both chiefs in September 1939. The consolidation was finally effected by a law passed by Congress on 20 June 1940. History The Bureau of Ships was initially organized in fi
https://en.wikipedia.org/wiki/John%20Elder%20Professor%20of%20Naval%20Architecture%20and%20Ocean%20Engineering
The John Elder Professor of Naval Architecture and Ocean Engineering at the University of Glasgow, Scotland, was established in 1883 and endowed by Isabella Elder (1828-1905) in honor of her husband, John Elder. John Elder was a renowned marine engineer and shipbuilder of Randolph, Elder & Co. (1824-1869). John Elder Professors of Naval Architecture and Ocean Engineering Francis Elgar LLD (1883) Philip Jenkins (1886) Sir John Harvard Biles DSc LLD (1891) Percy Archibald Hillhouse DSc (1921-1942) Andrew McCance Robb DSc LLD (1944) John Farquhar Christie Conn DSc (1957) Douglas Faulkner BSc PhD RCNC FEng (1973) Nigel D P Barltrop BSc CEng SICE MRINA See also List of Professorships at the University of Glasgow References Who, What and Where: The History and Constitution of the University of Glasgow compiled by Michael Moss, Moira Rankin and Lesley Richmond Elder Professor of Naval Architecture and Ocean Engineering, Glasgow 1883 establishments in Scotland Naval architecture Marine engineering
https://en.wikipedia.org/wiki/Segal%27s%20conjecture
Segal's Burnside ring conjecture, or, more briefly, the Segal conjecture, is a theorem in homotopy theory, a branch of mathematics. The theorem relates the Burnside ring of a finite group G to the stable cohomotopy of the classifying space BG. The conjecture was made in the mid-1970s by Graeme Segal and proved in 1984 by Gunnar Carlsson. The statement is still commonly referred to as the Segal conjecture, even though it now has the status of a theorem. Statement of the theorem The Segal conjecture has several different formulations, not all of which are equivalent. Here is a weak form: there exists, for every finite group G, an isomorphism $\varprojlim_k S^0\big((BG^{(k)})_+\big) \to \hat{A}(G)$. Here, lim denotes the inverse limit, S* denotes the stable cohomotopy ring, B denotes the classifying space, the superscript k denotes the k-skeleton, and the subscript + denotes the addition of a disjoint basepoint. On the right-hand side, the hat denotes the completion of the Burnside ring with respect to its augmentation ideal. The Burnside ring The Burnside ring of a finite group G is constructed from the category of finite G-sets as a Grothendieck group. More precisely, let M(G) be the commutative monoid of isomorphism classes of finite G-sets, with addition the disjoint union of G-sets and identity element the empty set (which is a G-set in a unique way). Then A(G), the Grothendieck group of M(G), is an abelian group. It is in fact a free abelian group with basis elements represented by the G-sets G/H, where H varies over the subgroups of G. (Note that H is not assumed here to be a normal subgroup of G, for while G/H is not a group in this case, it is still a G-set.) The ring structure on A(G) is induced by the direct product of G-sets; the multiplicative identity is the (isomorphism class of any) one-point set, which becomes a G-set in a unique way. The Burnside ring is the analogue of the representation ring in the category of finite sets, as opposed to the category of finite-dimensional vector spaces over a field
https://en.wikipedia.org/wiki/HP%20250
The HP 250 was a multiuser business computer by Hewlett-Packard running the HP 250 BASIC language as its OS, with access to HP's IMAGE database management. It was produced by the General Systems Division (GSD), but was a major repackaging of the desktop workstation HP 9835, which had been sold in small business configurations. The HP 9835's processor was initially used in the first HP 250s. The HP 250 borrowed the embedded keyboard design from the HP 300 and added a wider slidable and tiltable monitor with screen-labelled function key buttons physically placed just below the on-screen labels (a configuration now used in ATMs and gas pumps), built into a large desk design. Though the HP 250 had a different processor and operating system, it used similar interface cards to the HP 300, and then later also the HP 3000 models 30, 33, 40, 42, 44, and 48: HP-IB channel (GIC), Network, and serial (MUX) cards. Usually the HP 250 was a small HP-IB single-channel system (limited to seven HP-IB devices per GIC at less than 1 MHz bandwidth). Initially the HP 250, like the HP 300, was a single-user, floppy-based computer system. Later multi-user ability was added, and the HP 300's embedded hard drive was installed as a boot drive. Additionally, drivers were made available to connect and use more HP-IB devices: hard disc and tape drives, plus impact and matrix printers. This gave some business-growth scalability to the HP 250 product line. The HP 250 was advertised in 1978 and was promoted more in Europe as an easy-to-use, small-space, low-cost business system, and thus sold better in Europe. The next-generation HP 250 was the HP 260, which lost the table, embedded keyboard, and CRT for a small stand-alone box. HP systems moved away from all-in-one table-top designs to having the system in a remote secure location, with users' terminals and peripherals connected remotely in their work areas. In those days, RS-232 cables ran from desk side terminals (262x low cost terminals) to the HP 25
https://en.wikipedia.org/wiki/Effective%20evolutionary%20time
The hypothesis of effective evolutionary time attempts to explain gradients, in particular latitudinal gradients, in species diversity. It was originally named the "time hypothesis". Background Low (warm) latitudes contain significantly more species than high (cold) latitudes. This has been shown for many animal and plant groups, although exceptions exist (see latitudinal gradients in species diversity). An example of an exception is helminths of marine mammals, which have the greatest diversity in northern temperate seas, possibly because small population densities of hosts in tropical seas prevented the evolution of a rich helminth fauna, or because they originated in temperate seas and had more time for speciation there. It has become increasingly apparent that species diversity is best correlated with environmental temperature and, more generally, environmental energy. These findings are the basis of the hypothesis of effective evolutionary time. Species have accumulated fastest in areas where temperatures are highest: mutation rates and the speed of selection, driven by faster physiological rates, are highest, and generation times, which also determine the speed of selection, are shortest at high temperatures. This leads to a faster accumulation of species in the tropics, where they are absorbed into the abundantly available vacant niches. Vacant niches are available at all latitudes, and differences in the number of such niches can therefore not be the limiting factor for species richness. The hypothesis also incorporates a time factor: habitats with a long undisturbed evolutionary history will have greater diversity than habitats exposed to disturbances in evolutionary history. The hypothesis of effective evolutionary time offers a causal explanation of diversity gradients, although it is recognized that many other factors can also contribute to and modulate them. Historical aspects Some aspects of the hypothesis are based on earlier studies. Bernhard Rensch, for exam
https://en.wikipedia.org/wiki/Parametric%20oscillator
A parametric oscillator is a driven harmonic oscillator in which the oscillations are driven by varying some parameters of the system at some frequencies, typically different from the natural frequency of the oscillator. A simple example of a parametric oscillator is a child pumping a playground swing by periodically standing and squatting to increase the size of the swing's oscillations. The child's motions vary the moment of inertia of the swing as a pendulum. The "pump" motions of the child must be at twice the frequency of the swing's oscillations. Examples of parameters that may be varied are the oscillator's resonance frequency and damping. Parametric oscillators are used in several areas of physics. The classical varactor parametric oscillator consists of a semiconductor varactor diode connected to a resonant circuit or cavity resonator. It is driven by varying the diode's capacitance by applying a varying bias voltage. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG-based parametric oscillators operate in the same fashion. Another important example is the optical parametric oscillator, which converts an input laser light wave into two output waves of lower frequency. When operated at pump levels below oscillation, the parametric oscillator can amplify a signal, forming a parametric amplifier (paramp). Varactor parametric amplifiers were developed as low-noise amplifiers in the radio and microwave frequency range. The advantage of a parametric amplifier is that it has much lower noise than an amplifier based on a gain device like a transistor or vacuum tube. This is because in the parametric amplifier a reactance is varied instead of a (noise-producing) resistance. They are used in very low noise radio receivers in radio telescopes and spacecraft communication antennas. Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at o
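A standard textbook form (not a quotation from this article) models the pumped, damped oscillator as a Mathieu-type equation:

```latex
% Parametric pumping of the "spring constant" at twice the natural frequency:
% for small pump strength epsilon, oscillations grow exponentially once the
% pumping rate exceeds the damping rate, which is the swing-pumping effect above.
\[
\frac{d^2x}{dt^2} + \beta\,\frac{dx}{dt}
  + \omega_0^2\left[\,1 + \epsilon\cos(2\omega_0 t)\,\right]x = 0
\]
```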
https://en.wikipedia.org/wiki/Predrag%20Cvitanovi%C4%87
Predrag Cvitanović (born April 1, 1946) is a theoretical physicist known for his work in nonlinear dynamics, particularly his contributions to periodic orbit theory. Life Cvitanović earned his B.S. from MIT in 1969 and his Ph.D. at Cornell University in 1973. Before joining the physics department at the Georgia Institute of Technology he was the director of the Center for Chaos and Turbulence Studies of the Niels Bohr Institute in Copenhagen. Cvitanović is a member of the Royal Danish Academy of Sciences and Letters, a corresponding member of the Croatian Academy of Sciences and Arts, a recipient of the Research Prize of the Danish Physical Society, and a fellow of the American Physical Society. In 2009 Cvitanović was the recipient of the prestigious Alexander von Humboldt Prize for his work in turbulence theory. He currently holds the Glen P. Robinson Chair in Non-Linear Science at the Georgia Institute of Technology. Scientific work Perhaps his best-known work is his introduction of cycle expansions—that is, expansions based on periodic orbit theory—to approximate chaotic dynamics in a controlled perturbative way. This technique has proven to be widely useful for diagnosing and quantifying chaotic dynamics in problems ranging from atomic physics to neurophysiology. This theory has been applied by Cvitanović and others to fluid turbulence. Another well-known result is the Feigenbaum-Cvitanović functional equation. Books P. Cvitanović, R. Artuso, R. Mainieri, G. Tanner and G. Vattay, "Chaos: Classical and Quantum" Niels Bohr Institute, Copenhagen 2005 P. Cvitanović, "Group Theory: Birdtracks, Lie's, and Exceptional Groups" Princeton University Press, Princeton 2008, available online at http://birdtracks.eu/ See also E7½ Feigenbaum function Penrose graphical notation References External links Cvitanović's web page at Georgia Tech Conference in honor of Cvitanović's 60th birthday A collection of Cvitanović's quotes 1946 births Cornell Univers
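For reference, the Feigenbaum-Cvitanović functional equation mentioned above is commonly written in the following standard form (with the conventional normalization g(0) = 1; not quoted from the article):

```latex
% Fixed-point equation of the period-doubling renormalization operator;
% its solution determines the universal scaling constant
% alpha = 2.502907875...
\[
g(x) \;=\; -\alpha\, g\!\big(g(x/\alpha)\big), \qquad g(0) = 1.
\]
```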
https://en.wikipedia.org/wiki/Permeation
In physics and engineering, permeation (also called imbuing) is the penetration of a permeate (a fluid such as a liquid, gas, or vapor) through a solid. It is directly related to the concentration gradient of the permeate, the material's intrinsic permeability, and the material's mass diffusivity. Permeation is modeled by equations such as Fick's laws of diffusion, and can be measured using tools such as a minipermeameter. Description The process of permeation involves the diffusion of molecules, called the permeant, through a membrane or interface. Permeation works through diffusion; the permeant will move from high concentration to low concentration across the interface. A material can be semipermeable, with the presence of a semipermeable membrane. Only molecules or ions with certain properties will be able to diffuse across such a membrane. This is a very important mechanism in biology, where fluids inside a blood vessel need to be regulated and controlled. Permeation can occur through most materials, including metals, ceramics and polymers. However, the permeability of metals is much lower than that of ceramics and polymers due to their crystal structure and porosity. Permeation must be considered carefully in many polymer applications because of polymers' high permeability. Permeability depends on the temperature of the interaction as well as the characteristics of both the polymer and the permeant component. Through the process of sorption, molecules of the permeant can be either absorbed or desorbed at the interface. The permeation of a material can be measured through numerous methods that quantify the permeability of a substance through a specific material. Permeability due to diffusion is measured in SI units of mol/(m・s・Pa), although barrers are also commonly used. Permeability due to diffusion is not to be confused with permeability (earth sciences) due to fluid flow in porous solids, measured in darcys. Related terms Permeant: The substanc
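As a sketch of the modelling mentioned above, here are the standard forms of Fick's first law and steady-state membrane permeation (the symbols are ours, not the article's):

```latex
% Fick's first law: diffusive flux J is proportional to the concentration gradient,
% with D the mass diffusivity. For steady-state permeation of a gas through a
% membrane of thickness l, with partial pressures p1 > p2 on the two sides,
% the permeability P = D * S combines diffusivity D with the solubility
% coefficient S of the permeant in the material.
\[
J = -D\,\frac{\partial C}{\partial x},
\qquad
J = \frac{P\,(p_1 - p_2)}{\ell},
\qquad
P = D\,S.
\]
```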
https://en.wikipedia.org/wiki/List%20of%20triangle%20inequalities
In geometry, triangle inequalities are inequalities involving the parameters of triangles, that hold for every triangle, or for every triangle meeting certain conditions. The inequalities give an ordering of two different values: they are of the form "less than", "less than or equal to", "greater than", or "greater than or equal to". The parameters in a triangle inequality can be the side lengths, the semiperimeter, the angle measures, the values of trigonometric functions of those angles, the area of the triangle, the medians of the sides, the altitudes, the lengths of the internal angle bisectors from each angle to the opposite side, the perpendicular bisectors of the sides, the distance from an arbitrary point to another point, the inradius, the exradii, the circumradius, and/or other quantities. Unless otherwise specified, this article deals with triangles in the Euclidean plane. Main parameters and notation The parameters most commonly appearing in triangle inequalities are: the side lengths a, b, and c; the semiperimeter s = (a + b + c) / 2 (half the perimeter p); the angle measures A, B, and C of the angles of the vertices opposite the respective sides a, b, and c (with the vertices denoted with the same symbols as their angle measures); the values of trigonometric functions of the angles; the area T of the triangle; the medians ma, mb, and mc of the sides (each being the length of the line segment from the midpoint of the side to the opposite vertex); the altitudes ha, hb, and hc (each being the length of a segment perpendicular to one side and reaching from that side (or possibly the extension of that side) to the opposite vertex); the lengths of the internal angle bisectors ta, tb, and tc (each being a segment from a vertex to the opposite side and bisecting the vertex's angle); the perpendicular bisectors pa, pb, and pc of the sides (each being the length of a segment perpendicular to one side at its midpoint and reaching to one of the other sides); t
https://en.wikipedia.org/wiki/Thomson%20%28unit%29
The thomson (symbol: Th) is a unit that has appeared infrequently in scientific literature relating to the field of mass spectrometry as a unit of mass-to-charge ratio. The unit was proposed by Cooks and Rockwood, naming it in honour of J. J. Thomson, who measured the mass-to-charge ratio of electrons and ions. Definition The thomson is defined as $1\,\mathrm{Th} = 1\,\mathrm{Da}/e \approx 1.036\,43 \times 10^{-8}\,\mathrm{kg/C}$, where Da is the symbol for the unit dalton (also called the unified atomic mass unit, symbol u), and e is the elementary charge, which is the unit of electric charge in the system of Hartree atomic units. For example, the ion C7H72+ has a mass of 91 Da. Its charge number is +2, and hence its charge is 2e. The ion will be observed at 45.5 Th in a mass spectrum. The thomson allows for negative values for negatively charged ions. For example, the benzoate anion would be observed at −121 Th, since the charge is −e. Use The thomson has been used by some mass spectrometrists, for example Alexander Makarov—the inventor of the Orbitrap—in a scientific poster, and a 2015 presentation. Other uses of the thomson include papers, and (notably) one book. The journal Rapid Communications in Mass Spectrometry (in which the original article appeared) states that "the thomson (Th) may be used for such purposes as a unit of mass-to-charge ratio although it is not currently approved by IUPAP or IUPAC." Even so, the term has been called "controversial" by RCM's former Editor-in-Chief (in a review of the Hoffman text cited above). The book, Mass Spectrometry Desk Reference, argues against the use of the thomson. However, the editor-in-chief of the Journal of the Mass Spectrometry Society of Japan has written an editorial in support of the thomson unit. The thomson is not an SI unit, nor has it been defined by IUPAC. Since 2013, the thomson is deprecated by IUPAC (Definitions of Terms Relating to Mass Spectrometry). Since 2014, Rapid Communications in Mass Spectrometry regards the thomson as a "term that should be avoided in mass spectrometry p
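A minimal sketch of the arithmetic above (the helper function is hypothetical, not a published API):

```python
def mz_in_thomson(mass_da: float, charge_number: int) -> float:
    """Mass-to-charge ratio in thomsons: mass in daltons divided by the
    signed charge number (charge in units of the elementary charge e)."""
    return mass_da / charge_number

print(mz_in_thomson(91.0, 2))    # C7H7 2+ cation  ->  45.5 Th
print(mz_in_thomson(121.0, -1))  # benzoate anion  -> -121.0 Th
```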
https://en.wikipedia.org/wiki/Geomagnetic%20latitude
Geomagnetic latitude, or magnetic latitude (MLAT), is a parameter analogous to geographic latitude, except that, instead of being defined relative to the geographic poles, it is defined by the axis of the geomagnetic dipole, which can be accurately extracted from the International Geomagnetic Reference Field (IGRF). See also Earth's magnetic field Geomagnetic equator Ionosphere L-shell Magnetosphere World Magnetic Model (WMM) References External links Space Weather: Maps of Geomagnetic Latitude (Northwest Research Associates) Tips on Viewing the Aurora (SWPC) Magnetic Field Calculator (NCEI) Ionospheric Electrodynamics Using Magnetic Apex Coordinates (Journal of Geomagnetism and Geoelectricity) Geomagnetism Geographic coordinate systems
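As an illustration of the dipole-axis definition above, a common centered-dipole approximation computes the magnetic colatitude by the spherical law of cosines (a textbook form, not quoted from this article; θ0, φ0 denote the geographic colatitude and longitude of the geomagnetic pole taken from a model such as the IGRF):

```latex
% Magnetic colatitude theta_m of a point at geographic colatitude theta and
% longitude phi, relative to a dipole axis through (theta_0, phi_0);
% the geomagnetic latitude is then 90 degrees minus theta_m.
\[
\cos\theta_m \;=\; \cos\theta\,\cos\theta_0
  \;+\; \sin\theta\,\sin\theta_0\,\cos(\phi - \phi_0),
\qquad
\lambda_m \;=\; 90^{\circ} - \theta_m.
\]
```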
https://en.wikipedia.org/wiki/Geocast
Geocast refers to the delivery of information to a subset of destinations in a wireless peer-to-peer network identified by their geographical locations. It is used by some mobile ad hoc network routing protocols, but not applicable to Internet routing. Geographic addressing A geographic destination address is expressed in three ways: point, circle (with center point and radius), and polygon (a list of points, e.g., P(1), P(2), ..., P(n–1), P(n)). A geographic router (Geo Router) calculates its service area (geographic area it serves) as the union of the geographic areas covered by the networks attached to it. This service area is approximated by a single closed polygon. Geo Routers exchange service area polygons to build routing tables. The routers are organized in a hierarchy. Applications Geographic addressing and routing has many potential applications in geographic messaging, geographic advertising, delivery of geographically restricted services, and presence discovery of a service or mobile network participant in a limited geographic area (see Navas, Imieliński, 'GeoCast - Geographic Addressing and Routing'.) See also Abiding Geocast / Stored Geocast References External links RFC 2009 GPS-Based Addressing and Routing A Survey of Geocast Routing Protocols Efficient Point to Multipoint Transfers Across Datacenters Ad hoc routing protocols
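A minimal sketch of the polygon test a geographic router needs when deciding whether a destination polygon covers a given location (planar ray-casting on raw latitude/longitude pairs; a real implementation would have to handle geodesics, the antimeridian, and boundary cases):

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside the closed polygon given as
    a list of (lat, lon) vertices? Planar approximation only."""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Count edges that straddle the horizontal ray from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A geocast router would forward only if its service area overlaps the
# destination polygon; here we just test single points.
square = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
print(point_in_polygon(5.0, 5.0, square))   # True
print(point_in_polygon(15.0, 5.0, square))  # False
```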
https://en.wikipedia.org/wiki/Unrestricted%20grammar
In automata theory, the class of unrestricted grammars (also called semi-Thue, type-0 or phrase structure grammars) is the most general class of grammars in the Chomsky hierarchy. No restrictions are made on the productions of an unrestricted grammar, other than each of their left-hand sides being non-empty. This grammar class can generate arbitrary recursively enumerable languages. Formal definition An unrestricted grammar is a formal grammar $G = (N, T, P, S)$, where $N$ is a finite set of nonterminal symbols, $T$ is a finite set of terminal symbols with $N$ and $T$ disjoint, $P$ is a finite set of production rules of the form $\alpha \to \beta$, where $\alpha$ and $\beta$ are strings of symbols in $N \cup T$ and $\alpha$ is not the empty string, and $S \in N$ is a specially designated start symbol. As the name implies, there are no real restrictions on the types of production rules that unrestricted grammars can have. Equivalence to Turing machines The unrestricted grammars characterize the recursively enumerable languages. This is the same as saying that for every unrestricted grammar $G$ there exists some Turing machine capable of recognizing $L(G)$ and vice versa. Given an unrestricted grammar, such a Turing machine is simple enough to construct, as a two-tape nondeterministic Turing machine. The first tape contains the input word $w$ to be tested, and the second tape is used by the machine to generate sentential forms from $G$. The Turing machine then does the following: Start at the left of the second tape and repeatedly choose to move right or select the current position on the tape. Nondeterministically choose a production $\alpha \to \beta$ from the productions in $P$. If $\alpha$ appears at some position on the second tape, replace $\alpha$ by $\beta$ at that point, possibly shifting the symbols on the tape left or right depending on the relative lengths of $\alpha$ and $\beta$ (e.g. if $\alpha$ is longer than $\beta$, shift the tape symbols left). Compare the resulting sentential form on tape 2 to the word on tape 1. If they match, then the Turing machine accepts the word. If they don't, the Turing machine w
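A minimal sketch of the rewriting process described above, as a bounded breadth-first search over sentential forms (membership for unrestricted grammars is undecidable in general, so the bounds make this a semi-decision heuristic only; the grammar shown is a hypothetical toy):

```python
from collections import deque

def derives(rules, start, target, max_len=12, max_steps=10000):
    """Breadth-first search over sentential forms. 'rules' is a list of
    (alpha, beta) string pairs with alpha non-empty. Returns True if
    'target' is reachable from 'start' within the given bounds."""
    seen = {start}
    queue = deque([start])
    steps = 0
    while queue and steps < max_steps:
        form = queue.popleft()
        steps += 1
        if form == target:
            return True
        for alpha, beta in rules:
            # Try rewriting every occurrence of alpha in the current form.
            i = form.find(alpha)
            while i != -1:
                new = form[:i] + beta + form[i + len(alpha):]
                if len(new) <= max_len and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(alpha, i + 1)
    return False

# Toy grammar for {a^n b^n}: S -> aSb | ab (context-free, hence also type-0).
rules = [("S", "aSb"), ("S", "ab")]
print(derives(rules, "S", "aaabbb"))  # True
print(derives(rules, "S", "aabbb"))   # False
```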
https://en.wikipedia.org/wiki/Hyper-encryption
Hyper-encryption is a form of encryption invented by Michael O. Rabin which uses a high-bandwidth source of public random bits, together with a secret key that is shared by only the sender and recipient(s) of the message. It uses the assumptions of Ueli Maurer's bounded-storage model as the basis of its secrecy. Although everyone can see the data, decryption by adversaries without the secret key is still not feasible, because of the space limitations of storing enough data to mount an attack against the system. Unlike almost all other cryptosystems except the one-time pad, hyper-encryption can be proved to be information-theoretically secure, provided the storage bound cannot be surpassed. Moreover, if the necessary public information cannot be stored at the time of transmission, the plaintext can be shown to be impossible to recover, regardless of the computational capacity available to an adversary in the future, even if they have access to the secret key at that future time. A highly energy-efficient implementation of a hyper-encryption chip was demonstrated by Krishna Palem et al. using the Probabilistic CMOS or PCMOS technology and was shown to be ~205 times more efficient in terms of Energy-Performance-Product. See also Perfect forward secrecy Randomness extractor References Further reading Y. Z. Ding and M. O. Rabin. Hyper-encryption and everlasting security. In 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS), volume 2285 of Lecture Notes in Computer Science, pp. 1–26. Springer-Verlag, 2002. Jason K. Juang, Practical Implementation and Analysis of Hyper-Encryption. Master's dissertation, MIT Department of Electrical Engineering and Computer Science, 2009-05-22. External links Video of a lecture by Professor Michael O. Rabin. Cryptography Information theory
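The following toy sketch illustrates the bounded-storage idea only; it is not Rabin's actual protocol. Sender and receiver use the shared key to sample the same positions of the public random stream and XOR the collected bits into the message; an adversary who did not store the stream at transmission time cannot reproduce the pad later:

```python
import hashlib
import os

def pad_from_stream(stream: bytes, key: bytes, n_bits: int) -> int:
    """Toy bounded-storage-style pad (illustration only): derive n_bits
    positions in the public stream from the shared key and concatenate
    the bits found at those positions."""
    pad = 0
    for i in range(n_bits):
        digest = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        pos = int.from_bytes(digest, "big") % (len(stream) * 8)
        bit = (stream[pos // 8] >> (pos % 8)) & 1
        pad = (pad << 1) | bit
    return pad

public_stream = os.urandom(1 << 20)  # stands in for the broadcast random source
key = b"shared-secret"
message = 0b10110011
ciphertext = message ^ pad_from_stream(public_stream, key, 8)
assert ciphertext ^ pad_from_stream(public_stream, key, 8) == message
```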
https://en.wikipedia.org/wiki/GRASP%20%28object-oriented%20design%29
General Responsibility Assignment Software Patterns (or Principles), abbreviated GRASP, is a set of "nine fundamental principles in object design and responsibility assignment" first published by Craig Larman in his 1997 book Applying UML and Patterns. The different patterns and principles used in GRASP are controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication. All these patterns solve some software problems common to many software development projects. These techniques have not been invented to create new ways of working, but to better document and standardize old, tried-and-tested programming principles in object-oriented design. Larman states that "the critical design tool for software development is a mind well educated in design principles. It is not UML or any other technology." Thus, the GRASP principles are really a mental toolset, a learning aid to help in the design of object-oriented software. Patterns In object-oriented design, a pattern is a named description of a problem and solution that can be applied in new contexts; ideally, a pattern advises us on how to apply its solution in varying circumstances and considers the forces and trade-offs. Many patterns, given a specific category of problem, guide the assignment of responsibilities to objects. Information expert Problem: What is a basic principle by which to assign responsibilities to objects? Solution: Assign responsibility to the class that has the information needed to fulfill it. Information expert (also expert or the expert principle) is a principle used to determine where to delegate responsibilities such as methods, computed fields, and so on. Using the principle of information expert, a general approach to assigning responsibilities is to look at a given responsibility, determine the information needed to fulfill it, and then determine where that information is stored. This will lead to placing t
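A small hypothetical example of the information expert principle (the class and method names are invented for illustration): the subtotal lives with the line item that knows quantity and price, and the total lives with the sale that holds the items:

```python
class LineItem:
    """Knows its own quantity and unit price, so it is the information
    expert for its own subtotal."""
    def __init__(self, quantity: int, unit_price: float):
        self.quantity = quantity
        self.unit_price = unit_price

    def subtotal(self) -> float:
        return self.quantity * self.unit_price


class Sale:
    """Holds the line items, so it is the information expert for the grand
    total; the total is computed here, not in an external utility class."""
    def __init__(self, items: list[LineItem]):
        self.items = items

    def total(self) -> float:
        return sum(item.subtotal() for item in self.items)


sale = Sale([LineItem(2, 3.50), LineItem(1, 10.00)])
print(sale.total())  # 17.0
```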
https://en.wikipedia.org/wiki/Truncated%20regression%20model
Truncated regression models are a class of models in which the sample has been truncated for certain ranges of the dependent variable. That means observations with values in the dependent variable below or above certain thresholds are systematically excluded from the sample. Therefore, whole observations are missing, so that neither the dependent nor the independent variable is known. This is in contrast to censored regression models, where only the value of the dependent variable is clustered at a lower threshold, an upper threshold, or both, while the values of the independent variables are available. Sample truncation is a pervasive issue in the quantitative social sciences when using observational data, and consequently the development of suitable estimation techniques has long been of interest in econometrics and related disciplines. In the 1970s, James Heckman noted the similarity between truncated and otherwise non-randomly selected samples, and developed the Heckman correction. Estimation of truncated regression models is usually done via the parametric maximum likelihood method. More recently, various semi-parametric and non-parametric generalisations were proposed in the literature, e.g., based on the local least squares approach or the local maximum likelihood approach, which are kernel-based methods. See also Censored regression model Sampling bias Truncated distribution References Further reading Actuarial science Single-equation methods (econometrics) Regression models Mathematical and quantitative methods (economics)
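One standard parametric specification (truncation from below at a known threshold c, with normal errors; a textbook form rather than a formula from this article) maximizes the conditional likelihood of the observed values:

```latex
% Density of y given x, conditional on being observed (y > c), for the model
% y = x'beta + u with u ~ N(0, sigma^2):
\[
f(y_i \mid x_i,\; y_i > c)
  \;=\; \frac{\tfrac{1}{\sigma}\,\phi\!\left(\tfrac{y_i - x_i'\beta}{\sigma}\right)}
             {1 - \Phi\!\left(\tfrac{c - x_i'\beta}{\sigma}\right)}.
\]
% The log-likelihood therefore subtracts the log of the normalizing
% probability in the denominator; ignoring it (i.e. running OLS on the
% truncated sample) is exactly what produces biased estimates.
```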
https://en.wikipedia.org/wiki/Weyl%27s%20inequality
In linear algebra, Weyl's inequality is a theorem about the changes to eigenvalues of an Hermitian matrix that is perturbed. It can be used to estimate the eigenvalues of a perturbed Hermitian matrix. Weyl's inequality about perturbation Let $M = N + R$, where $M$, $N$ and $R$ are n×n Hermitian matrices, with their respective eigenvalues ordered as follows: $\mu_1 \ge \cdots \ge \mu_n$ for $M$, $\nu_1 \ge \cdots \ge \nu_n$ for $N$, and $\rho_1 \ge \cdots \ge \rho_n$ for $R$. Then the following inequalities hold: $\nu_i + \rho_n \le \mu_i \le \nu_i + \rho_1$ for $i = 1, \ldots, n$, and, more generally, $\nu_j + \rho_{i-j+n} \le \mu_i \le \nu_k + \rho_{i-k+1}$ for $j \ge i \ge k$. In particular, if $R$ is positive definite then plugging $\rho_n > 0$ into the above inequalities leads to $\mu_i > \nu_i$ for all $i$. Note that these eigenvalues can be ordered, because they are real (as eigenvalues of Hermitian matrices). Weyl's inequality between eigenvalues and singular values Let $A$ be an n×n matrix with singular values $\sigma_1 \ge \cdots \ge \sigma_n$ and eigenvalues ordered so that $|\lambda_1| \ge \cdots \ge |\lambda_n|$. Then $|\lambda_1 \cdots \lambda_k| \le \sigma_1 \cdots \sigma_k$ for $k = 1, \ldots, n$, with equality for $k = n$. Applications Estimating perturbations of the spectrum Assume that $R$ is small in the sense that its spectral norm satisfies $\|R\| \le \epsilon$ for some small $\epsilon > 0$. Then it follows that all the eigenvalues of $R$ are bounded in absolute value by $\epsilon$. Applying Weyl's inequality, it follows that the spectra of the Hermitian matrices M and N are close in the sense that $|\mu_i - \nu_i| \le \epsilon$ for all $i$. Note, however, that this eigenvalue perturbation bound is generally false for non-Hermitian matrices (or more accurately, for non-normal matrices). For a counterexample, let $\epsilon > 0$ be arbitrarily small, and consider $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $M = N + R$ with $R = \begin{pmatrix} 0 & 0 \\ \epsilon^2 & 0 \end{pmatrix}$, whose eigenvalues $\epsilon$ and $-\epsilon$ do not satisfy $|\mu_i - \nu_i| \le \|R\| = \epsilon^2$. Weyl's inequality for singular values Let $A$ be an $m \times n$ matrix with $m \ge n$. Its singular values are the positive eigenvalues of the $(m+n) \times (m+n)$ Hermitian augmented matrix $\begin{pmatrix} 0 & A \\ A^* & 0 \end{pmatrix}$. Therefore, Weyl's eigenvalue perturbation inequality for Hermitian matrices extends naturally to perturbation of singular values. This result gives the bound for the perturbation in the singular values of a matrix $A$ due to an additive perturbation $\Delta$: $|\sigma_k(A+\Delta) - \sigma_k(A)| \le \sigma_1(\Delta)$, where we note that the largest singular value $\sigma_1(\Delta)$ coincides with the spectral norm $\|\Delta\|_2$. Notes References Matrix Theory, Joel N. Franklin, (Dover Publications, 1993) "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialglei
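A quick numerical check of the inequalities above (a numpy sketch; the small tolerance guards against floating-point error):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_hermitian(rng, n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

N = random_hermitian(rng, n)
R = random_hermitian(rng, n)
M = N + R

# eigvalsh returns ascending eigenvalues; flip to the descending convention.
mu  = np.linalg.eigvalsh(M)[::-1]
nu  = np.linalg.eigvalsh(N)[::-1]
rho = np.linalg.eigvalsh(R)[::-1]

# nu_i + rho_n <= mu_i <= nu_i + rho_1 for every i.
assert np.all(nu + rho[-1] <= mu + 1e-10)
assert np.all(mu <= nu + rho[0] + 1e-10)

# |mu_i - nu_i| <= ||R||_2 (the spectral-norm perturbation bound).
assert np.all(np.abs(mu - nu) <= np.linalg.norm(R, 2) + 1e-10)
print("Weyl inequalities verified")
```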
https://en.wikipedia.org/wiki/Pelvic%20pain
Pelvic pain is pain in the area of the pelvis. Acute pain is more common than chronic pain. If the pain lasts for more than six months, it is deemed to be chronic pelvic pain. It can affect both the male and female pelvis. Common causes include endometriosis in women, bowel adhesions, irritable bowel syndrome, and interstitial cystitis. The cause may also be a number of poorly understood conditions that may represent abnormal psychoneuromuscular function. The nervous system plays a role in the genesis and moderation of pain, and psychological factors are important both as a primary cause of pain and as a factor which affects the pain experience. As with other chronic syndromes, the biopsychosocial model offers a way of integrating physical causes of pain with psychological and social factors. Terminology Pelvic pain is a general term that may have many causes, listed below. The subcategorical term urologic chronic pelvic pain syndrome (UCPPS) is an umbrella term adopted for use in research into urologic pain syndromes associated with the male and female pelvis. UCPPS specifically refers to chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) in men and interstitial cystitis or painful bladder syndrome (IC/PBS) in women. Cause Genital pain and pelvic pain can arise from a variety of conditions, crimes, trauma, medical treatments, physical diseases, mental illness and infections. In some instances the pain is consensual and self-induced. Self-induced pain can be a cause for concern and may require a psychiatric evaluation. Female Many different conditions can cause female pelvic pain including: Related to pregnancy Pelvic girdle pain Ectopic pregnancy—a pregnancy implanted outside the uterus. Gynecologic (from more common to less common) Dysmenorrhea—pain during the menstrual period. Endometriosis—pain caused by uterine tissue that is outside the uterus. Endometriosis can be visually confirmed by laparoscopy in approxim
https://en.wikipedia.org/wiki/Umbilic%20torus
The umbilic torus or umbilic bracelet is a single-edged 3-dimensional shape. The lone edge goes three times around the ring before returning to the starting point. The shape also has a single external face. A cross section of the surface forms a deltoid. The umbilic torus occurs in the mathematical subject of singularity theory, in particular in the classification of umbilical points which are determined by real cubic forms $a x^3 + 3b x^2 y + 3c x y^2 + d y^3$. The equivalence classes of such cubics form a three-dimensional real projective space and the subset of parabolic forms defines a surface – the umbilic torus. Christopher Zeeman named this set the umbilic bracelet in 1976. The torus is defined by the following set of parametric equations: $x(s,t) = \sin s \left(7 + \cos\!\left(\tfrac{s}{3} - 2t\right) + 2\cos\!\left(\tfrac{s}{3} + t\right)\right)$, $y(s,t) = \cos s \left(7 + \cos\!\left(\tfrac{s}{3} - 2t\right) + 2\cos\!\left(\tfrac{s}{3} + t\right)\right)$, $z(s,t) = \sin\!\left(\tfrac{s}{3} - 2t\right) + 2\sin\!\left(\tfrac{s}{3} + t\right)$, for $-\pi \le s \le \pi$ and $-\pi \le t \le \pi$. In sculpture John Robinson created a sculpture Eternity based on the shape in 1989; this had a triangular cross-section rather than the deltoid of a true umbilic bracelet. This appeared on the cover of Geometric Differentiation by Ian R. Porteous. Helaman Ferguson has created a 27-inch (69 centimeters) bronze sculpture, Umbilic Torus, and it is his most widely known piece of art. In 2010, it was announced that Jim Simons had commissioned an Umbilic Torus sculpture to be constructed outside the Math and Physics buildings at Stony Brook University, in proximity to the Simons Center for Geometry and Physics. The torus is made out of cast bronze, and is mounted on a stainless steel column. The total weight of the sculpture is 65 tonnes. The torus has the same diameter as its granite base. Various mathematical formulas defining the torus are inscribed on the base. Installation was completed in September 2012. In literature In the short story What Dead Men Tell by Theodore Sturgeon, the main action takes place in a seemingly endless corridor with the cross section of an equilateral triangle. At the end the protagonist speculates that the corridor is actually a triangular shape twisted back on itself like a Möbius strip but
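A short numpy sketch that samples the parametrization above on a grid (the plotting call is commented out, since the choice of plotting library is an assumption):

```python
import numpy as np

# Sample the umbilic torus parametrization on an (s, t) grid.
s, t = np.meshgrid(np.linspace(-np.pi, np.pi, 200),
                   np.linspace(-np.pi, np.pi, 60))
r = 7 + np.cos(s / 3 - 2 * t) + 2 * np.cos(s / 3 + t)
x = np.sin(s) * r
y = np.cos(s) * r
z = np.sin(s / 3 - 2 * t) + 2 * np.sin(s / 3 + t)

# x, y, z can now be fed to any 3-D surface plotter, e.g.:
# from matplotlib import pyplot as plt
# ax = plt.figure().add_subplot(projection="3d")
# ax.plot_surface(x, y, z); plt.show()
```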
https://en.wikipedia.org/wiki/Baudline
The baudline time-frequency browser is a signal analysis tool designed for scientific visualization. It runs on several Unix-like operating systems under the X Window System. Baudline is useful for real-time spectral monitoring, collected signals analysis, generating test signals, making distortion measurements, and playing back audio files. Applications Acoustic cryptanalysis Audio codec lossy compression analysis Audio signal processing Bioacoustics research Data acquisition (DAQ) Gravitational Wave analysis Infrasound monitoring Musical acoustics radar Seismic data processing SETI Signal analysis Software Defined Radio Spectral analysis Very low frequency (VLF) reception WWV frequency measurement Features Spectrogram, Spectrum, Waveform, and Histogram displays Fourier, Correlation, and Raster transforms SNR, THD, SINAD, ENOB, SFDR distortion measurements Channel equalization Function generator Digital down converter Audio playing with real-time DSP effects like speed control, pitch scaling, frequency shifting, matrix surround panning, filtering, and digital gain boost Audio recording of multiple channels JACK Audio Connection Kit sound server support Import AIFF, AU, WAV, FLAC, MP3, Ogg Vorbis, AVI, MOV, and other file formats License The old baudline version comes with no warranty and is free to download. The binaries may be used for any purpose, though no form of redistribution is permitted. The new baudline version is available via a subscription model and site license. See also Linux audio software List of information graphics software List of numerical analysis software Digital signal processing References External links User discussion group at Google Groups SigBlips DSP Engineering Time–frequency analysis Linux media players Acoustics software Numerical software Unix software Audio software with JACK support Science software for Linux Science software for macOS Audio software for Linux
https://en.wikipedia.org/wiki/Genghis%20Khan%20%28video%20game%29
Genghis Khan, original full title Aoki Ōkami to Shiroki Mejika: Genghis Khan, is a 1987 turn-based strategy game developed by Koei, originally released for the NEC PC-9801, MSX and Sharp X68000 in 1988, for DOS and the NES in 1990, and for the Amiga in 1990. It is actually the second game in the series, following the 1985 Aoki Ōkami to Shiroki Mejika, also for the PC-88, PC-98, and MSX. Plot The game takes the player inside the virtual life of either Genghis Khan or one of his archrivals. The player must arrange marriages, father children, appoint family members to governmental positions, and fight in order to conquer the Old World. Armies must be drafted and soldiers must be trained if the player is to rule the lands from England to Japan. Gameplay The game has two different ways to play. The first is Mongol Conquest, a one-player mode which begins in the year 1175 A.D. Players assume control of Lord Temujin and must conquer the land by keeping their economy stable, having their army ready to fight, and attacking other lands. The second is World Conquest, where the goal is to conquer every opposing country. World Conquest, which begins in the year 1206 A.D., is started by choosing the number of players and the difficulty. It supports 1-4 players. Players must choose who they want to be: Genghis Khan (Mongols), Alexios I (Byzantine), Richard (England), or Yoritomo (Japan). Each player must then randomly determine the stats of their leader and successors: the player stops a cycling random number to set each stat, repeating until all stats are chosen for the character, though the results can be redone. After everyone is ready to go, the game begins. Play cycles through the countries of Eurasia; when the cycle passes through a country, that country has used its turn. When it comes to a player's country, they get to make three choices. These choices include training the troops, buying a certain product/quantity from a merchant, drafting soldiers, sending a treaty, or going to war. Each act takes one choice away until th
https://en.wikipedia.org/wiki/Selcall
Selcall (selective calling) is a type of squelch protocol used in radio communications systems, in which transmissions include a brief burst of sequential audio tones. Receivers that are set to respond to the transmitted tone sequence will open their squelch, while others will remain muted. Selcall is a radio signalling protocol mainly in use in Europe, Asia, Australia and New Zealand, and continues to be incorporated in radio equipment marketed in those areas. Details The transmission of a selcall code involves the generation and sequencing of a series of predefined, audible tones. Both the tone frequencies, and sometimes the tone periods, must be known in advance by both the transmitter and the receiver. Each predefined tone represents a single digit. A series of tones therefore represents a series of digits that represents a number. The number encoded in a selcall burst is used to address one or more receivers. If the receiver is programmed to recognise a certain number, then it will un-mute its speaker so that the transmission can be heard; an unrecognised number is ignored and therefore the receiver remains muted. Tone Sets A selcall tone set contains 16 tones that represent 16 digits. The digits correspond to the 16 hexadecimal digits, i.e. 0-9 and A-F. Digits A-F are typically reserved for control purposes. For example, digit "E" is typically used as the repeat digit. There are eight well-known selcall tone sets. Tone Periods The physical characteristics of the transmitted sequence of tones are tightly controlled. Each tone is generated for a predefined period, on the order of tens of milliseconds. Each subsequent tone is transmitted immediately after the preceding one for the same period, until the sequence is complete. Typical tone periods include: 20ms, 30ms (sometimes 33ms), 40ms, 50ms, 60ms, 70ms, 80ms, 90ms and 100ms. The longer the tone period, the more reliable the decoding of the tone sequence. Naturally, the longer the tone period, the great
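A minimal sketch of burst generation as described above. The 16-entry tone table here is a stand-in (real tone sets such as CCIR, ZVEI or EIA each define their own frequencies), and the 40 ms period is just one of the typical values listed:

```python
import math
import struct
import wave

# Hypothetical 16-tone table (Hz), indexed by hex digit 0-F; substitute the
# frequencies of whichever published tone set the radios actually use.
TONES = [1981, 1124, 1197, 1275, 1358, 1446, 1540, 1640,
         1747, 1860, 2400, 930, 2247, 991, 2110, 1055]

def selcall_burst(digits: str, tone_ms: int = 40, rate: int = 8000):
    """Render a sequence of hex digits as back-to-back sine tones."""
    samples = []
    for d in digits:
        f = TONES[int(d, 16)]
        n = int(rate * tone_ms / 1000)
        samples += [0.8 * math.sin(2 * math.pi * f * i / rate) for i in range(n)]
    return samples

def write_wav(path: str, samples, rate: int = 8000):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("selcall.wav", selcall_burst("12345"))  # five digits, 40 ms each
```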
https://en.wikipedia.org/wiki/Arrow%20%28symbol%29
An arrow is a graphical symbol, such as ← or →, or a pictogram, used to point or indicate direction. In its simplest form, an arrow is a triangle, chevron, or concave kite, usually affixed to a line segment or rectangle, and in more complex forms a representation of an actual arrow (e.g. ➵ U+27B5). The direction indicated by an arrow is the one along the length of the line or rectangle toward the single pointed end. History An older (medieval) convention is the manicule (pointing hand, 👈). Pedro Reinel in c. 1504 first used the fleur-de-lis to indicate north on a compass rose; the convention of marking the eastern direction with a cross is older (medieval). Use of the arrow symbol does not appear to pre-date the 18th century. An early arrow symbol is found in an illustration of Bernard Forest de Bélidor's treatise L'architecture hydraulique, printed in France in 1737. The arrow is here used to illustrate the direction of the flow of water and of the water wheel's rotation. At about the same time, arrow symbols were used to indicate the flow of rivers in maps. A trend toward abstraction, in which the arrow's fletching is removed, can be observed in the mid-to-late 19th century. The arrow can be seen in the work of Paul Klee. In a further abstraction of the symbol, John Richard Green's A Short History of the English People of 1874 contained maps by cartographer Emil Reich, which indicated army movements by curved lines, with solid triangular arrowheads placed intermittently along the lines. Use of arrow symbols in mathematical notation is younger still, developing in the first half of the 20th century. David Hilbert in 1922 introduced the arrow symbol representing logical implication. The double-headed arrow representing logical equivalence was introduced by Albrecht Becker in Die Aristotelische Theorie der Möglichkeitsschlüsse, Berlin, 1933. Usage Arrows are universally recognised for indicating directions. They are widely used on signage and for wayfin
https://en.wikipedia.org/wiki/Institut%20national%20de%20l%27audiovisuel
The Institut national de l'audiovisuel (abbrev. INA) is a repository of all French radio and television audiovisual archives. Additionally it provides free access to archives of countries such as Afghanistan and Cambodia. It has its headquarters in Bry-sur-Marne. Since 2006, it has allowed free online consultation on a website called ina.fr, with a search tool indexing 100,000 archives of historical programs, for a total of 20,000 hours. Recordings In the 1980s, it issued a large number of recordings on the label France's Concert Records. In the 1990s it launched its own label, INA mémoire, as the historical recording label of the Institut national de l'audiovisuel and of the archives of Radio France. History The Institut national de l'audiovisuel was founded in 1975 by a law of 1974 with the purpose of conserving archives of audiovisual materials, research relating to them, and professional training. In 1992, legal deposit was extended to television and radio, and the institute was to be the depository. This led to the establishment of a dedicated consultation centre in 1995, with the aim of conserving and making its holdings available to researchers and students. It was opened to the public in October 1998. In 2002, legal deposit was extended to cable and satellite television, and in 2005 to terrestrial digital television. From September 2006, the institute has been responsible for archiving 17 radio and 45 television services, amounting to 300,000 hours per year. Presidents See also Groupe de Recherches Musicales François Bayle References External links INA Official website - English-language link Institut national de l'audiovisuel Official INA Arditube channel on YouTube Official INA Chansons channel on YouTube French radio and television archives on line (French) Tales of a festival : remembering Cannes in sound and picture (English and French version) Europe of cultures 50 years of artistic creation and cultural life from the 27 countries of the European Union (English and French version) L'Institut national de l'audiovisuel: Free
https://en.wikipedia.org/wiki/Windows%20Live%20Mesh
Windows Live Mesh (formerly known as Windows Live FolderShare, Live Mesh, and Windows Live Sync) is a discontinued free-to-use Internet-based file synchronization application by Microsoft, designed to keep files and folders in sync between two or more computers running Windows (Vista and later) or Mac OS X (v. 10.5 Leopard and later, Intel processors only), or with the Web via SkyDrive. Windows Live Mesh also enabled remote desktop access via the Internet. Windows Live Mesh was part of the Windows Live Essentials 2011 suite of software. However, this application was replaced by the SkyDrive for Windows application in Windows Essentials 2012, and later by OneDrive in Windows 8/8.1/10. Microsoft announced on December 13, 2012, that Windows Live Mesh would be discontinued on February 13, 2013. Features Features of Windows Live Mesh include: Ability to sync up to 200 folders with 100,000 files each (each file up to 40 GB) for PC-to-PC synchronization Ability to sync up to 5 GB of files to "SkyDrive synced storage" in the cloud Remote Desktop access via Windows Live Mesh and the Windows Live Devices web service PC-to-PC synchronisation of application settings for applications such as: Windows Internet Explorer - synchronisation of favorites and recently typed URLs between computers Microsoft Office - synchronisation of dictionaries, Outlook email signatures, styles and templates between computers History FolderShare and Windows Live Sync Microsoft bought FolderShare from ByteTaxi Inc. on November 3, 2005, and subsequently made it a part of their Windows Live range of services. On March 10, 2008, Microsoft released its first user-visible update to the then Windows Live FolderShare. This comprised a rewrite of the FolderShare website and an updated Windows Live FolderShare client. Support for discussion groups and Remote Desktop Search was also removed in the update. The new client had some user interface and branding updates and contained several bug fi
https://en.wikipedia.org/wiki/Sulfur%20assimilation
Sulfur assimilation is the process by which living organisms incorporate sulfur into their biological molecules. In plants, sulfate is absorbed by the roots and then transported to the chloroplasts by the transpiration stream, where the sulfur is reduced to sulfide through a series of enzymatic reactions. The reduced sulfur is then incorporated into cysteine, an amino acid that is a precursor to many other sulfur-containing compounds. In animals, sulfur assimilation occurs primarily through the diet, as animals cannot produce sulfur-containing compounds directly. Sulfur is incorporated into amino acids such as cysteine and methionine, which are used to build proteins and other important molecules. In addition, increased sulfur emissions accompanying rapid economic development have caused environmental problems such as acid rain and hydrogen sulfide pollution. Sulfate uptake by plants Sulfate is taken up by the roots with high affinity. The maximal sulfate uptake rate is generally already reached at sulfate levels of 0.1 mM and lower. The uptake of sulfate by the roots and its transport to the shoot is strictly controlled, and it appears to be one of the primary regulatory sites of sulfur assimilation. Sulfate is actively taken up across the plasma membrane of the root cells, subsequently loaded into the xylem vessels and transported to the shoot by the transpiration stream. The uptake and transport of sulfate is energy-dependent (driven by a proton gradient generated by ATPases) through a proton/sulfate co-transport. In the shoot the sulfate is unloaded and transported to the chloroplasts, where it is reduced. The remaining sulfate in plant tissue is predominantly present in the vacuole, since the concentration of sulfate in the cytoplasm is kept rather constant. Distinct sulfate transporter proteins mediate the uptake, transport and subcellular distribution of sulfate. According to their cellular and subcellular gene expression, and possible functio
https://en.wikipedia.org/wiki/Random%20permutation%20statistics
The statistics of random permutations, such as the cycle structure of a random permutation, are of fundamental importance in the analysis of algorithms, especially of sorting algorithms, which operate on random permutations. Suppose, for example, that we are using quickselect (a cousin of quicksort) to select a random element of a random permutation. Quickselect will perform a partial sort on the array, as it partitions the array according to the pivot. Hence a permutation will be less disordered after quickselect has been performed. The amount of disorder that remains may be analysed with generating functions. These generating functions depend in a fundamental way on the generating functions of random permutation statistics. Hence it is of vital importance to compute these generating functions. The article on random permutations contains an introduction to random permutations. The fundamental relation Permutations are sets of labelled cycles. Using the labelled case of the Flajolet–Sedgewick fundamental theorem and writing $\mathcal{P}$ for the set of permutations and $\mathcal{Z}$ for the singleton set, we have $\mathcal{P} = \operatorname{SET}(\operatorname{CYC}(\mathcal{Z}))$. Translating into exponential generating functions (EGFs), we have $\exp\left(\log\frac{1}{1-z}\right) = \frac{1}{1-z}$, where we have used the fact that the EGF of the combinatorial species of permutations (there are n! permutations of n elements) is $\sum_{n \ge 0} n!\,\frac{z^n}{n!} = \frac{1}{1-z}$. This one equation allows one to derive a large number of permutation statistics. Firstly, by dropping terms from SET, i.e. exp, we may constrain the number of cycles that a permutation contains, e.g. by restricting the EGF to $\frac{1}{2}\left(\log\frac{1}{1-z}\right)^2$ we obtain permutations containing two cycles. Secondly, note that the EGF of labelled cycles, i.e. of $\operatorname{CYC}(\mathcal{Z})$, is $\log\frac{1}{1-z} = \sum_{k \ge 1} \frac{z^k}{k}$, because there are k!/k labelled cycles. This means that by dropping terms from this generating function, we may constrain the size of the cycles that occur in a permutation and obtain an EGF of the permutations containing only cycles of a given size. Instead of removing and selecting cycles, one can also put different weights on different size cycles. If
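The restriction trick above can be checked mechanically; the following sympy sketch extracts the coefficient of z^n in the two-cycle EGF and compares the resulting count against the unsigned Stirling number of the first kind:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

z = sp.symbols('z')
n = 8

# EGF of permutations with exactly two cycles: restrict exp to its
# quadratic term, giving (1/2) * (log 1/(1-z))^2.
egf = sp.log(1 / (1 - z))**2 / 2
coeff = sp.series(egf, z, 0, n + 1).removeO().coeff(z, n)

# Multiply by n! to convert the EGF coefficient into a count, then compare
# with the unsigned Stirling number of the first kind c(n, 2).
print(sp.factorial(n) * coeff)               # 13068
print(stirling(n, 2, kind=1, signed=False))  # 13068
```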
https://en.wikipedia.org/wiki/Midy%27s%20theorem
In mathematics, Midy's theorem, named after French mathematician E. Midy, is a statement about the decimal expansion of fractions a/p where p is a prime and a/p has a repeating decimal expansion with an even period. If the period of the decimal representation of a/p is 2n, so that $a/p = 0.\overline{a_1 a_2 \ldots a_n a_{n+1} \ldots a_{2n}}$, then the digits in the second half of the repeating decimal period are the 9s complement of the corresponding digits in its first half. In other words, $a_i + a_{i+n} = 9$ for $i = 1, \ldots, n$. For example, $1/7 = 0.\overline{142857}$ and $142 + 857 = 999$. Extended Midy's theorem If k is any divisor of h (where h is the number of digits of the period of the decimal expansion of a/p (where p is again a prime)), then Midy's theorem can be generalised as follows. The extended Midy's theorem states that if the repeating portion of the decimal expansion of a/p is divided into k-digit numbers, then their sum is a multiple of 10^k − 1. For example, $1/19 = 0.\overline{052631578947368421}$ has a period of 18. Dividing the repeating portion into 6-digit numbers and summing them gives $052631 + 578947 + 368421 = 999999$. Similarly, dividing the repeating portion into 3-digit numbers and summing them gives $052 + 631 + 578 + 947 + 368 + 421 = 2997 = 3 \times 999$. Midy's theorem in other bases Midy's theorem and its extension do not depend on special properties of the decimal expansion, but work equally well in any base b, provided we replace 10^k − 1 with b^k − 1 and carry out addition in base b. For example, in octal $1/5 = 0.\overline{1463}$ and $14_8 + 63_8 = 77_8$. In duodecimal (using inverted two and three for ten and eleven, respectively) $1/7 = 0.\overline{186↊35}$ and $186_{12} + ↊35_{12} = ↋↋↋_{12}$. Proof of Midy's theorem Short proofs of Midy's theorem can be given using results from group theory. However, it is also possible to prove Midy's theorem using elementary algebra and modular arithmetic: Let p be a prime and a/p be a fraction between 0 and 1. Suppose the expansion of a/p in base b has a period of ℓ, so $a/p = [0.\overline{a_1 a_2 \ldots a_\ell}]_b = N/(b^\ell - 1)$, where N is the integer whose expansion in base b is the string a1a2...aℓ. Note that b^ℓ − 1 is a multiple of p because (b^ℓ − 1)a/p is an integer. Also b^n − 1 is not a multiple of p for any value of n less than ℓ, because otherwise the repeating period of a/p in base b would be less than ℓ. Now
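Midy's statement is easy to verify computationally; a short sketch (long division to obtain the repetend, then the 9s-complement check):

```python
def repetend(a, p):
    """Digits of the repeating decimal period of a/p (p prime, p != 2, 5)."""
    digits, r = [], a % p
    for _ in range(p - 1):
        r *= 10
        digits.append(r // p)
        r %= p
        if r == a % p:  # remainder cycled back: period complete
            break
    return digits

for p in (7, 17, 19):
    d = repetend(1, p)
    n = len(d) // 2
    # Midy: corresponding digits of the two halves sum to 9.
    assert all(d[i] + d[i + n] == 9 for i in range(n)), p
    print(p, d, "halves are 9s complements")
```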
https://en.wikipedia.org/wiki/Mechanical%20biological%20treatment
A mechanical biological treatment (MBT) system is a type of waste processing facility that combines a sorting facility with a form of biological treatment such as composting or anaerobic digestion. MBT plants are designed to process mixed household waste as well as commercial and industrial wastes. Process The terms mechanical biological treatment or mechanical biological pre-treatment relate to a group of solid waste treatment systems. These systems enable the recovery of materials contained within the mixed waste and facilitate the stabilisation of the biodegradable component of the material. Twenty-two facilities in the UK have implemented MBT/BMT treatment processes. The sorting component of the plants typically resembles a materials recovery facility. This component is either configured to recover the individual elements of the waste or to produce a refuse-derived fuel that can be used for the generation of power. The components of the mixed waste stream that can be recovered include: Ferrous metal Non-ferrous metal Plastic Glass Terminology MBT is also sometimes termed biological mechanical treatment (BMT); however, this simply refers to the order of processing (i.e., the biological phase of the system precedes the mechanical sorting). MBT should not be confused with mechanical heat treatment (MHT). Mechanical sorting The "mechanical" element is usually an automated mechanical sorting stage. This either removes recyclable elements from a mixed waste stream (such as metals, plastics, glass, and paper) or processes them. It typically involves factory-style conveyors, industrial magnets, eddy current separators, trommels, shredders, and other tailor-made systems, or the sorting is done manually at hand-picking stations. The mechanical element has a number of similarities to a materials recovery facility (MRF). Some systems integrate a wet MRF to separate by density and flotation and to recover and wash the recyclable elements of the waste in a form that ca
https://en.wikipedia.org/wiki/Database%20machine
A database machine or back-end processor is a computer or special hardware that stores and retrieves data from a database. It is specially designed for database access and is tightly coupled to the main (front-end) computer(s) by a high-speed channel, whereas a database server is a general-purpose computer that holds a database and is loosely coupled via a local area network to its clients. Database machines can retrieve large amounts of data using hundreds to thousands of microprocessors running database software. The front-end processor asks the back end for data (typically by sending a query expressed in a query language) and further processes the result. The back-end processor, in turn, analyzes and stores the data from the front-end processor. Back-end processors offer higher performance, free up host main memory, improve database recovery and security, and decrease manufacturing cost. Britton-Lee (IDM), Tandem (Non-Stop System), and Teradata (DBC) all offered early commercial specialized database machines. A more recent example was Oracle Exadata. Criticism and suggested remedy According to Julie McCann, References Further reading Berra, P. Bruce. “Data base machines.” SIGIR Forum 12, 3 (Winter 1977), 4–23. https://doi.org/10.1145/1095317.1095318 Banerjee, Jayanta. “Data structuring and indexing for data base machines.” In Proceedings of the fifth workshop on Computer architecture for non-numeric processing (CAW '80). Association for Computing Machinery, New York, NY, USA, 11–16. https://doi.org/10.1145/800083.802687 Song, Siang Wun. “A Survey and Taxonomy of Database Machines.” IEEE Database Eng. Bull. 4 (1981): 3-13. Hoffer, Jeffrey A.; Alexander, Mary B. “The diffusion of database machines.” SIGMIS Database 23, 2 (Spring 1992): 13–19. https://doi.org/10.1145/141342.141352 See also Content Addressable File Store (CAFS) Classes of computers Databases
https://en.wikipedia.org/wiki/Pizzino
Pizzino (plural: pizzini) is an Italian-language word derived from the Sicilian-language equivalent pizzinu, meaning "small piece of paper". The word has been widely used to refer to small slips of paper that the Sicilian Mafia uses for high-level communications. Sicilian Mafia boss Bernardo Provenzano is among those best known for using pizzini, most notably in his instruction that Matteo Messina Denaro become his successor. The pizzini of other mafiosi have significantly aided police investigations. Provenzano case Provenzano used a version of the Caesar cipher, which Julius Caesar had used in wartime communications. The Caesar code involves shifting each letter of the alphabet forward three places; Provenzano's pizzini code did the same, then replaced letters with numbers indicating their position in the alphabet. For example, one reported note by Provenzano read "I met 512151522 191212154 and we agreed that we will see each other after the holidays...". This name was decoded as "Binnu Riina". Discovery Channel News quotes cryptography expert Bruce Schneier saying "Looks like kindergarten cryptography to me. It will keep your kid sister out, but it won't keep the police out. But what do you expect from someone who is computer illiterate?". References Pizzino Cryptography Organized crime terminology
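A sketch of the scheme as described (shift each letter three places forward, then write the shifted letter's 1-based alphabet position; segmenting the resulting digit stream back into numbers is left to the reader, since it is genuinely ambiguous):

```python
def pizzini_encode(name: str) -> str:
    """Caesar-shift each letter 3 forward, then emit its 1-based position."""
    out = []
    for ch in name.upper():
        if ch.isalpha():
            shifted = (ord(ch) - ord('A') + 3) % 26
            out.append(str(shifted + 1))
    return "".join(out)

def pizzini_decode(positions: list[int]) -> str:
    """Inverse: positions back to letters, then Caesar-shift 3 backward."""
    return "".join(chr((n - 4) % 26 + ord('A')) for n in positions)

print(pizzini_encode("BINNU"))              # "512171724"
print(pizzini_decode([5, 12, 17, 17, 24]))  # "BINNU"
```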
https://en.wikipedia.org/wiki/CCKM
Cisco Centralized Key Management (CCKM) is a form of Fast Roaming and a subset of the Cisco Compatible EXtensions (CCX) specification. When a wireless LAN is configured for fast reconnection, a Lightweight Extensible Authentication Protocol (LEAP) enabled client device can roam from one wireless access point to another without involving the main server. Using CCKM, an access point configured to provide Wireless Domain Services (WDS) takes the place of the RADIUS server, and authenticates the client without perceptible delay in voice or other time-sensitive applications. The WDS (which can be run as a service on a Cisco Access Point or on various router modules) caches the user credentials after the initial log-on. The user must authenticate with the Radius server the first time – then they can roam between access points using cached credentials. This saves time in the roaming process, especially valuable for VoIP devices. The current implementation of CCKM requires Cisco compatible hardware and either LEAP, EAP-FAST (CCXv3) or PEAP-GTC, PEAP-MSCHAP, EAP-TLS (CCXv4). External links http://www.cisco.com/web/partners/pr46/pr147/program_additional_information_new_release_features.html Wireless networking
https://en.wikipedia.org/wiki/Smithy%20code
The Smithy code is a series of letters embedded, as a private amusement, within the April 2006 approved judgement of Mr Justice Peter Smith on The Da Vinci Code copyright case. It was first broken, in the same month, by Dan Tench, a lawyer who writes on media issues for The Guardian, after he received a series of email clues about it from Justice Smith. How the code works The letters in question are part of the actual text of the judgement, but italicised in contrast to the rest of the text. The following sequence of unusually emphasised letters can be extracted from the judgement document: The italicised letters only occur up to paragraph 43 (which is page 13 of a 71-page document). Meanwhile, paragraph 52 concludes with this sentence: "The key to solving the conundrum posed by this judgment is in reading HBHG and DVC." (These abbreviations are used by Smith throughout the judgement in referring to the books at issue, The Holy Blood and the Holy Grail and The Da Vinci Code.) There are 70 sections in the judgement. The source words from the judgement for the letters (with intervening words removed): Letter frequencies Excluding leading letters "s m i t h y c o d e", the letter frequencies are as follows: a – 4 e – 3 g, m, p, q, s, t – 2 c, d, f, i, j, k, o, r, v, w, x, z – 1 Letter location Paragraph numbers for cipher letters: 1 Claimant(s) 2 3 4 5 f(o)r 6 7 8 9 11 13 14 16 18 (t)he 19 20 21 u(s)ed 23 w(a)s 25 26 27 29 30 31 o(f) 34 (k)ey 35 37 38 40 42 43 Hints From article "'Da Vinci' judgement code puzzles lawyers": Solution The cipher was a type of polyalphabetic cipher known as a Variant Beaufort, using a keyword based on the Fibonacci sequence, namely AAYCEHMU. This is the reverse of the Vigenère cipher, which here enables decryption rather than encryption. Assigning each letter its place in the alphabet, the keyword corresponds to 1, 1, 25, 3, 5, 8, 13, 21. It is possible that the reason
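A sketch of Variant Beaufort decryption with the keyword above, under the common A = 0 convention (the judgement's own scheme indexed letters from 1, per the key digits listed, so treat this as illustrative rather than a faithful re-implementation):

```python
def variant_beaufort_encrypt(plaintext: str, key: str) -> str:
    """Variant Beaufort encrypts as C = P - K (mod 26)."""
    out = []
    for i, ch in enumerate(c for c in plaintext.upper() if c.isalpha()):
        k = ord(key[i % len(key)].upper()) - ord('A')
        out.append(chr((ord(ch) - ord('A') - k) % 26 + ord('A')))
    return "".join(out)

def variant_beaufort_decrypt(ciphertext: str, key: str) -> str:
    """Decryption is P = C + K (mod 26), i.e. the Vigenere *encryption*
    step, matching the article's remark that the reverse of the Vigenere
    cipher is what enables decryption here."""
    out = []
    for i, ch in enumerate(c for c in ciphertext.upper() if c.isalpha()):
        k = ord(key[i % len(key)].upper()) - ord('A')
        out.append(chr((ord(ch) - ord('A') + k) % 26 + ord('A')))
    return "".join(out)

# Hypothetical round trip with the Fibonacci-based keyword:
c = variant_beaufort_encrypt("TESTMESSAGE", "AAYCEHMU")
print(variant_beaufort_decrypt(c, "AAYCEHMU"))  # TESTMESSAGE
```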
https://en.wikipedia.org/wiki/Binomial%20regression
In statistics, binomial regression is a regression analysis technique in which the response (often referred to as Y) has a binomial distribution: it is the number of successes in a series of independent Bernoulli trials, where each trial has probability of success p. In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables. Binomial regression is closely related to binary regression: a binary regression can be considered a binomial regression with n = 1, or a regression on ungrouped binary data, while a binomial regression can be considered a regression on grouped binary data (see comparison). Binomial regression models are essentially the same as binary choice models, one type of discrete choice model: the primary difference is in the theoretical motivation (see comparison). In machine learning, binomial regression is considered a special case of probabilistic classification, and thus a generalization of binary classification. Example application In one published example of an application of binomial regression, the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process. Specification of model The response variable Y is assumed to be binomially distributed conditional on the explanatory variables X. The number of trials n is known, and the probability of success for each trial p is specified as a function θ(X). This implies that the conditional expectation and conditional variance of the observed fraction of successes, Y/n, are $E(Y/n \mid X) = \theta(X)$ and $\operatorname{Var}(Y/n \mid X) = \theta(X)(1 - \theta(X))/n$. The goal of binomial regression is to estimate the fun
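As an illustration, grouped binomial data can be fit with a generalized linear model; the sketch below uses Python's statsmodels with its default logistic link for the binomial family, and the counts are invented for the example.

```python
# Binomial regression on grouped data: each row is (successes, failures)
# out of n = 20 trials at one value of an explanatory variable x.
import numpy as np
import statsmodels.api as sm

x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])              # explanatory variable
successes = np.array([2, 5, 9, 14, 18])              # successes out of 20
n = np.full_like(successes, 20)
endog = np.column_stack([successes, n - successes])  # (successes, failures)
exog = sm.add_constant(x)

fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.params)   # intercept and slope of theta(X) on the log-odds scale
```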
https://en.wikipedia.org/wiki/Co-occurrence%20matrix
A co-occurrence matrix or co-occurrence distribution (also referred to as gray-level co-occurrence matrices, GLCMs) is a matrix that is defined over an image to be the distribution of co-occurring pixel values (grayscale values, or colors) at a given offset. It is used as an approach to texture analysis with various applications, especially in medical image analysis. Method Given a grey-level image $I$, the co-occurrence matrix computes how often pairs of pixels with a specific value and offset occur in the image. The offset, $(\Delta x, \Delta y)$, is a position operator that can be applied to any pixel in the image (ignoring edge effects): for instance, it could indicate "one down, two right". An image with $p$ different pixel values will produce a $p \times p$ co-occurrence matrix, for the given offset. The $(i, j)$ value of the co-occurrence matrix gives the number of times in the image that the $i$-th and $j$-th pixel values occur in the relation given by the offset. For an image with $p$ different pixel values, the $p \times p$ co-occurrence matrix C is defined over an $n \times m$ image $I$, parameterized by an offset $(\Delta x, \Delta y)$, as: $C_{\Delta x, \Delta y}(i,j) = \sum_{x=1}^{n} \sum_{y=1}^{m} \begin{cases} 1, & \text{if } I(x,y) = i \text{ and } I(x + \Delta x, y + \Delta y) = j \\ 0, & \text{otherwise} \end{cases}$ where: $i$ and $j$ are the pixel values; $x$ and $y$ are the spatial positions in the image I; the offsets $(\Delta x, \Delta y)$ define the spatial relation for which this matrix is calculated; and $I(x,y)$ indicates the pixel value at pixel $(x,y)$. The 'value' of the image originally referred to the grayscale value of the specified pixel, but could be anything, from a binary on/off value to 32-bit color and beyond. (Note that 32-bit color will yield a $2^{32} \times 2^{32}$ co-occurrence matrix!) Co-occurrence matrices can also be parameterized in terms of a distance, $d$, and an angle, $\theta$, instead of an offset $(\Delta x, \Delta y)$. Any matrix or pair of matrices can be used to generate a co-occurrence matrix, though their most common application has been in measuring texture in images, so the typical definition, as above, assumes that the matrix is an image. It is also possible to define the matrix across two different images. Such a matrix can then be used for color mapping. Aliases Co-occurrence mat
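The definition above translates directly into a few lines of NumPy. This is a plain illustrative implementation (libraries such as scikit-image ship optimized versions); the offset is written here as (rows down, columns right).

```python
# Count pairs (I[y, x], I[y + dy, x + dx]) over the whole image,
# ignoring pairs whose second pixel falls outside the image.
import numpy as np

def cooccurrence(image, offset, levels):
    dy, dx = offset
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:   # ignore edge effects
                C[image[y, x], image[y2, x2]] += 1
    return C

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 2]])
print(cooccurrence(img, (1, 2), levels=3))          # "one down, two right"
```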
https://en.wikipedia.org/wiki/Wright%20%28ADL%29
In software architecture, Wright is an architecture description language developed at Carnegie Mellon University. Wright formalizes a software architecture in terms of concepts such as components, connectors, roles, and ports. The dynamic behavior of different ports of an individual component is described using the Communicating Sequential Processes (CSP) process algebra. The roles that different components interacting through a connector can take are also described using CSP. Due to the formal nature of the behavior descriptions, automatic checks of port/role compatibility, and overall system consistency can be performed. Wright was principally developed by Robert Allen and David Garlan. References External links Wright website at CMU Software architecture Formal specification languages Architecture description language
https://en.wikipedia.org/wiki/Truth-bearer
A truth-bearer is an entity that is said to be either true or false and nothing else. The thesis that some things are true while others are false has led to different theories about the nature of these entities. Since there is divergence of opinion on the matter, the term truth-bearer is used to be neutral among the various theories. Truth-bearer candidates include propositions, sentences, sentence-tokens, statements, beliefs, thoughts, intuitions, utterances, and judgements, but different authors exclude one or more of these, deny their existence, argue that they are true only in a derivative sense, assert or assume that the terms are synonymous, or avoid addressing (or decline to clarify) the distinctions between them. Introduction Some distinctions and terminology used in this article, based on Wolfram 1989 (Chapter 2, Section 1), follow. It should be understood that the terminology described is not always used in the ways set out, and it is introduced solely for the purposes of discussion in this article. Use is made of the type–token and use–mention distinctions. Reflection on occurrences of numerals might be helpful. In grammar a sentence can be a declaration, an explanation, a question, or a command. In logic a declarative sentence is considered to be a sentence that can be used to communicate truth. Some sentences which are grammatically declarative are not logically so. A character is a typographic character (printed or written) etc. A word-token is a pattern of characters. Word-tokens with identical patterns of characters are of the same word-type. A meaningful-word-token is a meaningful pattern of characters. Two word-tokens which mean the same are of the same word-meaning. A sentence-token is a pattern of word-tokens. A meaningful-sentence-token is a meaningful pattern of meaningful-word-tokens. Two sentence-tokens are of the same sentence-type if they are identical patterns of word-tokens. A declarative-sentence-token is a sentence-token wh
https://en.wikipedia.org/wiki/Patlak%20plot
A Patlak plot (sometimes called Gjedde–Patlak plot, Patlak–Rutland plot, or Patlak analysis) is a graphical analysis technique based on the compartment model that uses linear regression to identify and analyze pharmacokinetics of tracers involving irreversible uptake, such as in the case of deoxyglucose. It is used for the evaluation of nuclear medicine imaging data after the injection of a radiopaque or radioactive tracer. The method is model-independent because it does not depend on any specific compartmental model configuration for the tracer, and the minimal assumption is that the behavior of the tracer can be approximated by two compartments – a "central" (or reversible) compartment that is in rapid equilibrium with plasma, and a "peripheral" (or irreversible) compartment, where tracer enters without ever leaving during the time of the measurements. The amount of tracer in the region of interest is accumulating according to the equation: $R(t) = K \int_0^t C_p(\tau)\,d\tau + V_0\,C_p(t)$, where $t$ represents time after tracer injection, $R(t)$ is the amount of tracer in the region of interest, $C_p(t)$ is the concentration of tracer in plasma or blood, $K$ is the clearance determining the rate of entry into the peripheral (irreversible) compartment, and $V_0$ is the distribution volume of the tracer in the central compartment. The first term of the right-hand side represents tracer in the peripheral compartment, and the second term tracer in the central compartment. By dividing both sides by $C_p(t)$, one obtains: $\frac{R(t)}{C_p(t)} = K\,\frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} + V_0$. The unknown constants $K$ and $V_0$ can be obtained by linear regression from a graph of $R(t)/C_p(t)$ against $\int_0^t C_p(\tau)\,d\tau / C_p(t)$. See also Logan plot Positron emission tomography Multi-compartment model Binding potential Deconvolution Albert Gjedde References Further literature External links PMOD, Patlak Plot, PMOD Kinetic Modeling Tool (PKIN). Gjedde–Patlak plot, Turku PET Centre. Mathematical modeling Systems theory Plots (graphics) Pharmacokinetics
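A sketch of the graphical analysis in Python: regressing R(t)/Cp(t) against the normalized integral recovers K as the slope and V0 as the intercept. The plasma input curve below is synthetic and chosen only for illustration, so the fit recovers the true constants almost exactly.

```python
# Synthetic Patlak plot: generate tissue data from the model, then fit.
import numpy as np

t = np.linspace(0.1, 60, 120)                # minutes after injection
Cp = 5.0 * np.exp(-0.1 * t) + 1.0            # assumed plasma concentration
K_true, V0_true = 0.05, 0.3
int_Cp = np.cumsum(Cp) * (t[1] - t[0])       # running integral of Cp
R = K_true * int_Cp + V0_true * Cp           # tissue activity from the model

slope, intercept = np.polyfit(int_Cp / Cp, R / Cp, 1)
print(slope, intercept)                      # ~0.05 and ~0.3
```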
https://en.wikipedia.org/wiki/John%20Guttag
John Vogel Guttag (born March 6, 1949) is an American computer scientist, professor, and former head of the department of electrical engineering and computer science at MIT. Education and career John Guttag was raised in Larchmont, New York, the son of Irwin Guttag (1916–2005) and Marjorie Vogel Guttag. John Vogel Guttag received a bachelor's degree in English from Brown University in 1971, and a master's degree in applied mathematics from Brown in 1972. In 1975, he received a doctorate in Computer Science from the University of Toronto. He was a member of the faculty at the University of Southern California from 1975 to 1978, and joined the Massachusetts Institute of Technology faculty in 1979. From 1993 to 1998, he served as associate department head for computer science of MIT's electrical engineering and computer science Department. From January 1999 through August 2004, he served as head of that department. EECS, with approximately 2000 students and 125 faculty members, is the largest department at MIT. He helped student Vanu Bose start a company with software-defined radio technology developed at MIT. Guttag also co-heads the MIT Computer Science and Artificial Intelligence Laboratory's Networks and Mobile Systems Group. This group studies issues related to computer networks, applications of networked and mobile systems, and advanced software-based medical instrumentation and decision systems. He has also done research, published, and lectured in the areas of software engineering, mechanical theorem proving, hardware verification, compilation, software radios, and medical computing. Guttag serves on the board of directors of Empirix and Avid Technology, and on the board of trustees of the Massachusetts General Hospital Institute of Health Professions. He is a member of the American Academy of Arts and Sciences. In 2006 he was inducted as a fellow of the Association for Computing Machinery. He is one of the founders of Health[at]Scale Technologies, a mach
https://en.wikipedia.org/wiki/Star-free%20language
A regular language is said to be star-free if it can be described by a regular expression constructed from the letters of the alphabet, the empty set symbol, all boolean operators – including complementation – and concatenation but no Kleene star. The condition is equivalent to having generalized star height zero. For instance, the language $\Sigma^*$ of all finite words over an alphabet $\Sigma$ can be shown to be star-free by taking the complement of the empty set, $\Sigma^* = \emptyset^c$. Then, the language of words over the alphabet $\{a, b\}$ that do not have consecutive a's can be defined as $(\emptyset^c aa \emptyset^c)^c$, first constructing the language of words consisting of $aa$ with an arbitrary prefix and suffix, and then taking its complement, which must be all words which do not contain the substring $aa$. An example of a regular language which is not star-free is $(aa)^*$, i.e. the language of strings consisting of an even number of "a". For $(ab)^*$, the language can be defined as $(b\emptyset^c \cup \emptyset^c a \cup \emptyset^c aa \emptyset^c \cup \emptyset^c bb \emptyset^c)^c$, taking the set of all words and removing from it words starting with $b$, ending in $a$ or containing $aa$ or $bb$. However, when the alphabet contains letters other than $a$ and $b$, this definition does not create $(ab)^*$. Marcel-Paul Schützenberger characterized star-free languages as those with aperiodic syntactic monoids. They can also be characterized logically as languages definable in FO[<], the first-order logic over the natural numbers with the less-than relation, as the counter-free languages and as languages definable in linear temporal logic. All star-free languages are in uniform AC0. See also Star height Generalized star height problem Star height problem References Logic in computer science Formal languages Automata (computation)
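Over words of bounded length, complementation becomes a finite set operation, so the star-free definition of the no-consecutive-a's language can be checked directly; the Python sketch below is an illustration of the definition, not a decision procedure for star-freeness.

```python
# Verify, for all words of length <= 6 over {a, b}, that the complement
# of  Sigma* aa Sigma*  (relative to the bounded universe) is exactly
# the set of words with no "aa" substring.
from itertools import product

n = 6
universe = {"".join(w) for k in range(n + 1) for w in product("ab", repeat=k)}
contains_aa = {u + "aa" + v for u in universe for v in universe
               if len(u) + len(v) + 2 <= n}         # Sigma* aa Sigma*
print(universe - contains_aa == {w for w in universe if "aa" not in w})  # True
```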
https://en.wikipedia.org/wiki/Terraforming%20of%20Venus
The terraforming of Venus or the terraformation of Venus is the hypothetical process of engineering the global environment of the planet Venus in order to make it suitable for human habitation. Adjustments to the existing environment of Venus to support human life would require at least three major changes to the planet's atmosphere: reducing Venus's surface temperature of about 737 K (464 °C; 867 °F); eliminating most of the planet's dense carbon dioxide and sulfur dioxide atmosphere via removal or conversion to some other form; and adding breathable oxygen to the atmosphere. These three changes are closely interrelated because Venus's extreme temperature is due to the high pressure of its dense atmosphere and the greenhouse effect. History of the idea Poul Anderson, a successful science fiction writer, proposed the idea in his 1954 novelette "The Big Rain", a story belonging to his Psychotechnic League future history. The first known suggestion to terraform Venus in a scholarly context was by the astronomer Carl Sagan in 1961. Prior to the early 1960s, many astronomers believed the atmosphere of Venus to have an Earth-like temperature. When Venus was understood to have a thick carbon dioxide atmosphere with a consequent very large greenhouse effect, some scientists began to contemplate the idea of altering the atmosphere to make the surface more Earth-like. This hypothetical prospect, known as terraforming, was first proposed by Carl Sagan in 1961, as a final section of his classic article in the journal Science discussing the atmosphere and greenhouse effect of Venus. Sagan proposed injecting photosynthetic bacteria into the Venus atmosphere, which would convert the carbon dioxide into reduced carbon in organic form, thus removing carbon dioxide from the atmosphere. The knowledge of Venus's atmosphere was still inexact in 1961, when Sagan made his original proposal. Thirty-three years after his original proposal, in his 1994 book Pale Blue Dot, Sagan conced
https://en.wikipedia.org/wiki/Squared%20triangular%20number
In number theory, the sum of the first $n$ cubes is the square of the $n$th triangular number. That is, $1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2$. The same equation may be written more compactly using the mathematical notation for summation: $\sum_{k=1}^{n} k^3 = \left(\sum_{k=1}^{n} k\right)^2 = \left(\frac{n(n+1)}{2}\right)^2$. This identity is sometimes called Nicomachus's theorem, after Nicomachus of Gerasa (c. 60 – c. 120 CE). History Nicomachus, at the end of Chapter 20 of his Introduction to Arithmetic, pointed out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first $n$ cubes equals the sum of the first $n(n+1)/2$ odd numbers, that is, the odd numbers from 1 to $n(n+1) - 1$. The average of these numbers is obviously $n(n+1)/2$, and there are $n(n+1)/2$ of them, so their sum is $\left(\frac{n(n+1)}{2}\right)^2$. Many early mathematicians have studied and provided proofs of Nicomachus's theorem. Stroeker (1995) claims that "every student of number theory surely must have marveled at this miraculous fact". Pengelley (2002) finds references to the identity not only in the works of Nicomachus in what is now Jordan in the first century CE, but also in those of Aryabhata in India in the fifth century, and in those of Al-Karaji circa 1000 in Persia. Bressoud (2004) mentions several additional early mathematical works on this formula, by Al-Qabisi (tenth century Arabia), Gersonides (circa 1300 France), and Nilakantha Somayaji (circa 1500 India); he reproduces Nilakantha's visual proof. Numeric values; geometric and probabilistic interpretation The sequence of squared triangular numbers is 0, 1, 9, 36, 100, 225, 441, 784, 1296, 2025, ... These numbers can be viewed as figurate numbers, a four-dimensional hyperpyramidal generalization of the triangular numbers and square pyramidal numbers. As Stein (1971) observes, these numbers also count the number of rectangles with horizontal and vertical sides formed in an $n \times n$ grid. For instance, the points of a $4 \times 4$ grid (or a square made up of three smaller squares on a side) can form 36 different rectangles. The number of squares in a square gri
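The identity is easy to check numerically for small n, as in this short Python loop.

```python
# Numeric check of Nicomachus's theorem for n = 1..10:
# 1^3 + ... + n^3 equals the square of the n-th triangular number.
for n in range(1, 11):
    cubes = sum(k**3 for k in range(1, n + 1))
    triangular = n * (n + 1) // 2
    assert cubes == triangular**2
    print(n, cubes)
```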
https://en.wikipedia.org/wiki/FCO-IM
Fully Communication Oriented Information Modeling (FCO-IM) is a method for building conceptual information models. Such models can then be automatically transformed into entity-relationship models (ERM), Unified Modeling Language (UML), relational or dimensional models with the FCO-IM Bridge toolset, and it is possible to generate complete end-user applications from them with the IMAGine toolset. Both toolsets were developed by the Research and Competence Group Data Architectures & Metadata Management of the HAN University of Applied Sciences in Arnhem, the Netherlands. Overview FCO-IM is taught at universities of professional education in the Netherlands and at institutions worldwide. The method has proven its value and is still actively used in multiple large-scale corporate environments, in branches varying from retail and logistics (KLM, ProRail) to banking, insurance and medical companies (Erasmus MC). FCO-IM includes an operational procedure specifying how to construct an information model, as described in the book Fact Oriented Modeling. The distinguishing feature of FCO-IM is that it models the communication about a certain Universe of Discourse (UoD) completely and exclusively, i.e.: it does not model the UoD itself, but rather the facts users exchange when they communicate about the UoD. FCO-IM is therefore a member of the family of information modeling techniques known as fact-oriented modeling (FOM), as are Object-Role Modeling (ORM), predicator set model (PSM) and natural language information analysis method (NIAM). Fact-oriented modeling is sometimes also indicated as fact-based modeling. There are two main reasons why FCO-IM claims to be "fully communication-oriented". FCO-IM is the only FOM technique that completely incorporates the actual verbalizations of facts by domain experts (fact expressions) in an information model. An FCO-IM model therefore contains the soft semantics – i.e.: the meaning of the facts – as well as
https://en.wikipedia.org/wiki/Matbro
Matbro was a brand of lifting equipment, popular with farmers. Matbro produced a wide range of all-terrain forklifts and telescopic handlers in their distinctive yellow livery, using engines derived from Ford and Perkins. Matbro began operating at a loss in the late 1990s and finally went under in 2003 after accounting issues in its parent company Powerscreen. The old designs were then sold to the tractor company John Deere, which sub-licensed them to heavy lifting company Terex, who continued to evolve the designs, with new ideas such as side-mounted engines instead of rear ones and hydrostatic drive. Origin The company, named after its founders the Mathew brothers, produced forklift trucks in Horley in Surrey, England. Between 1964 and 1971 a range of forklifts carrying loads of 8,000 lb to 17,765 lb, named the Yard Model and the Compact Model, were produced. These used axles made by GKN Centrax at Newton Abbot in Devon, under licence from Rockwell-Standard. The frequent failure of axles supplied by Centrax for loads heavier than recommended by Rockwell led to a legal case which eventually went before Lord Denning in the Court of Appeal of England and Wales. This model with pivot steering and equal sized wheels was the basis of the first radial arm machines, the RAM40 with 40 cwt (2 imperial tons) lift capacity, produced c.1980. This was soon improved with a telescopic boom, producing the Teleram 40. These compact machines, with the operator sitting above the automatic gearbox and just in front of the engine, proved popular with farmers for their ability to go almost anywhere and do a wide range of jobs, while being able to manoeuvre in confined spaces. References External links Agricultural machinery manufacturers of the United Kingdom John Deere
https://en.wikipedia.org/wiki/Software%20management%20review
A software management review is a management study into a project's status and allocation of resources. It is different from both a software engineering peer review, which evaluates the technical quality of software products, and a software audit, which is an externally conducted audit into a project's compliance to specifications, contractual agreements, and other criteria. Process A management review can be an informal process, but generally requires a formal structure and rules of conduct, such as those advocated in the IEEE 1028 standard, whose steps are: evaluation of entry criteria; management preparation; planning the structure of the review; overview of review procedures; individual preparation; group examination; rework/follow-up; and exit evaluation. Definition In software engineering, a management review is defined by the IEEE as: "A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process performed by or on behalf of management ... [and conducted] to monitor progress, determine the status of plans and schedules, confirm requirements and their system allocation, or evaluate the effectiveness of management approaches used to achieve fitness for purpose. Management reviews support decisions about corrective actions, changes in the allocation of resources, or changes to the scope of the project. Management reviews are carried out by, or on behalf of, the management personnel having direct responsibility for the system. Management reviews identify consistency with and deviations from plans, or adequacies and inadequacies of management procedures. This examination may require more than one meeting. The examination need not address all aspects of the product." References Software review
https://en.wikipedia.org/wiki/Software%20peer%20review
In software development, peer review is a type of software review in which a work product (document, code, or other) is examined by the author's colleagues in order to evaluate the work product's technical content and quality. Purpose The purpose of a peer review is to provide "a disciplined engineering practice for detecting and correcting defects in software artifacts, and preventing their leakage into field operations" according to the Capability Maturity Model. When performed as part of each software development process activity, peer reviews identify problems that can be fixed early in the lifecycle. That is to say, a requirements problem identified during the requirements analysis activity is cheaper and easier to fix than one found during the software architecture or software testing activities. The National Software Quality Experiment, evaluating the effectiveness of peer reviews, finds "a favorable return on investment for software inspections; savings exceeds costs by 4 to 1". To state it another way, it is four times more costly, on average, to identify and fix a software problem later. Distinction from other types of software review Peer reviews are distinct from management reviews, which are conducted by management representatives rather than by colleagues, and for management and control purposes rather than for technical evaluation. They are also distinct from software audit reviews, which are conducted by personnel external to the project to evaluate compliance with specifications, standards, contractual agreements, or other criteria. Review processes Peer review processes exist across a spectrum of formality, with relatively unstructured activities such as "buddy checking" towards one end of the spectrum, and more formal approaches such as walkthroughs, technical peer reviews, and software inspections at the other. The IEEE defines formal structures, roles, and processes for each of the last three. Management representatives ar
https://en.wikipedia.org/wiki/Null%20move
In game theory, a null move or pass is a decision by a player to not make a move when it is that player's turn to move. Even though null moves are against the rules of many games, they are often useful to consider when analyzing these games. Examples of this include the analysis of zugzwang (a situation in chess or other games in which a null move, if it were allowed, would be better than any other move), and the null-move heuristic in game tree analysis (a method of pruning game trees involving making a null move and then searching to a lower depth). The reason a reduced-depth null move is effective in game tree alpha-beta search reduction is that tactical threats tend to show up very quickly, in just one or two moves. If the opponent has no tactical threats revealed by null move search, the position may be good enough to exceed the best result obtainable in another branch of the tree (i.e. "beta"), so that no further search need be done from the current node, and the result from the null move can be returned as the search value. Even if the null move search value doesn't exceed beta, the returned value may set a higher floor on the valuation of the position than the present alpha, so more cutoffs will occur at descendant sibling nodes from the position. The underlying assumption is that at least some legal move available to the player on move at the node is better than no move at all. In the case of the player on move being in zugzwang, that assumption is false, and the null move result is invalid (in that case, it actually sets a ceiling on the value of the position). Therefore it is necessary to have logic to exclude null moves at nodes in the tree where zugzwang is possible. In chess, zugzwang positions can occur in king and pawn endgames, and sometimes in end games that include other pieces as well. References Game theory
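A sketch of how the heuristic sits inside a negamax alpha-beta search. The helpers evaluate(), legal_moves(), make()/unmake(), make_null_move()/unmake_null_move() and in_zugzwang_risk() are assumed to be supplied by the host engine; R = 2 is one conventional choice of depth reduction.

```python
R = 2  # null-move depth reduction (2 or 3 is typical)

def search(position, depth, alpha, beta):
    if depth == 0:
        return evaluate(position)

    # Null move: hand the turn to the opponent. If even a reduced-depth
    # search then fails high, the full search almost surely would too,
    # so we can cut off without trying any real move. Skipped when
    # zugzwang is possible, since the heuristic's assumption fails there.
    if depth > R and not in_zugzwang_risk(position):
        position.make_null_move()
        score = -search(position, depth - 1 - R, -beta, -beta + 1)
        position.unmake_null_move()
        if score >= beta:
            return score

    for move in legal_moves(position):
        position.make(move)
        score = -search(position, depth - 1, -beta, -alpha)
        position.unmake(move)
        if score >= beta:
            return score            # fail-high cutoff
        alpha = max(alpha, score)
    return alpha
```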
https://en.wikipedia.org/wiki/Colony%20%28video%20game%29
Colony is an action-adventure game written by Ste Cork and released in 1987 for the Amstrad CPC, Atari 8-bit family, Commodore 64, MSX, and ZX Spectrum by Mastertronic on their Bulldog label. Plot Overpopulation has caused humanity to grow food in colonies on other planets. Unfortunately, the mushroom-growing planet that the player is responsible for is also inhabited by hostile native aliens which resemble giant insects. The player must use the droid under their control to maintain and harvest the mushrooms as well as look after and protect the colony itself. Gameplay Gameplay takes place in a flip-screen environment consisting of the colony itself (which is made-up of storehouses, some specialist buildings, mushroom fields and areas for solar panels) and the surrounding desert of the alien planet which is filled with various giant insects. There are numerous things that the player must look-after. Mushrooms The purpose of the colony is to grow mushrooms for shipment back to Earth. These mushrooms will only grow in the green (i.e. lush) areas. They begin as seeds and quickly grow, eventually reaching a stage of maturity at which point they can be collected in order to be deposited for money and later for pick-up by a spacecraft. Unfortunately, the mushrooms are one of many things that the insects like to eat. The player can pick mature mushrooms which have been partially eaten by aliens but when deposited these will award no payment as they are unfit for human consumption. Security fencing The colony is surrounded by a security fence which keeps the giant insects out of the colony and from causing mischief therein. Unfortunately, the fence is constantly under attack by the insects who can chew their way through them. The player's droid must maintain the fence by destroying the marauding insects and replacing damaged or destroyed pieces of fence. Damaged fence sections can be deposited at a fence storehouse for repair. There are three different types of sec
https://en.wikipedia.org/wiki/Perron%27s%20formula
In mathematics, and more particularly in analytic number theory, Perron's formula is a formula due to Oskar Perron to calculate the sum of an arithmetic function, by means of an inverse Mellin transform. Statement Let $a(n)$ be an arithmetic function, and let $g(s) = \sum_{n=1}^{\infty} \frac{a(n)}{n^s}$ be the corresponding Dirichlet series. Presume the Dirichlet series to be uniformly convergent for $\Re(s) > \sigma$. Then Perron's formula is $A(x) = {\sum_{n \le x}}' a(n) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} g(z) \frac{x^z}{z}\,dz$. Here, the prime on the summation indicates that the last term of the sum must be multiplied by 1/2 when x is an integer. The integral is not a convergent Lebesgue integral; it is understood as the Cauchy principal value. The formula requires that c > 0, c > σ, and x > 0. Proof An easy sketch of the proof comes from taking Abel's sum formula $g(s) = s \int_1^{\infty} A(x) x^{-(s+1)}\,dx$. This is nothing but a Laplace transform under the variable change $x = e^t$. Inverting it one gets Perron's formula. Examples Because of its general relationship to Dirichlet series, the formula is commonly applied to many number-theoretic sums. Thus, for example, one has the famous integral representation for the Riemann zeta function: $\zeta(s) = s \int_1^{\infty} \frac{\lfloor x \rfloor}{x^{s+1}}\,dx$ and a similar formula for Dirichlet L-functions: $L(s, \chi) = s \int_1^{\infty} \frac{A(x)}{x^{s+1}}\,dx$ where $A(x) = \sum_{n \le x} \chi(n)$ and $\chi(n)$ is a Dirichlet character. Other examples appear in the articles on the Mertens function and the von Mangoldt function. Generalizations Perron's formula is just a special case of the Mellin discrete convolution $\sum_{n=1}^{\infty} a(n) f(n/x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s) g(s) x^s\,ds$ where $g(s) = \sum_{n=1}^{\infty} \frac{a(n)}{n^s}$ and $F(s) = \int_0^{\infty} f(x) x^{s-1}\,dx$ is the Mellin transform of $f$. The Perron formula is just the special case of the test function $f(1/x) = \theta(x - 1)$, where $\theta$ is the Heaviside step function. References Page 243 of Theorems in analytic number theory Calculus Integral transforms Summability methods
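The formula can be sanity-checked numerically. For a(n) = 1 the Dirichlet series is the Riemann zeta function and the sum counts the integers up to x, so a truncated vertical-line integral should approach ⌊x⌋. The mpmath sketch below takes x non-integer to avoid the halved boundary term; the truncation at ±T leaves a small oscillatory error, so expect agreement only to within a few tenths at this T.

```python
# Truncated Perron integral for a(n) = 1: (1/2*pi) * integral over t in
# [-T, T] of Re( zeta(c+it) * x^(c+it) / (c+it) ) should be near floor(x).
import mpmath as mp

x, c, T = mp.mpf("10.5"), 2, 100

def integrand(t):
    z = mp.mpc(c, t)
    return (mp.zeta(z) * mp.power(x, z) / z).real

# integrate in unit-length pieces so the oscillations are resolved
approx = mp.quad(integrand, mp.linspace(-T, T, 2 * T + 1)) / (2 * mp.pi)
print(approx)    # close to floor(10.5) = 10
```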
https://en.wikipedia.org/wiki/DomainKeys%20Identified%20Mail
DomainKeys Identified Mail (DKIM) is an email authentication method designed to detect forged sender addresses in email (email spoofing), a technique often used in phishing and email spam. DKIM allows the receiver to check that an email that claimed to have come from a specific domain was indeed authorized by the owner of that domain. It achieves this by affixing a digital signature, linked to a domain name, to each outgoing email message. The recipient system can verify this by looking up the sender's public key published in the DNS. A valid signature also guarantees that some parts of the email (possibly including attachments) have not been modified since the signature was affixed. Usually, DKIM signatures are not visible to end-users, and are affixed or verified by the infrastructure rather than the message's authors and recipients. DKIM is an Internet Standard. It is defined in RFC 6376, dated September 2011, with updates in RFC 8301 and RFC 8463. Overview The need for email validated identification arises because forged addresses and content are otherwise easily created—and widely used in spam, phishing and other email-based fraud. For example, a fraudster may send a message claiming to be from sender@example.com, with the goal of convincing the recipient to accept and to read the email—and it is difficult for recipients to establish whether to trust this message. System administrators also have to deal with complaints about malicious email that appears to have originated from their systems, but did not. DKIM provides the ability to sign a message, and allows the signer (author organization) to communicate which email it considers legitimate. It does not directly prevent or disclose abusive behavior. DKIM also provides a process for verifying a signed message. Verifying modules typically act on behalf of the receiver organization, possibly at each hop. All of this is independent of Simple Mail Transfer Protocol (SMTP) routing aspects, in that it operates
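The DNS side of verification is simple to demonstrate: a DKIM-Signature header carries a selector (s=) and a domain (d=), and the verifier fetches a TXT record at <selector>._domainkey.<domain>. The sketch below uses the third-party dnspython package; the selector and domain are placeholders, not values from any real message.

```python
# Fetch a DKIM public-key record, as a verifier would after reading the
# s= and d= tags from a DKIM-Signature header.
import dns.resolver

selector, domain = "selector1", "example.com"   # hypothetical s=/d= pair
name = f"{selector}._domainkey.{domain}"
try:
    for rdata in dns.resolver.resolve(name, "TXT"):
        print(b"".join(rdata.strings).decode())  # e.g. "v=DKIM1; k=rsa; p=..."
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no DKIM key published at", name)
```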
https://en.wikipedia.org/wiki/Cryptographic%20primitive
Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions. Rationale When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion. Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be only breakable with X number of computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system. The reasons include: The designer might not be competent in the mathematical and practical considerations involved in cryptographic primitives. Designing a new cryptographic primitive is very time-consuming and very error-prone, even for experts in the field. Since algorithms in this field are not only required to be designed well but also need to be tested well by the cryptologist community, even if a cryptographic routine looks good from a design point of view it might still contain errors. Successfully withstanding such scrutiny gives some confidence (in fact, so far, the only confidence) that the algorithm is indeed secure enough to use; security proofs for cryptographic primitives are generally not available. Cryptographic primitives are similar in some ways to prog
https://en.wikipedia.org/wiki/Jesus%20%28video%20game%29
Jesus is a graphic adventure game developed and published by Enix. It was first released in 1987 on the PC-8801, FM-77AV, X1, and the MSX2, and was later ported to the Famicom in 1989 under a title translating as Jesus: Terror of Bio Monster. A sequel, Jesus II, was released on the PC-8801, PC-9801, and X68000 in 1991. The game's name refers to a space station called J.E.S.U.S., named after the central Christian figure Jesus. The ship is shaped like a double-edged sword in the manner of the Book of Revelation. Its inhabitants go on to fight a mysterious demonic alien from Halley's Comet. Plot The game takes place in 2061. Halley's Comet has been approaching Mars and the nations of Earth send a mission to investigate. Musou Hayao is stationed on the space lab Jesus. He speaks with his commanding officer on the station, who requests that he track down the members of the two crews being sent to the comet to deliver access cards. Hayao meets with 7 different crew members during this time: a Chinese doctor, German captain, Soviet captain, American xenobiologist, French mathematician, Italian computer engineer, and Brazilian astronomer. The mathematician is also Hayao's love interest, Eline. They share a heartfelt goodbye, as they will be boarding different ships for the mission, which depart two weeks apart from one another, with Eline's ship leaving for the comet first. Eline is a musician, and plays him a song that she wrote before he leaves. Hayao's ship arrives at Halley's Comet. Hayao is sent to investigate the first ship and finds most of the crew missing. Eline's intelligent robot pet Fojii is found in the ship's docking bay and, after updating its data, offers Hayao assistance in tracking down the missing crew members. Unfortunately, many are found dead or dying, whispering dire warnings to Hayao about something sinister on board. A crew member tells Hayao that fire cannot hurt "it". None of the dead crew display any physical signs of harm except for a small pinprick on one of their fingers. Hayao accesses
https://en.wikipedia.org/wiki/Smallest%20grammar%20problem
In data compression and the theory of formal languages, the smallest grammar problem is the problem of finding the smallest context-free grammar that generates a given string of characters (but no other string). The size of a grammar is defined by some authors as the number of symbols on the right side of the production rules. Others also add the number of rules to that. The (decision version of the) problem is NP-complete. The smallest context-free grammar that generates a given string is always a straight-line grammar without useless rules. See also Grammar-based code Kolmogorov Complexity Lossless data compression References Formal languages Data compression
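For instance, a straight-line grammar can derive "abababab" with right-hand sides totalling six symbols, versus eight for the trivial single-rule grammar; the Python sketch below expands such a grammar and counts its size under the first size measure mentioned above.

```python
# A straight-line grammar: one rule per nonterminal, deriving one string.
grammar = {
    "S": ["B", "B"],     # S -> B B
    "B": ["A", "A"],     # B -> A A
    "A": ["a", "b"],     # A -> a b
}

def expand(symbol):
    if symbol not in grammar:                 # terminal symbol
        return symbol
    return "".join(expand(s) for s in grammar[symbol])

print(expand("S"), sum(len(rhs) for rhs in grammar.values()))  # abababab 6
```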
https://en.wikipedia.org/wiki/Purification%20theorem
In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent. The mixed strategy equilibria are explained as being the limit of pure strategy equilibria for a disturbed game of incomplete information in which the payoffs of each player are known to themselves but not their opponents. The idea is that the predicted mixed strategy of the original game emerges as an ever-improving approximation of a game that is not observed by the theorist who designed the original, idealized game. The apparently mixed nature of the strategy is actually just the result of each player playing a pure strategy with threshold values that depend on the ex-ante distribution over the continuum of payoffs that a player can have. As that continuum shrinks to zero, the players' strategies converge to the predicted Nash equilibria of the original, unperturbed, complete information game. The result is also an important aspect of modern-day inquiries in evolutionary game theory, where the perturbed values are interpreted as distributions over types of players randomly paired in a population to play games. Example Consider the Hawk–Dove game shown here. The game has two pure strategy equilibria (Defect, Cooperate) and (Cooperate, Defect). It also has a mixed equilibrium in which each player plays Cooperate with probability 2/3. Suppose that each player i bears an extra cost ai from playing Cooperate, which is uniformly distributed on [−A, A]. Players only know their own value of this cost. So this is a game of incomplete information which we can solve using Bayesian Nash equilibrium. The probability that ai ≤ a* is (a* + A)/2A. If player 2 Cooperates when a2 ≤ a*, then player 1's expected utility from Cooperating is ; his expected utility from
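The threshold logic can be illustrated numerically. The payoffs below are an assumption chosen so that the mixed equilibrium plays Cooperate with probability 2/3 (Cooperate vs. Cooperate pays 1/2, Cooperate vs. Defect pays 0, Defect vs. Cooperate pays 1, Defect vs. Defect pays -1); solving the fixed point a* = U_C(q) - U_D(q), with q = (a* + A)/2A, shows the cooperation probability converging to 2/3 as the perturbation A shrinks.

```python
# Purification in an assumed Hawk-Dove game: each player cooperates iff
# their private cost a_i <= a*, with a_i ~ Uniform[-A, A].
def u_coop(q):   return 0.5 * q              # q: opponent's P(Cooperate)
def u_defect(q): return 1.0 * q - (1 - q)

def threshold(A):
    # Fixed point a* = u_coop(q) - u_defect(q) with q = (a* + A) / 2A,
    # found by bisection: f(-A) < 0 < f(A), and f is increasing in a.
    f = lambda a: a - (u_coop((a + A) / (2 * A)) - u_defect((a + A) / (2 * A)))
    lo, hi = -A, A
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

for A in (1.0, 0.1, 0.01, 0.001):
    a_star = threshold(A)
    print(A, (a_star + A) / (2 * A))         # P(Cooperate) -> 2/3
```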
https://en.wikipedia.org/wiki/Radial%20spoke
The radial spoke is a multi-unit protein structure found in the axonemes of eukaryotic cilia and flagella. Although experiments have determined the importance of the radial spoke in the proper function of these organelles, its structure and mode of action remain poorly understood. Cellular location and structure Radial spokes are T-shaped structures present inside the axoneme. Each spoke consists of a "head" and a "stalk," while each of these sub-structures is itself made up of many protein subunits. In all, the radial spoke is known to contain at least 17 different proteins, with 5 located in the head and at least 12 making up the stalk. The spoke stalk binds to the A-tubule of each microtubule outer doublet, and the spoke head faces in towards the center of the axoneme (see illustration at right). Function The radial spoke is known to play a role in the mechanical movement of the flagellum/cilium. For example, mutant organisms lacking properly functioning radial spokes have flagella and cilia that are immotile. Radial spokes also influence the cilium "waveform"; that is, the exact bending pattern the cilium repeats. How the radial spoke carries out this function is poorly understood. Radial spokes are believed to interact with both the central pair microtubules and the dynein arms, perhaps in a way that maintains the rhythmic activation of the dynein motors. For example, one of the radial spoke subunits, RSP3, is an anchor protein predicted to hold another protein called protein kinase A (PKA). PKA would theoretically then be able to activate/inactivate the adjacent dynein arms via its kinase activity. However, the identities and functions of the many radial spoke subunits are just beginning to be elucidated. References Cell movement Molecular biology Proteins
https://en.wikipedia.org/wiki/Granuloma%20annulare
Granuloma annulare (GA) is a common, sometimes chronic skin condition which presents as reddish bumps on the skin arranged in a circle or ring. It can initially occur at any age, though two-thirds of patients are under 30 years old, and it is seen most often in children and young adults. Females are twice as likely to have it as males. Signs and symptoms Aside from the visible rash, granuloma annulare is usually asymptomatic. Sometimes the rash may burn or itch. People with GA usually notice a ring of small, firm bumps (papules) over the backs of the forearms, hands or feet, often centered on joints or knuckles. The bumps are caused by the clustering of T cells below the skin. These papules start as very small, pimple-like bumps, which spread over time from that size to dime, quarter, half-dollar size and beyond. Occasionally, multiple rings may join into one. Rarely, GA may appear as a firm nodule under the skin of the arms or legs. It also occurs on the sides and circumferentially at the waist, and without therapy can continue to be present for many years. Outbreaks continue to develop at the edges of the aging rings. Causes The condition is usually seen in otherwise healthy people. Occasionally, it may be associated with diabetes or thyroid disease. It has also been associated with autoimmune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Lyme disease and Addison's disease. At this time, no conclusive connection has been made between patients. Pathology Granuloma annulare microscopically consists of dermal epithelioid histiocytes around a central zone of mucin—a so-called palisaded granuloma. Pathogenesis Granuloma annulare is an idiopathic condition, though many catalysts have been proposed. Among these are skin trauma, UV exposure, vaccinations, tuberculin skin testing, and Borrelia and viral infections. The mechanisms proposed at a molecular level vary even more. In 1977, Dahl et al. proposed that since the lesions of GA often
https://en.wikipedia.org/wiki/Semantic%20web%20service
A semantic web service, like conventional web services, is the server end of a client–server system for machine-to-machine interaction via the World Wide Web. Semantic services are a component of the semantic web because they use markup which makes data machine-readable in a detailed and sophisticated way (as compared with human-readable HTML which is usually not easily "understood" by computer programs). The problem addressed by Semantic Web Services The mainstream XML standards for interoperation of web services specify only syntactic interoperability, not the semantic meaning of messages. For example, Web Services Description Language (WSDL) can specify the operations available through a web service and the structure of data sent and received but cannot specify semantic meaning of the data or semantic constraints on the data. This requires programmers to reach specific agreements on the interaction of web services and makes automatic web service composition difficult. Semantic web services are built around universal standards for the interchange of semantic data, which makes it easy for programmers to combine data from different sources and services without losing meaning. Web services can be activated "behind the scenes" when a web browser makes a request to a web server, which then uses various web services to construct a more sophisticated reply than it would have been able to do on its own. Semantic web services can also be used by automatic programs that run without any connection to a web browser. A semantic-web-services platform that uses OWL (Web Ontology Language) to allow data and service providers to semantically describe their resources using third-party ontologies is SSWAP: Simple Semantic Web Architecture and Protocol. SSWAP establishes a lightweight protocol (few OWL classes and predicates; see the SSWAP Protocol) and the concept of a "canonical graph" to enable providers to logically describe a service. A service is essentially a transformati
https://en.wikipedia.org/wiki/Software%20review
A software review is "a process or meeting during which a software product is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval". In this context, the term "software product" means "any technical document or partial document, produced as a deliverable of a software development activity", and may include documents such as contracts, project plans and budgets, requirements documents, specifications, designs, source code, user documentation, support and maintenance documentation, test plans, test specifications, standards, and any other type of specialist work product. Varieties of software review Software reviews may be divided into three categories: Software peer reviews are conducted by one or more colleagues of the author, to evaluate the technical content and/or quality of the work. Software management reviews are conducted by management representatives to evaluate the status of work done and to make decisions regarding downstream activities. Software audit reviews are conducted by personnel external to the software project, to evaluate compliance with specifications, standards, contractual agreements, or other criteria. Different types of peer reviews Code review is systematic examination (often as peer review) of computer source code. Pair programming is a type of code review where two persons develop code together at the same workstation. Inspection is a very formal type of peer review where the reviewers follow a well-defined process to find defects. Walkthrough is a form of peer review where the author leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about defects. Technical review is a form of peer review in which a team of qualified personnel examines the suitability of the software product for its intended use and identifies discrepancies from specificat
https://en.wikipedia.org/wiki/TiVo%20digital%20video%20recorders
TiVo digital video recorders encompass a number of digital video recorder (DVR) models that TiVo Corporation designed. Features may vary, but a common feature is that all of the units listed here require TiVo service and use its operating system. TiVo units have been manufactured by various OEMs, including Philips, Sony, Pioneer, Toshiba, and Humax. Cisco Systems and Samsung joined forces with pay TV Provider Virgin Media (UK-only) to create the Virgin Media TiVo box. The OEMs license the software from TiVo Corporation. To date, there have been seven "series" of TiVo units produced, with the seventh series, the Edge, released in October 2019. DVR models Series1 (1999) The Series1 (retronym) was the original TiVo digital video recorder. Series1 TiVo systems are based on PowerPC processors connected to MPEG-2 encoder/decoder chips and IDE/ATA hard drives. Series1 TiVo units used one or two drives of 13–60 GB. Although not supported by TiVo or equipment manufacturers, larger drives can be added. Series1 standalone All standalone TiVo systems have coax/RF-in and an internal cable-ready tuner, analog video input—composite/RCA, and S-Video—for use with an external cable box or satellite receiver. The TiVo unit can use a serial cable or IR blasters to control the external receiver. They have coax/RF, composite/RCA, and S-Video output, and the DVD systems also have component out. Audio is RCA stereo, and the DVD systems also have digital optical out. CPU: IBM PowerPC 403GCX at 54 MHz RAM: 16 MB Series1 DirecTV Some TiVo systems are integrated with DirecTV receivers. These "DirecTiVo" recorders record the incoming satellite MPEG-2 digital stream directly to the hard disk without conversion. Because of this, and the fact that they have two tuners, DirecTiVos are able to record two programs at once. In addition, the lack of digital conversion allows recorded video to be of the same quality as live video. DirecTiVos have no MPEG encoder chip, and can only record D
https://en.wikipedia.org/wiki/Software%20walkthrough
In software engineering, a walkthrough or walk-through is a form of software peer review "in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems". Walkthroughs may also be performed by assessors, specialists, etc., and may be suggested or mandated by applicable norms and standards. "Software product" normally refers to some kind of technical document. As indicated by the IEEE definition, this might be a software design document or program source code, but use cases, business process definitions, test case specifications, and a variety of other technical documentation may also be walked through. A walkthrough differs from software technical reviews in its openness of structure and its objective of familiarization. It differs from software inspection in its ability to suggest direct alterations to the product reviewed, and in its lack of a direct focus on training, process improvement, and process and product measurement. Process A walkthrough may be quite informal, or may follow the process detailed in IEEE 1028 and outlined in the article on software reviews. Objectives and participants In general, a walkthrough has one or two broad objectives: to gain feedback about the technical quality or content of the document; and/or to familiarize the audience with the content. A walkthrough is normally organized and directed by the author of the technical document. Any combination of interested or technically qualified personnel (from within or outside the project) may be included as seems appropriate. IEEE 1028 recommends three specialist roles in a walkthrough: The author, who presents the software product in step-by-step manner at the walk-through meeting, and is probably responsible for completing most action items; The walkthrough leader, who conducts the walkthrough, handle
https://en.wikipedia.org/wiki/Software%20technical%20review
A software technical review is a form of peer review in which "a team of qualified personnel ... examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards. Technical reviews may also provide recommendations of alternatives and examination of various alternatives" (IEEE Std. 1028-1997, IEEE Standard for Software Reviews, clause 3.7). "Software product" normally refers to some kind of technical document. This might be a software design document or program source code, but use cases, business process definitions, test case specifications, and a variety of other technical documentation, may also be subject to technical review. Technical review differs from software walkthroughs in its specific focus on the technical quality of the product reviewed. It differs from software inspection in its ability to suggest direct alterations to the product reviewed, and its lack of a direct focus on training and process improvement. The term formal technical review is sometimes used to mean a software inspection. A 'Technical Review' may also refer to an acquisition lifecycle event or Design review. Objectives and participants The purpose of a technical review is to arrive at a technically superior version of the work product reviewed, whether by correction of defects or by recommendation or introduction of alternative approaches. While the latter aspect may offer facilities that software inspection lacks, there may be a penalty in time lost to technical discussions or disputes which may be beyond the capacity of some participants. IEEE 1028 recommends the inclusion of participants to fill the following roles: The Decision Maker (the person for whom the technical review is conducted) determines if the review objectives have been met. The Review Leader is responsible for performing administrative tasks relative to the review, ensuring orderly conduct, and ensuring that the review meets its objectives.
https://en.wikipedia.org/wiki/Mobile%20dating
Mobile dating services, also known as cell dating, cellular dating, or cell phone dating, allow individuals to chat, flirt, meet, and possibly become romantically involved by means of text messaging, mobile chatting, and the mobile web. These services allow their users to provide information about themselves in a short profile which is either stored in their phones as a dating ID or as a username on the mobile dating site. They can then search for other IDs online or by calling a certain phone number dictated by the service. The criteria include age, gender and sexual preference. Usually these sites are free to use, but standard text messaging fees may still apply, as well as a small fee the dating service charges per message. Mobile dating websites, in order to increase the opportunities for meeting, focus attention on users that share the same social network and proximity. Some companies even offer services such as homing devices to alert users when another user is within thirty feet of one another. Some systems involve Bluetooth technology to connect users in locations such as bars and clubs. This is known as proximity dating. These systems are actually more popular in some countries in Europe and Asia than online dating. With the advent of GPS phones and GSM localization, proximity dating is likely to rise sharply in popularity. According to The San Francisco Chronicle in 2005, "Mobile dating is the next big leap in online socializing." More than 3.6 million cell phone users logged into mobile dating sites in March 2007, with most users falling between the ages of 27 and 35. Some experts believe that the rise in mobile dating is due to the growing popularity of online dating. Others believe it is all about choice, as Joe Brennan Jr., vice president of Webdate, says: "It's about giving people a choice. They don't have to date on their computer. They can date on their handset, it's all about letting people decide what path is best for them." A study published in 201
https://en.wikipedia.org/wiki/Characterization%20%28materials%20science%29
Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals. While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. More recently, atomic force microscopy has further increased the maximum possible resolution for analysis of certain samples in the last 30 years. Microscopy Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include: Optical microscopy Scanning electron microscopy (SEM) Transmission electron mi
https://en.wikipedia.org/wiki/Burnside%20ring
In mathematics, the Burnside ring of a finite group is an algebraic construction that encodes the different ways the group can act on finite sets. The ideas were introduced by William Burnside at the end of the nineteenth century. The algebraic ring structure is a more recent development, due to Solomon (1967). Formal definition Given a finite group G, the generators of its Burnside ring Ω(G) are the formal sums of isomorphism classes of finite G-sets. For the ring structure, addition is given by disjoint union of G-sets and multiplication by their Cartesian product. The Burnside ring is a free Z-module, whose generators are the (isomorphism classes of) orbit types of G. If G acts on a finite set X, then one can write X = X1 ∐ X2 ∐ ... ∐ XN (disjoint union), where each Xi is a single G-orbit. Choosing any element xi in Xi creates an isomorphism G/Gi → Xi, where Gi is the stabilizer (isotropy) subgroup of G at xi. A different choice of representative yi in Xi gives a conjugate subgroup to Gi as stabilizer. This shows that the generators of Ω(G) as a Z-module are the orbits G/H as H ranges over conjugacy classes of subgroups of G. In other words, a typical element of Ω(G) is a1[G/G1] + a2[G/G2] + ... + aN[G/GN], where the ai are in Z and G1, G2, ..., GN are representatives of the conjugacy classes of subgroups of G. Marks Much as character theory simplifies working with group representations, marks simplify working with permutation representations and the Burnside ring. If G acts on X, and H ≤ G (H is a subgroup of G), then the mark of H on X is the number of elements of X that are fixed by every element of H: mX(H) = |XH|, where XH = {x ∈ X : h·x = x for all h ∈ H}. If H and K are conjugate subgroups, then mX(H) = mX(K) for any finite G-set X; indeed, if K = gHg−1 then XK = g · XH. It is also easy to see that for each H ≤ G, the map Ω(G) → Z : X ↦ mX(H) is a homomorphism. This means that to know the marks of G, it is sufficient to evaluate them on the generators of Ω(G), viz. the orbits G/H. For each pair of subgroups H, K ≤ G define m(K, H) = |{gK ∈ G/K : HgK = gK}|. This is mX(H) for X = G/K. The condition HgK = gK is equivalent to g−1Hg ≤ K, so if H is not conjugate to a subgroup of K then m(K, H) = 0.
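As a worked example (not part of the article text), here is the table of marks of the symmetric group S3, whose subgroups up to conjugacy are 1, C2, C3 and S3; the entry in the row for G/K and the column for H is m(K, H) = mG/K(H):

```latex
% Table of marks of G = S_3. Rows: transitive G-sets G/K;
% columns: marks m_{G/K}(H). Subgroups are taken up to conjugacy.
\[
\begin{array}{c|cccc}
        & H = 1 & H = C_2 & H = C_3 & H = S_3 \\ \hline
G/1     & 6 & 0 & 0 & 0 \\
G/C_2   & 3 & 1 & 0 & 0 \\
G/C_3   & 2 & 0 & 2 & 0 \\
G/S_3   & 1 & 1 & 1 & 1
\end{array}
\]
```

Ordering the subgroups by increasing size makes the matrix lower triangular, with nonzero diagonal entries m(K, K) = |NG(K)/K|; this is why evaluating the marks at all conjugacy classes of subgroups determines an element of Ω(G) uniquely.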
https://en.wikipedia.org/wiki/Pre-shared%20key
In cryptography, a pre-shared key (PSK) is a shared secret which was previously exchanged between the two parties over some secure channel before it needs to be used. Key To build a key from a shared secret, a key derivation function is typically used. Such systems almost always use symmetric key cryptographic algorithms. The term PSK is used in Wi-Fi encryption such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA), where the method is called WPA-PSK or WPA2-PSK, and also in the Extensible Authentication Protocol (EAP), where it is known as EAP-PSK. In all these cases, both the wireless access points (AP) and all clients share the same key. The characteristics of this secret or key are determined by the system which uses it; some system designs require that such keys be in a particular format. It can be a password, a passphrase, or a hexadecimal string. The secret is used by all systems involved in the cryptographic processes used to secure the traffic between the systems. Cryptosystems rely on one or more keys for confidentiality. One particular attack is always possible against keys: the brute-force key-space search attack. A sufficiently long, randomly chosen key can resist any practical brute-force attack, though not one in principle, if an attacker has sufficient computational power (see password strength and password cracking for more discussion). Unavoidably, however, pre-shared keys are held by both parties to the communication, and so can be compromised at one end without the knowledge of anyone at the other. There are several tools available to help one choose strong passwords, though doing so over any network connection is inherently unsafe as one cannot in general know who, if anyone, may be eavesdropping on the interaction. Choosing keys used by cryptographic algorithms is somewhat different in that any pattern whatsoever should be avoided, as any such pattern may provide an attacker with a lower-effort attack than brute-force search. T
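As an illustration of the key-derivation step, here is a minimal sketch in Python using PBKDF2, one common key derivation function; the passphrase and parameters below are made-up examples, not recommendations for any particular system:

```python
import hashlib
import os

# Derive a 256-bit symmetric key from a pre-shared passphrase.
passphrase = b"correct horse battery staple"  # the pre-shared secret
salt = os.urandom(16)     # must be stored/shared so the key is reproducible
iterations = 600_000      # raises the cost of a brute-force key-space search

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations, dklen=32)
print(key.hex())          # ready for use with a symmetric cipher
```

For comparison, WPA-PSK works along these lines: the 256-bit pairwise master key is derived from the passphrase with PBKDF2, using the network SSID as the salt.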
https://en.wikipedia.org/wiki/Software%20audit%20review
A software audit review, or software audit, is a type of software review in which one or more auditors who are not members of the software development organization conduct "An independent examination of a software product, software process, or set of software processes to assess compliance with specifications, standards, contractual agreements, or other criteria". "Software product" mostly, but not exclusively, refers to some kind of technical document. IEEE Std. 1028 offers a list of 32 "examples of software products subject to audit", including documentary products such as various sorts of plan, contracts, specifications, designs, procedures, standards, and reports, but also non-documentary products such as data, test data, and deliverable media. Software audits are distinct from software peer reviews and software management reviews in that they are conducted by personnel external to, and independent of, the software development organization, and are concerned with compliance of products or processes, rather than with their technical content, technical quality, or managerial implications. The term "software audit review" is adopted here to designate the form of software audit described in IEEE Std. 1028. Objectives and participants "The purpose of a software audit is to provide an independent evaluation of conformance of software products and processes to applicable regulations, standards, guidelines, plans, and procedures". The following roles are recommended: The Initiator (who might be a manager in the audited organization, a customer or user representative of the audited organization, or a third party), decides upon the need for an audit, establishes its purpose and scope, specifies the evaluation criteria, identifies the audit personnel, decides what follow-up actions will be required, and distributes the audit report. The Lead Auditor (who must be someone "free from bias and influence that could reduce his ability to make independent, objective evalu
https://en.wikipedia.org/wiki/Sunset%20Sound%20Recorders
Sunset Sound Recorders is a recording studio in Hollywood, California, United States, located at 6650 Sunset Boulevard. Background The Sunset Sound Recorders complex was created by Walt Disney's Director of Recording, Tutti Camarata, from a collection of old commercial and residential buildings. At the encouragement of Disney himself, Camarata began the project in 1958, starting with a former automotive repair garage whose sloping floor would tend to reduce unwanted sonic standing-wave reflections. Soon, the audio for many of Disney's early films was being recorded at the studio, including Bedknobs and Broomsticks, Mary Poppins, and 101 Dalmatians. Over 200 Gold records have been recorded at Sunset Sound, including hit albums for Elton John, Led Zeppelin, Van Halen, Toto, parts of Prince's Purple Rain, the Rolling Stones' Exile on Main St., the Beach Boys' Pet Sounds, Linda Ronstadt's Don't Cry Now, parts of Guns N' Roses' Chinese Democracy and Janis Joplin's posthumously released Pearl. In addition, the Doors recorded their first two albums, The Doors and Strange Days, at the studio. Idina Menzel recorded her vocal track for the song "Let It Go" for Disney Animation's 2013 film Frozen at the studio. In 1981, Sunset Sound Recorders owner Camarata purchased The Sound Factory, another Los Angeles recording studio founded by Moonglow Records and later purchased and developed by David Hassinger. The two studios now operate as Sunset Sound and The Sound Factory, respectively. References External links 2019 video tour of Sunset Sound by engineer Warren Huart Archive of 2009 Sunset Sound and Sound Factory website
https://en.wikipedia.org/wiki/Tombs%20%26%20Treasure
Tombs & Treasure, known in Japan as , is an adventure game originally developed by Falcom in 1986 for the PC-8801, PC-9801, FM-7, MSX 2 and X1 Japanese computer systems. A Famicom/NES version, released in 1988, was altered to be more story-based, and features new music and role-playing elements; an English-language NES version was published by Infocom in 1991. Japanese enhanced remakes were released for the Saturn and Windows systems in 1998 and 1999, respectively. The game takes place in the ancient Mayan city of Chichen Itza on the Yucatán Peninsula. It alternates between using a three-quarters overhead view for travelling from ruin to ruin, and switching to a first-person perspective upon entering a specific location. Plot At the start of the game, the player is allowed to name both the protagonist, a young brown-haired man, and the lead female, a green-haired lass who is the daughter of one Professor Imes; if no names are manually entered, the game will randomly choose from a pre-coded list of names for both characters. Professor Imes is a renowned archaeologist who has been investigating an artifact known as the Sun Key, which potentially has the ability to unlock the greatest secrets of the lost Mayan civilization, and is rumored to be housed somewhere within Chichen Itza. However, on his latest expedition, he and his team mysteriously disappeared while exploring different temples between June 22 and July 14; only his guide, José, was able to escape unharmed and return with some artifacts that the team found, hoping they will help the player in his quest to find the professor. The three, after talking with the professor's secretary Anne, travel to Mexico and into the ancient city to look for clues. Several actual sites of Chichen Itza are explored by the player, although their interiors and purposes are purposefully changed slightly in order to help create an atmosphere of fantasy and mystery-solving intrigue. Furthermore, each ruin is home to a demon that
https://en.wikipedia.org/wiki/Talairach%20coordinates
Talairach coordinates, also known as Talairach space, is a 3-dimensional coordinate system (known as an 'atlas') of the human brain, which is used to map the location of brain structures independently of individual differences in the size and overall shape of the brain. It is still common to use Talairach coordinates in functional brain imaging studies and to target transcranial stimulation of brain regions. However, alternative methods such as the MNI Coordinate System (originated at the Montreal Neurological Institute and Hospital) have largely replaced Talairach for stereotaxy and other procedures. History The coordinate system was first created by neurosurgeons Jean Talairach and Gabor Szikla in their work on the Talairach Atlas in 1967, creating a standardized grid for neurosurgery. The grid was based on the idea that distances to lesions in the brain are proportional to overall brain size (i.e., the distance between two structures is larger in a larger brain). In 1988 a second edition of the Talairach Atlas came out that was coauthored by Tournoux, and it is sometimes known as the Talairach-Tournoux system. This atlas was based on a single post-mortem dissection of a human brain. The Talairach Atlas uses Brodmann areas as the labels for brain regions. Description The Talairach coordinate system is defined by making two anchors, the anterior commissure and posterior commissure, lie on a straight horizontal line. Since these two points lie on the midsagittal plane, the coordinate system is completely defined by requiring this plane to be vertical. Distances in Talairach coordinates are measured from the anterior commissure as the origin (as defined in the 1988 edition). The y-axis runs in the anterior-posterior direction through the commissures, the x-axis runs left-right, and the z-axis runs in the ventral-dorsal (down-up) direction. Once the brain is reoriented to these axes, the researchers must also mark the six cortical boundaries of the brain: anterior, posteri
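The re-orientation and origin shift just described amount to an affine change of coordinates, sketched below. This is a simplified illustration with invented numbers; the actual Talairach normalization applies separate linear scalings in the sub-boxes delimited by the commissures and the cortical boundaries rather than one global transform:

```python
import numpy as np

# Simplified sketch: map a native-space point into an AC-origin,
# AC-PC-aligned space. `R` is a rotation aligning the AC-PC line with
# the y-axis (identity here for brevity), `ac` is the anterior
# commissure in native coordinates, and `scale` stretches each axis to
# the atlas bounding box. All numeric values are invented examples.
R = np.eye(3)
ac = np.array([90.0, 108.0, 72.0])      # hypothetical AC position (mm)
scale = np.array([0.95, 1.02, 0.90])    # hypothetical per-axis scaling

def to_talairach(p_native):
    """x: left-right, y: posterior-anterior, z: ventral-dorsal."""
    return scale * (R @ (np.asarray(p_native, dtype=float) - ac))

print(to_talairach([100.0, 120.0, 80.0]))  # -> coordinates relative to AC
```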
https://en.wikipedia.org/wiki/Comparison%20of%20free%20and%20open-source%20software%20licenses
This comparison only covers software licenses which have a linked Wikipedia article for details and which are approved by at least one of the following expert groups: the Free Software Foundation, the Open Source Initiative, the Debian Project and the Fedora Project. For a list of licenses not specifically intended for software, see List of free-content licences. FOSS licenses FOSS stands for "Free and Open Source Software". There is no single universally agreed-upon definition of FOSS, and various groups maintain approved lists of licenses. The Open Source Initiative (OSI) is one such organization keeping a list of open-source licenses. The Free Software Foundation (FSF) maintains a list of licenses it considers free. FSF's free software licenses and OSI's open-source licenses together are called FOSS licenses. There are licenses accepted by the OSI which are not free as per the Free Software Definition. The Open Source Definition allows for further restrictions, such as on price, type of contribution and origin of the contribution, as in the case of the NASA Open Source Agreement, which requires the code to be "original" work. The OSI does not endorse FSF license analysis (interpretation), as per their disclaimer. The FSF's Free Software Definition focuses on the user's unrestricted rights to use a program, to study and modify it, to copy it, and to redistribute it for any purpose, which the FSF considers the four essential freedoms. The OSI's open-source criteria focus on the availability of the source code and the advantages of an unrestricted and community-driven development model. Yet many FOSS licenses, like the Apache License, and all Free Software licenses allow commercial use of FOSS components. General comparison For a simpler comparison across the most common licenses, see free-software license comparison. The following table compares various features of each license and is a general guide to the terms and conditions of each license, based on seven subjects o
https://en.wikipedia.org/wiki/Genetic%20divergence
Genetic divergence is the process in which two or more populations of an ancestral species accumulate independent genetic changes (mutations) through time, often leading to reproductive isolation; mutations continue to accumulate after the populations become reproductively isolated, since there is no longer any genetic exchange between them. In some cases, subpopulations living in ecologically distinct peripheral environments can exhibit genetic divergence from the remainder of a population, especially where the range of a population is very large (see parapatric speciation). The genetic differences among divergent populations can involve silent mutations (that have no effect on the phenotype) or give rise to significant morphological and/or physiological changes. Genetic divergence will always accompany reproductive isolation, either due to novel adaptations via selection and/or due to genetic drift, and is the principal mechanism underlying speciation. On a molecular genetics level, genetic divergence is due to changes in a small number of genes in a species, resulting in speciation. However, researchers argue that it is unlikely that divergence is the result of a single significant, dominant mutation at a genetic locus, because if it were, the individual with that mutation would have zero fitness. Consequently, it could not reproduce and pass the mutation on to further generations. Hence, it is more likely that divergence, and subsequently reproductive isolation, are the outcomes of multiple small mutations accumulating over evolutionary time in a population isolated from gene flow. Causes Founder effect One possible cause of genetic divergence is the founder effect, which occurs when a few individuals become isolated from their original population. Those individuals might overrepresent certain genetic patterns, which means that certain biological characteristics are overrepresented. These individuals can form a new population with different gene
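The role of drift in isolated populations can be illustrated with a toy Wright-Fisher simulation, a standard population-genetics model; the population size, generation count and seed below are arbitrary choices for the sketch:

```python
import random

# Two populations start with identical allele frequencies at one
# biallelic locus, then evolve with no gene flow between them; drift
# alone makes their frequencies wander apart.
def drift(freq, pop_size, generations):
    """Binomial resampling of 2N gene copies each generation."""
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = carriers / (2 * pop_size)
    return freq

random.seed(42)
start = 0.5  # ancestral allele frequency shared by both daughter populations
a = drift(start, pop_size=100, generations=500)
b = drift(start, pop_size=100, generations=500)
print(f"population A: {a:.2f}  population B: {b:.2f}  divergence: {abs(a - b):.2f}")
```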