https://en.wikipedia.org/wiki/Certified%20email
Certified email (known as Posta elettronica certificata in Italy, or PEC for short) is a special type of email in use in Italy, Switzerland, Hong Kong and Germany. Certified email is meant to provide a legal equivalent of traditional registered mail: by paying a small fee, users are able to legally prove that a given email has been sent and received. Certified email is mainly used in Italy, but there are ongoing efforts to extend its legal validity under the framework of the European Union. Description A certified email can only be sent using a special certified email account provided by a registered provider. When a certified email is sent, the sender's provider releases a receipt of the successful (or failed) transaction. This receipt has legal value and includes precise information about the time the certified email was sent. Similarly, the receiver's provider delivers the message to the appropriate certified email account and then releases to the sender a receipt of successful (or failed) delivery, indicating on this receipt the exact time of delivery. If either of these two receipts is lost by the sender, providers are required to issue a proof of transaction with equal legal validity, provided this proof is requested within 30 months of delivery. In terms of user experience, a certified email account is very similar to a normal email account. The only additional features are the receipts, received as attachments, providing details and timestamps for all transactions. A certified email account can only handle certified email and cannot be used to send regular email. Technical process The development of this email service has conceptual variations that are dominated by two-party scenarios with one sender, one receiver, and a trusted third party (TTP) serving as a mediator. As in traditional registered mail, many certified email technologies call for the parties involved to trust the TTP, or the "postman", because it h
https://en.wikipedia.org/wiki/Radio%20Battalion
Radio Battalions are tactical signals intelligence units of Marine Corps Intelligence. There are currently three operational Radio Battalions in the Marine Corps organization: 1st, 2nd, and 3rd. In fleet operations, teams from Radio Battalions are most often attached to the command element of Marine Expeditionary Units. Concept A Radio Battalion consists mainly of signals intelligence and electronic intelligence operators organized into smaller tactical units with different roles. Basic collection teams consist of 4–6 operators using specialized equipment based in HMMWVs. A variation on this is the MEWSS (Mobile Electronic Warfare Support System), which is an amphibious light armored vehicle equipped with similar electronic warfare equipment. MEWSS crews serve dual roles as electronic warfare operators and LAV crewmen. Radio Reconnaissance Platoons serve in a special operations role where the use of standard collection teams is not possible, such as covert infiltrations or tactical recovery of aircraft and personnel (TRAP). History In June 1943, 2nd Radio Intelligence Platoon was activated at Camp Elliott, California. The unit took part in the Battle of Guadalcanal and the Battle of Peleliu. The 3rd Radio Intelligence Platoon was also formed in June 1943 and took part in the battles of Kwajalein Atoll and Okinawa. General Alfred M. Gray Jr., who served as the 29th Commandant of the Marine Corps from 1 July 1987 until his retirement on 30 June 1991, is considered the founding father of post-war Marine Corps signals intelligence (SIGINT). In 1955, then-Captain Gray was tasked with forming two SIGINT units, one to be assigned to Europe and the other to the Pacific area, chosen from Marines undergoing Manual Morse intercept training. Captain Gray established the Pacific team at NSG Kamiseya, Japan in May 1956. In 1958, then-Captain Gray was assigned to Hawaii to form and activate the 1st Radio Company, a tactical SIGINT unit, where he
https://en.wikipedia.org/wiki/Robinson%20oscillator
The Robinson oscillator is an electronic oscillator circuit originally devised for use in the field of continuous wave (CW) nuclear magnetic resonance (NMR). It was a development of the marginal oscillator. Strictly speaking, one should distinguish between the marginal oscillator and the Robinson oscillator, although the two are sometimes conflated and referred to as a Robinson marginal oscillator. Modern magnetic resonance imaging (MRI) systems are based on pulsed (or Fourier transform) NMR; they do not rely on the use of such oscillators. The key feature of a Robinson oscillator is a limiter in the feedback loop. This means that a square wave current, of accurately fixed amplitude, is fed back to the tank circuit. The tank selects the fundamental of the square wave, which is amplified and fed back. This results in an oscillation with well-defined amplitude; the voltage across the tank circuit is proportional to its Q-factor. The marginal oscillator has no limiter. It is arranged for the working point of one of the amplifier elements to operate at a nonlinear part of its characteristic, and this determines the amplitude of oscillation. This is not as stable as the Robinson arrangement. The Robinson oscillator was invented by British physicist Neville Robinson. References Robinson, F. N. H., Nuclear Resonance Absorption Circuit, Journal of Scientific Instruments, 36:481–487 (1959). Deschamps, P., Vaissiére, J. and Sullivan, N. S., Integrated circuit Robinson oscillator for NMR detection, Review of Scientific Instruments, 48(6):664–668, June 1977. DOI 10.1063/1.1135103. Wilson, K. J. and Vallabhan, C. P. G., An improved MOSFET-based Robinson oscillator for NMR detection, Meas. Sci. Technol., 1(5):458–460, May 1990. DOI 10.1088/0957-0233/1/5/015. Nuclear magnetic resonance Electronic oscillators
https://en.wikipedia.org/wiki/Semantic%20Interoperability%20Community%20of%20Practice
Semantic Interoperability Community of Practice (SICoP) is a group of people who seek to make the Semantic Web operational in their respective settings by achieving "semantic interoperability" and "semantic data integration". SICoP seeks to enable semantic interoperability, specifically the "operationalizing" of relevant technologies and approaches, through online conversation, meetings, tutorials, conferences, pilot projects, and other activities aimed at developing and disseminating best practices. The individuals making up this Community of Practice come from various settings; however, SICoP claims neither formal nor implied endorsement by any organization. See also Semantic Wiki Information Management Semantics Interoperability
https://en.wikipedia.org/wiki/Discharger
A discharger in electronics is a device or circuit that releases stored energy or electric charge from a battery, capacitor or other source. Discharger types include: a metal probe with an insulated handle and ground wire, sometimes with a resistor (for capacitors); a resistor (for batteries); parasitic discharge (for batteries arranged in parallel); and more complex electronic circuits (for batteries). See also Bleeder resistor Electronic circuits
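For the simple resistor case, discharging a capacitor follows the familiar RC exponential decay. A minimal sketch (Python, with illustrative component values assumed for the example) estimates how long a bleed resistor needs to bring a charged capacitor down to a safe voltage:

import math

def discharge_time(v0, v_safe, r_ohms, c_farads):
    """Time for an RC discharge to fall from v0 to v_safe: t = RC * ln(v0 / v_safe)."""
    return r_ohms * c_farads * math.log(v0 / v_safe)

# Illustrative example: 400 V across a 470 uF capacitor, bled through 100 kOhm.
t = discharge_time(400.0, 50.0, 100e3, 470e-6)
print(f"{t:.1f} s to reach 50 V")  # about 97.7 s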
https://en.wikipedia.org/wiki/Intertidal%20ecology
Intertidal ecology is the study of intertidal ecosystems, where organisms live between the low and high tide lines. At low tide, the intertidal is exposed, whereas at high tide, the intertidal is underwater. Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as between different species of intertidal organisms within a particular intertidal community. The most important environmental and species interactions may vary based on the type of intertidal community being studied, the broadest classification being based on substrate: rocky shore and soft-bottom communities. Organisms living in this zone have a highly variable and often hostile environment, and have evolved various adaptations to cope with and even exploit these conditions. One easily visible feature of intertidal communities is vertical zonation, where the community is divided into distinct vertical bands of specific species going up the shore. A species' ability to cope with the abiotic factors associated with emersion stress, such as desiccation, determines its upper limit, while biotic interactions, e.g. competition with other species, set its lower limit. Intertidal regions are utilized by humans for food and recreation, but anthropogenic actions also have major impacts, with overexploitation, invasive species and climate change being among the problems faced by intertidal communities. In some places Marine Protected Areas have been established to protect these areas and aid in scientific research. Types of intertidal communities Intertidal habitats can be characterized as having either hard or soft bottom substrates. Rocky intertidal communities occur on rocky shores, such as headlands, cobble beaches, or human-made jetties. Their degree of exposure may be calculated using the Ballantine Scale. Soft-sediment habitats include sandy beaches, and intertidal wetlands (e.g., mudflats and salt marshes). These habitats differ in levels of abio
https://en.wikipedia.org/wiki/The%20Complexity%20of%20Songs
"The Complexity of Songs" is a scholarly article by computer scientist Donald Knuth in 1977, as an in-joke about computational complexity theory. The article capitalizes on the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts with little or no meaningful content. The article notes that a song of length N words may be produced remembering, e.g., only words ("space complexity" of the song) or even less. Article summary Knuth writes that "our ancient ancestors invented the concept of refrain" to reduce the space complexity of songs, which becomes crucial when a large number of songs is to be committed to one's memory. Knuth's Lemma 1 states that if N is the length of a song, then the refrain decreases the song complexity to cN, where the factor c < 1. Knuth further demonstrates a way of producing songs with O() complexity, an approach "further improved by a Scottish farmer named O. MacDonald". More ingenious approaches yield songs of complexity O(), a class known as "m bottles of beer on the wall". Finally, the progress during the 20th century—stimulated by the fact that "the advent of modern drugs has led to demands for still less memory"—leads to the ultimate improvement: Arbitrarily long songs with space complexity O(1) exist, e.g. a song defined by the recurrence relation 'That's the way,' 'I like it,' , for all 'uh huh,' 'uh huh' Further developments Prof. Kurt Eisemann of San Diego State University in his letter to the Communications of the ACM further improves the latter seemingly unbeatable estimate. He begins with an observation that for practical applications the value of the "hidden constant" c in the Big Oh notation may be crucial in making the difference between the feasibility and unfeasibility: for example a constant value of 1080 would exceed the capacity of any known device. He further notices that a technique has already been known in Mediaeval Europe whereby textual content of an arbitra
https://en.wikipedia.org/wiki/Aplasia
Aplasia (from Greek a, "not", "no" + plasis, "formation") is a birth defect where an organ or tissue is wholly or largely absent. It is caused by a defect in a developmental process. Aplastic anemia is the failure of the body to produce blood cells. It may occur at any time, and has multiple causes. Examples Acquired pure red cell aplasia Aplasia cutis congenita Aplastic anemia Germ cell aplasia, also known as Sertoli cell-only syndrome Radial aplasia Thymic aplasia, which is found in DiGeorge syndrome and also occurs naturally as part of the gradual loss of function of the immune system later in life See also Atrophy Hyperplasia Hypoplasia Neoplasia List of biological development disorders References Medical terminology Anatomy Embryology Blood disorders
https://en.wikipedia.org/wiki/Distributed%20lock%20manager
Operating systems use lock managers to organise and serialise the access to resources. A distributed lock manager (DLM) runs in every machine in a cluster, with an identical copy of a cluster-wide lock database. In this way a DLM provides software applications which are distributed across a cluster on multiple machines with a means to synchronize their accesses to shared resources. DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access. VMScluster, the first clustering system to come into widespread use, relied on the OpenVMS DLM in just this way. Resources The DLM uses a generalized concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows: Database Table Record Field A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked, as sketched below. Lock modes A process running within a VMScluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity granted; it is possible to convert a lock to a higher or lower lock mode. When all processes have unlocked a resource, the system's information about the resource is destroyed. Null (NL). Indicates interes
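A minimal sketch (Python, with invented names; real DLMs expose six lock modes and asynchronous interfaces rather than this toy API) of the rule that a parent resource must be locked before a subordinate one:

class Resource:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.locked = name, parent, False

    def lock(self):
        # Enforce the hierarchy rule: a lock must be held on the parent first.
        if self.parent and not self.parent.locked:
            raise RuntimeError(f"lock {self.parent.name} before {self.name}")
        self.locked = True

db = Resource("database")
table = Resource("table", parent=db)
record = Resource("record", parent=table)

db.lock()
table.lock()
record.lock()  # ok: the whole chain of parent locks is held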
https://en.wikipedia.org/wiki/Amiga%20software
Amiga software is computer software engineered to run on the Amiga personal computer. Amiga software covers many applications, including productivity, digital art, games, commercial, freeware and hobbyist products. The market was active in the late 1980s and early 1990s but then dwindled. Most Amiga products were originally created directly for the Amiga computer (most taking advantage of the platform's unique attributes and capabilities), and were not ported from other platforms. During its lifetime, thousands of applications were produced, with over 10,000 utilities (collected into the Aminet repository). However, it was perceived as a games machine from outside its community of experienced and professional users. More than 12,000 games were available. New applications for the three existing Amiga-like operating systems are generally ported from the open-source (mainly Linux) software base. Many notable Amiga software products were later ported to other platforms or inspired new programs, such as those aimed at 3D rendering or audio creation, e.g. LightWave 3D, Cinema 4D, and Blender (whose development began on the Amiga platform). The first multimedia word processors for Amiga, such as TextCraft, Scribble!, Rashumon, and Wordworth, were the first on the market to implement full-color WYSIWYG (with other platforms then only implementing black-and-white previews) and to allow the embedding of audio files. History and characteristics From the origins to 1988 1985 Amiga software started its history with the 1985 Amiga 1000. Commodore International released the programming specifications and development computers to various software houses, prominently Electronic Arts, a software house that then offered Deluxe Paint, Deluxe Music and others. Electronic Arts also developed the Interchange File Format (IFF) file container, to store project files created by Deluxe Paint and Deluxe Music. IFF became the de facto standard in
https://en.wikipedia.org/wiki/Rabbi%20Nehemiah
Rabbi Nehemiah was a rabbi who lived circa 150 AD (fourth generation of tannaim). He was one of the great students of Rabbi Akiva, and one of the rabbis who received semicha from R' Judah ben Baba. The Talmud equated R' Nechemiah with Rabbi Nehorai: "His name was not Rabbi Nehorai, but Rabbi Nechemiah." His son, R' Yehudah BeRabi Nechemiah, studied before Rabbi Tarfon, but died at a young age after damaging R' Tarfon's honor, after R' Akiva predicted his death. Teachings In the Talmud, all anonymous sayings in the Tosefta are attributed to R' Nechemiah. However, Sherira Gaon said that this does not mean they were said by R' Nechemiah, but that the laws in question were transmitted by R' Nechemiah. In the Talmud, he frequently disagrees with R' Judah bar Ilai on matters of halacha. He is attributed as the author of the Mishnat ha-Middot (ca. AD 150), which would make it the earliest known Hebrew text on geometry, although some historians assign the text to a later period by an unknown author. The Mishnat ha-Middot argues against the common belief that the Bible defines the geometric ratio π (pi) as being exactly equal to 3, based on the description in 1 Kings 7:23 (and 2 Chronicles 4:2) of the great bowl situated outside the Temple of Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits. He maintained that the diameter of the bowl was measured from the outside brim, while the circumference was measured along the inner brim, which with a brim that is one handbreadth wide (as described in the subsequent verses 1 Kings 7:24 and 2 Chronicles 4:3) yields a ratio from the circular rim closer to the actual value of π. Quotes "Due to the sin of baseless hatred, great strife is found in a man's home, and his wife miscarries, and his sons and daughters die at a young age." See also History of numerical approximations of π References Mishnah rabbis 2nd-century rabbis Mathematics writers Geometers 2nd-century mathematicians
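A quick check of the arithmetic behind that argument (Python; the conversion of one handbreadth to about 1/6 of a cubit is a common assumption, not stated in the text above):

# Outer diameter 10 cubits; circumference 30 cubits measured along the inner brim.
# Assume a handbreadth is about 1/6 cubit, so the inner diameter loses two brim widths.
outer_diameter = 10.0
brim = 1.0 / 6.0
inner_diameter = outer_diameter - 2 * brim
ratio = 30.0 / inner_diameter
print(ratio)  # about 3.103, closer to pi than the naive 30/10 = 3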
https://en.wikipedia.org/wiki/Append
In computer programming, append is the operation for concatenating linked lists or arrays in some high-level programming languages. Lisp Append originates in the programming language Lisp. The append procedure takes zero or more (linked) lists as arguments, and returns the concatenation of these lists. (append '(1 2 3) '(a b) '() '(6)) ;Output: (1 2 3 a b 6) Since the append procedure must completely copy all of its arguments except the last, both its time and space complexity are O(n) for a list of n elements. It may thus be a source of inefficiency if used injudiciously in code. The nconc procedure (called append! in Scheme) performs the same function as append, but destructively: it alters the cdr of each argument (save the last), pointing it to the next list. Implementation Append can easily be defined recursively in terms of cons. The following is a simple implementation in Scheme, for two arguments only: (define append (lambda (ls1 ls2) (if (null? ls1) ls2 (cons (car ls1) (append (cdr ls1) ls2))))) Append can also be implemented using fold-right: (define append (lambda (a b) (fold-right cons b a))) Other languages Following Lisp, other high-level programming languages which feature linked lists as primitive data structures have adopted an append operation. As an operator for appending lists, Haskell uses ++ and OCaml uses @. Other languages use the + or ++ symbols to nondestructively concatenate a string, list, or array. Prolog The logic programming language Prolog features a built-in append predicate, which can be implemented as follows: append([],Ys,Ys). append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs). This predicate can be used for appending, but also for picking lists apart. Calling ?- append(L,R,[1,2,3]). yields the solutions: L = [], R = [1, 2, 3] ; L = [1], R = [2, 3] ; L = [1, 2], R = [3] ; L = [1, 2, 3], R = [] Miranda In Miranda, this right-fold, from Hughes (1989:5-6), has the same semantics (by example) as the Scheme imple
https://en.wikipedia.org/wiki/Recovery%20%28metallurgy%29
In metallurgy, recovery is a process by which a metal or alloy's deformed grains can reduce their stored energy by the removal or rearrangement of defects in their crystal structure. These defects, primarily dislocations, are introduced by plastic deformation of the material and act to increase its yield strength. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in its ductility. As a result, recovery may be considered beneficial or detrimental depending on the circumstances. Recovery is related to the similar processes of recrystallization and grain growth, each of them being stages of annealing. Recovery competes with recrystallization, as both are driven by the stored energy, but is also thought to be a necessary prerequisite for the nucleation of recrystallized grains. It is so called because there is a recovery of the electrical conductivity due to a reduction in dislocations. This creates defect-free channels, giving electrons an increased mean free path. Definition The physical processes that fall under the designations of recovery, recrystallization and grain growth are often difficult to distinguish in a precise manner. Doherty et al. (1998) stated: "The authors have agreed that ... recovery can be defined as all annealing processes occurring in deformed materials that occur without the migration of a high-angle grain boundary" Thus the process can be differentiated from recrystallization and grain growth, as both of those feature extensive movement of high-angle grain boundaries. If recovery occurs during deformation (a situation that is common in high-temperature processing) then it is referred to as 'dynamic', while recovery that occurs after processing is termed 'static'. The principal difference is that during dynamic recovery, stored energy continues to be introduced even as it is decreased by the recovery process - resulting in a form of dyna
https://en.wikipedia.org/wiki/Loudspeaker%20enclosure
A loudspeaker enclosure or loudspeaker cabinet is an enclosure (often rectangular box-shaped) in which speaker drivers (e.g., loudspeakers and tweeters) and associated electronic hardware, such as crossover circuits and, in some cases, power amplifiers, are mounted. Enclosures may range in design from simple, homemade DIY rectangular particleboard boxes to very complex, expensive computer-designed hi-fi cabinets that incorporate composite materials, internal baffles, horns, bass reflex ports and acoustic insulation. Loudspeaker enclosures range in size from small "bookshelf" speaker cabinets with woofers and small tweeters, designed for listening to music with a hi-fi system in a private home, to huge, heavy subwoofer enclosures with multiple speakers, designed for use in stadium concert sound reinforcement systems for rock music concerts. The primary role of the enclosure is to prevent sound waves generated by the rearward-facing surface of the diaphragm of an open speaker driver from interacting with sound waves generated at the front of the speaker driver. Because the forward- and rearward-generated sounds are out of phase with each other, any interaction between the two in the listening space creates a distortion of the original signal as it was intended to be reproduced. As such, a loudspeaker cannot be used without installing it in a baffle of some type, such as a closed box, vented box, open baffle, or a wall or ceiling (infinite baffle). The enclosure also plays a role in managing vibration induced by the driver frame and the moving air mass within the enclosure, as well as heat generated by driver voice coils and amplifiers (especially where woofers and subwoofers are concerned). The base, sometimes considered part of the enclosure, may include specially designed "feet" to decouple the speaker from the floor. Enclosures designed for use in PA systems, sound reinforcement systems and for use by electric musical instrument playe
https://en.wikipedia.org/wiki/Carry-save%20adder
A carry-save adder is a type of digital adder, used to efficiently compute the sum of three or more binary numbers. It differs from other digital adders in that it outputs two (or more) numbers, and the answer of the original summation can be achieved by adding these outputs together. A carry-save adder is typically used in a binary multiplier, since a binary multiplier involves addition of more than two binary numbers after multiplication. A big adder implemented using this technique will usually be much faster than conventional addition of those numbers. Motivation Consider the sum: 12345678 + 87654322 = 100000000 Using basic arithmetic, we calculate right to left, "8 + 2 = 0, carry 1", "7 + 2 + 1 = 0, carry 1", "6 + 3 + 1 = 0, carry 1", and so on to the end of the sum. Although we know the last digit of the result at once, we cannot know the first digit until we have gone through every digit in the calculation, passing the carry from each digit to the one on its left. Thus adding two n-digit numbers has to take a time proportional to n, even if the machinery we are using would otherwise be capable of performing many calculations simultaneously. In electronic terms, using bits (binary digits), this means that even if we have n one-bit adders at our disposal, we still have to allow a time proportional to n to allow a possible carry to propagate from one end of the number to the other. Until we have done this, we do not know the result of the addition, and we do not know whether the result is larger or smaller than a given number (for instance, we do not know whether it is positive or negative). A carry look-ahead adder can reduce the delay. In principle the delay can be reduced so that it is proportional to log n, but for large numbers this is no longer the case, because even when carry look-ahead is implemented, the distances that signals have to travel on the chip increase in proportion to n, and propagation delays increase at the same
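A minimal sketch (Python) of the core idea: a carry-save step compresses three addends into a sum word and a carry word with no carry propagation at all; only one final conventional addition propagates carries:

def carry_save_step(a: int, b: int, c: int):
    """Compress three addends into (sum, carry) without propagating carries."""
    s = a ^ b ^ c                        # bitwise sum of each column
    carry = (a & b) | (a & c) | (b & c)  # majority: the column generates a carry
    return s, carry << 1                 # each carry feeds the next column

s, c = carry_save_step(0b1011, 0b0110, 0b1110)
assert s + c == 0b1011 + 0b0110 + 0b1110  # one ordinary add finishes the job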
https://en.wikipedia.org/wiki/PortAudio
PortAudio is an open-source computer library for audio playback and recording. It is a cross-platform library, so programs using it can run on many different computer operating systems, including Windows, Mac OS X and Linux. PortAudio supports Core Audio on Mac OS X, ALSA on Linux, and MME, DirectSound, ASIO and WASAPI on Windows. Like other libraries whose primary goal is portability, PortAudio is written in the C programming language. It has also been implemented in the languages PureBasic and Lazarus/Free Pascal. PortAudio is based on a callback paradigm, similar to JACK and ASIO. PortAudio is part of the PortMedia project, which aims to provide a set of platform-independent libraries for music software. The free audio editor Audacity uses the PortAudio library, and so does JACK on the Windows platform. See also List of free software for audio Notes References PortAudio: Portable Audio Processing for All Platforms Using portable, multi-OS sound systems External links Audio libraries Computer libraries Free software programmed in C Software using the MIT license
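A minimal sketch of the callback paradigm as exposed by PyAudio, a third-party Python binding for PortAudio (default output device assumed; error handling omitted): the library calls the user-supplied function whenever it needs more audio frames.

import math, struct
import pyaudio

RATE = 44100

def callback(in_data, frame_count, time_info, status):
    # Invoked by PortAudio whenever the output buffer needs frame_count frames.
    samples = (int(32767 * 0.2 * math.sin(2 * math.pi * 440 * (callback.n + i) / RATE))
               for i in range(frame_count))
    callback.n += frame_count
    return (struct.pack(f"<{frame_count}h", *samples), pyaudio.paContinue)

callback.n = 0
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 output=True, stream_callback=callback)
# A 440 Hz tone plays until stopped: stream.stop_stream(); stream.close(); pa.terminate()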
https://en.wikipedia.org/wiki/PortMedia
PortMedia, formerly PortMusic, is a set of open source computer libraries for dealing with sound and MIDI. Currently the project has two main libraries: PortAudio, for digital audio input and output, and PortMidi, a library for MIDI input and output. A library for dealing with different audio file formats, PortSoundFile, is being planned, although another library, libsndfile, already exists and is licensed under the copyleft GNU Lesser General Public License. A standard MIDI file I/O library, PortSMF, is under construction. PortMusic has become PortMedia and is hosted on SourceForge. See also List of free software for audio External links PortMusic website Audio libraries Computer libraries Free audio software
https://en.wikipedia.org/wiki/PortMidi
PortMidi is a computer library for real time input and output of MIDI data. It is designed to be portable to many different operating systems. PortMidi is part of the PortMusic project. See also PortAudio External links portmidi.h – definition of the API and contains the documentation for PortMidi Audio libraries Computer libraries
https://en.wikipedia.org/wiki/Attack%20tree
Attack trees are conceptual diagrams showing how an asset, or target, might be attacked. Attack trees have been used in a variety of applications. In the field of information technology, they have been used to describe threats on computer systems and possible attacks to realize those threats. However, their use is not restricted to the analysis of conventional information systems. They are widely used in the fields of defense and aerospace for the analysis of threats against tamper-resistant electronics systems (e.g., avionics on military aircraft). Attack trees are increasingly being applied to computer control systems (especially relating to the electric power grid). Attack trees have also been used to understand threats to physical systems. Some of the earliest descriptions of attack trees are found in papers and articles by Bruce Schneier, when he was CTO of Counterpane Internet Security. Schneier was clearly involved in the development of attack tree concepts and was instrumental in publicizing them. However, the attributions in some of the early publicly available papers on attack trees also suggest the involvement of the National Security Agency in the initial development. Attack trees are very similar, if not identical, to threat trees. Threat trees were developed by Jonathan Weiss of Bell Laboratories to comply with guidance in MIL STD 1785 for AT&T's work on Command and Control for federal applications, and were first described in his 1982 paper. This work was later discussed in 1994 by Edward Amoroso. Basic Attack trees are multi-leveled diagrams consisting of one root, leaves, and children. From the bottom up, child nodes are conditions which must be satisfied to make the direct parent node true; when the root is satisfied, the attack is complete. Each node may be satisfied only by its direct child nodes. A node may be the child of another node; in that case, carrying out the attack naturally requires multiple steps.
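A minimal sketch (Python, with an invented illustrative goal) of how such a tree is commonly evaluated: OR nodes are satisfied by any child, AND nodes require every child, and leaves carry the atomic attack steps:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "and", or "or"
    children: List["Node"] = field(default_factory=list)
    achieved: bool = False        # for leaves: has this step been carried out?

    def satisfied(self) -> bool:
        if self.kind == "leaf":
            return self.achieved
        results = (c.satisfied() for c in self.children)
        return all(results) if self.kind == "and" else any(results)

# Hypothetical root goal: open a safe.
root = Node("open safe", "or", [
    Node("pick lock"),
    Node("eavesdrop", "and", [Node("listen to conversation"),
                              Node("get target to state combo")]),
])
root.children[0].achieved = True
print(root.satisfied())  # True: picking the lock alone satisfies the OR root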
https://en.wikipedia.org/wiki/Plug%20door
A plug door is a door designed to seal itself by taking advantage of the pressure difference on its two sides; it is typically used on aircraft with cabin pressurization. The higher pressure on one side forces the usually wedge-shaped door into its socket, making a good seal and preventing it from being opened until the pressure is released. Conversely, a non-plug door relies on the strength of the locking mechanism to keep the door shut. Aircraft The plug door is often seen on aircraft with pressurized cabins. Because the air pressure within the aircraft cabin is higher than that of the surrounding atmosphere, the door seals itself closed as the aircraft climbs and the pressure differential increases. This prevents accidental opening of the door. In the event of a decompression, the pressure differential disappears and the doors may be opened; as such, most airlines' operating procedures require cabin crew to keep passengers away from the doors until the aircraft has safely landed. On some aircraft the plug door opens partially inward, and through a complex hinge design can be tilted to fit through the fuselage opening, or the door may have locking hinged panels at the top and bottom edges that make it smaller than the opening, so that it may be swung outward. Plug doors are used on most modern airliners, particularly for the small passenger doors. However, since plug doors must open inward, the design is disadvantageous for cargo doors. Due to its large area, the cargo door on an airliner cannot be swung inside the fuselage without taking up a considerable amount of valuable cargo space. For this reason, these doors often open outward and use a locking mechanism with multiple pins or hatch dogs to prevent opening while in flight. Rapid transit The MTR Adtranz–CAF EMU and MTR Rotem EMU trains in Hong Kong use plug doors on the Airport Express, Tung Chung line and Tseung Kwan O line. Spacecraft An inward opening plug hatch design was used on
https://en.wikipedia.org/wiki/Shannon%E2%80%93Weaver%20model
The Shannon–Weaver model is one of the first and most influential models of communication. It was initially published in the 1948 paper A Mathematical Theory of Communication and explains communication in terms of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source produces the original message. The transmitter translates the message into a signal, which is sent using a channel. The receiver translates the signal back into the original message and makes it available to the destination. For a landline phone call, the person calling is the source. They use the telephone as a transmitter, which produces an electric signal that is sent through the wire as a channel. The person receiving the call is the destination and their telephone is the receiver. Shannon and Weaver distinguish three types of problems of communication: technical, semantic, and effectiveness problems. They focus on the technical level, which concerns the problem of how to use a signal to accurately reproduce a message from one location to another location. The difficulty in this regard is that noise may distort the signal. They discuss redundancy as a solution to this problem: if the original message is redundant then the distortions can be detected, which makes it possible to reconstruct the source's original intention. The Shannon–Weaver model of communication has been very influential in various fields, including communication theory and information theory. Many later theorists have built their own models on its insights. However, it is often criticized based on the claim that it oversimplifies communication. One common objection is that communication should not be understood as a one-way process but as a dynamic interaction of messages going back and forth between both participants. Another criticism rejects the idea that the message exists prior to the communication and argues instead that the encoding is itself a creative process that creates th
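A small sketch (Python) of the redundancy idea at the technical level: with a simple 3x repetition code, a single bit flipped by channel noise can be detected and corrected by majority vote at the receiver:

from collections import Counter

def encode(bits):            # transmitter: add redundancy
    return [b for b in bits for _ in range(3)]

def decode(signal):          # receiver: majority vote per 3-bit group
    return [Counter(signal[i:i+3]).most_common(1)[0][0]
            for i in range(0, len(signal), 3)]

message = [1, 0, 1, 1]
signal = encode(message)
signal[4] ^= 1               # noise distorts one bit of the channel signal
assert decode(signal) == message  # redundancy lets the receiver recover it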
https://en.wikipedia.org/wiki/MIKEY
Multimedia Internet KEYing (MIKEY) is a key management protocol that is intended for use with real-time applications. It can specifically be used to set up encryption keys for multimedia sessions that are secured using SRTP, the security protocol commonly used for securing real-time communications such as VoIP. MIKEY was first defined in RFC 3830; additional MIKEY modes have been defined in later RFCs. Purpose of MIKEY As described in RFC 3830, the MIKEY protocol is intended to provide end-to-end security between users to support a communication. To do this, it shares a session key, known as the Traffic Encryption Key (TEK), between the participants of a communication session. The MIKEY protocol may also authenticate the participants of the communication. MIKEY provides many methods to share the session key and authenticate participants. Using MIKEY in practice MIKEY is used to perform key management for securing a multimedia communication protocol. As such, MIKEY exchanges generally occur within the signalling protocol which supports the communication. A common setup is for MIKEY to support Secure VoIP by providing the key management mechanism for the VoIP protocol (SRTP). Key management is performed by including MIKEY messages within the SDP content of SIP signalling messages. Use cases MIKEY considers how to secure the following use cases: one-to-one communications, conference communications, group broadcast, call divert, call forking, and delayed delivery (voicemail). Not all MIKEY methods support each use case. Each MIKEY method also has its own advantages and disadvantages in terms of feature support, computational complexity and latency of communication setup. Key transport and exchange methods MIKEY supports eight different methods to set up a common secret (to be used as e.g. a session key or a session KEK): Pre-Shared Key (MIKEY-PSK): This is the most efficient way to handle the transport of the Common Secret, since only symmetric encryption is used and
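A toy sketch (Python) of the general pre-shared-key pattern: both ends derive a traffic encryption key from a provisioned secret plus fresh nonces exchanged in signalling. This is an illustration only; MIKEY's actual pseudo-random function, payload formats, and key hierarchy are specified in RFC 3830 and differ from this code.

import hmac, hashlib, os

def derive_tek(pre_shared_key: bytes, nonce_i: bytes, nonce_r: bytes) -> bytes:
    """Illustrative only: HMAC-SHA256 over both nonces, not MIKEY's PRF."""
    return hmac.new(pre_shared_key, b"TEK" + nonce_i + nonce_r,
                    hashlib.sha256).digest()

psk = os.urandom(32)                               # provisioned out of band
nonce_i, nonce_r = os.urandom(16), os.urandom(16)  # exchanged in signalling
tek = derive_tek(psk, nonce_i, nonce_r)            # both ends compute the same TEK
print(tek.hex())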
https://en.wikipedia.org/wiki/Rambutan%20%28cryptography%29
Rambutan is a family of encryption technologies designed by the Communications-Electronics Security Group (CESG), the technical division of the United Kingdom government's secret communications agency, GCHQ. It includes a range of encryption products designed by CESG for use in handling confidential (not secret) communications between parts of the British government, government agencies, and related bodies such as NHS Trusts. Unlike CESG's Red Pike system, Rambutan is not available as software: it is distributed only as a self-contained electronic device (an ASIC) which implements the entire cryptosystem and handles the related key distribution and storage tasks. Rambutan is not sold outside the government sector. Technical details of the Rambutan algorithm are secret. Security researcher Bruce Schneier describes it as a stream cipher based on linear-feedback shift registers, with five shift registers each of around 80 bits and a key size of 112 bits. RAMBUTAN-I communications chips (which implement a secure X.25 based communications system) are made by approved contractors Racal and Baltimore Technologies/Zergo Ltd. CESG later specified RAMBUTAN-II, an enhanced system with backward compatibility with existing RAMBUTAN-I infrastructure. The RAMBUTAN-II chip is a 64-pin quad ceramic pack chip, which implements the electronic codebook, cipher block chaining, and output feedback operating modes (each in 64 bits) and the cipher feedback mode in 1 or 8 bits. Schneier suggests that these modes may indicate Rambutan is a block cipher rather than a stream cipher. The three 64-bit modes operate at 88 megabits/second. Rambutan operates in three modes: ECB, CBC, and 8-bit CFB. References Cryptographic hardware Stream ciphers
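Since the algorithm is secret, only a generic illustration is possible. A minimal sketch (Python) of a Fibonacci-style linear-feedback shift register, the building block Schneier's description refers to; the register width and tap positions here are arbitrary demo values, not Rambutan's:

def lfsr(state: int, taps: tuple, nbits: int):
    """Yield one keystream bit per step from an nbits-wide Fibonacci LFSR."""
    while True:
        bit = 0
        for t in taps:                    # feedback: XOR of the tapped positions
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        yield state & 1

gen = lfsr(state=0b1011, taps=(3, 2), nbits=4)   # tiny 4-bit demo register
keystream = [next(gen) for _ in range(10)]
print(keystream)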
https://en.wikipedia.org/wiki/Torsion%20tensor
In differential geometry, the notion of torsion is a manner of characterizing a twist or screw of a moving frame around a curve. The torsion of a curve, as it appears in the Frenet–Serret formulas, for instance, quantifies the twist of a curve about its tangent vector as the curve evolves (or rather the rotation of the Frenet–Serret frame about the tangent vector). In the geometry of surfaces, the geodesic torsion describes how a surface twists about a curve on the surface. The companion notion of curvature measures how moving frames "roll" along a curve "without twisting". More generally, on a differentiable manifold equipped with an affine connection (that is, a connection in the tangent bundle), torsion and curvature form the two fundamental invariants of the connection. In this context, torsion gives an intrinsic characterization of how tangent spaces twist about a curve when they are parallel transported; whereas curvature describes how the tangent spaces roll along the curve. Torsion may be described concretely as a tensor, or as a vector-valued 2-form on the manifold. If ∇ is an affine connection on a differential manifold, then the torsion tensor is defined, in terms of vector fields X and Y, by T(X, Y) = ∇_X Y − ∇_Y X − [X, Y], where [X, Y] is the Lie bracket of vector fields. Torsion is particularly useful in the study of the geometry of geodesics. Given a system of parametrized geodesics, one can specify a class of affine connections having those geodesics, but differing by their torsions. There is a unique connection which absorbs the torsion, generalizing the Levi-Civita connection to other, possibly non-metric situations (such as Finsler geometry). The difference between a connection with torsion, and a corresponding connection without torsion is a tensor, called the contorsion tensor. Absorption of torsion also plays a fundamental role in the study of G-structures and Cartan's equivalence method. Torsion is also useful in the study of unparametrized families of geodesics, via
https://en.wikipedia.org/wiki/Surface%20%28mathematics%29
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line. There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). Definitions Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface. If the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation x^2 + y^2 + z^2 − 1 = 0. A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Eu
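A small sketch (Python) contrasting the two definitions just described: points produced by a parametrization of the unit sphere satisfy its implicit equation:

import math

def implicit(x, y, z):
    return x**2 + y**2 + z**2 - 1          # zero exactly on the unit sphere

def parametric(theta, phi):                 # one common parametrization
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

p = parametric(1.0, 2.0)
print(abs(implicit(*p)) < 1e-12)            # True: the image lies on the surface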
https://en.wikipedia.org/wiki/Helmut%20Gr%C3%B6ttrup
Helmut Gröttrup (12 February 1916 – 4 July 1981) was a German engineer, rocket scientist and inventor of the smart card. During World War II, he worked in the German V-2 rocket program under Wernher von Braun. From 1946 to 1950 he headed a group of 170 German scientists who were forced to work for the Soviet rocketry program under Sergei Korolev. After returning to West Germany in December 1953, he developed data processing systems, contributed to early commercial applications of computer science and coined the German term "Informatik". In 1967 Gröttrup invented the smart card as a "forgery-proof key" for secure identification and access control (ID card) or storage of a secure key, also including inductive coupling for near-field communication (NFC). From 1970 he headed a start-up division of Giesecke+Devrient for the development of banknote processing systems and machine-readable security features. Education Helmut Gröttrup's father Johann Gröttrup (1881 – 1940) was a mechanical engineer. He worked full-time at the Bund der technischen Angestellten und Beamten (Butab), a federation for technical staff and officials of the social democratic trade union in Berlin. His mother Thérèse Gröttrup (1894 – 1981), born Elsen, was active in the peace movement. Johann Gröttrup lost his job in 1933 when the Nazi Party came into power. From 1935 to 1939 Helmut Gröttrup studied applied physics at the Technical University of Berlin and wrote his thesis under professor Hans Geiger, the co-inventor of the Geiger counter. He also worked for Manfred von Ardenne's research laboratory Forschungslaboratorium für Elektronenphysik. German rocketry program From December 1939, Helmut Gröttrup worked in the German V-2 rocket program at the Peenemünde Army Research Center with Walter Dornberger and Wernher von Braun. In December 1940, he was made department head under Ernst Steinhoff for developing remote guidance and control systems. Since October 1943 Gröttrup had been under SD surveillan
https://en.wikipedia.org/wiki/TreeFam
TreeFam (Tree families database) is a database of phylogenetic trees of animal genes. It aims at developing a curated resource that gives reliable information about ortholog and paralog assignments, and the evolutionary history of various gene families. TreeFam defines a gene family as a group of genes that evolved after the speciation of single-metazoan animals. It also tries to include outgroup genes like yeast (S. cerevisiae and S. pombe) and plant (A. thaliana) to reveal these distant members. TreeFam is also an ortholog database. Unlike other, pairwise-alignment-based ones, TreeFam infers orthologs by means of gene trees. It fits a gene tree into the universal species tree and finds historical duplication, speciation and loss events. TreeFam uses this information to evaluate tree building, guide manual curation, and infer complex ortholog and paralog relations. The basic elements of TreeFam are gene families that can be divided into two parts: TreeFam-A and TreeFam-B families. TreeFam-B families are automatically created. They might contain errors given complex phylogenies. TreeFam-A families are manually curated from TreeFam-B ones. Family names and node names are assigned at the same time. The ultimate goal of TreeFam is to present a curated resource for all the families. TreeFam is run as a project at the Wellcome Trust Sanger Institute, and its software is housed on SourceForge as "TreeSoft". See also Homology (biology) HomoloGene Phylogenetics OrthoDB Orthologous MAtrix (OMA) Inparanoid References External links TreeFam website TreeSoft Computational phylogenetics Genetics databases Genetics in the United Kingdom Phylogenetics Science and technology in Cambridgeshire South Cambridgeshire District
https://en.wikipedia.org/wiki/Hybrid%20kernel
A hybrid kernel is an operating system kernel architecture that attempts to combine aspects and benefits of microkernel and monolithic kernel architectures used in operating systems. Overview The traditional kernel categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels). The "hybrid" category is controversial, due to the similarity of hybrid kernels and ordinary monolithic kernels; the term has been dismissed by Linus Torvalds as simple marketing. The idea behind a hybrid kernel is to have a kernel structure similar to that of a microkernel, but to implement that structure in the manner of a monolithic kernel. In contrast to a microkernel, all (or nearly all) operating system services in a hybrid kernel are still in kernel space. There are none of the reliability benefits of having services in user space, as with a microkernel. However, just as with an ordinary monolithic kernel, there is none of the performance overhead for message passing and context switching between kernel and user mode that normally comes with a microkernel. Examples NT kernel One prominent example of a hybrid kernel is the Microsoft Windows NT kernel that powers all operating systems in the Windows NT family, up to and including Windows 11 and Windows Server 2022, and powers Windows Phone 8, Windows Phone 8.1, and Xbox One. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. NT-based Windows is classified as a hybrid kernel (or a macrokernel) rather than a monolithic kernel because the emulation subsystems run in user-mode server processes, rather than in kernel mode as on a monolithic kernel, and further because of the large number of design goals which resemble design goals of
https://en.wikipedia.org/wiki/The%20Dam%20Busters%20%28video%20game%29
The Dam Busters is a combat flight simulator set in World War II, published by U.S. Gold in 1984. It is loosely based on the real-life Operation Chastise and the 1955 film. The game was released in 1984 for the ColecoVision and Commodore 64; in 1985 for Apple II, DOS, MSX and ZX Spectrum; then in 1986 for the Amstrad CPC and NEC PC-9801. Gameplay The player chooses from three different night missions, each of which is increasingly difficult. In all three, the goal is to successfully bomb a dam. On the practice run, the player can approach and bomb the dam without any other obstacles. The two other missions feature various enemies to overcome, and the flights start from either the French coast or a British airfield. During the flight, the player controls every aspect of the bomber from each of the seven crew positions: Pilot, Front Gunner, Tail Gunner, Bomb Aimer, Navigator, Engineer, and Squadron Leader. Leaving any of these positions unattended during an event could spell the death of the crew member in that position, rendering the position useless during further encounters. The player must evade enemies, plan the approach, and set all of the variables (speed, height, timing, etc.) to execute a successful bombing. Sometimes, it becomes necessary to deal with emergencies, such as engine fires. While en route to the target the player can expect to encounter attacks by enemy aircraft, barrage balloons, flak and enemy searchlights. Events like this will flash along the border of the screen, while indicating the key to press to take the player to the station in need of assistance. For example, when flying through enemy searchlights, the player will need to man the gunner's station and shoot out the lights on the ground. If left unattended, the player can expect flak and enemy aircraft to start damaging the bomber. Once the player begins the final run to the target, they are presented with the custom bombing sights, as made famous by the story. When the player toggles t
https://en.wikipedia.org/wiki/IPrint
iPrint is a print server developed by Novell, now owned by Micro Focus. iPrint enabled users to install a device driver for a printer directly from a web browser, and to submit print jobs over a computer network. It could process print jobs routed through the internet using the Internet Printing Protocol (IPP). The iPrint server ran on Novell's NetWare operating system. Windows, Linux, and Mac OS clients needed Novell's iPrint client software to make use of iPrint services. Although iPrint was bound to Novell Distributed Print Services (NDPS), it did not require a Novell client, only an iPrint client. See also Novell Embedded Systems Technology (NEST) References Further reading http://www.novell.com/documentation/oes/index.html?page=/documentation/oes/oes_home/data/prntsvcs.html Novell Open Enterprise Server documentation on iPrint Computer printing Novell NetWare Proprietary software Servers (computing)
https://en.wikipedia.org/wiki/Histogram%20equalization
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Overview This method usually increases the global contrast of many images, especially when the image is represented by a narrow range of intensity values. Through this adjustment, the intensities can be better distributed on the histogram, utilizing the full range of intensities evenly. This allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most heavily populated intensity values, which otherwise compress the usable contrast. The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are either over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique, adaptive to the input image, and an invertible operator. So in theory, if the histogram equalization function is known, then the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate. It may increase the contrast of background noise, while decreasing the usable signal. In scientific imaging where spatial correlation is more important than intensity of signal (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detection. Histogram equalization often produces unrealistic effects in photographs; however it is very useful for scientific images like thermal, satellite or x-ray images, often the same class of images to which one would apply false-color. Also histogram equalization can produce undesirable effects (like visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette it will further red
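A minimal sketch (Python with NumPy) of the standard procedure: build the histogram, form the cumulative distribution function, and remap each pixel through the normalized CDF:

import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized CDF to spread intensities.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.random.randint(90, 110, size=(64, 64), dtype=np.uint8)  # low contrast
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # range stretched to 0..255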
https://en.wikipedia.org/wiki/Force%20of%20mortality
In actuarial science, force of mortality represents the instantaneous rate of mortality at a certain age measured on an annualized basis. It is identical in concept to failure rate, also called hazard function, in reliability theory. Motivation and definition In a life table, we consider the probability of a person dying from age x to x + 1, called qx. In the continuous case, we could also consider the conditional probability of a person who has attained age (x) dying between ages x and x + Δx, which is P[x < X ≤ x + Δx | X > x] = (FX(x + Δx) − FX(x)) / (1 − FX(x)), where FX(x) is the cumulative distribution function of the continuous age-at-death random variable, X. As Δx tends to zero, so does this probability in the continuous case. The approximate force of mortality is this probability divided by Δx. If we let Δx tend to zero, we get the function for force of mortality, denoted by μ(x): μ(x) = F'X(x) / (1 − FX(x)). Since fX(x) = F'X(x) is the probability density function of X, and S(x) = 1 − FX(x) is the survival function, the force of mortality can also be expressed variously as: μ(x) = fX(x) / S(x) = −S'(x) / S(x) = −(d/dx) ln S(x). To understand conceptually how the force of mortality operates within a population, consider that at the ages, x, where the probability density function fX(x) is zero, there is no chance of dying. Thus the force of mortality at these ages is zero. The force of mortality μ(x) uniquely defines a probability density function fX(x). The force of mortality can be interpreted as the conditional density of failure at age x, while f(x) is the unconditional density of failure at age x. The unconditional density of failure at age x is the product of the probability of survival to age x, and the conditional density of failure at age x, given survival to age x. This is expressed in symbols as fX(x) = S(x) μ(x), or equivalently μ(x) = fX(x) / S(x). In many instances, it is also desirable to determine the survival probability function when the force of mortality is known. To do this, integrate the force of mortality over the interval x to x + t. By the fundamental theorem of calculus, this is simply ∫ from x to x + t of μ(y) dy = −(ln S(x + t) − ln S(x)) = ln(S(x) / S(x + t)). Let us denote then tak
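A small sketch (Python) checking the identities above numerically for an exponential lifetime, the standard textbook case where the force of mortality is a constant λ (the model choice here is an assumption for illustration):

import math

lam = 0.02                          # constant hazard, per year

def S(x):                           # survival function of Exp(lam)
    return math.exp(-lam * x)

def f(x):                           # density
    return lam * math.exp(-lam * x)

x = 30.0
mu = f(x) / S(x)                    # force of mortality: f / S
print(mu)                           # 0.02, independent of age for this model

# Survival recovered from the hazard: S(x) = exp(-integral of mu from 0 to x)
print(math.exp(-mu * x), S(x))      # both 0.5488...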
https://en.wikipedia.org/wiki/Uniform%20memory%20access
Uniform memory access (UMA) is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with non-uniform memory access (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general purpose and time sharing applications by multiple users. It can be used to speed up the execution of a single large program in time-critical applications. Types of architectures There are three types of UMA architectures: UMA using bus-based symmetric multiprocessing (SMP) architectures; UMA using crossbar switches; UMA using multistage interconnection networks. hUMA In April 2013, the term hUMA (heterogeneous uniform memory access) began to appear in AMD promotional material to refer to CPU and GPU sharing the same system memory via cache coherent views. Advantages include an easier programming model and less copying of data between separate memory pools. See also Non-uniform memory access Cache-only memory architecture Heterogeneous System Architecture References Computer memory Parallel computing
https://en.wikipedia.org/wiki/171%20%28number%29
171 (one hundred [and] seventy-one) is the natural number following 170 and preceding 172. In mathematics 171 is a triangular number and a Jacobsthal number. There are 171 transitive relations on three labeled elements, and 171 combinatorially distinct ways of subdividing a cuboid by flat cuts into a mesh of tetrahedra, without adding extra vertices. The diagonals of a regular decagon meet at 171 points, including both crossings and the vertices of the decagon. There are 171 faces and edges in the 57-cell, an abstract 4-polytope with hemi-dodecahedral cells that is its own dual polytope. Within moonshine theory of sporadic groups, the friendly giant M is defined as having cyclic groups ⟨g⟩, g ∈ M, that are linked with McKay–Thompson series T_g(τ) built from the graded characters of g acting on the moonshine module. This generates 171 moonshine groups within M associated with the T_g that are principal moduli for different genus zero congruence groups commensurable with the projective linear group PSL(2, R). See also The year AD 171 or 171 BC List of highways numbered 171 References Integers
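Three of these claims are small enough to confirm by brute force; a quick sketch (the helper function is ours):

```python
from itertools import product

# 171 is triangular: T(18) = 18 * 19 / 2.
assert 18 * 19 // 2 == 171

# 171 is a Jacobsthal number: J(n) = J(n-1) + 2*J(n-2), J(0)=0, J(1)=1.
a, b = 0, 1
for _ in range(8):
    a, b = b, b + 2 * a
assert b == 171                     # J(9) = 171

# 171 transitive relations on three labeled elements: test all 2^9 relations.
def transitive(r):
    return all((a, c) in r for a, b in r for b2, c in r if b == b2)

pairs = [(i, j) for i in range(3) for j in range(3)]
count = sum(
    transitive({p for p, keep in zip(pairs, bits) if keep})
    for bits in product([0, 1], repeat=9)
)
assert count == 171
print("all checks pass")
```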
https://en.wikipedia.org/wiki/174%20%28number%29
174 (one hundred [and] seventy-four) is the natural number following 173 and preceding 175. In mathematics There are 174 7-crossing semi-meanders, ways of arranging a semi-infinite curve in the plane so that it crosses a straight line seven times. There are 174 invertible (0,1)-matrices. There are also 174 combinatorially distinct ways of subdividing a topological cuboid into a mesh of tetrahedra, without adding extra vertices, although not all can be represented geometrically by flat-sided polyhedra. The Mordell curve y² = x³ − 174 has rank three, and 174 is the smallest positive integer n for which y² = x³ − n has this rank. The corresponding number for curves y² = x³ + n is 113. In other fields In English draughts or checkers, a common variation is the "three-move restriction", in which the first three moves by both players are chosen at random. There are 174 different choices for these moves, although some systems for choosing these moves further restrict them to a subset that is believed to lead to an even position. See also The year AD 174 or 174 BC List of highways numbered 174 References Integers
https://en.wikipedia.org/wiki/Ubicom
Ubicom was a company which developed communications and media processor (CMP) and software platforms for real-time interactive applications and multimedia content delivery in the digital home. The company provided optimized system-level solutions to OEMs for a wide range of products including wireless routers, access points, VoIP gateways, streaming media devices, print servers and other network devices. Ubicom was a venture-backed, privately held company with corporate headquarters in San Jose, California. History Ubicom was founded as Scenix Semiconductor in 1996. The company operated under that name until 1999. In 2000, Scenix became "Ubicom," a word derived from "ubiquitous communications". April 1999: Mayfield Fund leads $10 million equity investment in Scenix. November 2000: Scenix changes its name to Ubicom. November 2002: Intersil and Ubicom demonstrate world's first 802.11g wireless access point. March 2006: Ubicom secures $20 million in Series 3 funding, led by Investcorp Technology Ventures. March 2012: Ubicom is taken over by Qualcomm Atheros. Products As Scenix and Ubicom, the company designed several families of microcontrollers, including: The SX Series of 8-bit microcontrollers, a product line which was partially compatible with Arizona Microchip devices and ran at up to 100 MHz, single cycle. This product was eventually sold to Parallax, who continued its production. The IP series of high performance media and Internet processors. These devices were designed to act as gateways for streaming media and data over wired and wireless links. The Scenix/Ubicom processors relied on very high speed and low latency processing to emulate hardware interfaces in software such as interrupt-polled soft-UARTS. This reduced the size of the silicon chip and therefore the cost, but increased the complexity of the software required on the chip. Ubicom developed its own architecture, the Ubicom32, and a real-time operating system (RTOS) for it. For exam
https://en.wikipedia.org/wiki/Mating%20in%20fungi
Fungi are a diverse group of organisms that employ a huge variety of reproductive strategies, ranging from fully asexual to almost exclusively sexual species. Most species can reproduce both sexually and asexually, alternating between haploid and diploid forms. This contrasts with many eukaryotes such as mammals, where the adults are always diploid and produce haploid gametes which combine to form the next generation. In fungi, both haploid and diploid forms can reproduce – haploid individuals can undergo asexual reproduction while diploid forms can produce gametes that combine to give rise to the next generation. Mating in fungi is a complex process governed by mating types. Research on fungal mating has focused on several model species with different behaviour. Not all fungi reproduce sexually and many that do are isogamous; thus, for many members of the fungal kingdom, the terms "male" and "female" do not apply. Homothallic species are able to mate with themselves, while in heterothallic species only isolates of opposite mating types can mate. Mating between isogamous fungi may consist only of a transfer of a nucleus from one cell to another. Vegetative incompatibility within species often prevents a fungal isolate from mating with another isolate. Isolates of the same incompatibility group do not mate, or their mating does not lead to successful offspring. High variation has been reported including same-chemotype mating, sporophyte to gametophyte mating and biparental transfer of mitochondria. Mating in Zygomycota A zygomycete hypha grows towards a compatible mate and they both form a bridge, called progametangia, by joining at the hyphal tips via plasmogamy. A pair of septa forms around the merged tips, enclosing nuclei from both isolates. A second pair of septa forms two adjacent cells, one on each side. These adjacent cells, called suspensors, provide structural support. The central cell, called the zygosporangium, is destined to become a spore. The zygosporang
https://en.wikipedia.org/wiki/C-slowing
C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same. Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency. Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs. See also Pipelining Barrel processor Resources PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures Simple Symmetric Multithreading in Xilinx FPGAs Post Placement C-Slow Retiming for Xilinx Virtex (.ppt) Post Placement C-Slow Retiming for Xilinx Virtex (.pdf) Exploration of RaPiD-style Pipelined FPGA Interconnects Time and Area Efficient Pattern Matching on FPGAs Gate arrays
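The effect is easiest to see in a software model. The sketch below is a behavioral Python analogy we constructed (not hardware code): it models a registered feedback circuit and its 2-slow version, where doubling the registers turns one computation into two interleaved, independent threads:

```python
def run_circuit(inputs, n_regs):
    """Model a feedback circuit whose output register feeds an adder.

    With n_regs = 1 this is an ordinary accumulator.  C-slowing to
    n_regs = C makes the feedback path C cycles long, so the input
    stream is processed as C independent interleaved accumulations.
    """
    regs = [0] * n_regs          # the single register replaced by a chain of C
    outputs = []
    for x in inputs:
        regs.append(regs.pop(0) + x)   # shift the chain; combinational add
        outputs.append(regs[-1])
    return outputs

# One thread: a plain running sum.
print(run_circuit([1, 2, 3, 4, 5, 6], n_regs=1))   # [1, 3, 6, 10, 15, 21]

# 2-slowed: even- and odd-indexed inputs form two independent running sums.
print(run_circuit([1, 2, 3, 4, 5, 6], n_regs=2))   # [1, 2, 4, 6, 9, 12]
```

The 2-slow output interleaves the sums 1, 4, 9 and 2, 6, 12 — two threads sharing one datapath, which is exactly what allows retiming to shorten the clock period without losing total throughput.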
https://en.wikipedia.org/wiki/Magic%20graph
A magic graph is a graph whose edges are labelled by the first q positive integers, where q is the number of edges, so that the sum over the edges incident with any vertex is the same, independent of the choice of vertex; or it is a graph that has such a labelling. The name "magic" sometimes means that the integers are any positive integers; then the graph and the labelling using the first q positive integers are called supermagic. A graph is vertex-magic if its vertices can be labelled so that the sum on any edge is the same. It is total magic if its edges and vertices can be labelled so that the vertex label plus the sum of labels on edges incident with that vertex is a constant. There are a great many variations on the concept of magic labelling of a graph. There is much variation in terminology as well. The definitions here are perhaps the most common. Comprehensive references for magic labellings and magic graphs are Gallian (1998), Wallis (2001), and Marr and Wallis (2013). Magic squares A semimagic square is an n × n square with the numbers 1 to n² in its cells, in which the sum of each row and column is the same. A semimagic square is equivalent to a magic labelling of the complete bipartite graph Kn,n. The two vertex sets of Kn,n correspond to the rows and the columns of the square, respectively, and the label on an edge rᵢsⱼ is the value in row i, column j of the semimagic square. The definition of semimagic squares differs from the definition of magic squares in the treatment of the diagonals of the square. Magic squares are required to have diagonals with the same sum as the row and column sums, but for semimagic squares this is not required. Thus, every magic square is semimagic, but not vice versa. References Nora Hartsfield and Gerhard Ringel (1994, 2003), Pearls in Graph Theory, revised edition. Dover Publications, Mineola, N.Y. Section 6.1. W. D. Wallis (2001), Magic Graphs. Birkhäuser Boston, Boston, Mass. Alison M.
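The correspondence with Kn,n can be checked directly. The sketch below (illustrative code of ours, using the classic 3 × 3 magic square) treats each entry as an edge label of K3,3 and verifies that every vertex — each row and each column — sees the same sum over its incident edges:

```python
# Classic 3x3 magic square; read as a semimagic square it is a magic
# labelling of K_{3,3}: entry (i, j) labels the edge joining row
# vertex r_i to column vertex s_j.
square = [
    [2, 7, 6],
    [9, 5, 1],
    [4, 3, 8],
]

row_sums = [sum(row) for row in square]            # sums at the row vertices
col_sums = [sum(col) for col in zip(*square)]      # sums at the column vertices

# All six vertices of K_{3,3} have the same incident-edge sum,
# so the labelling with 1..9 is magic.
assert set(row_sums) == set(col_sums) == {15}
print("magic labelling of K_{3,3} with constant vertex sum 15")
```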
https://en.wikipedia.org/wiki/NCR%20VRX
VRX is an acronym for Virtual Resource eXecutive, a proprietary operating system on the NCR Criterion series, and later the V-8000 series of mainframe computers manufactured by NCR Corporation during the 1970s and 1980s. It replaced the B3 Operating System originally distributed with the Century series, and inherited many of the features of the B4 Operating System from the high-end of the NCR Century series of computers. VRX was upgraded in the late 1980s and 1990s to become VRX/E for use on the NCR 9800 (Criterion) series of computers. Edward D. Scott managed the development team of 150 software engineers who developed VRX, and James J. "JJ" Whelan was the software architect responsible for technical oversight and the overall architecture of VRX. Tom Tang was the Director of Engineering at NCR responsible for development of the entire Criterion family of computers. This product line achieved over $1B in revenue and $300M in profits for NCR. VRX was shipped to its first customers, on a trial basis, in 1977. Customer sites included the United Farm Workers labor union, which in 1982 was running an NCR 8555 mainframe under VRX. VRX was NCR's response to IBM's MVS virtual storage operating system and was NCR's first virtual storage system. It was based on a segmented page architecture provided in the Criterion architecture. The Criterion series provided a virtual machine architecture which allowed different machine architectures to run under the same operating system. The initial offering provided a Century virtual machine which was instruction-compatible with the Century series and a COBOL virtual machine designed to optimize programs written in COBOL. Switching between virtual machines was provided by a virtual machine indicator in the subroutine call mechanism. This allowed programs written in one virtual machine to use subroutines written for another. The same mechanism was used to enter an "executive" state used for operating system functions and a "privile
https://en.wikipedia.org/wiki/Channel%2037
Channel 37 is an ultra-high frequency (UHF) television broadcasting channel intentionally left unused by countries in most of ITU Region 2, such as the United States, Canada, Mexico and Brazil. The frequency range allocated to this channel is important for radio astronomy, so all broadcasting is prohibited within a window of frequencies centred typically on 611 MHz. Similar reservations exist in portions of the Eurasian and Asian regions, although the channel numbering varies. History Channel 37 in System M and N countries occupied a band of UHF frequencies from 608 to 614 MHz. This band is particularly important to radio astronomy because it allows observation in a region of the spectrum in between the dedicated frequency allocations near 410 MHz and 1.4 GHz. The area reserved or unused differs from nation to nation and region to region (as for example the EU and British Isles have slightly different reserved frequency areas). One radio astronomy application in this band is for very-long-baseline interferometry. When UHF channels were being allocated in the United States in 1952, channel 37 was assigned to 18 communities across the country. One of them, Valdosta, Georgia, featured the only construction permit ever issued for channel 37: WGOV-TV, owned by Eurith Dickenson "Dee" Rivers Jr., son of the former governor of Georgia (hence the call letters). Rivers received the CP on February 26, 1953, but WGOV-TV never made it to the air; on October 28, 1955, they requested an allocation on channel 8, but the petition was denied. In 1963, the Federal Communications Commission (FCC) adopted a 10-year moratorium on any allocation of stations to Channel 37. A new ban on such stations took effect at the beginning of 1974, and was made permanent by a number of later FCC actions. As a result of this, and similar actions by the Canadian Radio-television and Telecommunications Commission, Channel 37 has never been used by any over-the-air television station in Canada or the United States. The 2016-20
https://en.wikipedia.org/wiki/Emergent%20virus
An emergent virus (or emerging virus) is a virus that is either newly appeared, notably increasing in incidence/geographic range or has the potential to increase in the near future. Emergent viruses are a leading cause of emerging infectious diseases and raise public health challenges globally, given their potential to cause outbreaks of disease which can lead to epidemics and pandemics. As well as causing disease, emergent viruses can also have severe economic implications. Recent examples include the SARS-related coronaviruses, which have caused the 2002-2004 outbreak of SARS (SARS-CoV-1) and the 2019–21 pandemic of COVID-19 (SARS-CoV-2). Other examples include the human immunodeficiency virus which causes HIV/AIDS; the viruses responsible for Ebola; the H5N1 influenza virus responsible for avian flu; and H1N1/09, which caused the 2009 swine flu pandemic (an earlier emergent strain of H1N1 caused the 1918 Spanish flu pandemic). Viral emergence in humans is often a consequence of zoonosis, which involves a cross-species jump of a viral disease into humans from other animals. As zoonotic viruses exist in animal reservoirs, they are much more difficult to eradicate and can therefore establish persistent infections in human populations. Emergent viruses should not be confused with re-emerging viruses or newly detected viruses. A re-emerging virus is generally considered to be a previously appeared virus that is experiencing a resurgence, for example measles. A newly detected virus is a previously unrecognized virus that had been circulating in the species as endemic or epidemic infections. Newly detected viruses may have escaped classification because they left no distinctive clues, and/or could not be isolated or propagated in cell culture. Examples include human rhinovirus (a leading cause of common colds which was first identified in 1956), hepatitis C (eventually identified in 1989), and human metapneumovirus (first described in 2001, but thought to have been ci
https://en.wikipedia.org/wiki/Direct%20digital%20control
Direct digital control is the automated control of a condition or process by a digital device (computer). Direct digital control takes a centralized network-oriented approach. All instrumentation is gathered by various analog and digital converters which use the network to transport these signals to the central controller. The centralized computer then follows all of its production rules (which may incorporate sense points anywhere in the structure) and causes actions to be sent via the same network to valves, actuators, and other heating, ventilating, and air conditioning components that can be adjusted. Overview Central controllers and most terminal unit controllers are programmable, meaning the direct digital control program code may be customized for the intended use. The program features include time schedules, setpoints, controllers, logic, timers, trend logs, and alarms. The unit controllers typically have analog and digital inputs, that allow measurement of the variable (temperature, humidity, or pressure) and analog and digital outputs for control of the medium (hot/cold water and/or steam). Digital inputs are typically (dry) contacts from a control device, and analog inputs are typically a voltage or current measurement from a variable (temperature, humidity, velocity, or pressure) sensing device. Digital outputs are typically relay contacts used to start and stop equipment, and analog outputs are typically voltage or current signals to control the movement of the medium (air/water/steam) control devices. History An early example of a direct digital control system was completed by the Australian business Midac in 1981-1982 using R-Tec Australian designed hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end" system in the basement of the Old Geology building. Each remote or Satellite Intelligence Unit (SIU) ran 2 Z80 microprocessors whilst the fro
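A toy control loop makes these programmable features concrete. The sketch below is our illustration only — the setpoints and names are invented, and a real DDC controller runs vendor firmware rather than Python — but it shows a time schedule, a setpoint, and on/off logic driving a digital output from an analog input:

```python
import random

OCCUPIED_SETPOINT = 21.0      # deg C, assumed occupied-hours setpoint
UNOCCUPIED_SETPOINT = 16.0    # deg C, assumed night setback
DEADBAND = 0.5                # hysteresis to avoid short-cycling equipment

def setpoint(hour: int) -> float:
    """Time schedule: occupied hours get the comfort setpoint."""
    return OCCUPIED_SETPOINT if 7 <= hour < 18 else UNOCCUPIED_SETPOINT

def control(temp: float, hour: int, heating_on: bool) -> bool:
    """On/off heating logic with a deadband around the scheduled setpoint."""
    target = setpoint(hour)
    if temp < target - DEADBAND:
        return True               # digital output: start heating
    if temp > target + DEADBAND:
        return False              # digital output: stop heating
    return heating_on             # inside the deadband: hold current state

heating = False
for hour in range(24):
    temp = 15.0 + random.random() * 8.0   # stand-in for an analog input
    heating = control(temp, hour, heating)
    print(f"{hour:02d}:00  temp={temp:4.1f}  setpoint={setpoint(hour):4.1f}  "
          f"heat={'ON' if heating else 'off'}")
```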
https://en.wikipedia.org/wiki/Fredholm%20theory
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm. Overview The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details. Fredholm equation of the first kind Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given: g(x) = ∫ K(x, y) f(y) dy. This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation Lf = g, where the function g is given and f is unknown. Here, L stands for a linear differential operator. For example, one might take L to be an elliptic operator, such as the Laplacian L = ∇², in which case the equation to be solved becomes the Poisson equation. A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function G(x, y) such that L G(x, y) = δ(x − y) for a given pair (x, y), where δ is the Dirac delta function. The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation, f(x) = ∫ G(x, y) g(y) dy. The function G(x, y) is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises. In the general theory, x and y may be points on any manifold; the real number line or n-dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, t
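Numerically, a first-kind Fredholm equation is usually attacked by discretizing the integral with a quadrature rule, which turns it into a linear system. The sketch below is our construction, with an assumed Gaussian kernel and a known test solution; a truncated least-squares solve stands in for proper regularization, since first-kind problems are typically ill-conditioned:

```python
import numpy as np

# Solve g(x) = integral of K(x, y) f(y) dy for f on [0, 1] by discretization.
n = 80
ys = np.linspace(0.0, 1.0, n)
w = 1.0 / n                                            # simple quadrature weight

K = np.exp(-50.0 * (ys[:, None] - ys[None, :]) ** 2)   # assumed smoothing kernel
f_true = np.sin(2 * np.pi * ys)                        # known test solution
g = (K * w) @ f_true                                   # synthesize the data g

# First-kind equations are ill-posed: a plain solve amplifies noise, so we
# truncate small singular values via lstsq's rcond cutoff.
f_est, *_ = np.linalg.lstsq(K * w, g, rcond=1e-8)

print(np.max(np.abs(f_est - f_true)))   # small residual on this noise-free data
```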
https://en.wikipedia.org/wiki/176%20%28number%29
176 (one hundred [and] seventy-six) is the natural number following 175 and preceding 177. In mathematics 176 is an even number and an abundant number. It is an odious number, a self number, a semiperfect number, and a practical number. 176 is a cake number, a happy number, a pentagonal number, and an octagonal number. 15 can be partitioned in 176 ways. The Higman–Sims group can be constructed as a doubly transitive permutation group acting on a geometry containing 176 points, and it is also the symmetry group of the largest possible set of equiangular lines in 22 dimensions, which contains 176 lines. In astronomy 176 Iduna is a large main belt asteroid with a composition similar to that of the largest main belt asteroid, 1 Ceres Gliese 176 is a red dwarf star in the constellation of Taurus Gliese 176 b is a super-Earth exoplanet in the constellation of Taurus. This planet orbits close to its parent star Gliese 176 In the Bible Minuscule 176 (in the Gregory-Aland numbering), a Greek minuscule manuscript of the New Testament 176 is the highest verse number in the Bible. Found in Psalm 119. In the military Attack Squadron 176 United States Navy squadron during the Vietnam War was a United States Navy troop transport during World War II, the Korean War and Vietnam War was a United States Navy during World War II was a United States Navy during World War II was a United States Navy Porpoise-class submarine during World War II was a United States Navy during World War II was a United States Navy following World War I was a United States Navy Sonoma-class fleet tug during World War II 176th Wing is the largest unit of the Alaska Air National Guard In transportation Heinkel He 176 was a German rocket-powered aircraft London Buses route 176 176th Street, Bronx elevated station on the IRT Jerome Avenue Line of the New York City Subway In other fields 176 is also: The year AD 176 or 176 BC 176 AH is a year in the Islamic calendar that co
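Several of the listed number-theoretic properties can be confirmed by direct computation; a quick sketch (helper functions ours):

```python
def polygonal(s, n):
    """n-th s-gonal number: ((s-2)n^2 - (s-4)n) / 2."""
    return ((s - 2) * n * n - (s - 4) * n) // 2

assert polygonal(5, 11) == 176             # pentagonal
assert polygonal(8, 8) == 176              # octagonal
assert (10**3 + 5 * 10 + 6) // 6 == 176    # cake number (n^3 + 5n + 6)/6, n = 10

# Happy number: iterating the sum of squared digits reaches 1.
n = 176
seen = set()
while n != 1 and n not in seen:
    seen.add(n)
    n = sum(int(d) ** 2 for d in str(n))
assert n == 1                              # 176 -> 86 -> 100 -> 1

# 15 has 176 partitions.
def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(min(n, max_part), 0, -1))

assert partitions(15) == 176
print("all checks pass")
```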
https://en.wikipedia.org/wiki/Host%20system
A host system is any networked computer that provides services to other systems or users. These services may include printer, web, or database access. A host system usually runs a multi-user operating system such as Unix, MVS or VMS, or at least an operating system with network services such as Windows. Computer networking
https://en.wikipedia.org/wiki/Torsion%20box
A torsion box consists of two thin layers of material (skins) on either side of a lightweight core, usually a grid of beams. It is designed to resist torsion under an applied load. A hollow core door is probably the most common example of a torsion box (stressed skin) structure. The principle is to use less material more efficiently. The torsion box uses the properties of its thin surfaces to carry the imposed loads primarily through tension while the close proximity of the enclosed core material compensates for the tendency of the opposite side to buckle under compression. Torsion boxes are used in the construction of structural insulated panels for houses, wooden tables and doors, skis, snowboards, and airframes - especially wings and vertical stabilizers. See also Tubular bridge and Fairbairn crane, the Victorian invention of the multiple torsion box, for the construction of bridges and cranes in wrought iron. Structural engineering
https://en.wikipedia.org/wiki/177%20%28number%29
177 (one hundred [and] seventy-seven) is the natural number following 176 and preceding 178. In mathematics It is a Leyland number since . It is a 60-gonal number, and an arithmetic number, since the mean of its divisors (1, 3, 59 and 177) is equal to 60, an integer. 177 is a Leonardo number, part of a sequence of numbers closely related to the Fibonacci numbers. In graph enumeration, there are 177 undirected graphs (not necessarily connected) that have seven edges and no isolated vertices, and 177 rooted trees with ten nodes and height at most three. There are 177 ways of re-connecting the (labeled) vertices of a regular octagon into a star polygon that does not use any of the octagon edges. In other fields 177 is the second highest score for a flight of three darts, below the highest score of 180. See also The year AD 177 or 177 BC List of highways numbered 177 References Integers
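A short verification sketch for the arithmetic claims (code ours):

```python
# Leyland number: x^y + y^x with 1 < x <= y.
assert 2**7 + 7**2 == 177

# Arithmetic number: the mean of the divisors of 177 = 3 * 59 is an integer.
divisors = [d for d in range(1, 178) if 177 % d == 0]
assert divisors == [1, 3, 59, 177]
assert sum(divisors) / len(divisors) == 60

# 60-gonal number: ((s-2)n^2 - (s-4)n) / 2 with s = 60, n = 3.
assert ((60 - 2) * 9 - (60 - 4) * 3) // 2 == 177

# Leonardo number: L(0) = L(1) = 1, L(n) = L(n-1) + L(n-2) + 1.
a, b = 1, 1
for _ in range(9):
    a, b = b, a + b + 1
assert b == 177                  # L(10) = 177
print("all checks pass")
```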
https://en.wikipedia.org/wiki/ACM%20SIGACT
ACM SIGACT or SIGACT is the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory, whose purpose is the support of research in theoretical computer science. It was founded in 1968 by Patrick C. Fischer. Publications SIGACT publishes a quarterly print newsletter, SIGACT News. Its online version, SIGACT News Online, has been available since 1996 for SIGACT members, with unrestricted access to some features. Conferences SIGACT sponsors or has sponsored several annual conferences. COLT: Conference on Learning Theory, until 1999 PODC: ACM Symposium on Principles of Distributed Computing (jointly sponsored by SIGOPS) PODS: ACM Symposium on Principles of Database Systems POPL: ACM Symposium on Principles of Programming Languages SOCG: ACM Symposium on Computational Geometry (jointly sponsored by SIGGRAPH), until 2014 SODA: ACM/SIAM Symposium on Discrete Algorithms (jointly sponsored by the Society for Industrial and Applied Mathematics). Two annual workshops held in conjunction with SODA also have the same joint sponsorship: ALENEX: Workshop on Algorithms and Experiments ANALCO: Workshop on Analytic Algorithms and Combinatorics SPAA: ACM Symposium on Parallelism in Algorithms and Architectures STOC: ACM Symposium on the Theory of Computing COLT, PODC, PODS, POPL, SODA, and STOC are all listed as highly cited venues by both citeseerx and libra. Awards and prizes Gödel Prize, for outstanding papers in theoretical computer science (sponsored jointly with EATCS) Donald E. Knuth Prize, for outstanding contributions to the foundations of computer science (sponsored jointly with IEEE Computer Society's Technical Committee on the Mathematical Foundations of Computing) Edsger W. Dijkstra Prize in distributed computing (sponsored jointly with SIGOPS, EATCS, and companies) Paris Kanellakis Theory and Practice Award, for theoretical accomplishments of significant and demonstrable effect on the practice of computing (ACM Award co-sponsored by SIGAC
https://en.wikipedia.org/wiki/Bandwidth%20expansion
Bandwidth expansion is a technique for widening the bandwidth of the resonances in an LPC filter. This is done by moving all the poles towards the origin by a constant factor γ. The bandwidth-expanded filter A′(z) can be easily derived from the original filter A(z) by A′(z) = A(z/γ). Let A(z) be expressed as: A(z) = Σ (k = 0 to N) aₖ z⁻ᵏ. The bandwidth-expanded filter can be expressed as: A′(z) = Σ (k = 0 to N) aₖ γᵏ z⁻ᵏ. In other words, each coefficient aₖ in the original filter is simply multiplied by γᵏ in the bandwidth-expanded filter. The simplicity of this transformation makes it attractive, especially in CELP coding of speech, where it is often used for the perceptual noise weighting and/or to stabilize the LPC analysis. However, when it comes to stabilizing the LPC analysis, lag windowing is often preferred to bandwidth expansion. References P. Kabal, "Ill-Conditioning and Bandwidth Expansion in Linear Prediction of Speech", Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, pp. I-824-I-827, 2003. Signal processing
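Since the transformation only scales each coefficient by a power of γ, the implementation is essentially a one-liner. A sketch (the coefficient values are invented for illustration):

```python
import numpy as np

def bandwidth_expand(a: np.ndarray, gamma: float) -> np.ndarray:
    """Replace A(z) by A(z/gamma): multiply coefficient a_k by gamma**k.

    a[0] is assumed to be 1 (the leading coefficient of an LPC polynomial),
    so it is unchanged since gamma**0 == 1.
    """
    return a * gamma ** np.arange(len(a))

a = np.array([1.0, -1.8, 0.95])        # toy 2nd-order LPC polynomial
a_exp = bandwidth_expand(a, gamma=0.98)

# The polynomial roots (poles of 1/A) move toward the origin by gamma.
print(np.abs(np.roots(a)))        # original pole radii (~0.975)
print(np.abs(np.roots(a_exp)))    # the same radii scaled by 0.98
```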
https://en.wikipedia.org/wiki/Halpern%E2%80%93L%C3%A4uchli%20theorem
In mathematics, the Halpern–Läuchli theorem is a partition result about finite products of infinite trees. Its original purpose was to give a model for set theory in which the Boolean prime ideal theorem is true but the axiom of choice is false. It is often called the Halpern–Läuchli theorem, but the proper attribution for the theorem as it is formulated below is to Halpern–Läuchli–Laver–Pincus or HLLP (named after James D. Halpern, Hans Läuchli, Richard Laver, and David Pincus), following . Let d,r < ω, be a sequence of finitely splitting trees of height ω. Let then there exists a sequence of subtrees strongly embedded in such that Alternatively, let and . The HLLP theorem says that not only is the collection partition regular for each d < ω, but that the homogeneous subtree guaranteed by the theorem is strongly embedded in References Ramsey theory Theorems in the foundations of mathematics Trees (set theory)
https://en.wikipedia.org/wiki/65%2C535
65535 is the integer after 65534 and before 65536. It is the maximum value of an unsigned 16-bit integer. In mathematics 65535 is the sum of 2⁰ through 2¹⁵ (2⁰ + 2¹ + 2² + ... + 2¹⁵) and is therefore a repdigit in base 2 (1111111111111111), in base 4 (33333333), and in base 16 (FFFF). It is the ninth number whose Euler totient has an aliquot sum that is φ(n) − 1, and the twenty-eighth perfect totient number, equal to the sum of its iterated totients. 65535 is the fifteenth 626-gonal number, the fifth 6555-gonal number, and the third 21846-gonal number. 65535 is the product of the first four Fermat primes: 65535 = (2 + 1)(4 + 1)(16 + 1)(256 + 1). Because of this property, it is possible to construct with compass and straightedge a regular polygon with 65535 sides (see constructible polygon). In computing 65535 occurs frequently in the field of computing because it is 2¹⁶ − 1 (one less than 2 to the 16th power), which is the highest number that can be represented by an unsigned 16-bit binary number. Some computer programming environments have predefined constant values representing 65535. In older computers with processors having a 16-bit address bus such as the MOS Technology 6502 popular in the 1970s and the Zilog Z80, 65535 (FFFF₁₆) is the highest addressable memory location, with 0 (0000₁₆) being the lowest. Such processors thus support at most 64 KiB of total byte-addressable memory. In Internet protocols, 65535 is also the number of TCP and UDP ports available for use, since port 0 is reserved. In some implementations of Tiny BASIC, entering a command that divides any number by zero will return 65535. In Microsoft Word 2011 for Mac, 65535 is the highest line number that will be displayed. In HTML, 65535 is the decimal value of the web color Aqua (#00FFFF). See also 4,294,967,295 255 (number) 16-bit computing References Integers
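A quick sketch confirming the arithmetic and representational claims (code ours):

```python
n = 65535

assert n == sum(2**k for k in range(16))   # 2^0 + ... + 2^15
assert n == 2**16 - 1
assert n == 3 * 5 * 17 * 257               # product of the first four Fermat primes

# Repdigit in bases 2, 4, and 16.
assert format(n, 'b') == '1' * 16
assert format(n, 'x') == 'ffff'

def digits(n, base):
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]

assert digits(n, 4) == [3] * 8             # 33333333 in base 4
print("all checks pass")
```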
https://en.wikipedia.org/wiki/Group%20%28computing%29
In computing, the term group generally refers to a grouping of users. In principle, users may belong to none, one, or many groups (although in practice some systems place limits on this). The primary purpose of user groups is to simplify access control to computer systems. Suppose a computer science department has a network which is shared by students and academics. The department has made a list of directories which the students are permitted to access and another list of directories which the staff are permitted to access. Without groups, administrators would give each student permission to every student directory, and each staff member permission to every staff directory. In practice, that would be unworkable – every time a student or staff member arrived, administrators would have to allocate permissions on every directory. With groups, the task is much simpler: create a student group and a staff group, placing each user in the proper group. The entire group can be granted access to the appropriate directory. To add or remove an account, one need only do it in one place (in the definition of the group), rather than on every directory. This workflow provides clear separation of concerns: to change access policies, alter the directory permissions; to change the individuals who fall under the policy, alter the group definitions. Uses of groups The primary uses of groups are: Access control Accounting - allocating shared resources like disk space and network bandwidth Default per-user configuration profiles - e.g., by default, every staff account could have a specific directory in their PATH Content selection - only display content relevant to group members - e.g. this portal channel is intended for students, this mailing list is for the chess club Delegable group administration Many systems provide facilities for delegation of group administration. In these systems, when a group is created, one or more users may be named as group administrato
https://en.wikipedia.org/wiki/Thread%20protector
A thread protector is used to protect the threads of a pipe during transportation and storage. Thread protectors are generally manufactured from plastic or steel and can be applied to the pipe manually or automatically (by machine). Thread protectors are used frequently in the oil and gas industry to protect pipes during transportation to the oil and gas fields. Metal thread protectors can be cleaned and re-used, while plastic thread protectors are often collected and either re-used or recycled. Thread protectors are widely used on firearms to protect threaded barrels. Some firearms are manufactured with threaded barrels and protectors at the factory, but most thread protectors are part of the aftermarket process of fitting a sound moderator (silencer), muzzle brake or flash hider. They protect the threads from mechanical damage and ensure the center lines line up when the muzzle device is replaced. References William C. Lyons, Ph.D., P.E., Gary J Plisga, BS. Standard Handbook of Petroleum and Natural Gas Engineering Gulf Professional Publishing, Mar 15, 2011 pg. 4-435 Piping
https://en.wikipedia.org/wiki/Bursting
Bursting, or burst firing, is an extremely diverse general phenomenon of the activation patterns of neurons in the central nervous system and spinal cord where periods of rapid action potential spiking are followed by quiescent periods much longer than typical inter-spike intervals. Bursting is thought to be important in the operation of robust central pattern generators, the transmission of neural codes, and some neuropathologies such as epilepsy. The study of bursting both directly and in how it takes part in other neural phenomena has been very popular since the beginnings of cellular neuroscience and is closely tied to the fields of neural synchronization, neural coding, plasticity, and attention. Observed bursts are named by the number of discrete action potentials they are composed of: a doublet is a two-spike burst, a triplet three and a quadruplet four. Neurons that are intrinsically prone to bursting behavior are referred to as bursters and this tendency to burst may be a product of the environment or the phenotype of the cell. Physiological context Overview Neurons typically operate by firing single action potential spikes in relative isolation as discrete input postsynaptic potentials combine and drive the membrane potential across the threshold. Bursting can instead occur for many reasons, but neurons can be generally grouped as exhibiting input-driven or intrinsic bursting. Most cells will exhibit bursting if they are driven by a constant, subthreshold input and particular cells which are genotypically prone to bursting (called bursters) have complex feedback systems which will produce bursting patterns with less dependence on input and sometimes even in isolation. In each case, the physiological system is often thought as being the action of two linked subsystems. The fast subsystem is responsible for each spike the neuron produces. The slow subsystem modulates the shape and intensity of these spikes before eventually triggering quiescence. Inpu
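The fast/slow decomposition can be seen directly in standard burster models. The sketch below is our illustration: it integrates the Hindmarsh–Rose equations (with commonly used parameter values, not anything specified in the article), in which the fast (x, y) subsystem produces the spikes while the slow variable z modulates the bursts and triggers quiescence:

```python
import numpy as np

# Hindmarsh-Rose neuron: fast subsystem (x, y) plus slow adaptation z.
# Parameter values are standard choices that produce bursting.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.006, 4.0, -1.6, 3.0

def step(x, y, z, dt=0.01):
    dx = y - a * x**3 + b * x**2 - z + I     # fast: membrane potential
    dy = c - d * x**2 - y                    # fast: recovery
    dz = r * (s * (x - x_rest) - z)          # slow: burst termination
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.6, 0.0, 2.0
trace = []
for _ in range(200_000):                     # simple Euler integration
    x, y, z = step(x, y, z)
    trace.append(x)

# Spikes (upward crossings of x = 1) arrive in clusters separated by long
# quiescent gaps -- the signature of bursting.
v = np.array(trace)
crossings = np.flatnonzero((v[:-1] < 1.0) & (v[1:] >= 1.0))
print(len(crossings), "spikes; inter-spike intervals:", np.diff(crossings)[:12])
```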
https://en.wikipedia.org/wiki/178%20%28number%29
178 (one hundred [and] seventy-eight) is the natural number following 177 and preceding 179. In mathematics There are 178 biconnected graphs on six vertices in which one vertex is designated as the root and the rest are unlabeled. There are also 178 median graphs on nine vertices. 178 is one of the indexes of the smallest triple of dodecahedral numbers where one is the sum of the other two: the sum of the 46th and the 178th dodecahedral numbers is the 179th. See also The year 178 AD or 178 BC List of highways numbered 178 References Integers
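The dodecahedral-number identity is easy to confirm from the formula D(n) = n(3n − 1)(3n − 2)/2 (verification code ours):

```python
def dodecahedral(n):
    """n-th dodecahedral number: n(3n - 1)(3n - 2) / 2."""
    return n * (3 * n - 1) * (3 * n - 2) // 2

assert dodecahedral(46) + dodecahedral(178) == dodecahedral(179)
print(dodecahedral(46), dodecahedral(178), dodecahedral(179))
# 428536 25236484 25665020
```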
https://en.wikipedia.org/wiki/Advanced%20Amiga%20Architecture%20chipset
The AAA chipset (Advanced Amiga Architecture) was intended to be the next-generation Amiga multimedia system designed by Commodore International. Initially begun as a secret project, the first design discussions were started in 1988, and after many revisions and redesigns the first silicon versions were fabricated in 1992–1993. The project was halted in 1993 by a lack of funds for chip revisions. At the same time that AAA started first-silicon testing, the next-generation Commodore chipset project was in progress. While AAA was a reinvention and huge upgrade of the Amiga architecture, project Hombre was essentially a clean slate. It took what was learned from Amiga and went in new directions, which included an on-chip CPU with a custom 3D instruction set, 16-bit and 24-bit chunky pixel display, and up to four 16-bit playfields running simultaneously. Hombre also embraced the PCI bus, which was seen as the future for main board interconnect and expansion. Design goals AAA was slated to include numerous technologies. 32-bit CPU bus 32-bit and 64-bit graphics bus options. A 256-entry CLUT with entries 25 bits wide each (256 indirect colors indexed through a 24-bit palette with an extra genlock bit, as AGA has). This mode runs in the native AmigaOS display. Direct 16-bitplane planar pixels without CLUT entries; since this mode doesn't contain a palette or a CLUT, it requires some kind of ReTargetable Graphics (RTG) driver, like the chunky modes. New Agnus/Alice replacement chip 'Andrea' with an updated 32-bit blitter and Copper which can handle chunky pixels. A line-buffer chip with double buffering called 'Linda' provides higher resolution (up to 1280 × 1024). Linda also decompresses two new packed pixel formats (PACKLUT, PACKHY) on the fly. Updated version of Paula called 'Mary' with 8 voices that can be assigned either to left or right channel; each channel has 16-bit resolution with up to 100 kHz sample rate; additionally it does 8-bit audio sampling input. Dir
https://en.wikipedia.org/wiki/Acoustic%20location
Acoustic location is a method of determining the position of an object or sound source by using sound waves. Location can take place in gases (such as the atmosphere), liquids (such as water), and in solids (such as in the earth). Location can be done actively or passively: Active acoustic location involves the creation of sound in order to produce an echo, which is then analyzed to determine the location of the object in question. Passive acoustic location involves the detection of sound or vibration created by the object being detected, which is then analyzed to determine the location of the object in question. Both of these techniques, when used in water, are known as sonar; passive sonar and active sonar are both widely used. Acoustic mirrors and dishes, when using microphones, are a means of passive acoustic localization, but when using speakers are a means of active localization. Typically, more than one device is used, and the location is then triangulated between the several devices. As a military air defense tool, passive acoustic location was used from mid-World War I to the early years of World War II to detect enemy aircraft by picking up the noise of their engines. It was rendered obsolete before and during World War II by the introduction of radar, which was far more effective (but interceptable). Acoustic techniques had the advantage that they could 'see' around corners and over hills, due to sound diffraction. Civilian uses include locating wildlife and locating the shooting position of a firearm. Overview Acoustic source localization is the task of locating a sound source given measurements of the sound field. The sound field can be described using physical quantities like sound pressure and particle velocity. By measuring these properties it is (indirectly) possible to obtain a source direction. Traditionally sound pressure is measured using microphones. Microphones have a polar pattern describing their sensitivity as a function of the
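Passive localization with a pair of microphones reduces to estimating the time difference of arrival between them. A minimal sketch of our construction (synthetic signal, assumed geometry) estimates the delay by cross-correlation and converts it to a bearing:

```python
import numpy as np

fs = 48_000          # sample rate (Hz)
d = 0.5              # microphone spacing (m)
c = 343.0            # speed of sound in air (m/s)
true_angle = np.deg2rad(30.0)

# Far-field source: inter-microphone delay is d * sin(angle) / c.
delay_n = int(round(d * np.sin(true_angle) / c * fs))

rng = np.random.default_rng(1)
src = rng.standard_normal(fs // 10)          # 100 ms of broadband noise
mic1 = src
mic2 = np.roll(src, delay_n)                 # second mic hears it later (simplified)

# Cross-correlate and take the lag with maximum correlation.
corr = np.correlate(mic2, mic1, mode='full')
lag = np.argmax(corr) - (len(mic1) - 1)

est_angle = np.arcsin(np.clip(lag / fs * c / d, -1, 1))
print(f"estimated bearing: {np.degrees(est_angle):.1f} deg (true 30.0)")
```

With more than two receivers, several such delay estimates can be intersected — the triangulation the article describes.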
https://en.wikipedia.org/wiki/Chu%20space
Chu spaces generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations. The name is due to Po-Hsiang Chu, who originally constructed a verification of *-autonomous categories as a graduate student under the direction of Michael Barr in 1979. Definition Understood statically, a Chu space (A, r, X) over a set K consists of a set A of points, a set X of states, and a function r : A × X → K. This makes it an A × X matrix with entries drawn from K, or equivalently a K-valued binary relation between A and X (ordinary binary relations being 2-valued). Understood dynamically, Chu spaces transform in the manner of topological spaces, with A as the set of points, X as the set of open sets, and r as the membership relation between them, where K is the set of all possible degrees of membership of a point in an open set. The counterpart of a continuous function from (A, r, X) to (B, s, Y) is a pair (f, g) of functions f : A → B, g : Y → X satisfying the adjointness condition s(f(a), y) = r(a, g(y)) for all a ∈ A and y ∈ Y. That is, f maps points forwards at the same time as g maps states backwards. The adjointness condition makes g the inverse image function f⁻¹, while the choice of X for the codomain of g corresponds to the requirement for continuous functions that the inverse image of open sets be open. Such a pair is called a Chu transform or morphism of Chu spaces. A topological space (X, T) where X is the set of points and T the set of open sets, can be understood as a Chu space (X, ∈, T) over {0, 1}. That is, the points of the topological space become those of the Chu space while the open sets become states and the membership re
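The adjointness condition is easy to check mechanically when a Chu space is stored as a matrix or dictionary. A small sketch with toy data (all names and values ours):

```python
# Two Chu spaces over K = {0, 1}, stored as dictionaries r(point, state).
# Space P: points {0, 1}, states {0, 1, 2}.
r = {(0, 0): 0, (0, 1): 1, (0, 2): 1,
     (1, 0): 1, (1, 1): 0, (1, 2): 1}
# Space Q: points {0, 1}, states {0, 1}.
s = {(0, 0): 0, (0, 1): 1,
     (1, 0): 1, (1, 1): 0}

# A candidate Chu transform (f, g): f maps points of P forward,
# g maps states of Q backward into the states of P.
f = {0: 0, 1: 1}
g = {0: 0, 1: 1}

# Adjointness: s(f(a), y) == r(a, g(y)) for every point a and state y.
ok = all(s[f[a], y] == r[a, g[y]] for a in (0, 1) for y in (0, 1))
print("valid Chu transform:", ok)   # True for this toy pair
```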
https://en.wikipedia.org/wiki/JEIDA%20memory%20card
The JEIDA memory card standard was a popular memory card standard from the early days of memory cards appearing on portable computers. JEIDA cards could be used to expand system memory or as a solid-state storage drive. History Before the advent of the JEIDA standard, laptops had proprietary cards that were not interoperable with other manufacturers' laptops, other laptop lines, or even other models in the same line. The establishment of the JEIDA interface and cards across Japanese portables provoked a response from the US government, through SEMATECH, and thus PCMCIA was born. PCMCIA and JEIDA worked to solve this rift between the two competing standards, and merged into JEIDA 4.1 or PCMCIA 2.0 in 1991. Usage The JEIDA memory card was used in earlier ThinkPad models, where IBM branded them as IC DRAM Cards. The interface has also been used for SRAM cards. Versions Version 1.0 is an 88-pin memory card. It has 2 rows of pin holes which are shifted against each other by half the pin spacing. The card is 3.3 mm thick. Version 2.0 is only mechanically compatible with the Version 1.0 card. Version 1.0 cards fail in devices designed for Version 2.0. Version 3 is a 68-pin memory card. It is also used in the Neo Geo. Version 4.0 corresponds with 68-pin PCMCIA 1.0 (1990). Version 4.1 unified the PCMCIA and JEIDA standards as PCMCIA 2.0. v4.1 is the 16-bit PC Card standard that defines Type I, II, III, and IV card sizes. Version 4.2 is the PCMCIA 2.1 standard, and introduced CardBus' 32-bit interface in an almost physically identical casing. See also Japan Electronic Industries Development Association Japan Electronics and Information Technology Industries Association Personal Computer Memory Card International Association Compact Flash References External links IC DRAM Card - Thinkwiki.org Computer buses Motherboard PCMCIA
https://en.wikipedia.org/wiki/Blain%20%28animal%20disease%29
Blain was an animal disease of unknown etiology that was well known in the 18th and 19th centuries. It is unclear whether it is still extant, or what modern disease it corresponds to. According to Ephraim Chambers' 18th-century Cyclopaedia, or an Universal Dictionary of Arts and Sciences, blain was "a distemper" (in the archaic eighteenth-century sense of the word, meaning "disease") occurring in animals, consisting of a "Bladder growing on the Root of the Tongue against the Wind-Pipe", which "at length swelling, stops the Wind". It was thought to occur "by great chafing, and heating of the Stomach". Blain is also mentioned in Cattle: Their Breeds, Management, and Diseases, published in 1836, where it is also identified as "gloss-anthrax". W. C. Spooner's 1888 book The History, Structure, Economy and Diseases of the Sheep also identifies blain as being the same as gloss-anthrax. A description of blain is provided in the Horticulture column of the Monday Morning edition of the Belfast News-Letter, September 13, 1852. Headline: The Prevailing Epidemic Disease in Horned Cattle - The Mouth and Food Disease. "There are two diseases of the mouth - one of a very serious character, which is called blain (gloss anthrax) or inflammation of the tongue. This is a very virulent disease, and sometimes of a very rapid action, and which should be at once attended to, and not trifled with; but though it always exhibits itself in inflammation of the membranes of the mouth, beneath or above the tongue, and the sides of the tongue itself, it soon extends through the whole system, and, according to the best veterinarians, involves inflammation and gangrene of the oesophagus and intestines. The symptoms are many, the eyes are inflamed, and constantly weeping; swellings appear round the eyes and some other parts of the body; the pulse quick, heaving of the flanks, and the bowels sometimes constipated. Such are the general symptoms of this formidable disease, more or less aggravated by
https://en.wikipedia.org/wiki/Cre-Lox%20recombination
Cre-Lox recombination is a site-specific recombinase technology, used to carry out deletions, insertions, translocations and inversions at specific sites in the DNA of cells. It allows the DNA modification to be targeted to a specific cell type or be triggered by a specific external stimulus. It is implemented both in eukaryotic and prokaryotic systems. The Cre-lox recombination system has been particularly useful to help neuroscientists to study the brain in which complex cell types and neural circuits come together to generate cognition and behaviors. NIH Blueprint for Neuroscience Research has created several hundreds of Cre driver mouse lines which are currently used by the worldwide neuroscience community. An important application of the Cre-lox system is excision of selectable markers in gene replacement. Commonly used gene replacement strategies introduce selectable markers into the genome to facilitate selection of genetic mutations that may cause growth retardation. However, marker expression can have polar effects on the expression of upstream and downstream genes. Removal of selectable markers from the genome by Cre-lox recombination is an elegant and efficient way to circumvent this problem and is therefore widely used in plants, mouse cell lines, yeast, etc. The system consists of a single enzyme, Cre recombinase, that recombines a pair of short target sequences called the Lox sequences. This system can be implemented without inserting any extra supporting proteins or sequences. The Cre enzyme and the original Lox site called the LoxP sequence are derived from bacteriophage P1. Placing Lox sequences appropriately allows genes to be activated, repressed, or exchanged for other genes. At a DNA level many types of manipulations can be carried out. The activity of the Cre enzyme can be controlled so that it is expressed in a particular cell type or triggered by an external stimulus like a chemical signal or a heat shock. These targeted DNA changes are us
https://en.wikipedia.org/wiki/Relation%20algebra
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra 2^(X²) of all binary relations on a set X, that is, subsets of the cartesian square X², with R•S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation. Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables. Definition A relation algebra is an algebraic structure equipped with the Boolean operations of conjunction x∧y, disjunction x∨y, and negation x−, the Boolean constants 0 and 1, the relational operations of composition x•y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations constituting an axiomatization of a calculus of relations. Roughly, a relation algebra is to a system of binary relations on a set containing the empty (0), universal (1), and identity (I) relations and closed under these five operations as a group is to a system of permutations of a set containing the identity permutation and closed under composition and inverse. However, the first-order theory of relation algebras is not complete for such systems of binary relations. Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x ◁ y = x • y˘, and, dually, x ▷ y = x˘ • y. Jónsson and Tsinakis showed that I ◁ x = x ▷ I, and that both were equal to x˘. Hence a relation algebra can equally well be defined a
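The motivating model is concrete enough to compute with: a relation on X is a set of pairs, composition and converse are one-liners. A sketch of ours that checks the identity I ◁ x = x ▷ I = x˘ on random relations:

```python
from itertools import product
import random

X = range(4)

def compose(R, S):                       # R • S
    return {(a, c) for a, b in R for b2, c in S if b == b2}

def converse(R):                         # R˘
    return {(b, a) for a, b in R}

I = {(a, a) for a in X}                  # identity relation

random.seed(0)
for _ in range(100):
    R = {p for p in product(X, X) if random.random() < 0.4}
    left = compose(I, converse(R))       # I ◁ R = I • R˘
    right = compose(converse(R), I)      # R ▷ I = R˘ • I
    assert left == right == converse(R)
print("I ◁ x = x ▷ I = x˘ holds on all samples")
```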
https://en.wikipedia.org/wiki/CellProfiler
CellProfiler is free, open-source software designed to enable biologists without training in computer vision or programming to quantitatively measure phenotypes from thousands of images automatically. Advanced algorithms for image analysis are available as individual modules that can be placed in sequential order together to form a pipeline; the pipeline is then used to identify and measure biological objects and features in images, particularly those obtained through fluorescence microscopy. Distributions are available for Microsoft Windows, macOS, and Linux. The source code for CellProfiler is freely available. CellProfiler is developed by the Broad Institute's Imaging Platform. Features CellProfiler can read and analyze most common microscopy image formats. Biologists typically use CellProfiler to identify objects of interest (e.g. cells, colonies, C. elegans worms) and then measure their properties of interest. Specialized modules for illumination correction may be applied as pre-processing step to remove distortions due to uneven lighting. Object identification (segmentation) is performed through machine learning or image thresholding, recognition and division of clumped objects, and removal or merging of objects on the basis of size or shape. Each of these steps are customizable by the user for their unique image assay. A wide variety of measurements can be generated for each identified cell or subcellular compartment, including morphology, intensity, and texture among others. These measurements are accessible by using built-in viewing and plotting data tools, exporting in a comma-delimited spreadsheet format, or importing into a MySQL or SQLite database. CellProfiler interfaces with the high-performance scientific libraries NumPy and SciPy for many mathematical operations, the Open Microscopy Environment Consortium’s Bio-Formats library for reading more than 100 image file formats, ImageJ for use of plugins and macros, and ilastik for pixel-based classif
https://en.wikipedia.org/wiki/Lov%C3%A1sz%20conjecture
In graph theory, the Lovász conjecture (1969) is a classical problem on Hamiltonian paths in graphs. It says: Every finite connected vertex-transitive graph contains a Hamiltonian path. Originally László Lovász stated the problem in the opposite way, but this version became standard. In 1996, László Babai published a conjecture sharply contradicting this conjecture, but both conjectures remain wide open. It is not even known if a single counterexample would necessarily lead to a series of counterexamples. Historical remarks The problem of finding Hamiltonian paths in highly symmetric graphs is quite old. As Donald Knuth describes it in volume 4 of The Art of Computer Programming, the problem originated in British campanology (bell-ringing). Such Hamiltonian paths and cycles are also closely connected to Gray codes. In each case the constructions are explicit. Variants of the Lovász conjecture Hamiltonian cycle Another version of the Lovász conjecture states that Every finite connected vertex-transitive graph contains a Hamiltonian cycle except the five known counterexamples. There are 5 known examples of vertex-transitive graphs with no Hamiltonian cycles (but with Hamiltonian paths): the complete graph K₂, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle. Cayley graphs None of the 5 vertex-transitive graphs with no Hamiltonian cycles is a Cayley graph. This observation leads to a weaker version of the conjecture: Every finite connected Cayley graph contains a Hamiltonian cycle. The advantage of the Cayley graph formulation is that such graphs correspond to a finite group G and a generating set S. Thus one can ask for which G and S the conjecture holds rather than attack it in full generality. Directed Cayley graph For directed Cayley graphs (digraphs) the Lovász conjecture is false. Various counterexamples were obtained by Robert Alexander Rankin. Still, many of the belo
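The status of the Petersen graph — a Hamiltonian path but no Hamiltonian cycle — is small enough to confirm by brute-force backtracking on its ten vertices (search code ours):

```python
# Petersen graph: outer 5-cycle (0..4), inner pentagram (5..9), spokes.
edges = {(i, (i + 1) % 5) for i in range(5)}                # outer cycle
edges |= {(5 + i, 5 + (i + 2) % 5) for i in range(5)}       # inner pentagram
edges |= {(i, i + 5) for i in range(5)}                     # spokes
adj = {v: set() for v in range(10)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def extend(path, need_cycle):
    """Backtracking search for a Hamiltonian path/cycle extending `path`."""
    if len(path) == 10:
        return (path[0] in adj[path[-1]]) if need_cycle else True
    return any(extend(path + [w], need_cycle)
               for w in adj[path[-1]] if w not in path)

# Rooting the search at vertex 0 loses no generality: the graph is
# vertex-transitive, so every vertex starts some Hamiltonian path, and
# any Hamiltonian cycle must pass through vertex 0.
print("Hamiltonian path: ", extend([0], need_cycle=False))  # True
print("Hamiltonian cycle:", extend([0], need_cycle=True))   # False
```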
https://en.wikipedia.org/wiki/Epoxyeicosatrienoic%20acid
The epoxyeicosatrienoic acids or EETs are signaling molecules formed within various types of cells by the metabolism of arachidonic acid by a specific subset of cytochrome P450 enzymes termed cytochrome P450 epoxygenases. These nonclassic eicosanoids are generally short-lived, being rapidly converted from epoxides to less active or inactive dihydroxy-eicosatrienoic acids (diHETrEs) by a widely distributed cellular enzyme, soluble epoxide hydrolase (sEH), also termed epoxide hydrolase 2. The EETs consequently function as transiently acting, short-range hormones; that is, they work locally to regulate the function of the cells that produce them (i.e. they are autocrine agents) or of nearby cells (i.e. they are paracrine agents). The EETs have been most studied in animal models where they show the ability to lower blood pressure possibly by a) stimulating arterial vasorelaxation and b) inhibiting the kidney's retention of salts and water to decrease intravascular blood volume. In these models, EETs prevent arterial occlusive diseases such as heart attacks and brain strokes not only by their anti-hypertension action but possibly also by their anti-inflammatory effects on blood vessels, their inhibition of platelet activation and thereby blood clotting, and/or their promotion of pro-fibrinolytic removal of blood clots. With respect to their effects on the heart, the EETs are often termed cardio-protective. Beyond these cardiovascular actions that may prevent various cardiovascular diseases, studies have implicated the EETs in the pathological growth of certain types of cancer and in the physiological and possibly pathological perception of neuropathic pain. While studies to date imply that the EETs, EET-forming epoxygenases, and EET-inactivating sEH can be manipulated to control a wide range of human diseases, clinical studies have yet to prove this. Determination of the role of the EETs in human diseases is made particularly difficult because of the large number of
https://en.wikipedia.org/wiki/International%20Joint%20Conference%20on%20Automated%20Reasoning
The International Joint Conference on Automated Reasoning (IJCAR) is a series of conferences on the topics of automated reasoning, automated deduction, and related fields. It is organized semi-regularly as a merger of other meetings. IJCAR replaces those independent conferences in the years it takes place. The conference is organized by CADE Inc., and CADE has always been one of the conferences partaking in IJCAR. The first IJCAR was held in Siena, Italy in 2001 as a merger of CADE, FTP, and TABLEAUX. The second IJCAR was held in Cork, Ireland in 2004 as a merger of CADE, FTP, TABLEAUX, FroCoS and CALCULEMUS. The third IJCAR was held as an independent subconference of the fourth Federated Logic Conference in Seattle, United States, and merged CADE, FTP, TABLEAUX, FroCoS and TPHOLs. The fourth IJCAR was held in Sydney, Australia in 2008, and merged CADE, FroCoS, FTP and TABLEAUX. The fifth IJCAR was held in 2010 as an independent subconference of the fifth Federated Logic Conference in Edinburgh, UK, and merged CADE, FTP, TABLEAUX, and FroCoS. The sixth IJCAR was held in Manchester, UK, as part of the Alan Turing Year 2012, and was collocated with the Alan Turing Centenary Conference. It again merged CADE, FTP, TABLEAUX, and FroCoS. The seventh IJCAR was held in Vienna, Austria, as part of the Vienna Summer of Logic in 2014, and merged CADE, TABLEAUX, and FroCoS. The eighth IJCAR was held in Coimbra, Portugal, in 2016, and merged CADE, TABLEAUX, and FroCoS. External links IJCAR Home Page IJCAR-2006 Home Page IJCAR-2008 Home Page IJCAR 2016 Home Page Theoretical computer science conferences Logic conferences
https://en.wikipedia.org/wiki/Fierz%20identity
In theoretical physics, a Fierz identity is an identity that allows one to rewrite bilinears of the product of two spinors as a linear combination of products of the bilinears of the individual spinors. It is named after Swiss physicist Markus Fierz. The Fierz identities are also sometimes called the Fierz–Pauli–Kofink identities, as Pauli and Kofink described a general mechanism for producing such identities. There is a version of the Fierz identities for Dirac spinors and there is another version for Weyl spinors. And there are versions for other dimensions besides 3+1 dimensions. Spinor bilinears in arbitrary dimensions are elements of a Clifford algebra; the Fierz identities can be obtained by expressing the Clifford algebra as a quotient of the exterior algebra. When working in 4 spacetime dimensions the bivector may be decomposed in terms of the Dirac matrices that span the space: . The coefficients are and are usually determined by using the orthogonality of the basis under the trace operation. By sandwiching the above decomposition between the desired gamma structures, the identities for the contraction of two Dirac bilinears of the same type can be written with coefficients according to the following table. {| class="wikitable" style="text-align: center" |- ! Product ! S ! V ! T ! A ! P |- | S × S = | 1/4 | 1/4 | −1/4 | −1/4 | 1/4 |- | V × V = | 1 | −1/2 | 0 | −1/2 | −1 |- | T × T = | −3/2 | 0 | −1/2 | 0 | −3/2 |- | A × A = | −1 | −1/2 | 0 | −1/2 | 1 |- | P × P = | 1/4 | −1/4 | −1/4 | 1/4 | 1/4 |- |} where The table is symmetric with respect to reflection across the central element. The signs in the table correspond to the case of commuting spinors, otherwise, as is the case of fermions in physics, all coefficients change signs. For example, under the assumption of commuting spinors, the V × V product can be expanded as, Combinations of bilinears corresponding to the eigenvectors of the transpose matrix tra
https://en.wikipedia.org/wiki/AES52
AES52 is a standard first published by the Audio Engineering Society in March 2006 that specifies the insertion of unique identifiers into the AES3 digital audio transport structure. Background The AES3 transport stream continues to be used extensively in both discrete and network based audio systems alongside audio stored as files. Audio content is moving towards being handled by asset management systems and descriptive metadata is associated with that content is also being stored within systems. In order to provide a mechanism for AES3 transport streams to have similar abilities to work with content management systems, some form of unique label is required which can provide the link with these systems. One of the unique labels currently standardised in the media industry is the SMPTE UMID (SMPTE 330M-2004) while another commonly used in the Information Technology area is the International Electrotechnical Commission (IEC) UUID. Operation The standard specifies the method for inserting unique identifiers into the user data area of an AES3 stream. This specifically covers the use of UUID as well as a basic or extended SMPTE UMID but can be extended to embed other data types into the AES3 stream by registering these with the AES so the standard can be updated to include these by following AES due process. External links AES52-2006 from the AES standards web site Audio engineering Audio Engineering Society standards Unique identifiers Broadcasting standards
https://en.wikipedia.org/wiki/Vine%20Linux
is a Japanese Linux distribution sponsored by VineCaves. It has been a fork of Red Hat Linux 7.2 since Vine Linux 3.0. Work on Vine Linux was started in 1998. All versions except Vine Seed have been announced to be discontinued from May 4, 2021. Release history References External links RPM-based Linux distributions Japanese-language Linux distributions X86-64 Linux distributions PowerPC operating systems Linux distributions without systemd Linux distributions
https://en.wikipedia.org/wiki/Unique%20Material%20Identifier
The Unique Material Identifier (UMID) is a SMPTE standard for providing a stand-alone method for generating a unique label designed to be used to attach to media files and streams. The UMID is standardized in SMPTE 330M. There are two types of UMID: Basic UMID contains the minimal components necessary for the unique identification (the essential metadata) The length of the basic UMID is 32 octets. The Extended UMID provides information on the creation time and date, recording location and the name of the organisation and the maker as well as the components of the Basic UMID. The length of the Extended UMID is 64 octets. This data may be parsed to extract specific information produced at the time it was generated or simply used as a unique label. References Unique identifiers Broadcasting standards SMPTE standards
https://en.wikipedia.org/wiki/Converting%20%28metallurgy%29
Converting is a type of metallurgical smelting that includes several processes; the most commercially important form is the treatment of molten metal sulfides to produce crude metal and slag, as in the case of copper and nickel converting. A now-uncommon form is batch treatment of pig iron to produce steel by the Bessemer process. The vessel used was called the Bessemer converter. Modern steel mills use basic oxygen process converters. Equipment The converting process occurs in a converter. Two kinds of converters are widely used: horizontal and vertical. Horizontal (which are an improvement of the ) prevail in the metallurgy of non ferrous metals. Such a converter is a horizontal barrel lined with refractory material inside. A hood for the purpose of the loading/unloading operations is located on the upper side of the converter. Two belts of tuyeres come along the axis on either sides of the converter. Molten sulfide material, referred to as matte, is poured through the hood into the converter during the operation of loading. Air is distributed to tuyeres from the two tuyere collectors which are located on opposite sides of the converter. Collector pipes vary in diameter with distance from the connection to air supplying trunk; this is to provide equal pressure of air in each tuyere. This high temperature roasting allows oxygen in the air to replace sulfide compounds in the minerals. Unless carefully captured, these oxidized sulfur compounds such as sulfur trioxide leave the converter as a noxious acidic vapor, along with other dangerous volatile elements such as arsenic trioxide. See also Smelting References Metallurgical processes
https://en.wikipedia.org/wiki/Tiger%20bush
Tiger bush, or brousse tigrée in the French language, is a patterned vegetation community and ground consisting of alternating bands of trees, shrubs, or grass separated by bare ground or low herb cover, that run roughly parallel to contour lines of equal elevation. The patterns occur on low slopes in arid and semi-arid regions, such as in Australia, Sahelian West Africa, and North America. Due to the natural water harvesting capacity, many species in tiger bush usually occur only under a higher rainfall regime. Formation The alternating pattern arises from the interplay of hydrological, ecological, and erosional phenomena. In the regions where tiger bush is present, plant growth is water-limited - the shortage of rainfall prevents vegetation from covering the entire landscape. Instead, trees and shrubs are able to establish by either tapping soil moisture reserves laterally or by sending roots to deeper, wetter soil depths. By a combination of plant litter, root macropores, and increased surface roughness, infiltration into the soil around the base of these plants is enhanced. Surface runoff arriving at these plants will thus likely to become run-on, and infiltrate into the soil. By contrast, the areas between these larger plants contain a greater portion of bare ground and herbaceous plants. Both bare soil, with its smoother surface and soil crusts, and herbaceous plants, with fewer macropores, inhibit infiltration. This causes much of the rainfall that falls in the inter-canopy areas to flow downslope, and infiltrate beneath the larger plants. The larger plants are in effect harvesting rainfall from the ground immediately up-slope. Although these vegetation patterns may seem very stable through time, such patterning requires specific climatic conditions. For instance, a decrease in rainfall is able to trigger patterning in formerly homogeneous vegetation within a few decades. More water will infiltrate at the up-slope edge of the canopies than down-slope. T
https://en.wikipedia.org/wiki/Patterned%20vegetation
Patterned vegetation is a vegetation community that exhibits distinctive and repetitive patterns. Examples of patterned vegetation include fir waves, tiger bush, and string bog. The patterns typically arise from an interplay of phenomena that differentially encourage plant growth or mortality. A coherent pattern arises because there is a strong directional component to these phenomena, such as wind in the case of fir waves, or surface runoff in the case of tiger bush. The regular patterning of some types of vegetation is a striking feature of some landscapes. Patterns can include relatively evenly spaced patches, parallel bands or some intermediate between those two. These patterns in the vegetation can appear without any underlying pattern in soil types, and are thus said to “self-organize” rather than be determined by the environment. Several of the mechanisms underlying patterning of vegetation have been known and studied since at least the middle of the 20th century, however, mathematical models replicating them have only been produced much more recently. Self-organization in spatial patterns is often a result of spatially uniform states becoming unstable through the monotonic growth and amplification of nonuniform perturbations. A well known instability of this kind leads to so-called Turing patterns. These patterns occur at many scales of life, from cellular development (where they were first proposed) to pattern formation on animal pelts to sand dunes and patterned landscapes (see also pattern formation). In their simplest form models that capture Turing instabilities require two interactions at differing scales: local facilitation and more distant competition. For example, when Sato and Iwasa produced a simple model of fir waves in the Japanese Alps, they assumed that trees exposed to cold winds would suffer mortality from frost damage, but upwind trees would protect nearby downwind trees from wind. Banding appears because the protective boundary layer crea
https://en.wikipedia.org/wiki/Butterfly%20graph
In the mathematical field of graph theory, the butterfly graph (also called the bowtie graph and the hourglass graph) is a planar, undirected graph with 5 vertices and 6 edges. It can be constructed by joining 2 copies of the cycle graph with a common vertex and is therefore isomorphic to the friendship graph . The butterfly graph has diameter 2 and girth 3, radius 1, chromatic number 3, chromatic index 4 and is both Eulerian and a penny graph (this implies that it is unit distance and planar). It is also a 1-vertex-connected graph and a 2-edge-connected graph. There are only three non-graceful simple graphs with five vertices. One of them is the butterfly graph. The two others are cycle graph and the complete graph . It often can serve as a simple counterexample to many seemingly intuitive ideas in graph theory. Bowtie-free graphs A graph is bowtie-free if it has no butterfly as an induced subgraph. The triangle-free graphs are bowtie-free graphs, since every butterfly contains a triangle. In a k-vertex-connected graph, an edge is said to be k-contractible if the contraction of the edge results in a k-connected graph. Ando, Kaneko, Kawarabayashi and Yoshimoto proved that every k-vertex-connected bowtie-free graph has a k-contractible edge. Algebraic properties The full automorphism group of the butterfly graph is a group of order 8 isomorphic to the dihedral group D4, the group of symmetries of a square, including both rotations and reflections. The characteristic polynomial of the butterfly graph is . References Individual graphs Planar graphs
https://en.wikipedia.org/wiki/Singmaster%27s%20conjecture
Singmaster's conjecture is a conjecture in combinatorial number theory, named after the British mathematician David Singmaster who proposed it in 1971. It says that there is a finite upper bound on the multiplicities of entries in Pascal's triangle (other than the number 1, which appears infinitely many times). It is clear that the only number that appears infinitely many times in Pascal's triangle is 1, because any other number x can appear only within the first x + 1 rows of the triangle. Statement Let N(a) be the number of times the number a > 1 appears in Pascal's triangle. In big O notation, the conjecture is: Known bound Singmaster (1971) showed that Abbot, Erdős, and Hanson (1974) (see References) refined the estimate to: The best currently known (unconditional) bound is and is due to Kane (2007). Abbot, Erdős, and Hanson note that conditional on Cramér's conjecture on gaps between consecutive primes that holds for every . Singmaster (1975) showed that the Diophantine equation has infinitely many solutions for the two variables n, k. It follows that there are infinitely many triangle entries of multiplicity at least 6: For any non-negative i, a number a with six appearances in Pascal's triangle is given by either of the above two expressions with where Fj is the jth Fibonacci number (indexed according to the convention that F0 = 0 and F1 = 1). The above two expressions locate two of the appearances; two others appear symmetrically in the triangle with respect to those two; and the other two appearances are at and Elementary examples 2 appears just once; all larger positive integers appear more than once; 3, 4, 5 each appear two times; infinitely many appear exactly twice; all odd prime numbers appear two times; 6 appears three times, as do all central binomial coefficients except for 1 and 2; (it is in principle not excluded that such a coefficient would appear 5, 7 or more times, but no such example is known) all numbers of the form f
https://en.wikipedia.org/wiki/LO-NOx%20burner
A LO burner is a type of burner that is typically used in utility boilers to produce steam and electricity. Background The first discovery Around 1986 John Joyce (of Bowin Cars fame), an influential Australian inventor, first learned about oxides of nitrogen (NOx) and their role in the production of smog and acid rain. His first introduction to the complexities of the subject was brought about by the work of Fred Barnes and Dr John Bromley from the state Energy Commission of Western Australia. The vast majority of the research and development stretching back over twenty years was about large scale industrial burners and complex mechanisms which, in the end, did not produce what one would consider low NOx (2 ng/J or ~ 4 ppm at 0% O2 on dry basis). In fact at that time, 15 ng/J NO2 appears to have been considered low NO2. The one clear message that did flow through all the mass of information he studied, was the effect of temperature on the formation of NOx. "Need is the Mother of Invention" In the late 1980s, Health and Environment Authorities in Australia raised concerns about the indoor air quality and the extent that particularly older style unflued gas heaters were contributing to higher than acceptable levels of nitrogen dioxide (NO2). Consequently in 1989 the New South Wales Department of School Education initiated an extensive investigation of nitrogen dioxide in schools throughout New South Wales. As an interim measure the Health Authorities advised that a level of 0.3 ppm NO2 should become the upper limit for classrooms. The Australian Gas Association in turn reduced the indoor emission rate of NO2 for unflued gas heaters from 15 to 5 ng/J and this remains the current limit. The New South Wales government, through the Public Works Department, also re-evaluated alternative methods of heating classrooms, to ensure a safe and healthy environment for students. It was in this context, that John Joyce's company Bowin Technology embarked on a major research
https://en.wikipedia.org/wiki/Large%20countable%20ordinal
In the mathematical discipline of set theory, there are many ways of describing specific countable ordinals. The smallest ones can be usefully and non-circularly expressed in terms of their Cantor normal forms. Beyond that, many ordinals of relevance to proof theory still have computable ordinal notations (see ordinal analysis). However, it is not possible to decide effectively whether a given putative ordinal notation is a notation or not (for reasons somewhat analogous to the unsolvability of the halting problem); various more-concrete ways of defining ordinals that definitely have notations are available. Since there are only countably many notations, all ordinals with notations are exhausted well below the first uncountable ordinal ω1; their supremum is called Church–Kleene ω1 or ω (not to be confused with the first uncountable ordinal, ω1), described below. Ordinal numbers below ω are the recursive ordinals (see below). Countable ordinals larger than this may still be defined, but do not have notations. Due to the focus on countable ordinals, ordinal arithmetic is used throughout, except where otherwise noted. The ordinals described here are not as large as the ones described in large cardinals, but they are large among those that have constructive notations (descriptions). Larger and larger ordinals can be defined, but they become more and more difficult to describe. Generalities on recursive ordinals Ordinal notations Recursive ordinals (or computable ordinals) are certain countable ordinals: loosely speaking those represented by a computable function. There are several equivalent definitions of this: the simplest is to say that a computable ordinal is the order-type of some recursive (i.e., computable) well-ordering of the natural numbers; so, essentially, an ordinal is recursive when we can present the set of smaller ordinals in such a way that a computer (Turing machine, say) can manipulate them (and, essentially, compare them). A different defin
https://en.wikipedia.org/wiki/Texture%20mapping%20unit
In computer graphics, a texture mapping unit (TMU) is a component in modern graphics processing units (GPUs). They are able to rotate, resize, and distort a bitmap image to be placed onto an arbitrary plane of a given 3D model as a texture, in a process called texture mapping. In modern graphics cards it is implemented as a discrete stage in a graphics pipeline, whereas when first introduced it was implemented as a separate processor, e.g. as seen on the Voodoo2 graphics card. Background and history The TMU came about due to the compute demands of sampling and transforming a flat image (as the texture map) to the correct angle and perspective it would need to be in 3D space. The compute operation is a large matrix multiply, which CPUs of the time (early Pentiums) could not cope with at acceptable performance. In 2013, TMUs are part of the shader pipeline and decoupled from the Render Output Pipelines (ROPs). For example, in AMD's Cypress GPU, each shader pipeline (of which there are 20) has four TMUs, giving the GPU 80 TMUs. This is done by chip designers to closely couple shaders and the texture engines they will be working with. Geometry 3D scenes are generally composed of two things: 3D geometry, and the textures that cover that geometry. Texture units in a video card take a texture and 'map' it to a piece of geometry. That is, they wrap the texture around the geometry and produce textured pixels which can then be written to the screen. Textures can be an actual image, a lightmap, or even normal maps for advanced surface lighting effects. Texture fill rate To render a 3D scene, textures are mapped over the top of polygon meshes. This is called texture mapping and is accomplished by texture mapping units (TMUs) on the videocard. Texture fill rate is a measure of the speed with which a particular card can perform texture mapping. Though pixel shader processing is becoming more important, this number still holds some weight. Best example of this is the X1600
https://en.wikipedia.org/wiki/Current%20limiting
Current limiting is the practice of imposing a limit on the current that may be delivered to a load to protect the circuit generating or transmitting the current from harmful effects due to a short-circuit or overload. The term "current limiting" is also used to define a type of overcurrent protective device. According to the 2020 NEC/NFPA 70, a current-limiting overcurrent protective device is defined as, "A device that, when interrupting currents in its current-limiting range, reduces the current flowing in the faulted circuit to a magnitude substantially less than that obtainable in the same circuit if the device were replaced with a solid conductor having compatible impedance." Inrush current limiting An inrush current limiter is a device or devices combination used to limit inrush current. Passive resistive components such as resistors (with power dissipation drawback), or negative temperature coefficient (NTC) thermistors are simple options while the positive one (PTC) is used to limit max current afterward as the circuit has been operating (with cool-down time drawback on both). More complex solutions using active components can be used when more straightforward options are unsuitable. In electronic power circuits Some electronic circuits employ active current limiting since a fuse may not protect solid-state devices. One style of current limiting circuit is shown in the image. The schematic represents a simple protection mechanism used in regulated DC supplies and class-AB power amplifiers. Q1 is the pass or output transistor. Rsens is the load current sensing device. Q2 is the protection transistor which turns on as soon as the voltage across Rsens becomes about 0.65 V. This voltage is determined by the value of Rsens and the load current through it (Iload). When Q2 turns on, it removes the base current from Q1, thereby reducing the collector current of Q1, which is nearly the load current. Thus, Rsens fixes the maximum current to a value given by 0.6
https://en.wikipedia.org/wiki/SSHFS
In computing, SSHFS (SSH Filesystem) is a filesystem client to mount and interact with directories and files located on a remote server or workstation over a normal ssh connection. The client interacts with the remote file system via the SSH File Transfer Protocol (SFTP), a network protocol providing file access, file transfer, and file management functionality over any reliable data stream that was designed as an extension of the Secure Shell protocol (SSH) version 2.0. The current implementation of SSHFS using FUSE is a rewrite of an earlier version. The rewrite was done by Miklos Szeredi, who also wrote FUSE. Features SFTP provides secure file transfer from a remote file system. While SFTP clients can transfer files and directories, they cannot mount the server's file system into the local directory tree. Using SSHFS, a remote file system may be treated in the same way as other volumes (such as hard drives or removable media). Using the Unix command ls with sshfs will sometimes not list the owner of a file correctly, although it is possible to map them manually. For distributed remote file systems with multiple users, protocols such as Apple Filing Protocol, Network File System and Server Message Block are more often used. SSHFS is an alternative to those protocols only in situations where users are confident that files and directories will not be targeted for writing by another user, at the same time. The advantage of SSHFS when compared to other network file system protocols is that, given that a user already has SSH access to a host, it does not require any additional configuration work, or the opening of additional entry ports in a firewall. See also ExpanDrive Files transferred over shell protocol (FISH) FileZilla, a free software utility for multiple platforms. FTPFS GVfs SSH file transfer protocol (SFTP) Secure copy (SCP) WinSCP References External links Free special-purpose file systems Network file systems Remote administration soft
https://en.wikipedia.org/wiki/FpgaC
FpgaC is a compiler for a subset of the C programming language, which produces digital circuits that will execute the compiled programs. The circuits may use FPGAs or CPLDs as the target processor for reconfigurable computing, or even ASICs for dedicated applications. FpgaC's goal is to be an efficient High Level Language (HLL) for reconfigurable computing, rather than a Hardware Description Language (HDL) for building efficient custom hardware circuits. History The historical roots of FpgaC are in the Transmogrifier C 3.1 (TMCC) HDL, a 1996 BSD licensed Open source offering from University of Toronto. TMCC is one of the first FPGA C compilers, with work starting in 1994 and presented at IEEE's FCCM95. This predated the evolution from the Handel language to Handel-C work done shortly afterward at Oxford University Computing Laboratory. TMCC was renamed FpgaC for the initial SourceForge project release, with syntax modifications to start the evolution to ANSI C. Later development has removed all explicit HDL syntax from the language, and increased the subset of C supported. By capitalizing on ANSI C C99 extensions, the same functionality is now available by inference rather than non-standard language extensions. This shift away from non-standard HDL extensions was influenced in part by Streams-C from Los Alamos National Laboratory (now available commercially as Impulse C). In the years that have followed, compiling ANSI C for execution as FPGA circuits has become a mainstream technology. Commercial FPGA C compilers are available from multiple vendors, and ANSI C based System Level Tools have gone mainstream for system description and simulation languages. FPGA based Reconfigurable Computing offerings from industry leaders like Altera, Silicon Graphics, Seymour Cray's SRC Computers, and Xilinx have capitalized on two decades of government and university reconfigurable computing research. External links Transmogrifier C Homepage Oxford Handel-C FPGA System Le
https://en.wikipedia.org/wiki/Naming%20collision
A naming collision is a circumstance where two or more identifiers in a given namespace or a given scope cannot be unambiguously resolved, and such unambiguous resolution is a requirement of the underlying system. Example: XML element names In XML, element names can be originated and changed to reflect the type of information contained in the document. This level of flexibility may cause problems if separate documents encode different kinds of information, but use the same identifiers for the element names. For example, the following sample document defines the basic semantics for a "person" document and a "book" document. Both of these use a "title" element, but the meaning is not the same: <root> <person> <fname>Nancy</fname> <lname>Davolio</lname> <title>Dr.</title> <age>29</age> </person> <book> <title>Harry Potter And The Cursed Child</title> <isbn>ABCD1234567</isbn> </book> </root> For an application to allow a user to correctly query for and retrieve the "title" element, it must provide a way to unambiguously specify which title element is being requested. Failure to do so would give rise to a naming collision on the title element (as well as any other elements that shared this unintended similarity). In the preceding example, there is enough information in the structure of the document itself (which is specified by the "root" element) to provide a means of unambiguously resolving element names. For example, using XPath: //root/person/title ;; the formal title for a person //root/book/title ;; the title of a book Collision domain The term collision domain may also be used to refer to a system in which a single name or identifier is open to multiple interpretations by different layers or processing. The notion of a namespace has been widely adopted as a software programming practice to avert undesired clashes. Note that its use in the networking field is
https://en.wikipedia.org/wiki/BSSGP
BSSGP is a protocol used in the GPRS mobile packet data system. It denotes Base Station System GPRS Protocol. It transfers information between two GPRS entities SGSN and BSS over a BSSGP Virtual Connection (BVC). This protocol provides radio-related quality of service and routing information that is required to transmit user data between a BSS and an SGSN. It does not carry out any form of error correction. BSSGP is used to handle the flow control between SGSN and BSS. The flow control mechanism implemented in SGSN node, only for GSM, is used to prevent congestion and loss of data due to overload in the BSS. This mechanism controls the flow from the SGSN to the BSS but not in the uplink direction. The primary functions of BSSGP include: Provision by an SGSN to a BSS of a radio related information used by the RLC/MAC function in the download link. Provision by a BSS to an SGSN of radio related information derived from the RLC/MAC function in the uplink. Provision of functionality to enable two physically distinct nodes, an SGSN, a BSS, to operate node management control functions (QoS, flow control). It is specified in 3GPP TS 48.018. References 3GPP standards Network protocols
https://en.wikipedia.org/wiki/Remote%20Initial%20Program%20Load
Remote Initial Program Load (RIPL or RPL) is a protocol for starting a computer and loading its operating system from a server via a network. Such a server runs a network operating system such as LAN Manager, LAN Server, Windows NT Server, Novell NetWare, LANtastic, Solaris or Linux. RIPL is similar to Preboot Execution Environment (PXE), but it uses the Novell NetWare-based boot method. It was originally developed by IBM. IBM LAN Server IBM LAN Server enables clients (RIPL requesters) to load the operating systems DOS or OS/2 via the 802.2/DLC-protocol from the LAN (often Token Ring). Therefore, the server compares the clients' requests with entries in its RPL.MAP table. Remote booting DOS workstations via boot images was supported as early as 1990 by IBM LAN Server 1.2 via its PCDOSRPL protocol. IBM LAN Server 2.0 introduced remote booting of OS/2 stations (since OS/2 1.30.1) in 1992. RPL and DOS For DOS remote boot to work, the RPL boot loader is loaded into the client's memory over the network before the operating system starts. Without special precautions the operating system could easily overwrite the RPL code during boot, since the RPL code resides in unallocated memory (typically at the top of the available conventional memory). The RPL code hides and thereby protects itself from being overwritten by hooking INT 12h and reducing the memory reported by this BIOS service by its own size. INT 12h is used by DOS to query the amount of available memory when initializing its own real-mode memory allocation scheme. This causes problems on more modern DOS systems, where free real-mode address ranges may be utilized by the operating system in order to relocate parts of itself and load drivers high, so that the amount of available conventional memory is maximized. Typically, various operating system vendor and version specific "dirty tricks" had to be used by the RPL code in order to survive this very dynamic boot process and let DOS regain control over the memor
https://en.wikipedia.org/wiki/Virtual%20COM%20port
A virtual serial port is a software representation of a serial port that either does not connect to a real serial port, or adds functionality to a real serial port through software extension. Software virtual ports A software-based virtual serial port presents one or more virtual serial port identifiers on a PC which other applications can see and interact with as if they were real hardware ports, but the data sent and received to these virtual devices is handled by software that manipulates the transmitted and received data to grant greater functionality. Operating systems usually do not provide virtual serial port capability. Third-party applications can add this ability, such as the open-source com0com, freeware HW VSP3, or the commercial Virtual Serial Port Driver. Some virtual serial ports emulate all hardware serial port functionality, including all signal pin states, and permit a large number of virtual ports in any desired configuration. Others provide a limited set of capabilities and do not fully emulate the hardware. This technique can be used either to extend the capabilities of software that cannot be updated to use newer communication technologies, such as by transmitting serial data over modern networks, or to achieve data flows that are not normally possible due to software limitations, such as splitting serial port output. Port sharing A serial port typically can only be monitored or transmitted to by one device at a time under the constraints of most operating systems, but a virtual serial port program can create two virtual ports, allowing two separate applications to monitor the same data. For instance, a GPS device which outputs location data to a PCs serial port may be of interest to multiple applications at once. Network transmission Another option is to communicate with another serial device via internet or LAN as if they were locally connected, using serial over LAN. This allows software intended to interface with a device over a lo
https://en.wikipedia.org/wiki/Energy%20Sciences%20Network
The Energy Sciences Network (ESnet) is a high-speed computer network serving United States Department of Energy (DOE) scientists and their collaborators worldwide. It is managed by staff at the Lawrence Berkeley National Laboratory. More than 40 DOE Office of Science labs and research sites are directly connected to this network. The ESnet network also connects research and commercial networks, allowing DOE researchers to collaborate with scientists around the world. Overview The Energy Sciences Network (ESnet) is the Office of Science's network user facility, delivering data transport capabilities for the requirements of large-scale science. Formed in 1986, combining the operations of earlier DOE networking projects known as HEPnet (for high-energy physics) and MFEnet (for magnetic fusion energy research), ESnet is stewarded by the Scientific Computing Research Program, managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory and is used to enable the DOE science mission. ESnet interconnects the DOE's national laboratory system, dozens of other DOE sites, research and commercial networks around the world, enabling scientists at DOE laboratories and academic institutions across the country to transfer data streams and access remote research resources in real time. ESnet provides the networking infrastructure and services required by the national laboratories, large science collaborations, and the DOE research community. ESnet services aim to provide bandwidth connections to enable scientists to collaborate across a range of research areas across the USA and, since December 2014, Europe with a view to enhancing collaboration. According to ESnet's own figures, during the period 1990 to 2019, average traffic volumes have grown by a factor of 10 every 47 months. In 2009, ESnet received $62 million in American Research and Recovery Act (ARRA) funding from the DOE Office of Science to invest in its infrastructure to prov
https://en.wikipedia.org/wiki/Josephson%20vortex
In superconductivity, a Josephson vortex (after Brian Josephson from Cambridge University) is a quantum vortex of supercurrents in a Josephson junction (see Josephson effect). The supercurrents circulate around the vortex center which is situated inside the Josephson barrier, unlike Abrikosov vortices in type-II superconductors, which are located in the superconducting condensate. Abrikosov vortices (after Alexei Abrikosov) in superconductors are characterized by normal cores where the superconducting condensate is destroyed on a scale of the superconducting coherence length ξ (typically 5-100 nm) . The cores of Josephson vortices are more complex and depend on the physical nature of the barrier. In Superconductor-Normal Metal-Superconductor (SNS) Josephson junctions there exist measurable superconducting correlations induced in the N-barrier by proximity effect from the two neighbouring superconducting electrodes. Similarly to Abrikosov vortices in superconductors, Josephson vortices in SNS Josephson junctions are characterized by cores in which the correlations are suppressed by destructive quantum interference and the normal state is recovered. However, unlike Abrikosov cores, having a size ~ξ, the size of the Josephson ones is not defined by microscopic parameters only. Rather, it depends on supercurrents circulating in superconducting electrodes, applied magnetic field etc. In Superconductor-Insulator-Superconductor (SIS) Josephson tunnel junctions the cores are not expected to have a specific spectral signature; they were not observed. Usually the Josephson vortex's supercurrent loops create a magnetic flux which equals, in long enough Josephson junctions, to Φ0—a single flux quantum. Yet fractional vortices may also exist in Superconductor-Ferromagnet-Superconductor Josephson junctions or in junctions in which superconducting phase discontinuities are present. It was demonstrated by Hilgenkamp et al. that Josephson vortices in the so-called 0-π Long Josep
https://en.wikipedia.org/wiki/Robert%20S.%20Boyer
Robert Stephen Boyer is an American retired professor of computer science, mathematics, and philosophy at The University of Texas at Austin. He and J Strother Moore invented the Boyer–Moore string-search algorithm, a particularly efficient string searching algorithm, in 1977. He and Moore also collaborated on the Boyer–Moore automated theorem prover, Nqthm, in 1992. Following this, he worked with Moore and Matt Kaufmann on another theorem prover called ACL2. Publications Boyer has published extensively, including the following books: A Computational Logic Handbook, with J S. Moore. Second Edition. Academic Press, London, 1998. Automated Reasoning: Essays in Honor of Woody Bledsoe, editor. Kluwer Academic, Dordrecht, The Netherlands, 1991. A Computational Logic Handbook, with J S. Moore. Academic Press, New York, 1988. The Correctness Problem in Computer Science, editor, with J S. Moore. Academic Press, London, 1981. A Computational Logic, with J S. Moore. Academic Press, New York, 1979. See also Boyer–Moore majority vote algorithm QED manifesto References External links Home page of Robert S. Boyer. Accessed February 18, 2016. University of Texas, College of Liberal Arts Honors Retired Faculty - 2008. Accessed March 21, 2009. Robert Stephen Boyer at the Mathematics Genealogy Project Living people Alumni of the University of Edinburgh University of Texas at Austin faculty Fellows of the Association for the Advancement of Artificial Intelligence Year of birth missing (living people) Formal methods people Lisp (programming language) people
https://en.wikipedia.org/wiki/Speaker%20terminal
A speaker terminal is a type of electrical connector often used for interconnecting speakers and audio power amplifiers. The terminals are used in pairs with each of the speaker cable's two wires being connected to one terminal in the pair. Since speaker connections are polarized, the terminals are typically color-coded so that the positive wire connects to the red and the negative to the black terminal. The terminal consists of a spring-loaded metallic pincher that opens when the lever is pressed, and when released will tightly grip the conductor which has been inserted into it. This type of terminal is popular because it does not require any special connector to be applied to the end of the wire; instead, the wire is simply stripped of insulation on its end and inserted into the terminal. This terminal may be used with a variety of wire gauges as well as with either solid core or stranded wires. DIY projects sometimes reuse speaker terminals for other applications using bare wire leads. See also Banana connector Binding post References Audiovisual connectors Sound production technology
https://en.wikipedia.org/wiki/Site-specific%20recombinase%20technology
Site-specific recombinase technologies are genome engineering tools that depend on recombinase enzymes to replace targeted sections of DNA. History In the late 1980s gene targeting in murine embryonic stem cells (ESCs) enabled the transmission of mutations into the mouse germ line, and emerged as a novel option to study the genetic basis of regulatory networks as they exist in the genome. Still, classical gene targeting proved to be limited in several ways as gene functions became irreversibly destroyed by the marker gene that had to be introduced for selecting recombinant ESCs. These early steps led to animals in which the mutation was present in all cells of the body from the beginning leading to complex phenotypes and/or early lethality. There was a clear need for methods to restrict these mutations to specific points in development and specific cell types. This dream became reality when groups in the USA were able to introduce bacteriophage and yeast-derived site-specific recombination (SSR-) systems into mammalian cells as well as into the mouse. Classification, properties and dedicated applications Common genetic engineering strategies require a permanent modification of the target genome. To this end great sophistication has to be invested in the design of routes applied for the delivery of transgenes. Although for biotechnological purposes random integration is still common, it may result in unpredictable gene expression due to variable transgene copy numbers, lack of control about integration sites and associated mutations. The molecular requirements in the stem cell field are much more stringent. Here, homologous recombination (HR) can, in principle, provide specificity to the integration process, but for eukaryotes it is compromised by an extremely low efficiency. Although meganucleases, zinc-finger- and transcription activator-like effector nucleases (ZFNs and TALENs) are actual tools supporting HR, it was the availability of site-specific recombina
https://en.wikipedia.org/wiki/Wi-Fi%20calling
Wi-Fi calling refers to mobile phone voice calls and data that are made over IP networks using Wi-Fi, instead of the cell towers provided by cellular networks. Using this feature, compatible handsets are able to route regular cellular calls through a wireless LAN (Wi-Fi) network with broadband Internet, while seamlessly change connections between the two where necessary. This feature makes use of the Generic Access Network (GAN) protocol, also known as Unlicensed Mobile Access (UMA). Essentially, GAN/UMA allows cell phone packets to be forwarded to a network access point over the internet, rather than over-the-air using GSM/GPRS, UMTS or similar. A separate device known as a "GAN Controller" (GANC) receives this data from the Internet and feeds it into the phone network as if it were coming from an antenna on a tower. Calls can be placed from or received to the handset as if it were connected over-the-air directly to the GANC's point of presence; the system is essentially invisible to the network as a whole. This can be useful in locations with poor cell coverage where some other form of internet access is available, especially at the home or office. The system offers seamless handoff, so the user can move from cell to Wi-Fi and back again with the same invisibility that the cell network offers when moving from tower to tower. Since the GAN system works over the internet, a UMA-capable handset can connect to their service provider from any location with internet access. This is particularly useful for travellers, who can connect to their provider's GANC and make calls into their home service area from anywhere in the world. This is subject to the quality of the internet connection, however, and may not work well over limited bandwidth or long-latency connection. To improve quality of service (QoS) in the home or office, some providers also supply a specially programmed wireless access point that prioritizes UMA packets. Another benefit of Wi-Fi calling is that mob
https://en.wikipedia.org/wiki/EncFS
EncFS is a Free (LGPL) FUSE-based cryptographic filesystem. It transparently encrypts files, using an arbitrary directory as storage for the encrypted files. Two directories are involved in mounting an EncFS filesystem: the source directory, and the mountpoint. Each file in the mountpoint has a specific file in the source directory that corresponds to it. The file in the mountpoint provides the unencrypted view of the one in the source directory. Filenames are encrypted in the source directory. Files are encrypted using a volume key, which is stored either within or outside the encrypted source directory. A password is used to decrypt this key. Common uses In Linux, allows encryption of home folders as an alternative to eCryptfs. Allows encryption of files and folders saved to cloud storage (Dropbox, Google Drive, OneDrive, etc.). Allows portable encryption of file folders on removable disks. Available as a cross-platform folder encryption mechanism. Increases storage security by adding two-factor authentication (2FA). When the EncFS volume key is stored outside the encrypted source directory and into a physically separated location from the actual encrypted data, it significantly increases security by adding a two-factor authentication (2FA). For example, EncFS is able to store each unique volume key anywhere else than the actual encrypted data, such as on a USB flash drive, network mount, optical disc or cloud. In addition to that a password could be required to decrypt this volume key. Advantages EncFS offers several advantages over other disk encryption software simply because each file is stored individually as an encrypted file elsewhere in the host's directory tree. Cross-platform EncFS is available on multiple platforms, whereas eCryptfs is tied to the Linux kernel Bitrot detection EncFS implements bitrot detection on top of any underlying filesystem Scalable storage EncFS has no "volumes" that occupy a fixed size — encrypted directories gro
https://en.wikipedia.org/wiki/Gravity%20of%20Earth
The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation). It is a vector quantity, whose direction coincides with a plumb bob and strength or magnitude is given by the norm . In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is . This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about per second every second. This quantity is sometimes referred to informally as little (in contrast, the gravitational constant is referred to as big ). The precise strength of Earth's gravity varies with location. The agreed upon value for is by definition. This quantity is denoted variously as , (though this sometimes means the normal gravity at the equator, ), , or simply (which is also used for the variable local value). The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or (). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects. Variation in magnitude A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid.
https://en.wikipedia.org/wiki/Optical%20mineralogy
Optical mineralogy is the study of minerals and rocks by measuring their optical properties. Most commonly, rock and mineral samples are prepared as thin sections or grain mounts for study in the laboratory with a petrographic microscope. Optical mineralogy is used to identify the mineralogical composition of geological materials in order to help reveal their origin and evolution. Some of the properties and techniques used include: Refractive index Birefringence Michel-Lévy Interference colour chart Pleochroism Extinction angle Conoscopic interference pattern (Interference figure) Becke line test Optical relief Sign of elongation (Length fast vs. length slow) Wave plate History William Nicol, whose name is associated with the creation of the Nicol prism, is likely the first to prepare thin slices of mineral substances, and his methods were applied by Henry Thronton Maire Witham (1831) to the study of plant petrifactions. This method, of significant importance in petrology, was not at once made use of for the systematic investigation of rocks, and it was not until 1858 that Henry Clifton Sorby pointed out its value. Meanwhile, the optical study of sections of crystals had been advanced by Sir David Brewster and other physicists and mineralogists and it only remained to apply their methods to the minerals visible in rock sections. Sections A rock-section should be about one-thousandth of an inch (30 micrometres) in thickness, and is relatively easy to make. A thin splinter of the rock, about 1 centimetre may be taken; it should be as fresh as possible and free from obvious cracks. By grinding it on a plate of planed steel or cast iron with a little fine carborundum it is soon rendered flat on one side, and is then transferred to a sheet of plate glass and smoothed with the finest grained emery until all roughness and pits are removed, and the surface is a uniform plane. The rock chip is then washed, and placed on a copper or iron plate which is heated
https://en.wikipedia.org/wiki/Equations%20for%20a%20falling%20body
A set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth’s gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth’s gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories. History Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water. The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.) The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures. Overview Near the surface of the Earth, the acceleration due to gravity  = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2 as "feet per second per second") approxima
https://en.wikipedia.org/wiki/Grapefruit%E2%80%93drug%20interactions
Some fruit juices and fruits can interact with numerous drugs, in many cases causing adverse effects. The effect is most studied with grapefruit and grapefruit juice, but similar effects have been observed with certain other citrus fruits. The effect was first discovered accidentally in 1991, when a test of drug interactions with alcohol used grapefruit juice to hide the taste of the ethanol. A 2005 medical review advised patients to avoid all citrus juices until further research clarifies the risks. It was reported in 2008 that similar effects had been observed with apple juice. One whole grapefruit, or a small glass () of grapefruit juice, can cause drug overdose toxicity. Fruit consumed three days before the medicine can still have an effect. The relative risks of different types of citrus fruit have not been systematically studied. Affected drugs typically have an auxiliary label saying "Do not take with grapefruit" on the container, and the interaction is elaborated upon in the package insert. People are also advised to ask their physician or pharmacist about drug interactions. The effects are caused by furanocoumarins (and, to a lesser extent, flavonoids). These chemicals inhibit key drug metabolizing enzymes, such as cytochrome P450 3A4 (CYP3A4). CYP3A4 is a metabolizing enzyme for almost 50% of drugs, and is found in the liver and small intestinal epithelial cells. As a result, many drugs are affected. Inhibition of enzymes can have two different effects, depending on whether the drug is either metabolized by the enzyme to an inactive metabolite, or activated by the enzyme to an active metabolite. In the first instance, inhibition of drug-metabolizing enzymes results in elevated concentrations of an active drug in the body, which may cause adverse effects. Conversely, if the medication is a prodrug, it needs to be metabolised to be converted to the active drug. Compromising its metabolism lowers concentrations of the active drug, reducing its therapeut
https://en.wikipedia.org/wiki/TreeDL
Tree Description Language (TreeDL) is a computer language for description of strictly-typed tree data structures and operations on them. The main use of TreeDL is in the development of language-oriented tools (compilers, translators, etc.) for the description of a structure of abstract syntax trees. Tree description can be used as a documentation of interface between parser and other subsystems; a source for generation of data types representing a tree in target programming languages; a source for generation of various support code: visitors, walkers, factories, etc. TreeDL can be used with any parser generator that allows custom actions during parsing (for example, ANTLR, JavaCC). Language overview Tree description lists the node types allowed in a tree. Node types support single inheritance. Node types have children and attributes. Children must be of defined node type. Attributes may be of primitive type (numeric, string, boolean), enum type or node type. Attributes are used to store literals during tree construction and additional information gathered during tree analysis (for example, links between reference and definition, to represent higher-order abstract syntax). Operations over a tree are defined as multimethods. Advantages of this approach are described in the article Treecc: An Aspect-Oriented Approach to Writing Compilers Tree descriptions support inheritance to allow modularity and reuse of base language tree descriptions for language extensions. See also ANTLR - parser generator that offers a different approach to tree processing: tree grammars. SableCC - parser generator that generates strictly-typed abstract syntax trees. External links old TreeDL home Programming languages Domain-specific knowledge representation languages
https://en.wikipedia.org/wiki/Herbert%20Fr%C3%B6hlich
Herbert Fröhlich (9 December 1905 – 23 January 1991) FRS was a German-born British physicist. Career In 1927, Fröhlich entered Ludwig-Maximilians University in Munich to study physics, and received his doctorate under Arnold Sommerfeld in 1930. His first position was as Privatdozent at the University of Freiburg. Due to rising anti-Semitism and the Deutsche Physik movement under Adolf Hitler, and at the invitation of Yakov Frenkel, Fröhlich went to the Soviet Union in 1933 to work at the Ioffe Physico-Technical Institute in Leningrad. During the Great Purge that followed the murder of Sergei Kirov, he fled to England in 1935. Except for a short visit to the Netherlands and a brief internment during World War II, he worked in Nevill Francis Mott's department at the University of Bristol until 1948, rising to the position of Reader. At the invitation of James Chadwick, he then took the Chair for Theoretical Physics at the University of Liverpool. In 1950, Bell Telephone Laboratories offered Fröhlich their endowed professorial position at Princeton University, but he declined: at Liverpool he held a purely research post, which was attractive to him, and he was then newly married to an American, Fanchon Angst, who was studying linguistic philosophy at Somerville College, Oxford under P. F. Strawson and who did not want to return to the United States at that time. From 1973 he was Professor of Solid State Physics at the University of Salford, all the while maintaining an office at the University of Liverpool, where he gained emeritus status in 1976 and where he remained until his death. During 1981, he was a visiting professor at Purdue University. He was nominated for the Nobel Prize in Physics in 1963 and in 1964. Fröhlich, who pursued theoretical research notably in the fields of superconductivity and bioelectrodynamics, proposed a theory of coherent excitations in biological systems known as Fröhlich coherence. A system that attains this coherent state is known as a Fröhlich condensate.
https://en.wikipedia.org/wiki/Glossary%20of%20wine%20terms
The glossary of wine terms lists the definitions of many general terms used within the wine industry. For terms specific to viticulture, winemaking, grape varieties, and wine tasting, see the topic-specific list in the "See also" section below. A Abboccato An Italian term for full-bodied wines with medium-level sweetness ABC Initials for "Anything but Chardonnay" or "Anything but Cabernet". A term conceived by Bonny Doon's Randall Grahm to denote wine drinkers' interest in grape varieties other than the ubiquitous Chardonnay and Cabernet Sauvignon. Abfüllung (Erzeugerabfüllung) Bottled by the proprietor. Will appear on the label followed by relevant information concerning the bottler. ABV Abbreviation of alcohol by volume, generally listed on a wine label. AC Abbreviation for "Agricultural Cooperative" on Greek wine labels and for Adega Cooperativa on Portuguese labels. Acescence Wine with a sharp, sweet-and-sour tang. The acescent character frequently recalls a vinegary smell. Adamado Portuguese term for a medium-sweet wine Adega Portuguese wine term for a winery or wine cellar. Almacenista Spanish term for a Sherry producer who ferments and matures the wine before selling it to a merchant Altar wine The wine used by the Catholic Church in celebrations of the Eucharist. Alte Reben German term for old vines Amabile Italian term for a medium-sweet wine AOC Abbreviation for Appellation d'Origine Contrôlée, as specified under French law. The AOC laws specify and delimit the geography from which a particular wine (or other food product) may originate and the methods by which it may be made. The regulations are administered by the Institut National des Appellations d'Origine (INAO). A.P. number Abbreviation for Amtliche Prüfungsnummer, the official testing number displayed on a German wine label showing that the wine was tasted and passed government quality-control standards. ATTTB Abbreviation for the Alcohol and Tobacco Tax and Trade Bureau, a United States government agency that is primarily responsible for regulating and collecting excise taxes on the alcohol and tobacco trades.