https://en.wikipedia.org/wiki/American%20Solar%20Challenge
The American Solar Challenge (ASC), previously known as the North American Solar Challenge and Sunrayce, is a solar car race across the United States. In the race, teams from colleges and universities throughout North America design, build, test, and race solar-powered vehicles in a long-distance road rally-style event. ASC is a test of teamwork, engineering skill, and endurance that stretches across thousands of miles of public roads. ASC 2016 took place from July 30 to August 6 on a 1,975-mile (3,178 km) route from Brecksville, Ohio, to Hot Springs, South Dakota. ASC 2020 was postponed until 2021 due to the COVID‑19 pandemic.

Format and organization

Rules
- The race consists of a series of timed stages between predetermined locations; all teams begin and end each stage in the same location.
- The team with the lowest overall elapsed time wins.
- The total area of all solar cells and related reflectors, etc. must not exceed 6 square meters.
- When the vehicle has stopped, the solar array may be reoriented toward the sun for charging the batteries.
- A strict specification and engineering scrutiny process governs vehicle configuration, safety requirements, and other standards.

Previous races have divided teams into open and stock classes based on levels of solar cell and battery technologies. The Formula Sun Grand Prix track race serves as a qualifier for the more prestigious ASC.

History
Originally called Sunrayce USA, the first race was organized and sponsored by General Motors in 1990 in an effort to promote automotive engineering and solar energy among college students. At the time, GM had just won the inaugural World Solar Challenge in Australia in 1987; rather than continue actively racing, it opted to sponsor collegiate events instead. Subsequent races were held in 1993, 1995, 1997 and 1999 under the name Sunrayce [year] (e.g. Sunrayce 93). In 2001, the race was renamed the American Solar Challenge and was sponsored by the United States Department of Energy and the National
https://en.wikipedia.org/wiki/Hook%20effect
The hook effect refers to the prozone phenomenon (also known as antibody excess) and the postzone phenomenon (also known as antigen excess). It is an immunologic phenomenon whereby the effectiveness of antibodies to form immune complexes can be impaired when concentrations of an antibody or an antigen are very high. The formation of immune complexes stops increasing with greater concentrations and then decreases at extremely high concentrations, producing a hook shape on a graph of measurements. An important practical relevance of the phenomenon is as a type of interference that plagues certain immunoassays and nephelometric assays, resulting in false negatives or inaccurately low results. Other common forms of interference include antibody interference, cross-reactivity and signal interference. The phenomenon is caused by very high concentrations of a particular analyte or antibody and is most prevalent in one-step (sandwich) immunoassays.

Mechanism and in vitro importance

Prozone - excess antibodies
In an agglutination test, a person's serum (which contains antibodies) is added to a test tube containing a particular antigen. If the antibodies interact with the antigen to form immune complexes, called agglutination, then the test is interpreted as positive. However, if too many antibodies are present that can bind to the antigen, then the antigenic sites are coated by antibodies, and few or no antibodies are able to bridge more than one antigenic particle. Since the antibodies do not bridge between antigens, no agglutination occurs, and the test is therefore interpreted as negative. In this case, the result is a false negative. The range of relatively high antibody concentrations within which no reaction occurs is called the prozone.

Postzone - excess antigens
The effect can also occur because of antigen excess, when both the capture and detection antibodies become saturated by the high analyte concentration. In
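The hook-shaped dose-response curve described above can be illustrated with a toy saturation model. This is a deliberately simplified stand-in for the real immunochemistry, and the half-saturation constants are invented for the example:

```python
# Toy model of the hook effect in a one-step sandwich immunoassay.
# Signal ~ analyte bound by BOTH capture and detection antibody; once either
# antibody pool saturates, sandwich formation (and thus signal) declines.
# K_CAPTURE and K_DETECT are invented half-saturation constants.
K_CAPTURE, K_DETECT = 10.0, 50.0

def signal(analyte):
    return analyte / ((1 + analyte / K_CAPTURE) * (1 + analyte / K_DETECT))

for conc in (1, 10, 100, 1000, 10000):
    print(f"{conc:>6} -> {signal(conc):6.3f}")
# Signal peaks at an intermediate concentration and then "hooks" downward,
# so an extremely high true concentration reads the same as a low one.
```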
https://en.wikipedia.org/wiki/Caret%20notation
Caret notation is a notation for control characters in ASCII. The notation assigns ^A to control-code 1, sequentially through the alphabet to ^Z assigned to control-code 26 (0x1A). For the control-codes outside of the range 1–26, the notation extends to the adjacent, non-alphabetic ASCII characters. Often a control character can be typed on a keyboard by holding down the Ctrl key and typing the character shown after the caret. The notation is often used to describe keyboard shortcuts even though the control character is not actually used (as in "type ^X to cut the text"). The meaning or interpretation of, or response to, the individual control-codes is not prescribed by the caret notation.

Description
The notation consists of a caret (^) followed by a single character (usually a capital letter). The character has the ASCII code equal to the control code with the bit representing 0x40 reversed. A useful mnemonic, this has the effect of rendering the control codes 1 through 26 as ^A through ^Z. Seven ASCII control characters map outside the upper-case alphabet: 0 (NUL) is ^@, 27 (ESC) is ^[, 28 is ^\, 29 is ^], 30 is ^^, 31 is ^_, and 127 (DEL) is ^?. Examples are "^M^J" for the Windows CR, LF newline pair, and describing the ANSI escape sequence to clear the screen as "^[[2J". Only the use of characters in the range of 63–95 ("?", "@", "A"–"Z", "[", "\", "]", "^", "_") is specifically allowed in the notation, but use of lower-case alphabetic characters entered at the keyboard is nearly always allowed – they are treated as equivalent to upper-case letters. When converting to a control character, except for '?', masking with 0x1F will produce the same result and also turn lower-case into the same control character as upper-case. There is no corresponding version of the caret notation for control-codes with more than 7 bits such as the C1 control characters from 128–159 (0x80–0x9F). Some programs that produce caret notation show these as backslash and octal ("\200" through "\237"). Also see the bar notation used by Acorn Computers, below.

History
Th
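The 0x40 bit-flip rule above is a one-line operation. A minimal sketch (the function names are ours, purely for illustration):

```python
# Caret notation via the XOR-with-0x40 rule described above.
def to_caret(code: int) -> str:
    """Render a 7-bit control code (0-31 or 127) in caret notation."""
    if not (0 <= code < 32 or code == 127):
        raise ValueError("not a 7-bit control code")
    return "^" + chr(code ^ 0x40)   # flips the 0x40 bit: 1 -> 'A', 27 -> '['

def from_caret(ch: str) -> int:
    """Map the character after the caret back to a control code."""
    c = ord(ch.upper())             # lower-case is treated as upper-case
    if not (63 <= c <= 95):
        raise ValueError("character outside the allowed 63-95 range")
    return c ^ 0x40                 # '?' (63) -> 127 (DEL), '@' (64) -> 0 (NUL)

print(to_caret(3))       # ^C
print(from_caret("M"))   # 13 (carriage return)
```

Note that `from_caret` agrees with the 0x1F masking mentioned above for every input except '?', which XOR maps to 127 (DEL) while masking would give 31.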
https://en.wikipedia.org/wiki/Comparison%20of%20project%20management%20software
The following is a comparison of project management software.

General information
Features
Monetary features

See also
Kanban (development)
Project management software
Project planning
Comparison of scrum software
Comparison of development estimation software
Comparison of source-code-hosting facilities
Comparison of CRM systems
https://en.wikipedia.org/wiki/Spite%20%28game%20theory%29
In fair division problems, spite is a phenomenon that occurs when a player's value of an allocation decreases when one or more other players' valuation increases. Thus, other things being equal, a player exhibiting spite will prefer an allocation in which other players receive less rather than more (if more of the good is desirable). Spite is difficult to analyze in this framework because one has to assess two sets of preferences. For example, in the divide and choose method, a spiteful player would have to make a trade-off between depriving his opponent of cake and getting more himself. Within the field of sociobiology, spite is used to describe those social behaviors that have a negative impact on both the actor and recipient(s). Spite can be favored by kin selection when: (a) it leads to an indirect benefit to some third party that is sufficiently related to the actor (Wilsonian spite); or (b) when it is directed primarily at negatively related individuals (Hamiltonian spite). Negative relatedness occurs when two individuals are less related than average.

In game theory
The iterated prisoner's dilemma provides an example where players may "punish" each other for failing to cooperate in previous rounds, even if doing so causes negative consequences for both players. For example, the simple "tit for tat" strategy has been shown to be effective in round-robin tournaments of the iterated prisoner's dilemma. A small simulation of this strategy appears after this section.

In industrial relations
There is always difficulty in fairly dividing the proceeds of a business between the business owners and the employees. When a trade union decides to call a strike, both the employer and the union members lose money (and the strike may damage the national economy). The unionists hope that the employer will give in to their demands before such losses have destroyed the business. In the reverse direction, an employer may terminate the employment of certain productive workers who are agitating for higher wages or organising a trade union. Losing productiv
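Returning to the tit-for-tat example above, here is a small, self-contained sketch of an iterated prisoner's dilemma match between tit-for-tat and an always-defect strategy (the payoff values are the conventional textbook ones, chosen for illustration):

```python
# Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs (row player, column player); the standard ordering is T > R > P > S.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (R)
    ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P)
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each side reacts to the other's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): defection punished after round one
```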
https://en.wikipedia.org/wiki/De%20Arte%20Combinatoria
The Dissertatio de arte combinatoria ("Dissertation on the Art of Combinations" or "On the Combinatorial Art") is an early work by Gottfried Leibniz published in 1666 in Leipzig. It is an extended version of his first doctoral dissertation, written before the author had seriously undertaken the study of mathematics. The booklet was reissued without Leibniz's consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum. During the following years he repeatedly expressed regret about its being circulated, as he considered it immature. Nevertheless, it was a very original work and it provided the author his first glimpse of fame among the scholars of his time.

Summary
The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. Therefore, this alphabet would provide a logic of invention, as opposed to the logic of demonstration known until then. Since all sentences are composed of a subject and a predicate, one might
- find all the predicates which are appropriate to a given subject, or
- find all the subjects which are convenient to a given predicate.
For this, Leibniz was inspired by the Ars Magna of Ramon Llull, although he criticized that author for the arbitrariness of his indexing of categories. Leibniz discusses in this work some combinatorial concepts. He had read Clavius' commentary on Sacrobosco's De sphaera mundi, and some other contemporary works. He introduced the term variationes ordinis for the permutations, combinationes for the combinations of two elements, con3nationes (shorthand for conternationes) for those of three elements, etc. His general term for combinations was complexions. He fo
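Leibniz's variationes ordinis, combinationes, and con3nationes correspond to what we now call permutations and 2- and 3-element combinations. A quick sketch using Python's standard itertools, purely to illustrate the counts for a toy alphabet of four simple concepts:

```python
from itertools import combinations
from math import comb, factorial

concepts = ["A", "B", "C", "D"]  # a toy "alphabet" of four simple concepts

# variationes ordinis: orderings (permutations) of the whole alphabet
print(factorial(len(concepts)))           # 24

# combinationes: Leibniz's term for 2-element combinations
print(list(combinations(concepts, 2)))    # the 6 possible pairs
print(comb(4, 2))                         # 6

# con3nationes: 3-element combinations
print(list(combinations(concepts, 3)))    # the 4 possible triples
print(comb(4, 3))                         # 4
```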
https://en.wikipedia.org/wiki/Arabic%20chat%20alphabet
The Arabic chat alphabet, Arabizi, or Arabeezi refers to the romanized alphabets for informal Arabic dialects in which Arabic script is transcribed or encoded into a combination of Latin script and Arabic numerals. These informal chat alphabets were originally used primarily by youth in the Arab world in very informal settings—especially for communicating over the Internet or for sending messages via cellular phones—though use is not necessarily restricted by age anymore and these chat alphabets have been used in other media such as advertising. These chat alphabets differ from more formal and academic Arabic transliteration systems in that they use numerals and multigraphs instead of diacritics for letters such as qāf (ق) or ḍād (ض) that do not exist in the basic Latin script (ASCII), and in that what is being transcribed is an informal dialect and not Standard Arabic. These Arabic chat alphabets also differ from each other, as each is influenced by the particular phonology of the Arabic dialect being transcribed and the orthography of the dominant European language in the area—typically the language of the former colonists, and typically either French or English. Because of their widespread use, including in public advertisements by large multinational companies, large players in the online industry like Google and Microsoft have introduced tools that convert text written in Arabish to Arabic (Google Translate and Microsoft Translator). Add-ons for Mozilla Firefox and Chrome also exist (Panlatin and ARABEASY Keyboard). The Arabic chat alphabet is never used in formal settings and is rarely, if ever, used for long communications.

History
During the last decades of the 20th century, Western text-based communication technologies, such as mobile phone text messaging, the World Wide Web, email, bulletin board systems, IRC, and instant messaging became increasingly prevalent in the Arab world. Most of these technologies originally permitted the use of the Latin scri
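As a rough illustration of how the numeral substitutions work, here is a tiny decoder covering a handful of widespread conventions (such as 3 for ʿayn and 7 for ḥāʾ). The mapping is deliberately incomplete; real usage varies by dialect and writer, as the article notes:

```python
# Toy Arabizi-to-Arabic decoder for a few widespread numeral conventions.
# Illustrative only: actual chat alphabets differ between dialects and
# even between individual writers.
ARABIZI = {
    "2": "ء",   # hamza (glottal stop)
    "3": "ع",   # ʿayn
    "5": "خ",   # khāʾ
    "6": "ط",   # ṭāʾ
    "7": "ح",   # ḥāʾ
    "9": "ص",   # ṣād
}

def decode(text: str) -> str:
    # Replace numeral codes character by character; Latin letters would need
    # a dialect-aware transliteration table, which is omitted here.
    return "".join(ARABIZI.get(ch, ch) for ch in text)

print(decode("3arabi"))  # عarabi — only the numeral is mapped in this sketch
```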
https://en.wikipedia.org/wiki/Virtual%20globe
A virtual globe is a three-dimensional (3D) software model or representation of Earth or another world. A virtual globe provides the user with the ability to move freely around in the virtual environment by changing the viewing angle and position. Compared to a conventional globe, virtual globes have the additional capability of representing many different views of the surface of Earth. These views may be of geographical features, man-made features such as roads and buildings, or abstract representations of demographic quantities such as population. On November 20, 1997, Microsoft released an offline virtual globe in the form of Encarta Virtual Globe 98, followed by Cosmi's 3D World Atlas in 1999. The first widely publicized online virtual globes were NASA WorldWind (released in mid-2004) and Google Earth (mid-2005).

Types
Virtual globes may be used for study or navigation (by connecting to a GPS device) and their design varies considerably according to their purpose. Those wishing to portray a visually accurate representation of the Earth often use satellite image servers and are capable not only of rotation but also of zooming and sometimes horizon tilting. Very often such virtual globes aim to provide as true a representation of the world as is possible, with worldwide coverage up to a very detailed level. When this is the case, the interface often has the option of providing simplified graphical overlays to highlight man-made features, since these are not necessarily obvious from a photographic aerial view. The availability of such detail also raises the issue of security, with some governments having raised concerns about the ease of access to detailed views of sensitive locations such as airports and military bases. Another type of virtual globe exists whose aim is not the accurate representation of the planet but instead a simplified graphical depiction. Most early computerized atlases were of this type and, while displaying less detail, these simplified
https://en.wikipedia.org/wiki/Telescopic%20handler
A telescopic handler, also called a lull, telehandler, teleporter, reach forklift, or zoom boom, is a machine widely used in agriculture and industry. It is somewhat like a forklift but has a boom (telescopic cylinder), making it more a crane than a forklift, with the increased versatility of a single telescopic boom that can extend forwards and upwards from the vehicle. The boom can be fitted with different attachments, such as a bucket, pallet forks, muck grab, or winch.

Uses
In industry, the most common attachment for a telehandler is pallet forks, and the most common application is to move loads to and from places unreachable for a conventional forklift. For example, telehandlers have the ability to remove palletised cargo from within a trailer and to place loads on rooftops and other high places. The latter application would otherwise require a crane, which is not always practical or time-efficient. In agriculture, the most common attachments for a telehandler are buckets or bucket grabs; again, the most common application is to move loads to and from places unreachable for a 'conventional machine', which in this case is a wheeled loader or backhoe loader. For example, telehandlers have the ability to reach directly into a high-sided trailer or hopper. The latter application would otherwise require a loading ramp, conveyor, or something similar. The telehandler can also be fitted with a crane jib for lifting loads; other attachments on the market include dirt buckets, grain buckets, rotators, and power booms. The agricultural range can also be fitted with three-point linkage and power take-off. The advantage of the telehandler is also its biggest limitation: as the boom extends or raises while bearing a load, it acts as a lever and causes the vehicle to become increasingly unstable, despite counterweights in the rear. This means that the lifting capacity quickly decreases as the working radius (the distance between the front of the wheels and the centre of th
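The stability limitation described above is simple lever arithmetic: for the same tipping moment, the further the boom reaches, the smaller the load it can safely carry. A hedged sketch, where the rated-moment figure is an invented example rather than a specification for any real machine:

```python
# Toy load-chart calculation: maximum safe load falls off with reach.
# The rated tipping moment below is a made-up illustrative figure.
RATED_MOMENT_KG_M = 12000  # e.g. 4,000 kg carried at 3 m reach

def max_load_kg(radius_m: float) -> float:
    """Maximum load before tipping, treating the boom as a simple lever."""
    return RATED_MOMENT_KG_M / radius_m

for radius in (3, 6, 9, 12):
    print(f"{radius:>2} m reach -> {max_load_kg(radius):7.0f} kg")
# 3 m -> 4000 kg, 6 m -> 2000 kg, 9 m -> 1333 kg, 12 m -> 1000 kg
```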
https://en.wikipedia.org/wiki/Problem%20solving%20environment
A problem solving environment (PSE) is complete, integrated and specialised computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values, and viewing and analysing results.

Purpose of PSE
Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems like data visualization or large systems of equations and for narrow fields of science or engineering like gas turbine design.

History
The first problem solving environments appeared a few years after the release of Fortran and Algol 60. People thought that such systems, with their high-level languages, would eliminate the need for professional programmers; surprisingly, however, PSEs were accepted, and scientists even used them to write programs. Problem solving environments for parallel scientific computation were introduced in the 1960s as the first organised collections with minor standardisation. In the 1970s, PSE research initially focused on providing higher-level programming languages than Fortran, and saw the advent of libraries and plotting packages. Library development continued, followed by the emergence of computational packages and graphical systems for data visualisation. By the 1990s, hypertext and point-and-click interfaces had moved toward inter-operability, and a "software parts" industry finally existed. Over the past few decades, many PSEs have been developed to solve problems and to support users of different categories, including education, general programming, CSE software learning, job executing
https://en.wikipedia.org/wiki/Finger%20binary
Finger binary is a system for counting and displaying binary numbers on the fingers of either or both hands. Each finger represents one binary digit or bit. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both: that is, up to 2⁵−1 or 2¹⁰−1 respectively. Modern computers typically store values as some whole number of 8-bit bytes, making the fingers of both hands together equivalent to 1¼ bytes of storage—in contrast to less than half a byte when using ten fingers to count up to 10.

Mechanics
In the binary number system, each numerical digit has two possible states (0 or 1) and each successive digit represents an increasing power of two. (Note: what follows is only one of several possible schemes for assigning the values 1, 2, 4, 8, 16, etc. to fingers, and not necessarily the best.) The rightmost digit represents two to the zeroth power (i.e., it is the "ones digit"); the digit to its left represents two to the first power (the "twos digit"); the next digit to the left represents two to the second power (the "fours digit"); and so on. (The decimal number system is essentially the same, only that powers of ten are used: "ones digit", "tens digit", "hundreds digit", etc.) It is possible to use anatomical digits to represent numerical digits by using a raised finger to represent a binary digit in the "1" state and a lowered finger to represent it in the "0" state. Each successive finger represents a higher power of two. (Tables in the original article illustrate the specific finger values, with palms oriented toward the counter's face, for the right hand alone, the left hand alone, and both hands together, and, alternately, with the palms oriented away from the counter.) The values of each raised finger are added together to arrive at a total number. In the one-handed version, all fingers raised is thus 31 (16 + 8 + 4 + 2 + 1), and all fingers lowered (a fist) is 0. In the two-handed system, all fingers raised is 1,023 (512 + 256 + 1
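The scheme maps directly onto positional binary. A minimal sketch, using one of the several possible finger-ordering conventions mentioned above:

```python
# Compute the value shown by a set of fingers in finger binary.
# fingers[0] is the "ones" finger, fingers[1] the "twos" finger, and so on;
# 1 = raised, 0 = lowered. The ordering is one convention among several.
def finger_value(fingers):
    return sum(bit << i for i, bit in enumerate(fingers))

print(finger_value([1, 1, 1, 1, 1]))   # 31: one hand, all fingers raised
print(finger_value([1] * 10))          # 1023: both hands, all fingers raised
print(finger_value([0, 0, 0, 0, 0]))   # 0: a fist
print(finger_value([1, 0, 1, 0, 0]))   # 5 = 4 + 1
```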
https://en.wikipedia.org/wiki/Complementary%20code%20keying
Complementary code keying (CCK) is a modulation scheme used with wireless networks (WLANs) that employ the IEEE 802.11b specification. In 1999, CCK was adopted to supplement the Barker code in wireless digital networks to achieve data rates higher than 2 Mbit/s at the expense of shorter distance. This is due to the shorter chipping sequence in CCK (8 bits versus 11 bits in the Barker code), which means less spreading and thus a higher data rate, but greater susceptibility to narrowband interference and a shorter radio transmission range. Besides the shorter chipping sequence, CCK also has more chipping sequences to encode more bits (4 chipping sequences at 5.5 Mbit/s and 8 chipping sequences at 11 Mbit/s), increasing the data rate even further. The Barker code, however, has only a single chipping sequence. The complementary codes first discussed by Golay were pairs of binary complementary codes, and he noted that when the elements of a code of length N were either −1 or 1, it followed immediately from their definition that the sum of their respective autocorrelation sequences was zero at all points except for the zero shift, where it is equal to K×N (K being the number of code words in the set). CCK is a variation and improvement on M-ary Orthogonal Keying and uses 'polyphase complementary codes'. They were developed by Lucent Technologies and Harris Semiconductor and were adopted by the 802.11 working group in 1998. CCK is the form of modulation used when 802.11b operates at either 5.5 or 11 Mbit/s. CCK was selected over competing modulation techniques as it used approximately the same bandwidth and could use the same preamble and header as pre-existing 1 and 2 Mbit/s wireless networks and thus facilitated interoperability. Polyphase complementary codes, first proposed by Sivaswamy, 1978, are codes where each element is a complex number of unit magnitude and arbitrary phase, or more specifically for 802.11b is one of [1, −1, j, −j]. Networks using the 802.11g specification e
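Golay's property is easy to verify numerically: for a complementary pair, the aperiodic autocorrelations sum to zero at every nonzero shift, and to K×N at shift zero. A short sketch using a length-8 binary pair built by the standard concatenation construction:

```python
# Verify the complementary-pair property for a length-8 binary Golay pair,
# built by repeatedly concatenating (a, b) -> (a+b, a+(-b)) from (1,1),(1,-1).
def autocorr(seq, shift):
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

a = [1, 1, 1, -1, 1, 1, -1, 1]
b = [1, 1, 1, -1, -1, -1, 1, -1]

for shift in range(len(a)):
    total = autocorr(a, shift) + autocorr(b, shift)
    print(shift, total)  # K*N = 2*8 = 16 at shift 0, and 0 everywhere else
```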
https://en.wikipedia.org/wiki/Phenocopy
A phenocopy is a variation in phenotype (generally referring to a single trait) which is caused by environmental conditions (often, but not necessarily, during the organism's development), such that the organism's phenotype matches a phenotype which is determined by genetic factors. It is not a type of mutation, as it is non-hereditary. The term was coined by Richard Goldschmidt in 1935. He used it to refer to forms, produced by some experimental procedure, whose appearance duplicates or copies the phenotype of some mutant or combination of mutants.

Examples
Butterflies of the genus Vanessa can change phenotype based on the local temperature. If introduced to Lapland, they mimic butterflies localised to that area; and if localised to Syria, they mimic butterflies of that area. The larvae of Drosophila melanogaster have been found to be particularly vulnerable to environmental factors which produce phenocopies of known mutations; these factors include temperature, shock, radiation, and various chemical compounds. In the fruit fly Drosophila melanogaster, the normal body colour is brownish gray with black margins. A hereditary mutant with a yellow body colour was discovered by T.H. Morgan in 1910. This was a genotypic character which was constant in both kinds of flies in all environments. However, in 1939, Rapoport discovered that if larvae of normal flies were fed silver salts, they develop into yellow-bodied flies irrespective of their genotype. The yellow-bodied flies which are genetically brown are thus a phenocopy of the original yellow-bodied mutant. Phenocopies can also be observed in Himalayan rabbits. When raised in moderate temperatures, Himalayan rabbits are white in colour with black tail, nose, and ears, making them phenotypically distinguishable from genetically black rabbits. However, when raised in cold temperatures, Himalayan rabbits show black colouration of their coats, resembling the genetically black rabbits. Hence this Himalayan rabbit is a phenocop
https://en.wikipedia.org/wiki/Autonegotiation
Autonegotiation is a signaling mechanism and procedure used by Ethernet over twisted pair by which two connected devices choose common transmission parameters, such as speed, duplex mode, and flow control. In this process, the connected devices first share their capabilities regarding these parameters and then choose the highest-performance transmission mode they both support. Autonegotiation is defined in clause 28 of IEEE 802.3 and was originally an optional component in the Fast Ethernet standard. It is backwards compatible with the normal link pulses (NLP) used by 10BASE-T. The protocol was significantly extended in the Gigabit Ethernet standard and is mandatory for 1000BASE-T gigabit Ethernet over twisted pair. In the OSI model, autonegotiation resides in the physical layer.

Standardization and interoperability
In 1995, the Fast Ethernet standard was released. Because this introduced a new speed option for the same wires, it included a means for connected network adapters to negotiate the best possible shared mode of operation. The autonegotiation protocol included in IEEE 802.3 clause 28 was developed from a patented technology by National Semiconductor known as NWay. The company gave a letter of assurance for anyone to use their system for a one-time license fee. Another company has since bought the rights to that patent. The first version of the autonegotiation specification, in the 1995 IEEE 802.3u Fast Ethernet standard, was implemented differently by different manufacturers, leading to interoperability issues. These problems led many network administrators to manually set the speed and duplex mode of each network interface. However, the use of manually set configuration may also lead to duplex mismatches. A duplex mismatch is difficult to diagnose because the network is nominally working; simple programs used for network tests such as ping report a valid connection. However, network performance is significantly impacted. The autonegotiation specific
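The "highest-performance mode they both support" selection amounts to walking a priority list and taking the first mode both ends advertise. A simplified sketch — the priority table below is a reduced, illustrative subset, not the full IEEE 802.3 priority resolution:

```python
# Simplified autonegotiation outcome: both ends advertise capabilities,
# and the highest-priority mode in the intersection wins. This priority
# list is a reduced, illustrative subset of the IEEE 802.3 table.
PRIORITY = [
    "1000BASE-T full duplex",
    "1000BASE-T half duplex",
    "100BASE-TX full duplex",
    "100BASE-TX half duplex",
    "10BASE-T full duplex",
    "10BASE-T half duplex",
]

def negotiate(local, remote):
    common = set(local) & set(remote)
    for mode in PRIORITY:          # first match = best shared mode
        if mode in common:
            return mode
    return None                    # no common mode: link stays down

a = {"1000BASE-T full duplex", "100BASE-TX full duplex", "10BASE-T full duplex"}
b = {"100BASE-TX full duplex", "100BASE-TX half duplex"}
print(negotiate(a, b))             # 100BASE-TX full duplex
```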
https://en.wikipedia.org/wiki/Resource%20holding%20potential
In biology, resource holding potential (RHP) is the ability of an animal to win an all-out fight if one were to take place. The term was coined by Geoff Parker to disambiguate physical fighting ability from the motivation to persevere in a fight (Parker, 1974). Originally the term used was 'resource holding power', but 'resource holding potential' has come to be preferred. The latter emphasis on 'potential' serves as a reminder that the individual with greater RHP does not always prevail. An individual with more RHP may lose a fight if, for example, it is less motivated (has less to gain by winning) than its opponent. Mathematical models of RHP and motivation (resource value, or V) have traditionally been based on the hawk-dove game (e.g. Hammerstein, 1981), in which subjective resource value is represented by the variable V. In addition to RHP and V, George Barlow (Barlow et al., 1986) proposed that a third variable, which he termed 'daring', played a role in determining fight outcome. Daring (a.k.a. aggressiveness) represents an individual's tendency to initiate or escalate a contest independent of the effects of RHP and V. Animals strive to maximise their fitness and therefore to survive long enough to produce offspring (Parker 1974). When resources are not in abundance, however, this can be challenging, and eventually animals will begin to compete for resources. The competition for resources can be dangerous and, for some animals, deadly. Some animals have developed adaptive traits that increase their chances of prevailing in such competition; resource holding potential, or resource holding power, describes this fighting capacity, and together with motivation it shapes whether an individual will continue to fight, work, or endure through situations that others may give up during. Animals that use RHP often evaluate the conditions of the danger they fac
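The hawk-dove game mentioned above has a standard payoff structure in terms of the resource value V and the injury cost C. A small sketch of the expected payoffs (the conventional textbook form; the particular numbers are illustrative):

```python
# Classic hawk-dove payoffs: V = value of the resource, C = cost of injury.
# Expected payoff to the row strategy when meeting the column strategy.
def hawk_dove(V, C):
    return {
        ("hawk", "hawk"): (V - C) / 2,  # escalated fight, 50% chance of winning
        ("hawk", "dove"): V,            # dove retreats, hawk takes everything
        ("dove", "hawk"): 0,            # retreat: nothing gained, nothing lost
        ("dove", "dove"): V / 2,        # share (or win half the time)
    }

payoffs = hawk_dove(V=4, C=6)
for pair, value in payoffs.items():
    print(pair, value)
# When C > V, (hawk, hawk) pays less than retreating does against a hawk,
# so neither pure strategy is stable; hawks settle at frequency V/C = 2/3 here.
```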
https://en.wikipedia.org/wiki/PostSecret
PostSecret is an ongoing community mail art project, created by Frank Warren in 2005, in which people mail their secrets anonymously on a homemade postcard. Selected secrets are then posted on the PostSecret website, or used for PostSecret's books or museum exhibits.

History
The concept of the project is that completely anonymous people decorate a postcard and portray a secret that they have never previously revealed. No restrictions are made on the content of the secret; only that it must be completely truthful and must never have been spoken before. Entries range from admissions of sexual misconduct and criminal activity to confessions of secret desires, embarrassing habits, hopes and dreams. PostSecret collected and displayed over 2,500 original pieces of art from people across the United States and around the world between its founding on January 1, 2005 and 2007. The site, which started as an experiment on Blogger, was updated every Sunday with 10 new secrets, all of which share a relatively constant style, giving the artists who participate some guidelines on how their secrets should be represented. From June 24 to July 3, 2007, the "Comments" section of the site was enabled. While a comments feature is frequently present on blogs, it had been previously absent from the PostSecret site. Many visitors felt that the new section contradicted the purpose of the site, as evidenced in numerous comments criticizing a postcard in which the author claims to have fed bleach to her cat. In October 2007, the PostSecret Community was launched. Since its inception, more than 80,000 users have registered for the online discussion forum. According to Youth Trends' February 2008 "Top Ten List Report", PostSecret was the 10th most popular site among female students in the US, with 7% of those polled naming the site as their favorite. In April 2008, Warren teamed up with 1-800-SUICIDE to answer anonymous cries for help through peer-run crisis hotlines on col
https://en.wikipedia.org/wiki/Photostimulation
Photostimulation is the use of light to artificially activate biological compounds, cells, tissues, or even whole organisms. Photostimulation can be used to noninvasively probe various relationships between different biological processes, using only light. In the long run, photostimulation has the potential for use in different types of therapy, such as the treatment of migraine headaches. Additionally, photostimulation may be used for the mapping of neuronal connections between different areas of the brain by "uncaging" signaling biomolecules with light. Therapy with photostimulation has been called light therapy, phototherapy, or photobiomodulation. Photostimulation methods fall into two general categories: one set of methods uses light to uncage a compound that then becomes biochemically active, binding to a downstream effector. For example, uncaging glutamate is useful for finding excitatory connections between neurons, since the uncaged glutamate mimics the natural synaptic activity of one neuron impinging upon another. The other major photostimulation method is the use of light to activate a light-sensitive protein such as rhodopsin, which can then excite the cell expressing the opsin. Scientists have long postulated the need to control one type of cell while leaving those surrounding it untouched and unstimulated. Well-known scientific advancements such as the use of electrical stimuli and electrodes have succeeded in neural activation but fail to achieve this goal because of their imprecision and inability to distinguish between different cell types. The use of optogenetics (artificial cell activation via the use of light stimuli) is unique in its ability to deliver light pulses in a precise and timely fashion. Optogenetics is somewhat bidirectional in its ability to control neurons: channels can be either depolarized or hyperpolarized depending on the wavelength of light that targets them. For instance, the technique can be applied to channelrhodopsin cation
https://en.wikipedia.org/wiki/Bandit%20Kings%20of%20Ancient%20China
Bandit Kings of Ancient China, known in Japan as Suikoden: Tenmei no Chikai, is a turn-based strategy video game developed and published by Koei, released in 1989 for MSX, MS-DOS, Amiga, and Macintosh and in 1990 for the Nintendo Entertainment System. In 1996, Koei issued a remake for the Japanese Sega Saturn and PlayStation featuring vastly improved graphics and new arrangements of the original songs.

Gameplay
Based on the 14th-century Great Classical Novel Water Margin, the game takes place in ancient China during the reign of Emperor Huizong of the Song Dynasty. The Bandit Kings of Ancient China—a band of ten bandits—engage in war against China's Minister of War Gao Qiu, an evil minister with unlimited power. The objective of the game is to build, sustain, and command an army of troops to capture Gao Qiu before the Jurchen invasion in January 1127. Characters hold certain attributes such as strength, dexterity, and wisdom. Players must also deal with other situations such as taxes, care for the troops, maintenance and replacement of weapons and equipment, forces of nature, and troop unrest and desertion. Battles take place on hexagonal grids, where players move their armies across various terrain in order to strategically engage and defeat the enemy army. Troops have the capability of fighting with either melee weapons, bows and arrows, magic, or dueling swords, as well as setting tiles on fire. When a player defeats an enemy army, they have the option of recruiting, imprisoning, exiling, or executing the captured enemy troops. The attacker has 30 days to defeat all deployed enemy troops or capture the commander, or be automatically defeated. The game ends in defeat for the player(s) when the game calendar hits January 1127. The game map shows the empire composed of 49 prefectures. Any prefecture may be invaded from an adjacent territory - the only exception is the current location of Gao Qiu (usually the capital, Kai Feng), where the player must have built up sufficient popula
https://en.wikipedia.org/wiki/Registered%20memory
Registered (also called buffered) memory modules have a register between the DRAM modules and the system's memory controller. They place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise. When compared with registered memory, conventional memory is usually referred to as unbuffered memory or unregistered memory. When manufactured as a dual in-line memory module (DIMM), a registered memory module is called an RDIMM, while unregistered memory is called UDIMM or simply DIMM. Registered memory is often more expensive because of the lower number of units sold and the additional circuitry required, so it is usually found only in applications where the need for scalability and robustness outweighs the need for a low price; for example, registered memory is usually used in servers. Although most registered memory modules also feature error-correcting code memory (ECC), it is also possible for registered memory modules to not be error-correcting, or vice versa. Unregistered ECC memory is supported and used in workstation or entry-level server motherboards that do not support very large amounts of memory.

Performance
Normally, there is a performance penalty for using registered memory. Each read or write is buffered for one cycle between the memory bus and the DRAM, so the registered RAM can be thought of as running one clock cycle behind the equivalent unregistered DRAM. With SDRAM, this only applies to the first cycle of a burst. However, this performance penalty is not universal, and there are many other factors involved in memory access speed. For example, the Intel Westmere 5600 series of processors access memory using interleaving, wherein memory access is distributed across three channels. If two memory DIMMs are used per channel, the maximum memory bandwidth in this configuration is some 5% lower with UDIMMs than with RDIMMs.

Compatibility
Usually, the motherb
https://en.wikipedia.org/wiki/Angelfire
Angelfire is an Internet service that offers website services. It is owned by Lycos, which also owns Tripod.com. Angelfire operates separately from Tripod.com and includes features such as blog building and a photo gallery builder. Free webpages are no longer available to new registrants and have been replaced by paid services.

History
Angelfire was founded in 1996 and was originally a combination website-building and medical transcription service. Eventually the site dropped the transcription service and focused solely on website hosting, offering only paid memberships. The site was bought by Mountain View, California–based WhoWhere on October 20, 1997, which, in turn, was subsequently purchased by the search engine company Lycos on August 11, 1998 for US$133 million.

External links
About Angelfire / History of site page
https://en.wikipedia.org/wiki/Series%2040
Series 40, often shortened as S40, is a software platform and application user interface (UI) software on Nokia's broad range of mid-tier feature phones, as well as on some of the Vertu line of luxury phones. It was one of the world's most widely used mobile phone platforms, found in hundreds of millions of devices. Nokia announced on 25 January 2012 that the company had sold over 1.5 billion Series 40 devices. It was not used for smartphones, with Nokia turning first to Symbian, then in 2012–2017 to Windows Phone, and most recently Android. However, in 2012 and 2013, several Series 40 phones from the Asha line, such as the 308, 309 and 311, were advertised as "smartphones" although they do not actually support smartphone features like multitasking or a fully fledged HTML browser. In 2014, Microsoft acquired Nokia's mobile phones business. As part of a licensing agreement with the company, Microsoft Mobile is allowed to use the Nokia brand on feature phones, such as the Series 40 range. However, a July 2014 company memo revealed that Microsoft would end future production of Series 40 devices. It was replaced by Series 30+.

History
Series 40 was introduced in 1999 with the release of the Nokia 7110. It had a 96 × 65 pixel monochrome display and was the first phone to come with a WAP browser. Over the years, the S40 UI evolved from a low-resolution UI to a high-resolution color UI with an enhanced graphical look. The third generation of Series 40, which became available in 2005, introduced support for devices with resolutions as high as QVGA (240×320). It is possible to customize the look and feel of the UI via comprehensive themes. In 2012, the Nokia Asha mobile phones 200/201, 210, 302, 303, 305, 306, 308, 310 and 311 were released, all using Series 40. The final feature phone running Series 40 was the Nokia 515 from 2013, running the 6th Edition.

Technical information

Applications
Series 40 provides communication applications such as telephone, Internet telephon
https://en.wikipedia.org/wiki/Multi-master%20replication
Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. All members are responsive to client data queries. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group and resolving any conflicts that might arise between concurrent changes made by different members. Multi-master replication can be contrasted with primary-replica replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication. Multi-master replication can also be contrasted with failover clustering, where passive replica servers replicate the master data in order to prepare for takeover in the event that the master stops functioning. The master is the only server active for client interaction. Often, communication and replication in multi-master systems are handled via a type of consensus algorithm, but can also be implemented via custom or proprietary algorithms specific to the software. The primary purposes of multi-master replication are increased availability and faster server response time.

Advantages
- Availability: if one master fails, other masters continue to update the database.
- Distributed access: masters can be located in several physical sites, i.e. distributed across the network.

Disadvantages
- Consistency: most multi-master replication systems are only loosely consistent, i.e. lazy and asynchronous, violating ACID properties.
- Performance: eager replication systems are complex and increase communication latency.
- Integrity: issues such as conflict resolution can become intracta
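One common, and loosely consistent, conflict-resolution policy in such systems is last-writer-wins. A toy sketch of merging concurrent updates by timestamp (illustrative only; production systems typically rely on vector clocks or consensus rounds rather than wall-clock time):

```python
# Toy last-writer-wins merge for a multi-master key-value store.
# Each replica records (timestamp, node_id, value) per key; on sync, the
# highest (timestamp, node_id) pair wins. Wall-clock LWW can silently drop
# concurrent updates, which is exactly the "loose consistency" caveat above.
def merge(replica_a, replica_b):
    merged = dict(replica_a)
    for key, version in replica_b.items():
        if key not in merged or version > merged[key]:
            merged[key] = version      # tuples compare lexicographically
    return merged

a = {"price": (1700000001, "node-a", 99), "stock": (1700000005, "node-a", 12)}
b = {"price": (1700000003, "node-b", 105)}

print(merge(a, b))
# {'price': (1700000003, 'node-b', 105), 'stock': (1700000005, 'node-a', 12)}
```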
https://en.wikipedia.org/wiki/Business%20rule%20management%20system
A BRMS or business rule management system is a software system used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic that is used by operational systems within an organization or enterprise. This logic, also referred to as business rules, includes policies, requirements, and conditional statements that are used to determine the tactical actions that take place in applications and systems.

Overview
A BRMS includes, at minimum:
- A repository, allowing decision logic to be externalized from core application code
- Tools, allowing both technical developers and business experts to define and manage decision logic
- A runtime environment, allowing applications to invoke decision logic managed within the BRMS and execute it using a business rules engine

The top benefits of a BRMS include:
- Reduced or removed reliance on IT departments for changes in live systems, although QA and rules testing would still be needed in any enterprise system.
- Increased control over implemented decision logic for compliance and better business management, including audit logs, impact simulation and edit controls.
- The ability to express decision logic with increased precision, using a business vocabulary syntax and graphical rule representations (decision tables, decision models, trees, scorecards and flows).
- Improved efficiency of processes through increased decision automation.

Some disadvantages of the BRMS include:
- Extensive subject matter expertise can be required for vendor-specific products. In addition to appropriate design practices (such as decision modeling), technical developers must know how to write rules and integrate software with existing systems.
- Poor rule harvesting approaches can lead to long development cycles, though this can be mitigated with modern approaches like the Decision Model and Notation (DMN) standard.
- Integration with existing systems is still required and a BRMS may add additiona
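To make the repository/engine split concrete, here is a minimal sketch of externalized decision logic evaluated by a tiny rules engine. The rule format and names are invented for illustration and do not correspond to any particular BRMS product:

```python
# Minimal rules-engine sketch: rules live as data (the "repository"),
# separate from application code, and a generic engine evaluates them.
# The rule structure here is invented purely for illustration.
RULES = [
    {"name": "vip_discount",
     "when": lambda order: order["customer_tier"] == "vip",
     "then": {"discount": 0.15}},
    {"name": "bulk_discount",
     "when": lambda order: order["quantity"] >= 100,
     "then": {"discount": 0.10}},
]

def evaluate(order):
    # First matching rule wins; real engines offer richer conflict strategies.
    for rule in RULES:
        if rule["when"](order):
            return rule["name"], rule["then"]
    return "default", {"discount": 0.0}

print(evaluate({"customer_tier": "vip", "quantity": 10}))
# ('vip_discount', {'discount': 0.15})
```

Because the rules are data rather than code paths, a business expert could change the thresholds without touching the calling application, which is the core value proposition listed above.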
https://en.wikipedia.org/wiki/Millman%27s%20theorem
In electrical engineering, Millman's theorem (or the parallel generator theorem) is a method to simplify the solution of a circuit. Specifically, Millman's theorem is used to compute the voltage at the ends of a circuit made up of only branches in parallel. It is named after Jacob Millman, who proved the theorem.

Explanation
Let $e_k$ be the generators' voltages and $R_k$ the resistances on the branches with voltage generators $e_k$. Then Millman states that the voltage $v$ at the ends of the circuit is given by:

$$v = \frac{\sum_k e_k / R_k}{\sum_k 1 / R_k}$$

That is, the sum of the short-circuit currents in each branch divided by the sum of the conductances in each branch. It can be proved by considering the circuit as a single supernode. Then, according to Ohm and Kirchhoff, the voltage between the ends of the circuit is equal to the total current entering the supernode divided by the total equivalent conductance of the supernode. The total current is the sum of the currents in each branch. The total equivalent conductance of the supernode is the sum of the conductance of each branch, since all the branches are in parallel.

Branch variations

Current sources
One method of deriving Millman's theorem starts by converting all the branches to current sources (which can be done using Norton's theorem). A branch that is already a current source is simply not converted. In the expression above, this is equivalent to replacing the term $e_k/R_k$ in the numerator with the current of the current generator, where the kth branch is the branch with the current generator. The parallel conductance of the current source is added to the denominator as for the series conductance of the voltage sources. An ideal current source has zero conductance (infinite resistance) and so adds nothing to the denominator.

Ideal voltage sources
If one of the branches is an ideal voltage source, Millman's theorem cannot be used, but in this case the solution is trivial: the voltage at the output is forced to the voltage of the ideal volt
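A quick numerical check of the formula, with branch values invented for the example:

```python
# Millman's theorem: v = sum(e_k / R_k) / sum(1 / R_k) for parallel branches.
# Three example branches with invented values: (source voltage, resistance).
branches = [(10.0, 2.0), (5.0, 5.0), (0.0, 10.0)]  # third branch: resistor only

numerator = sum(e / r for e, r in branches)    # sum of short-circuit currents
denominator = sum(1 / r for e, r in branches)  # sum of branch conductances
v = numerator / denominator

print(v)  # 6.0 / 0.8 = 7.5 volts across the parallel combination
```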
https://en.wikipedia.org/wiki/WUNI
WUNI (channel 66) is a television station licensed to Marlborough, Massachusetts, United States, broadcasting the Spanish-language Univision network to the Boston area. It is owned by TelevisaUnivision alongside Derry, New Hampshire–licensed True Crime Network affiliate WWJE-DT (channel 50); Entravision Communications operates WUNI under a joint sales agreement (JSA), making it sister to Worcester, Massachusetts–licensed UniMás affiliate WUTF-TV (channel 27). WUNI and WWJE share studios and transmitter facilities on Parmenter Road in Hudson; under the JSA, master control and some internal operations of WUNI are based at WUTF's studios on 4th Avenue in Needham.

History

As an English-language independent station
The station first signed on the air on January 1, 1970, as WSMW-TV, an independent station based in Worcester that featured English-language general entertainment programs including old movies (among them the entire series of Abbott and Costello movies and the Bowery Boys/Dead-End Kids movies starring Huntz Hall), cartoons, religious shows (including the Jacob Brothers and The PTL Club), a cooking show (Cooking with Bernard), science fiction shows (such as UFO), dramas (including Maverick and Thriller), as well as sitcoms (including The Phil Silvers Show and Petticoat Junction). Though WSMW-TV was within the Boston market, it was far enough from Boston itself that the station was able to air some of the same shows as the Boston stations, in a similar situation to WMUR-TV (channel 9), the ABC affiliate in Manchester, New Hampshire. The station's call letters stood for State Mutual (Insurance Co.) in Worcester, the corporate owner of the station. WSMW also broadcast sports programs; from its debut through the end of the 1971–72 NBA season, the station was the television home of the Boston Celtics. In 1970 and 1971, WSMW broadcast same-weekend, tape-delayed coverage of New England Patriots preseason games. WSMW also offered extensive coverage of college baske
https://en.wikipedia.org/wiki/Natural%20region
A natural region (landscape unit) is a basic geographic unit. Usually, it is a region which is distinguished by its common natural features of geography, geology, and climate. From the ecological point of view, the naturally occurring flora and fauna of the region are likely to be influenced by its geographical and geological factors, such as soil and water availability, in a significant manner. Thus most natural regions are homogeneous ecosystems. Human impact can be an important factor in the shaping and destiny of a particular natural region.

Main terms
The concept "natural region" is a large basic geographical unit, like the vast boreal forest region. The term may also be used generically, as in alpine tundra, or specifically to refer to a particular place. The term is particularly useful where there is no corresponding or coterminous official region. The Fens of eastern England, the Thai highlands, and the Pays de Bray in Normandy are examples of this. Others might include regions with particular geological characteristics, like badlands, such as the Bardenas Reales, an upland massif of acidic rock, or The Burren in Ireland.

See also
Ecoregion
Natural regions of Chile
Natural regions of Colombia
Natural regions of Germany
Natural regions of Venezuela
Physiographic regions of the world

External links
Natural regions of Texas
Alberta's Natural Regions
Natural regions in Valencia
https://en.wikipedia.org/wiki/Daina%20Taimi%C5%86a
Daina Taimiņa (born August 19, 1954) is a Latvian mathematician, retired adjunct associate professor of mathematics at Cornell University, known for developing a way of modeling hyperbolic geometry with crocheted objects.

Education and career
Taimiņa received all of her formal education in Riga, Latvia, where in 1977 she graduated summa cum laude from the University of Latvia and completed her graduate work in theoretical computer science (with thesis advisor Prof. Rūsiņš Mārtiņš Freivalds) in 1990. Owing to one of the restrictions of the Soviet system at that time, a doctoral thesis could not be defended in Latvia, so she defended hers in Minsk, receiving the title of Candidate of Sciences. This explains the fact that Taimiņa's doctorate was formally issued by the Institute of Mathematics of the National Academy of Sciences of Belarus. After Latvia regained independence in 1991, Taimiņa received her higher doctoral degree (doktor nauk) in mathematics from the University of Latvia, where she taught for 20 years. Daina Taimiņa joined the Cornell Math Department in December 1996. Combining her interests in mathematics and crocheting, she is one of 24 mathematicians and artists who make up the Mathemalchemy Team.

Hyperbolic crochet
While attending a 1997 workshop at Cornell University on teaching geometry for university professors, Taimiņa was presented with a fragile paper model of a hyperbolic plane, made by the professor in charge of the workshop, David Henderson (and designed by geometer William Thurston). It was made "out of thin, circular strips of paper taped together". She decided to make more durable models, and did so by crocheting them. On the first night after seeing the paper model at the workshop, she began experimenting with algorithms for a crocheting pattern, after visualising hyperbolic planes as exponential growth. The following fall, Taimiņa was scheduled to teach a geometry class at Cornell. She was determined to find what she
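The exponential growth she visualised is what makes the crocheted surface hyperbolic: increasing the stitch count by a fixed ratio every row keeps the surface locally saddle-shaped. A sketch of the stitch counts, using the commonly described recipe of one extra stitch after every N stitches (the particular numbers are illustrative, not her published pattern):

```python
# Stitch counts for a hyperbolic crochet plane: after every N stitches,
# work one extra stitch, so each row is roughly (N + 1) / N times the
# previous one. N = 5 and a 20-stitch foundation row are illustrative.
def row_counts(start=20, increase_every=5, rows=8):
    counts = [start]
    for _ in range(rows - 1):
        prev = counts[-1]
        counts.append(prev + prev // increase_every)  # exponential growth
    return counts

print(row_counts())
# [20, 24, 28, 33, 39, 46, 55, 66] — roughly 1.2x per row, hence "hyperbolic"
```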
https://en.wikipedia.org/wiki/The%20Unix%20Programming%20Environment
The Unix Programming Environment, first published in 1984 by Prentice Hall, is a book written by Brian W. Kernighan and Rob Pike, both of Bell Labs, and considered an important and early document of the Unix operating system.

Unix philosophy
The book addresses the Unix philosophy of small cooperating tools with standardized inputs and outputs. Kernighan and Pike give a brief description of the Unix design and the Unix philosophy. The authors further write that their goal for this book is "to communicate the UNIX programming philosophy."

Content and topics
The book starts off with an introduction to Unix for beginners. Next, it goes into the basics of the file system and shell. The reader is led through topics ranging from the use of filters to programming robust Unix applications in C, along with the basics of grep, sed, make, and awk. The book closes with a tutorial on making a programming language parser with yacc and on how to use troff with ms and mm to format documents, the preprocessors tbl, eqn, and pic, and making man pages with the man macro set. The appendices cover the ed editor and the above-mentioned programming language, named hoc, which stands for "high-order calculator".

Historical context
Although Unix still exists decades after the publication of this book, the book describes an already mature Unix: in 1984, Unix had already been in development for 15 years (since 1969), it had been published in a peer-reviewed journal 10 years earlier (SOSP, 1974, "The UNIX Timesharing System"), and at least seven official editions of its manuals had been published (see Version 7 Unix). In 1984, several commercial and academic variants of UNIX already existed (e.g., Xenix, SunOS, BSD, UNIX System V, HP-UX), and a year earlier Dennis Ritchie and Ken Thompson had won the prestigious Turing Award for their work on UNIX. The book was written not when UNIX was just starting out, but when it was already popular enough to be worthy of a book published for the masses o
https://en.wikipedia.org/wiki/Logical%20Volume%20Manager%20%28Linux%29
In Linux, Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume. Heinz Mauelshagen wrote the original LVM code in 1998, when he was working at Sistina Software, taking its primary design guidelines from HP-UX's volume manager.

Uses
LVM is used for the following purposes:
- Creating single logical volumes from multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.
- Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
- On small systems (like a desktop), instead of having to estimate at installation time how big a partition might need to be, LVM allows file systems to be easily resized as needed.
- Performing consistent backups by taking snapshots of the logical volumes.
- Encrypting multiple physical partitions with one password.

LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement, repartitioning and backup.

Features

Basic functionality
- Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones.
- Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them.
- LVs can be moved between PVs.
- Creation of read-only snapshots of logical volumes (LVM1), leveraging a copy-on-write (CoW) feature, or read/write snapshots (LVM2).
- VGs can be split or merged in situ as long as no LVs span the split. This can be useful when migrating whole LVs to or from offline storage.
- LVM objects can be tagged for administrative convenience.
- VGs and LVs can be made active as the underlying devic
https://en.wikipedia.org/wiki/Fibre%20Channel%20time-out%20values
The FC-PH standard defines three time-out values used for error detection and recovery in the Fibre Channel protocol.
- E_D_TOV stands for Error Detect TimeOut Value. This is the basic error timeout used for all Fibre Channel error detection. Its default value is 2 seconds.
- R_A_TOV stands for Resource Allocation TimeOut Value. This is the amount of time given to devices to allocate the resources needed to process received frames. In practice this may be the time for re-calculation of routing tables in network devices. Its default value is 10 seconds.
- R_T_TOV stands for Receiver-Transmitter TimeOut Value. This is the amount of time that the receiver logic uses to determine loss of sync on the wire. Its default value is 100 milliseconds, but it can be changed to 100 microseconds.

All devices must use the default values until they have established different values (if they desire) with other devices, usually specified during the login procedure.
https://en.wikipedia.org/wiki/Signal/One
Signal/One was a manufacturer of high-performance SSB and CW HF radio communications transceivers initially based in St. Petersburg, Florida, United States. History Signal/One's parent company was Electronic Communications, Inc. (ECI), a military division of NCR Corporation located in St. Petersburg, Florida. Key Signal/One executives were general manager Dick Ehrhorn (amateur radio call sign W4ETO) and project engineer Don Fowler (W4YET). Beginning in the 1960s with the Signal/One CX7 ("S1", as they were called), the company made radios that were priced well above the competition and offered many advanced features for the time, such as passband tuning, broadband transmission, dual receive, a built-in iambic keyer, electronic digital readout, solid-state design, QSK and RF clipping. A Signal/One radio was said to be a complete high-performance station in a box. While marketed to the affluent radio amateur, it has been suggested that the primary market for Signal/One, like Collins, was military, State Department, and government communications. Although prized for their performance and advanced engineering, Signal/One's products did not sell as well as hoped, and the company gradually fell on hard times. From the 1970s through the 1990s, Signal/One was spun off or sold every few years, resurfacing at another location each time. Collectors The surviving Signal/One products are sought after and actively collected. These include the CX7, CX7A, CX7B, CX11 and Milspec models. The last Signal/One radio was a re-engineered ICOM IC-781. Information available indicates there were 1152 Signal Ones built: 850 CX7, 112 CX11, 168 MS1030 (number of "C" versions is not known), 6 MilSpec1030C, 15 MilSpec1030CI Icom IC-781 conversions and 1 Milspec1030E DSP Icom IC-756 Pro conversion. See also Collins Radio Icom Vintage amateur radio References Amateur radio companies Radio electronics Defunct manufacturing companies of the United States Defunct electronics companies
https://en.wikipedia.org/wiki/British%20Informatics%20Olympiad
The British Informatics Olympiad (BIO) is an annual computer-programming competition for secondary and sixth-form students. Any student under 19 who is in full-time pre-university education and resident in mainland Britain is eligible to compete. The competition is composed of two rounds - a preliminary 3-question, 3-hour exam paper sat at the participant's school and a final round. The top-15 performing students each year are invited to the finals (currently hosted by Trinity College, Cambridge) where they attempt to solve several more difficult problems, some written, some involving programming. Typically a score of 70 to 80 out of 100 is required on the first round of the competition to reach the final. Of these fifteen, four are chosen for the British team, and one or two are chosen as reserves. This team goes on to represent Britain in the International Olympiad in Informatics in the summer of that year. Mark schemes are available for all past papers at the competition's official site. Official worked solutions are available for papers 1995-1999 and 2004, whilst unofficial solutions are available for papers 2009-2014. Sponsors The BIO has been sponsored by video-games developer Lionhead Studios since 2002. In the past, it has also been sponsored by Data Connection. See also Young Scientists of the Year References External links BIO Official Website Annual events in the United Kingdom Competitions in the United Kingdom Computer science education in the United Kingdom Programming contests
https://en.wikipedia.org/wiki/Cabal%20%28set%20theory%29
The Cabal was, or perhaps is, a set of set theorists in Southern California, particularly at UCLA and Caltech, but also at UC Irvine. Organization and procedures range from informal to nonexistent, so it is difficult to say whether it still exists or exactly who has been a member, but it has included such notable figures as Donald A. Martin, Yiannis N. Moschovakis, John R. Steel, and Alexander S. Kechris. Others who have published in the proceedings of the Cabal seminar include Robert M. Solovay, W. Hugh Woodin, Matthew Foreman, and Steve Jackson. The work of the group is characterized by free use of large cardinal axioms, and by research into the descriptive set-theoretic behavior of sets of reals under such assumptions. Some of the philosophical views of the Cabal seminar have been described in the literature. Descriptive set theory Set theory
https://en.wikipedia.org/wiki/Steel%20frame
Steel frame is a building technique with a "skeleton frame" of vertical steel columns and horizontal I-beams, constructed in a rectangular grid to support the floors, roof and walls of a building which are all attached to the frame. The development of this technique made the construction of the skyscraper possible. Concept The rolled steel "profile" or cross section of steel columns takes the shape of the letter "I". The two wide flanges of a column are thicker and wider than the flanges on a beam, to better withstand compressive stress in the structure. Square and round tubular sections of steel can also be used, often filled with concrete. Steel beams are connected to the columns with bolts and threaded fasteners, and historically connected by rivets. The central "web" of the steel I-beam is often wider than a column web to resist the higher bending moments that occur in beams. Wide sheets of steel deck can be used to cover the top of the steel frame as a "form" or corrugated mold, below a thick layer of concrete and steel reinforcing bars. Another popular alternative is a floor of precast concrete flooring units with some form of concrete topping. Often in office buildings, the final floor surface is provided by some form of raised flooring system with the void between the walking surface and the structural floor being used for cables and air handling ducts. The frame needs to be protected from fire because steel softens at high temperature and this can cause the building to partially collapse. In the case of the columns this is usually done by encasing them in some form of fire-resistant structure such as masonry, concrete or plasterboard. The beams may be cased in concrete or plasterboard, or sprayed with a coating to insulate them from the heat of the fire, or they can be protected by a fire-resistant ceiling construction. Asbestos was a popular material for fireproofing steel structures up until the early 1970s, before the health risks of asbestos fibres were ful
https://en.wikipedia.org/wiki/WWIVnet
WWIVnet was a Bulletin board system (BBS) network for WWIV-based BBSes. It was created by Wayne Bell on December 1, 1987. The system was similar to FidoNet in purpose, but used a very different routing mechanism that was more automated and distributed. Network layout WWIVnet consisted of several participating BBSes, each referenced by a unique number called a node number. Originally, WWIVnet nodes were numbered by area code. The format was TXYZZ, where X and Y were the first and last digits of the area code, and ZZ was a number that ranged from 00 to 49 in area codes with a middle digit of 0, or a number between 50 and 99 in area codes with a middle digit of 1. The T portion of the node number was only used if a particular area code ran out of node numbers in their assigned range and needed more, the T would become 1. Thus, node 5802 would be a node in the 508 area code, and node 12263 would be a node in a very busy 212 area code. This numbering system worked well until the telephone systems began using area codes that used numbers other than 1 or 0 as the middle digit. When this occurred, WWIVnet realized it had to change its numbering system so a group based system was adopted, where node numbers would change to an XZZZ system. In this system, X would be the group number, and ZZZ would be the system number under that group. The network's administration was set up where every area code had an Area Coordinator (AC) which was responsible for maintaining information about the nodes in their area code. The AC reported to the Group Coordinator (GC), which was responsible for updating the node lists for the area codes under them. The GC reported to the Network Coordinator (NC), who was responsible for sending out node list updates. The NC was the person who was ultimately in charge of WWIVnet. The network structure, however, had everything to do with administration but nothing to do with the way traffic was transmitted. Simply put the only way to control th
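The original area-code-based numbering can be unpacked mechanically. Below is a small sketch of that decoding, assuming only the rules described above; the function name and details are illustrative, not taken from any actual WWIV software:

```python
def decode_node(node: int) -> str:
    """Recover the area code from an original-style WWIVnet node number (TXYZZ)."""
    s = f"{node:04d}"
    if len(s) == 5 and s[0] == "1":      # the overflow 'T' digit is in use
        s = s[1:]
    x, y, zz = s[0], s[1], int(s[2:])
    middle = "0" if zz <= 49 else "1"    # ZZ 00-49 means middle digit 0; 50-99 means 1
    return f"{x}{middle}{y}"

print(decode_node(5802))    # -> 508, matching the example in the text
print(decode_node(12263))   # -> 212
```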
https://en.wikipedia.org/wiki/TeraGrid
TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011. The TeraGrid integrated high-performance computers, data resources and tools, and experimental facilities. Resources included more than a petaflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance computer network connections. Researchers could also access more than 100 discipline-specific databases. TeraGrid was coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the resource provider sites in the United States. History The US National Science Foundation (NSF) issued a solicitation asking for a "distributed terascale facility" from program director Richard L. Hilderbrandt. The TeraGrid project was launched in August 2001 with $53 million in funding to four sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, the University of Chicago Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology in Pasadena, California. The design was meant to be an extensible distributed open system from the start. In October 2002, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh joined the TeraGrid as major new partners when NSF announced $35 million in supplementary funding. The TeraGrid network was transformed through the ETF project from a 4-site mesh to a dual-hub backbone network with connection points in Los Angeles and at the Starlight facilities in Chicago. In October 2003, NSF awarded $10 million to add four sites to TeraGrid as well as to establish a third network hub, in Atlanta. These
https://en.wikipedia.org/wiki/Resist%20%28semiconductor%20fabrication%29
In semiconductor fabrication, a resist is a thin layer used to transfer a circuit pattern to the semiconductor substrate on which it is deposited. A resist can be patterned via lithography to form a (sub)micrometer-scale, temporary mask that protects selected areas of the underlying substrate during subsequent processing steps. The material used to prepare said thin layer is typically a viscous solution. Resists are generally proprietary mixtures of a polymer or its precursor and other small molecules (e.g. photoacid generators) that have been specially formulated for a given lithography technology. Resists used during photolithography are called photoresists. Background Semiconductor devices (as of 2005) are built by depositing and patterning many thin layers. The patterning steps, or lithography, define the function of the device and the density of its components. For example, in the interconnect layers of a modern microprocessor, a conductive material (copper or aluminum) is inlaid in an electrically insulating matrix (typically fluorinated silicon dioxide or another low-k dielectric). The metal patterns define multiple electrical circuits that are used to connect the microchip's transistors to one another and ultimately to external devices via the chip's pins. The most common patterning method used by the semiconductor device industry is photolithography -- patterning using light. In this process, the substrate of interest is coated with photosensitive resist and irradiated with short-wavelength light projected through a photomask, which is a specially prepared stencil formed of opaque and transparent regions - usually a quartz substrate with a patterned chromium layer. The shadow of opaque regions in the photomask forms a submicrometer-scale pattern of dark and illuminated regions in the resist layer -- the aerial image. Chemical and physical changes occur in the exposed areas of the resist layer. For example, chemical bonds may be formed or destroyed,
https://en.wikipedia.org/wiki/Metastability%20%28electronics%29
In electronics, metastability is the ability of a digital electronic system to persist for an unbounded time in an unstable equilibrium or metastable state. In digital logic circuits, a digital signal is required to be within certain voltage or current limits to represent a '0' or '1' logic level for correct circuit operation; if the signal is within a forbidden intermediate range it may cause faulty behavior in logic gates the signal is applied to. In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation. As a result, the circuit can act in unpredictable ways, and may lead to a system failure, sometimes referred to as a "glitch". Metastability is an instance of the Buridan's ass paradox. Metastable states are inherent features of asynchronous digital systems, and of systems with more than one independent clock domain. In self-timed asynchronous systems, arbiters are designed to allow the system to proceed only after the metastability has resolved, so the metastability is a normal condition, not an error condition. In synchronous systems with asynchronous inputs, synchronizers are designed to make the probability of a synchronization failure acceptably small. Metastable states are avoidable in fully synchronous systems when the input setup and hold time requirements on flip-flops are satisfied. Example A simple example of metastability can be found in an SR NOR latch, when Set and Reset inputs are true (R=1 and S=1) and then both transition to false (R=0 and S=0) at about the same time. Both outputs Q and Q̄ are initially held at 0 by the simultaneous Set and Reset inputs. After both Set and Reset inputs change to false, the flip-flop will (eventually) end up in one of two stable states, with one of Q and Q̄ true and the other false. The final state will depend on which of R or S returns to zero first, chronologically, but if both transition at about the same time, the resul
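The synchronization-failure probability mentioned above is usually quantified with the standard exponential metastability model, in which the mean time between failures grows exponentially with the settling time allowed. A minimal sketch with illustrative, non-device-specific numbers; tau and T0 are characterization parameters that would in practice come from a datasheet:

```python
import math

def synchronizer_mtbf(t_slack, tau, t0, f_clk, f_data):
    """Mean time between synchronization failures, in seconds, for one
    flip-flop stage under the standard exponential metastability model."""
    return math.exp(t_slack / tau) / (t0 * f_clk * f_data)

# Illustrative numbers only: 5 ns of settling slack, tau = 100 ps, T0 = 1 ns,
# a 100 MHz clock sampling a 1 MHz asynchronous input.
print(synchronizer_mtbf(5e-9, 100e-12, 1e-9, 100e6, 1e6))  # ~5e16 seconds
```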
https://en.wikipedia.org/wiki/Special%20right%20triangle
A special right triangle is a right triangle with some regular feature that makes calculations on the triangle easier, or for which simple formulas exist. For example, a right triangle may have angles that form simple relationships, such as 45°–45°–90°. This is called an "angle-based" right triangle. A "side-based" right triangle is one in which the lengths of the sides form ratios of whole numbers, such as 3 : 4 : 5, or of other special numbers such as the golden ratio. Knowing the relationships of the angles or ratios of sides of these special right triangles allows one to quickly calculate various lengths in geometric problems without resorting to more advanced methods. Angle-based Angle-based special right triangles are specified by the relationships of the angles of which the triangle is composed. The angles of these triangles are such that the larger (right) angle, which is 90 degrees or π/2 radians, is equal to the sum of the other two angles. The side lengths are generally deduced from the basis of the unit circle or other geometric methods. This approach may be used to rapidly reproduce the values of trigonometric functions for the angles 30°, 45°, and 60°. Special triangles are used to aid in calculating common trigonometric functions. The 45°–45°–90° triangle, the 30°–60°–90° triangle, and the equilateral/equiangular (60°–60°–60°) triangle are the three Möbius triangles in the plane, meaning that they tessellate the plane via reflections in their sides; see Triangle group. 45° - 45° - 90° triangle In plane geometry, constructing the diagonal of a square results in a triangle whose three angles are in the ratio 1 : 1 : 2, adding up to 180° or π radians. Hence, the angles respectively measure 45° (π/4), 45° (π/4), and 90° (π/2). The sides in this triangle are in the ratio 1 : 1 : √2, which follows immediately from the Pythagorean theorem. Of all right triangles, the 45° - 45° - 90° triangle has the smallest ratio of the hypotenuse to the sum of
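That the ratio is 1 : 1 : √2 can be checked in one line from the Pythagorean theorem, taking both legs of the isosceles right triangle to have unit length:

```latex
c = \sqrt{a^{2} + b^{2}} = \sqrt{1^{2} + 1^{2}} = \sqrt{2},
\qquad a : b : c = 1 : 1 : \sqrt{2}.
```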
https://en.wikipedia.org/wiki/It%27s%20Too%20Late%20to%20Stop%20Now
It's Too Late to Stop Now is a 1974 live double album by Northern Irish singer-songwriter Van Morrison. It features performances that were recorded in concerts at the Troubadour in Los Angeles, California, the Santa Monica Civic Auditorium, and the Rainbow in London, during Morrison's three-month tour with his eleven-piece band, the Caledonia Soul Orchestra, from May to July 1973. Frequently named as one of the best live albums ever, It's Too Late to Stop Now was recorded during what has often been said to be the singer's greatest phase as a live performer. Volumes II, III and IV of the album were released as a box-set in 2016, also including a DVD. Tour and performances Noted for being a mercurial and temperamental live performer, during this short period of time in 1973, Morrison went on one of his most diligent tours in years. With his eleven-piece band, The Caledonia Soul Orchestra, which included a horn and string section, he has often been said to have been at his live performing peak. The tour was a dynamic musical journey for both Van Morrison and the band. "He wasn't looking to repeat himself. He wanted to create a new show every night." said Jef Labes. Van was changing his phrasing with each performance and would be directing the band with hand gestures onstage. "We could take the songs anywhere Van wanted to take them" said guitarist Platania "Every performance of each song was different". The alternate recordings of songs on the various volumes from the 2016 box-set prove his point. Morrison said about touring during this period: "I am getting more into performing. It's incredible. When I played Carnegie Hall in the fall something just happened. All of a sudden I felt like 'you're back into performing' and it just happened like that...A lot of times in the past I've done gigs and it was rough to get through them. But now the combination seems to be right and it's been clicking a lot." Evidence of his newly invigorated joy in performing was on
https://en.wikipedia.org/wiki/Uniform%20access%20principle
The uniform access principle of computer programming was put forth by Bertrand Meyer (originally in his book Object-Oriented Software Construction). It states "All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation." This principle applies generally to the syntax of object-oriented programming languages. In simpler form, it states that there should be no syntactical difference between working with an attribute, pre-computed property, or method/query of an object. While most examples focus on the "read" aspect of the principle (i.e., retrieving a value), Meyer shows that the "write" implications (i.e., modifying a value) of the principle are harder to deal with in his monthly column on the Eiffel programming language official website. Explanation The problem being addressed by Meyer involves the maintenance of large software projects or software libraries. Sometimes when developing or maintaining software it is necessary, after much code is in place, to change a class or object in a way that transforms what was simply an attribute access into a method call. Programming languages often use different syntax for attribute access and invoking a method (e.g., object.attribute versus object.method()). The syntax change would require, in popular programming languages of the day, changing the source code in all the places where the attribute was used. This might require changing source code in many different locations throughout a very large volume of source code. Or worse, if the change is in an object library used by hundreds of customers, each of those customers would have to find and change all the places the attribute was used in their own code and recompile their programs. Going the reverse way (from method to simple attribute) really was not a problem, as one can always just keep the function and have it simply return the attribute value. Meyer recognized the need for software
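Languages differ in how directly they support the principle. Python's property mechanism, for instance, lets a stored attribute be replaced by a computed one without touching call sites; the Temperature class below is a hypothetical sketch, not an example from Meyer's text:

```python
class Temperature:
    """Stores degrees Celsius; callers read t.fahrenheit the same way regardless."""

    def __init__(self, celsius: float):
        self._celsius = celsius

    @property
    def fahrenheit(self) -> float:
        # Could once have been a stored field; it is now computed on demand,
        # and the notation at the call site does not betray the difference.
        return self._celsius * 9 / 5 + 32

t = Temperature(20.0)
print(t.fahrenheit)  # 68.0, accessed like a plain attribute, with no parentheses
```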
https://en.wikipedia.org/wiki/Sender%20Rewriting%20Scheme
The Sender Rewriting Scheme (SRS) is a scheme for bypassing the Sender Policy Framework's (SPF) methods of preventing forged sender addresses. Forging a sender address is also known as email spoofing. Background In a number of cases, including change of email address and mailing lists, a message transfer agent (MTA) accepts an email message that is not destined to a local mailbox but needs to be forwarded. In such cases, the question arises of who deserves to receive any related bounce message. In general, that is either the author, or a person or other entity who administers the forwarding itself. Sending bounces to the author is administratively simpler and used to be accomplished by just keeping the original envelope sender. However, if the author address is subject to a strict SPF policy (-all) and the target MTA happens to enforce it, the forwarding transaction can be rejected. As a workaround, it is possible to synthesize on the fly a temporary bounce address that will direct any bounce back to the current MTA. The scheme provides for recovering the original envelope address, so that if a bounce does arrive, it can be forwarded along the reverse path—with an empty envelope sender this time. While there are other workarounds, SRS (Sender Rewriting Scheme) is a fairly general one. Its notion of reversing the path resembles the original routing dispositions for email; see below. Note that SRS rewriting causes the SPF alignment check of DMARC to fail, by design; a message can still pass DMARC through its DKIM signature. The rewriting scheme SRS is a form of variable envelope return path (VERP) inasmuch as it encodes the original envelope sender in the local part of the rewritten address. Consider a message from alice@example.org, originally destined to bob@example.com, being forwarded to his new address bob@example.net: the forwarder at example.com rewrites the envelope sender alice@example.org to SRS0=HHH=TT=example.org=alice@example.com, where HHH is a short cryptographic hash and TT a timestamp (the example follows Shevek's SRS specification). With respect to VERP, the local part (alice) is moved after her domain name
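The shape of such a rewritten address can be produced in a few lines of code. This is a simplified sketch only: real SRS implementations use a base32-encoded rolling timestamp and a truncated keyed hash exactly as specified by Shevek, and the secret and field widths below are illustrative assumptions:

```python
import hmac, hashlib, time

SECRET = b"forwarder-secret-key"  # hypothetical per-site secret

def srs0(sender: str, forwarder_domain: str) -> str:
    """Rewrite an envelope sender into a simplified SRS0 address."""
    local, _, domain = sender.partition("@")
    tt = format(int(time.time() // 86400) % 1024, "03x")        # stand-in timestamp field
    digest = hmac.new(SECRET, f"{tt}{domain}{local}".encode(), hashlib.sha1)
    hh = digest.hexdigest()[:4]                                  # truncated hash resists forgery
    return f"SRS0={hh}={tt}={domain}={local}@{forwarder_domain}"

print(srs0("alice@example.org", "example.com"))
# e.g. SRS0=1c2f=0a3=example.org=alice@example.com
```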
https://en.wikipedia.org/wiki/Microdialysis
Microdialysis is a minimally invasive sampling technique that is used for continuous measurement of free, unbound analyte concentrations in the extracellular fluid of virtually any tissue. Analytes may include endogenous molecules (e.g. neurotransmitters, hormones, glucose) to assess their biochemical functions in the body, or exogenous compounds (e.g. pharmaceuticals) to determine their distribution within the body. The microdialysis technique requires the insertion of a small microdialysis catheter (also referred to as a microdialysis probe) into the tissue of interest. The microdialysis probe is designed to mimic a blood capillary and consists of a shaft with a semipermeable hollow fiber membrane at its tip, which is connected to inlet and outlet tubing. The probe is continuously perfused with an aqueous solution (perfusate) that closely resembles the (ionic) composition of the surrounding tissue fluid, at a low flow rate of approximately 0.1–5 μL/min. Once inserted into the tissue or (body) fluid of interest, small solutes can cross the semipermeable membrane by passive diffusion. The direction of the analyte flow is determined by the respective concentration gradient and allows the usage of microdialysis probes as sampling as well as delivery tools. The solution leaving the probe (dialysate) is collected at certain time intervals for analysis. History The microdialysis principle was first employed in the early 1960s, when push-pull cannulas and dialysis sacs were implanted into animal tissues, especially into rodent brains, to directly study the tissues' biochemistry. While these techniques had a number of experimental drawbacks, such as the number of samples per animal or no/limited time resolution, the invention of continuously perfused dialytrodes in 1972 helped to overcome some of these limitations. Further improvement of the dialytrode concept resulted in the invention of the "hollow fiber", a tubular semipermeable membrane with a diameter of ~200-300μm
https://en.wikipedia.org/wiki/CRT%20projector
A CRT projector is a video projector that uses a small, high-brightness cathode ray tube (CRT) as the image-generating element. The image is then focused and enlarged onto a screen using a lens mounted in front of the CRT face. The first color CRT projectors came out in the early 1950s. Most modern CRT projectors are color and have three separate CRTs (instead of a single color CRT), each with its own lens, to achieve color images. The red, green and blue portions of the incoming video signal are processed and sent to the respective CRTs, whose images are focused by their lenses to achieve the overall picture on the screen. Various designs have made it to production, including the "direct" CRT-lens design, and the Schmidt CRT, which employed a phosphor screen that illuminates a perforated spherical mirror, all within an evacuated cathode ray tube. The image in the Sinclair Microvision flat CRT is viewed from the same side of the phosphor struck by the electron beam. The other side of the screen can be connected directly to a heat sink, allowing the projector to run at much brighter power levels than the more common CRT arrangement. Though systems utilizing projected video at one time almost exclusively used CRT projectors, they have largely been replaced by other technologies such as LCD projection and Digital Light Processing. Improvements in these digital video projectors, and their subsequent increased availability and desirability, resulted in a drastic decline of CRT projector sales by the year 2009. Few, if any, new units are now manufactured, though a number of installers do sell refurbished units, generally higher-end 8" and 9" models. Some of the first CRT projection tubes were made in 1933, and by 1938 CRT projectors were already in use in theaters. Advantages and disadvantages Advantages Long service life; CRTs maintain good brightness to 10,000 hours, although this depends on the contrast adjustment setting of the projector. A projector that is set to
https://en.wikipedia.org/wiki/Veritas%20Storage%20Foundation
Veritas Storage Foundation (VSF), previously known as Veritas Foundation Suite, is a computer software product made by Veritas Software that combines Veritas Volume Manager (VxVM) and Veritas File System (VxFS) to provide online-storage management. Symantec Corporation developed and maintained VSF until January 29, 2016, at which point Veritas and Symantec separated. The latest product version, 7.0, was re-branded as "Veritas InfoScale 7.0". Veritas Storage Foundation provides: Dynamic storage tiering (DST) Dynamic multipathing (DMP) RAID support Major releases Veritas Storage Foundation was also packaged in bundles, such as combinations with Veritas Cluster Server, editions for databases and for Oracle RAC, and Veritas Cluster File System. Veritas InfoScale Enterprise 7.0, December 2015 Veritas Storage Foundation 6.0, December 2011 Veritas Storage Foundation 5.1, December 2009 Veritas Storage Foundation Basic 4.x and 5.x, February 2007, a free version that imposed usage limits Veritas Storage Foundation 5.0, July 2006 Veritas Storage Foundation 4.3 (Windows-only release), August 2005 Veritas Storage Foundation 4.2 (Windows-only release), December 2004 Supports Microsoft Multipath I/O (MPIO) (Windows 2003 only) Includes Veritas Volume Replicator (VVR) Veritas Storage Foundation 4.1, May 2004 Veritas Storage Foundation 4.0 Veritas Foundation Suite 3.5 Veritas Foundation Suite 3.4 Veritas Foundation Suite 2.2 Supported OS platforms included AIX, Solaris, HP-UX, Red Hat Linux, SUSE Linux and Microsoft Windows. See also Veritas Volume Manager (VxVM) Veritas File System (VxFS) Symantec Operations Readiness Tools (SORT) References Storage software
https://en.wikipedia.org/wiki/Propidium%20iodide
Propidium iodide (or PI) is a fluorescent intercalating agent that can be used to stain cells and nucleic acids. PI binds to DNA by intercalating between the bases with little or no sequence preference. When in an aqueous solution, PI has a fluorescent excitation maximum of 493 nm (blue-green), and an emission maximum of 636 nm (red). After binding DNA, the quantum yield of PI is enhanced 20-30 fold, and the excitation/emission maximum of PI is shifted to 535 nm (green) / 617 nm (orange-red). Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualize the nucleus and other DNA-containing organelles. Propidium Iodide is not membrane-permeable, making it useful to differentiate necrotic, apoptotic and healthy cells based on membrane integrity. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining. PI is widely used in fluorescence staining and visualization of the plant cell wall. See also Viability assay Vital stain SYBR Green I Ethidium bromide References Flow cytometry DNA-binding substances Iodides Phenanthridine dyes Staining dyes
https://en.wikipedia.org/wiki/Wirth%27s%20law
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster. The adage is named after Niklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software". History Wirth attributed the saying to Martin Reiser, who in the preface to his book on the Oberon System wrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness." Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987. Wirth cites two contributing factors behind the acceptance of ever-growing software: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have". Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people are increasingly misinterpreting complexity as sophistication, that "these details are cute but not essential, and they have a hidden cost". As a result, he calls for the creation of "leaner" software and pioneered the development of Oberon, a software system developed between 1986 and 1989 and built up from nothing but the hardware. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience. Other names The law was restated in 2009 and attributed to Google co-founder Larry Page. It has been referred to as Page's law. The first use of that name is attributed to fellow Google co-founder Sergey Brin at the 2009 Google I/O Conference. Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away" and Andy and
https://en.wikipedia.org/wiki/Naturalisation%20%28biology%29
Naturalisation (or naturalization) is the ecological phenomenon through which a species, taxon, or population of exotic (as opposed to native) origin integrates into a given ecosystem, becoming capable of reproducing and growing in it, and proceeds to disseminate spontaneously. In some instances, the presence of a species in a given ecosystem is so ancient that it cannot be presupposed whether it is native or introduced. Generally, any introduced species may (in the wild) either go extinct or naturalise in its new environment. Some populations do not sustain themselves reproductively, but exist because of continued influx from elsewhere. Such a non-sustaining population, or the individuals within it, are said to be adventive. Cultivated plants are a major source of adventive populations. The above refers to naturalize as an intransitive verb, as in, "The species naturalized". In North America it is common to use naturalize as a transitive verb, as in, "City staff naturalized the park". This means to allow an environment to revert to its natural state. Botany In botany, naturalisation is the situation in which an exogenous plant reproduces and disperses on its own in a new environment. For example, northern white cedar is naturalised in the United Kingdom, where it reproduces on its own, while it is not in France, where human intervention via cuttings or seeds are essential for its dissemination. Two categories of naturalisation are defined from two distinct parameters: one, archaeonaturalised, refers to introduction before a given time (introduced over a hundred years ago), while the second, amphinaturalised or eurynaturalised, implies a notion of spatial extension (taxon assimilated indigenous and present over a vast space, opposed to stenonaturalised). Degrees of naturalisation The degrees of naturalisation are defined in relation to the status of nativity or introduction of taxons or species; Accidental taxon: non-native taxon growing spontaneously,
https://en.wikipedia.org/wiki/Broadcast%20encryption
Broadcast encryption is the cryptographic problem of delivering encrypted content (e.g. TV programs or data on DVDs) over a broadcast channel in such a way that only qualified users (e.g. subscribers who have paid their fees or DVD players conforming to a specification) can decrypt the content. The challenge arises from the requirement that the set of qualified users can change in each broadcast emission, and therefore revocation of individual users or user groups should be possible using broadcast transmissions only, without affecting any remaining users. As efficient revocation is the primary objective of broadcast encryption, solutions are also referred to as revocation schemes. Rather than directly encrypting the content for qualified users, broadcast encryption schemes distribute keying information that allows qualified users to reconstruct the content encryption key, whereas revoked users find insufficient information to recover the key. The typical setting considered is that of a unidirectional broadcaster and stateless users (i.e., users that do not keep state from previous messages by the broadcaster), which is especially challenging. In contrast, the scenario where users are supported with a bi-directional communication link with the broadcaster and thus can more easily maintain their state, and where users are not only dynamically revoked but also added (joined), is often referred to as multicast encryption. The problem of practical broadcast encryption was first formally studied by Amos Fiat and Moni Naor in 1994. Since then, several solutions have been described in the literature, including combinatorial constructions, one-time revocation schemes based on secret sharing techniques, and tree-based constructions. In general, they offer various trade-offs between the increase in the size of the broadcast, the number of keys that each user needs to store, and the feasibility of an unqualified user or a collusion of unqualified users being able to
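To make the key-distribution idea concrete, here is a deliberately naive sketch: the broadcaster wraps one copy of the content key per qualified user, so revocation is simply omission from the broadcast header. It illustrates the interface only; the toy XOR wrap and the flat per-user header are assumptions made for brevity, and real schemes use tree-based constructions precisely to avoid a header that grows linearly with the audience:

```python
import secrets
from hashlib import sha256

def wrap(user_key: bytes, content_key: bytes) -> bytes:
    """Toy key wrap: XOR with a hash-derived pad (illustration only)."""
    pad = sha256(user_key).digest()
    return bytes(a ^ b for a, b in zip(pad, content_key))

user_keys = {uid: secrets.token_bytes(32) for uid in range(5)}  # long-term per-device keys
revoked = {3}
content_key = secrets.token_bytes(32)

# Broadcast header: one wrapped copy of the content key per qualified user.
header = {uid: wrap(key, content_key)
          for uid, key in user_keys.items() if uid not in revoked}

# A qualified user unwraps (XOR is its own inverse); a revoked user finds nothing usable.
assert wrap(user_keys[0], header[0]) == content_key
assert 3 not in header
```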
https://en.wikipedia.org/wiki/IEC%2060228
IEC 60228 is the International Electrotechnical Commission (IEC)'s international standard on conductors of insulated cables. The current version is the third edition, 2004-11. Among other things, it defines a set of standard wire cross-sectional areas: In engineering applications, it is often most convenient to describe a wire in terms of its cross-section area, rather than its diameter, because the cross section is directly proportional to its strength and weight, and inversely proportional to its resistance. The cross-sectional area is also related to the maximum current that a metallic wire can carry safely. This document is considered fundamental in that it does not contain reference to any other standard. Description The document describes several aspects of the conductors for electrical cables. Class This refers to the flexibility and thermal properties (i.e. operating temperature) of a conductor. Class 1: Solid conductor Class 2: Stranded conductor intended for fixed installation Class 5: Flexible conductor Class 6: Very flexible conductor Size The nominal (see below) cross-sectional area for standard conductors, including the following: Class 2: Minimum number of strands required to make a particular conductor size Class 5 and 6: Maximum diameter of any component strand of the conductor Resistance The maximum permissible resistance per unit length (in ohms per kilometre – Ω/km) of each conductor size, class and type (both plain copper and metal coated) Purpose of the document This document and its precursors were created due to a need for a standard definition of cable conductor size. The main problem is that not all copper has the same resistivity value, so, for example, a 4 mm2 conductor from two different suppliers may have different resistance values. Instead, this document describes conductors by their nominal size, determined by resistance rather than physical dimensions. This is a key distinction as it makes a standardized definition of conductors based
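The relationship between cross-section and resistance that underlies the nominal-size definition is easy to illustrate. A rough sketch, assuming a typical handbook resistivity for annealed copper at 20 °C; the standard's own tabulated maxima sit somewhat above this ideal solid-conductor figure:

```python
RHO_CU = 1.72e-8  # resistivity of annealed copper at 20 degC, ohm*m (typical handbook value)

def ohm_per_km(area_mm2: float) -> float:
    """Ideal DC resistance per kilometre of a solid copper conductor."""
    area_m2 = area_mm2 * 1e-6
    return RHO_CU / area_m2 * 1000  # ohms per 1000 m

for size in (1.5, 2.5, 4, 10):      # common nominal sizes in mm^2
    print(f"{size} mm^2 -> {ohm_per_km(size):.2f} ohm/km")  # 4 mm^2 gives ~4.3 ohm/km
```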
https://en.wikipedia.org/wiki/Court-bouillon
Court-bouillon or court bouillon (in Louisiana, pronounced coo-bee-yon) is a quickly-cooked broth used for poaching other foods, most commonly fish or seafood. It is also sometimes used for poaching vegetables, eggs, sweetbreads, cockscombs, and delicate meats. It includes seasonings and salt but lacks animal gelatin. Description Court bouillon loosely translates as 'briefly boiled liquid' (French court) or "short broth" because the cooking time is brief in comparison with a rich and complex stock, and generally is not served as part of the finished dish. Because delicate foods do not cook for very long, it is prepared before the foods are added. Typically, cooking times do not exceed 60 minutes. Although a court bouillon may become the base for a stock or fumet, in traditional terms it is differentiated by the inclusion of acidulating ingredients such as wine, vinegar, or lemon juice. In addition to contributing their own flavor, acids help to draw flavors from the vegetable aromatics during the short preparation time prior to use. Court bouillon also includes salt and lacks animal gelatin. Types Traditionally, court bouillon is water, salt, white wine, vegetable aromatics (mirepoix of carrot, onion, and celery), and flavored with bouquet garni and black pepper. Court-bouillon need not be elaborate. Court bouillon used to prepare lobster may be as simple as water, salt, lemon juice, and perhaps thyme and bay leaf; that for poached eggs may be salt, water, and vinegar. In Louisiana Creole and Cajun cuisines, court-bouillon — often spelled "courtbouillon" — refers to a thick, rich fish stew most often prepared with redfish and thickened with roux. See also Nage References Further reading McGee, Harold. On Food and Cooking: The Science and Lore of the Kitchen. (, 2004) Larousse Gastronomique Oxford Companion to Food French cuisine Cajun cuisine Food ingredients
https://en.wikipedia.org/wiki/IBM%20Planning%20Analytics
IBM Planning Analytics powered by TM1 (formerly IBM Cognos TM1, formerly Applix TM1, formerly Sinper TM/1) is a business performance management software suite designed to implement collaborative planning, budgeting and forecasting solutions, interactive "what-if" analyses, as well as analytical and reporting applications. The database server component of the software platform retains its historical name TM1. Data is stored in in-memory multidimensional OLAP cubes, generally at the "leaf" level, and consolidated on demand. In addition to data, cubes can include encoded rules which define any on-demand calculations. By design, computations (typically aggregation along dimensional hierarchies using weighted summation) on the data are performed in near real-time, without the need to precalculate, due to a highly performant database design and calculation engine. These properties also allow the data to be updated frequently and by multiple users. TM1 is an example of a class of software products which implement the principles of the functional database model. The IBM Planning Analytics platform, in addition to the TM1 database server, includes an ETL tool, server management and monitoring tools and a number of user front ends which provide capabilities designed for common business planning and budgeting requirements, including workflow, adjustments, commentary, etc. The vendor currently offers the software both as a standalone on-premises product and in the SaaS model on the cloud. History While working at Exxon, Lilly Whaley suggested developing a planning system using the IBM mainframe time sharing option (TSO) to replace the previous IMS based planning system and thereby significantly reduce running costs. Manuel "Manny" Perez, who had been in IT for most of his career, took it upon himself to develop a prototype. Right away he realized that in order to provide the multidimensionality and interactivity necessary it would be necessary to keep the data structures i
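The on-demand consolidation described above can be illustrated with a toy example; the sketch below uses plain unit-weight summation over a dictionary of leaf cells, which conveys the principle only and is not TM1's actual engine or API:

```python
# Leaf cells of a tiny three-dimensional cube, keyed by (year, quarter, region).
# Values are stored only at the leaves; totals are derived when asked for.
leaves = {
    ("2024", "Q1", "EMEA"): 120.0,
    ("2024", "Q1", "APAC"):  80.0,
    ("2024", "Q2", "EMEA"): 150.0,
}

def consolidate(year, quarter=None, region=None):
    """Sum all leaves matching the requested coordinates (None = roll up)."""
    return sum(value for (y, q, r), value in leaves.items()
               if y == year and quarter in (None, q) and region in (None, r))

print(consolidate("2024"))          # 350.0, computed on demand, never precalculated
print(consolidate("2024", "Q1"))    # 200.0
```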
https://en.wikipedia.org/wiki/History%20of%20thermodynamics
The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Owing to the relevance of thermodynamics to much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics. Antiquity The ancients viewed heat as something related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies. Ancient Indian philosophy, including Vedic philosophy, held that the five classical elements (or pancha mahā bhūta) are the basis of all cosmic creation. In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water. Vacuum-abhorrence The 5th century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature, uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was
https://en.wikipedia.org/wiki/Genealogical%20DNA%20test
A genealogical DNA test is a DNA-based genetic test used in genetic genealogy that looks at specific locations of a person's genome in order to find or verify ancestral genealogical relationships, or (with lower reliability) to estimate the ethnic mixture of an individual. Since different testing companies use different ethnic reference groups and different matching algorithms, ethnicity estimates for an individual vary between tests, sometimes dramatically. Three principal types of genealogical DNA tests are available, with each looking at a different part of the genome and being useful for different types of genealogical research: autosomal (atDNA), mitochondrial (mtDNA), and Y-chromosome (Y-DNA). Autosomal tests may result in a large number of DNA matches to both males and females who have also tested with the same company. Each match will typically show an estimated degree of relatedness, i.e., a close family match, 1st-2nd cousins, 3rd-4th cousins, etc. The furthest degree of relationship is usually the "6th-cousin or further" level. However, due to the random nature of which, and how much, DNA is inherited by each tested person from their common ancestors, precise relationship conclusions can only be made for close relations. Traditional genealogical research, and the sharing of family trees, is typically required for interpretation of the results. Autosomal tests are also used in estimating ethnic mix. MtDNA and Y-DNA tests are much more objective. However, they give considerably fewer DNA matches, if any (depending on the company doing the testing), since they are limited to relationships along a strict female line and a strict male line respectively. MtDNA and Y-DNA tests are utilized to identify archeological cultures and migration paths of a person's ancestors along a strict mother's line or a strict father's line. Based on MtDNA and Y-DNA, a person's haplogroup(s) can be identified. The mtDNA test can be taken by both males and females, because everyo
https://en.wikipedia.org/wiki/Advanced%20Telecommunications%20Computing%20Architecture
Advanced Telecommunications Computing Architecture (ATCA or AdvancedTCA) is the largest specification effort in the history of the PCI Industrial Computer Manufacturers Group (PICMG), with more than 100 companies participating. Known as AdvancedTCA, the official specification designation PICMG 3.x (see below) was ratified by the PICMG organization in December 2002. AdvancedTCA is targeted primarily to requirements for "carrier grade" communications equipment, but has recently expanded its reach into more ruggedized applications geared toward the military/aerospace industries as well. This series of specifications incorporates the latest trends in high speed interconnect technologies, next-generation processors, and improved Reliability, Availability and Serviceability (RAS). Mechanical specifications An AdvancedTCA board (blade) is 280 mm deep and 322 mm high. The boards have a metal front panel and a metal cover on the bottom of the printed circuit board to limit electromagnetic interference and to limit the spread of fire. The locking injector-ejector handle (lever) actuates a microswitch to let the Intelligent Platform Management Controller (IPMC) know that an operator wants to remove a board, or that the board has just been installed, thus activating the hot-swap procedure. AdvancedTCA boards support the use of PCI Mezzanine Card (PMC) or Advanced Mezzanine Card (AMC) expansion mezzanines. The shelf supports RTMs (Rear Transition Modules). RTMs plug into the back of the shelf in slot locations that match the front boards. The RTM and the front board are interconnected through a Zone-3 connector. The Zone-3 connector is not defined by the AdvancedTCA specification. Each shelf slot is 30.48 mm wide. This allows for 14-board chassis to be installed in a 19-inch rack-mountable system and 16 boards in an ETSI rack-mountable system. A typical 14-slot system is 12 or 13 rack units high. The large AdvancedTCA shelves are targeted to the telecommunication market s
https://en.wikipedia.org/wiki/Infrared%20multiphoton%20dissociation
Infrared multiple photon dissociation (IRMPD) is a technique used in mass spectrometry to fragment molecules in the gas phase, usually for structural analysis of the original (parent) molecule. How it works An infrared laser is directed through a window into the vacuum of the mass spectrometer where the ions are. The mechanism of fragmentation involves the absorption by a given ion of multiple infrared photons. The parent ion becomes excited into more energetic vibrational states until a bond is broken, resulting in gas-phase fragments of the parent ion. In the case of powerful laser pulses, the dissociation proceeds via inner-valence ionization of electrons. IRMPD is most often used in Fourier transform ion cyclotron resonance mass spectrometry. Infrared photodissociation spectroscopy By applying intense tunable IR lasers, like IR-OPOs or IR free electron lasers, the wavelength dependence of the IRMPD yield can be studied. This infrared photodissociation spectroscopy allows for the measurement of vibrational spectra of (unstable) species that can only be prepared in the gas phase. Such species include molecular ions but also neutral species like metal clusters that can be gently ionized after interaction with the IR light for their mass spectrometric detection. Analytical applications of IRMPD The combination of mass spectrometry and IRMPD with tunable lasers (IR ion spectroscopy) is increasingly recognized as a powerful tool for small-molecule identification. Examples are metabolomics, where biomarkers are identified in body fluids (urine, blood, cerebrospinal fluid), and forensic science, where isomeric designer drugs have been identified in seized samples. Isotope separation Due to the relatively large differences in IR absorption frequencies between molecules containing different isotopes, this technique has been suggested as a way to perform isotope separation of difficult-to-separate isotopes in a single pass. For e
https://en.wikipedia.org/wiki/Stepper
A stepper is a device used in the manufacture of integrated circuits (ICs) that is similar in operation to a slide projector or a photographic enlarger. Stepper is short for step-and-repeat camera. Steppers are an essential part of the complex process, called photolithography, which creates millions of microscopic circuit elements on the surface of silicon wafers out of which chips are made. These chips form the heart of ICs such as computer processors, memory chips, and many other devices. The stepper emerged in the late 1970s but did not become widespread until the 1980s. This was because it was replacing an earlier technology, the mask aligner. Aligners imaged the entire surface of a wafer at the same time, producing many chips in a single operation. In contrast, the stepper imaged only one chip at a time, and was thus much slower to operate. The stepper eventually displaced the aligner when the relentless forces of Moore's Law demanded that smaller feature sizes be used. Because the stepper imaged only one chip at a time it offered higher resolution and was the first technology to exceed the 1 micron limit. The addition of auto-alignment systems reduced the setup time needed to image multiple ICs, and by the late 1980s, the stepper had almost entirely replaced the aligner in the high-end market. The stepper was itself replaced by the step-and-scan systems (scanners) which offered an additional order of magnitude resolution advance, and work by scanning only a small portion of the mask for an individual IC, and thus require much longer operation times than the original steppers. These became widespread during the 1990s and essentially universal by the 2000s. Today, step-and-scan systems are so widespread that they are often simply referred to as steppers. History 1957: Attempts to miniaturize electronic circuits started back in 1957 when Jay Lathrop and James Nall of the U.S. Army's Diamond Ordnance Fuse Laboratories were granted a US2890395A patent for a ph
https://en.wikipedia.org/wiki/Conditioned%20taste%20aversion
Conditioned taste aversion occurs when an animal acquires an aversion to the taste of a food that was paired with aversive stimuli. The Garcia effect explains that the aversion develops more strongly for stimuli that cause nausea than other stimuli. This is considered an adaptive trait or survival mechanism that enables the organism to avoid poisonous substances (e.g., poisonous berries) before they cause harm. The aversion reduces consuming the same substance (or something that tastes similar) in the future, thus avoiding poisoning. Studies on conditioned taste aversion that involved irradiating rats were conducted in the 1950s by Dr. John Garcia, leading to it sometimes being called the Garcia effect. Conditioned taste aversion can occur when sickness is merely coincidental to, and not caused by, the substance consumed. For example, a person who becomes very sick after consuming vodka-and-orange-juice cocktails may then become averse to the taste of orange juice, even though the sickness was caused by the over-consumption of alcohol. Under these circumstances, conditioned taste aversion is sometimes known as the "Sauce-Bearnaise Syndrome", a term coined by Seligman and Hager. Garcia's study While studying the effects of radiation on various behaviors in the mid to late 1950s, Dr. Garcia noticed that rats developed an aversion to substances consumed prior to being irradiated. To examine this, Garcia put together a study in which three groups of rats were given sweetened water followed by either no radiation, mild radiation, or strong radiation. When rats were subsequently given a choice between sweetened water and regular tap water, rats who had been exposed to radiation drank much less sweetened water than those who had not. This finding was surprising in that the aversion could occur after just a single trial and with a long delay between the stimuli. Most research at the time found that learning required multiple trials and shorter latencies. Many scienti
https://en.wikipedia.org/wiki/Clay%20Shirky
Clay Shirky (born 1964) is an American writer, consultant and teacher on the social and economic effects of Internet technologies and journalism. In 2017 he was appointed Vice Provost of Educational Technologies of New York University (NYU), after serving as Chief Information Officer at NYU Shanghai from 2014 to 2017. He also is an associate professor at the Arthur L. Carter Journalism Institute and Associate Arts Professor at the Tisch School of the Arts' Interactive Telecommunications Program. His courses address, among other things, the interrelated effects of the topology of social networks and technological networks, how our networks shape culture and vice versa. He has written and been interviewed about the Internet since 1996. His columns and writings have appeared in Business 2.0, The New York Times, The Wall Street Journal, the Harvard Business Review and Wired. Shirky divides his time between consulting, teaching, and writing on the social and economic effects of Internet technologies. His consulting practice is focused on the rise of decentralized technologies such as peer-to-peer, web services, and wireless networks that provide alternatives to the wired client–server infrastructure that characterizes the World Wide Web. He is a member of the Wikimedia Foundation's advisory board. In The Long Tail, Chris Anderson calls Shirky "a prominent thinker on the social and economic effects of Internet technologies." Education and career After graduating from Yale University with a Bachelor of Arts degree in fine art in 1986, he moved to New York. In the 1990s he founded the Hard Place Theater, a theatre company that produced non-fiction theater using only found materials such as government documents, transcripts and cultural records and also worked as a lighting designer for other theater and dance companies, including the Wooster Group, Elevator Repair Service and Dana Reitz. During this time, Shirky was vice-president of the New York chapter of the Electron
https://en.wikipedia.org/wiki/Cell%20therapy
Cell therapy (also called cellular therapy, cell transplantation, or cytotherapy) is a therapy in which viable cells are injected, grafted or implanted into a patient in order to effectuate a medicinal effect, for example, by transplanting T-cells capable of fighting cancer cells via cell-mediated immunity in the course of immunotherapy, or grafting stem cells to regenerate diseased tissues. Cell therapy originated in the nineteenth century when scientists experimented by injecting animal material in an attempt to prevent and treat illness. Although such attempts produced no positive benefit, further research found in the mid twentieth century that human cells could be used to help prevent the human body rejecting transplanted organs, leading in time to successful bone marrow transplantation as has become common practice in treatment for patients that have compromised bone marrow after disease, infection, radiation or chemotherapy. In recent decades, however, stem cell and cell transplantation has gained significant interest by researchers as a potential new therapeutic strategy for a wide range of diseases, in particular for degenerative and immunogenic pathologies. Background Cell therapy can be defined as therapy in which cellular material is injected or otherwise transplanted into a patient. The origins of cell therapy can perhaps be traced to the nineteenth century, when Charles-Édouard Brown-Séquard (1817–1894) injected animal testicle extracts in an attempt to stop the effects of aging. In 1931 Paul Niehans (1882–1971) – who has been called the inventor of cell therapy – attempted to cure a patient by injecting material from calf embryos. Niehans claimed to have treated many people for cancer using this technique, though his claims have never been validated by research. In 1953 researchers found that laboratory animals could be helped not to reject organ transplants by pre-inoculating them with cells from donor animals; in 1968, in Minnesota, the first su
https://en.wikipedia.org/wiki/Encryption%20software
Encryption software is software that uses cryptography to prevent unauthorized access to digital information. Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet.

Classification
There are many software products which provide encryption. Software encryption uses a cipher to obscure the content into ciphertext. One way to classify this type of software is by the type of cipher used. Ciphers can be divided into two categories: public key ciphers (also known as asymmetric ciphers), and symmetric key ciphers. Encryption software can be based on either public key or symmetric key encryption. Another way to classify software encryption is to categorize its purpose. Using this approach, software encryption may be classified into software which encrypts "data in transit" and software which encrypts "data at rest". Data in transit generally uses public key ciphers, and data at rest generally uses symmetric key ciphers. Symmetric key ciphers can be further divided into stream ciphers and block ciphers. Stream ciphers typically encrypt plaintext a bit or byte at a time, and are most commonly used to encrypt real-time communications, such as audio and video information. The key is used to establish the initial state of a keystream generator, and the output of that generator is used to encrypt the plaintext. Block cipher algorithms split the plaintext into fixed-size blocks and encrypt one block at a time. For example, AES processes 16-byte blocks, while its predecessor DES encrypted blocks of eight bytes. There are also well-known cases where PKI is used for data in transit as well as for data at rest.

Data in transit
Data in transit is data that is being sent over a computer network. When the data is between two endpoints, any confidential information may be vulnerable. The payload (confidential information) can be encrypted to secure its confidentiality, as well as its integrity and valid
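To make the keystream-generator pattern described above concrete, here is a minimal, illustrative sketch in Python. It is entirely my own toy example (not a production cipher and not drawn from any standard): it derives a keystream by hashing a secret key together with a counter, then XORs the keystream with the plaintext.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Yield n pseudorandom bytes derived from the key (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR with the keystream (the operation is its own inverse)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_stream(b"secret key", msg)
assert xor_stream(b"secret key", ct) == msg  # decryption recovers the plaintext
```

A real stream cipher would also mix a per-message nonce into the keystream so that the same key never produces the same keystream twice; this sketch shows only the core stream-versus-block distinction.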
https://en.wikipedia.org/wiki/Bruch%27s%20membrane
Bruch's membrane or lamina vitrea is the innermost layer of the choroid of the eye. It is also called the vitreous lamina or membrana vitrea, because of its glassy microscopic appearance. It is 2–4 μm thick.

Anatomy
Structure
Bruch's membrane consists of five layers (from inside to outside):
the basement membrane of the retinal pigment epithelium
the inner collagenous zone
a central band of elastic fibers
the outer collagenous zone
the basement membrane of the choriocapillaris

Development
The membrane grows thicker with age. Lipid-containing extracellular deposits may accumulate with age between the membrane and the basal lamina of the retinal pigment epithelium, impairing exchange of solutes and contributing to age-related pathology.

Embryology
Bruch's membrane is present by midterm in fetal development as an elastic sheet.

Function
The membrane is involved in the regulation of fluid and solute passage from the choroid to the retina.

Pathology
Bruch's membrane thickens with age, slowing the transport of metabolites. This may lead to the formation of drusen in age-related macular degeneration. There is also a buildup of deposits (basal linear deposits, or BLinD, and basal lamellar deposits, or BLamD) on and within the membrane, primarily consisting of phospholipids. The accumulation of lipids appears to be greater in the central fundus than in the periphery. This build-up seems to fragment the membrane into a lamellar structure more like puff pastry than a barrier. Inflammatory and neovascular mediators can then invite choroidal vessels to grow into and beyond the fragmented membrane. This neovascular membrane destroys the architecture of the outer retina and leads to sudden loss of central vision – wet age-related macular degeneration. Pseudoxanthoma elasticum, myopia and trauma can also cause defects in Bruch's membrane which may lead to choroidal neovascularization. Alport's syndrome, a genetic disorder affecting the alpha(IV) collagen chains, can also
https://en.wikipedia.org/wiki/Surface%20metrology
Surface metrology is the measurement of small-scale features on surfaces, and is a branch of metrology. Surface primary form, surface fractality, and surface finish (including surface roughness) are the parameters most commonly associated with the field. It is important to many disciplines and is mostly known for the machining of precision parts and assemblies which contain mating surfaces or which must operate with high internal pressures. Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers. Non-contact methods include: interferometry, digital holography, confocal microscopy, focus variation, structured light, electrical capacitance, electron microscopy, photogrammetry and non-contact profilometers. Overview The most common method is to use a diamond stylus profilometer. The stylus is run perpendicular to the lay of the surface. The probe usually traces along a straight line on a flat surface or in a circular arc around a cylindrical surface. The length of the path that it traces is called the measurement length. The wavelength of the lowest frequency filter that will be used to analyze the data is usually defined as the sampling length. Most standards recommend that the measurement length should be at least seven times longer than the sampling length, and according to the Nyquist–Shannon sampling theorem it should be at least two times longer than the wavelength of interesting features. The assessment length or evaluation length is the length of data that will be used for analysis. Commonly one sampling length is discarded from each end of the measurement length. 3D measurements can be made with a profilometer by scanning over a 2D area on the surface. The disadvantage of a profilometer is that it is not accurate when the size of the features of the surface are close to the same size as the stylus. Another disadvantage is
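To make the roughness side of this concrete, here is a small Python sketch (my own illustration, not taken from any standard): it computes the arithmetic mean roughness Ra of a sampled profile as the mean absolute deviation from the mean line, and checks the rule of thumb quoted above that the measurement length should cover at least seven sampling lengths.

```python
def roughness_ra(heights):
    """Arithmetic mean roughness Ra: mean absolute deviation from the mean line."""
    mean_line = sum(heights) / len(heights)
    return sum(abs(z - mean_line) for z in heights) / len(heights)

def check_lengths(measurement_length, sampling_length):
    """Rule of thumb from most standards: measure at least 7 sampling lengths."""
    return measurement_length >= 7 * sampling_length

profile = [0.2, -0.1, 0.05, -0.3, 0.15, 0.0, -0.05, 0.1]  # heights in micrometres
print(roughness_ra(profile))        # ~0.12 (micrometres)
print(check_lengths(5.6, 0.8))      # True: 5.6 mm covers seven 0.8 mm cutoffs
```

A real instrument would first apply the low-pass/high-pass filtering implied by the sampling length; this sketch keeps only the final Ra arithmetic.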
https://en.wikipedia.org/wiki/Conway%27s%20law
Conway's law is an adage linking the communication structure of organizations to the systems they design. It is named after the computer programmer Melvin Conway, who introduced the idea in 1967. His original wording was: The law is based on the reasoning that in order for a product to function, the authors and designers of its component parts must communicate with each other in order to ensure compatibility between the components. Therefore, the technical structure of a system will reflect the social boundaries of the organizations that produced it, across which communication is more difficult. In colloquial terms, it means complex products end up "shaped like" the organizational structure they are designed in or designed for. The law is applied primarily in the field of software architecture, though Conway directed it more broadly and its assumptions and conclusions apply to most technical fields. Variations Eric S. Raymond, an open-source advocate, restated Conway's law in The New Hacker's Dictionary, a reference work based on the Jargon File. The organization of the software and the organization of the software team will be congruent, he said. Summarizing an example in Conway's paper, Raymond wrote: Raymond further presents Tom Cheatham's amendment of Conway's Law, stated as: Yourdon and Constantine, in their 1979 book on Structured Design, gave a more strongly stated variation of Conway's Law: James O. Coplien and Neil B. Harrison stated in a 2004 book concerned with organizational patterns of Agile software development: More recent commentators have noted a corollary - for software projects with a long lifetime of code reuse, such as Microsoft Windows, the structure of the code mirrors not only the communication structure of the organization which created the most recent release, but also the communication structures of every previous team which worked on that code. Interpretations The law is, in a strict sense, only about correspondence; it does not
https://en.wikipedia.org/wiki/SLAM%20project
The SLAM project, which was started in 1999 by Thomas Ball and Sriram Rajamani of Microsoft Research, aimed at verifying software safety properties using model checking techniques. It was implemented in OCaml, and has been used to find many bugs in Windows Device Drivers. It is distributed as part of the Microsoft Windows Driver Foundation development kit as the Static Driver Verifier (SDV). "SLAM originally was an acronym but we found it too cumbersome to explain. We now prefer to think of 'slamming' the bugs in a program." It initially stood for "software (specifications), programming languages, abstraction, and model checking". Note that Microsoft has since re-used SLAM to stand for "Social Location Annotation Mobile". See also Abstraction model checking the BLAST model checker, a model checker similar to SLAM that uses "lazy abstraction" References External links SLAM website Formal methods OCaml software Microsoft Research
https://en.wikipedia.org/wiki/VAXstation
The VAXstation is a discontinued family of workstation computers developed and manufactured by Digital Equipment Corporation using processors implementing the VAX instruction set architecture. VAXstation systems were typically shipped with either the OpenVMS or ULTRIX operating systems. Many members of the VAXstation family had corresponding MicroVAX variants, which primarily differ by the lack of graphics hardware. VAXstation 100 The VAXstation 100 was a VAXstation-branded graphics terminal introduced in May 1983. It used a Motorola 68000 microprocessor and connected to its VAX host via Unibus. It was used for developing the X Window System. VAXstation 500 The VAXstation 500 was a VAXstation system with color graphics, introduced in March 1985. It consisted of a MicroVAX I and a Tektronix 4125 colour terminal. VAXstation 520 The VAXstation 520 was a follow-on to the VAXstation 500 which used a MicroVAX II as the host system instead of a MicroVAX I. At the time of its introduction in September 1985, a configuration with 2MB of memory, a 32MB hard disk and two 400KB floppy disk drives cost $40,790. VAXstation I Introduced in October 1984, it was code named "Seahorse", and used the KD32 CPU module containing a 4 MHz (250 ns) MicroVAX I processor. VAXstation II Code named "Mayflower", it used the KA630 CPU module containing a 5 MHz (200 ns) MicroVAX 78032 microprocessor. It was essentially a MicroVAX II in a workstation configuration. VAXstation II/RC A short-lived, lower-cost "Reduced Configuration" variant of the VAXstation II. Compared with the standard VAXstation II, a number of the slots on the backplane were filled with epoxy to limit the system's upgradability. It was discontinued when Digital discovered that enterprising customers were removing the epoxy, or replacing the backplane in order to convert the RC into a standard VAXstation II. VAXstation II/GPX Introduced in December 1985, it was code named "Caylith", and was a variant of the VAXstat
https://en.wikipedia.org/wiki/Real-Time%20Multiprogramming%20Operating%20System
Real-Time Multiprogramming Operating System (RTMOS) was a 24-bit process control operating system developed in the 1960s by General Electric that supported both real-time computing and multiprogramming. Programming was done in assembly language or Process FORTRAN. The two languages could be used in the same program, allowing programmers to alternate between the two as desired. Multiprogramming operating systems are now considered obsolete, having been replaced by multitasking. References General Electric Real-time operating systems
https://en.wikipedia.org/wiki/Interrupt%20storm
In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting.

Background
Because interrupt processing is typically a non-preemptible task in time-sharing operating systems, an interrupt storm will cause sluggish response to user input, or even appear to freeze the system completely. This state is commonly known as livelock. In such a state, the system is spending most of its resources processing interrupts instead of completing other work. To the end-user, it does not appear to be processing anything at all as there is often no output. An interrupt storm is sometimes mistaken for thrashing, since they both have similar symptoms (unresponsive or sluggish response to user input, little or no output). Common causes include: misconfigured or faulty hardware, faulty device drivers, flaws in the operating system, or metastability in one or more components. The latter condition rarely occurs outside of prototype or amateur-built hardware. Most modern hardware and operating systems have methods for mitigating the effect of an interrupt storm. For example, most Ethernet controllers implement interrupt "rate limiting", which causes the controller to wait a programmable amount of time between each interrupt it generates. When not present within the device, similar functionality is usually written into the device driver, and/or the operating system itself. The most common cause is when a device "behind" another signals an interrupt to an APIC (Advanced Programmable Interrupt Controller). Most computer peripherals generate interrupts through an APIC as the number of interrupts is almost always less (typically 15 for the modern PC) than the number of devices. The OS must then query each driver registered to that interrupt to ask if the interrupt originated from its h
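As a rough illustration of the rate-limiting mitigation described above, the following Python snippet (a simulation of my own, not code from any real driver) compares a device that raises one interrupt per packet against one that enforces a minimum spacing between interrupts, coalescing whatever arrives in the meantime:

```python
def count_interrupts(packet_times, min_gap):
    """Count interrupts for packets arriving at the given times (seconds),
    when the controller waits at least min_gap seconds between interrupts."""
    interrupts = 0
    next_allowed = 0.0
    for t in packet_times:
        if t >= next_allowed:           # fire an interrupt for this batch
            interrupts += 1
            next_allowed = t + min_gap  # later packets are coalesced until then
    return interrupts

# 10,000 packets arriving every 10 microseconds:
arrivals = [i * 10e-6 for i in range(10_000)]
print(count_interrupts(arrivals, 0.0))    # 10000: one interrupt per packet
print(count_interrupts(arrivals, 1e-3))   # 100: at most one per millisecond
```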
https://en.wikipedia.org/wiki/List%20of%20types%20of%20XML%20schemas
This is a list of notable XML schemas in use on the Internet sorted by purpose. XML schemas can be used to create XML documents for a wide range of purposes such as syndication, general exchange, and storage of data in a standard format. Bookmarks XBEL - XML Bookmark Exchange Language Brewing BeerXML - a free XML based data description standard for the exchange of brewing data Business Auto-lead Data Format - for communicating consumer purchase requests to automotive dealerships. ACORD data standards - Insurance Industry XML schemas specifications by Association for Cooperative Operations Research and Development Europass XML - XML vocabulary describing the information contained in a Curriculum Vitae (CV), Language Passport (LP) and European Skills Passport (ESP) OSCRE - Open Standards Consortium for Real Estate format for data exchange within the real estate industry UBL - Defining a common XML library of business documents (purchase orders, invoices, etc.) by Oasis XBRL Extensible Business Reporting Language for International Financial Reporting Standards (IFRS) and United States generally accepted accounting principles (GAAP) business accounting. Elections EML - Election Markup Language, is an OASIS standard to support end-to-end management of election processes. It defines over thirty schemas, for example EML 510 for vote count reporting and EML 310 for voter registration. Engineering gbXML - an open schema developed to facilitate transfer of building data stored in Building Information Models (BIMs) to engineering analysis tools. IFC-XML - Building Information Models for architecture, engineering, construction, and operations. XMI - an Object Management Group (OMG) standard for exchanging metadata information, commonly used for exchange of UML information XTCE - XML Telemetric and Command Exchange is an XML based data exchange format for spacecraft telemetry and command meta-data Financial FIXatdl - FIX algorithmic trading definition language.
https://en.wikipedia.org/wiki/ESTREAM
eSTREAM is a project to "identify new stream ciphers suitable for widespread adoption", organised by the EU ECRYPT network. It was set up as a result of the failure of all six stream ciphers submitted to the NESSIE project. The call for primitives was first issued in November 2004. The project was completed in April 2008. The project was divided into separate phases and the project goal was to find algorithms suitable for different application profiles.

Profiles
The submissions to eSTREAM fall into either or both of two profiles:
Profile 1: "Stream ciphers for software applications with high throughput requirements"
Profile 2: "Stream ciphers for hardware applications with restricted resources such as limited storage, gate count, or power consumption."
Both profiles contain an "A" subcategory (1A and 2A) with ciphers that also provide authentication in addition to encryption. In Phase 3 none of the ciphers providing authentication were considered (the NLS cipher had authentication removed from it to improve its performance).

eSTREAM portfolio
The following ciphers make up the eSTREAM portfolio:
Profile 1 (software): HC-128, Rabbit, Salsa20/12, SOSEMANUK
Profile 2 (hardware): Grain v1, MICKEY 2.0, Trivium
These are all free for any use. Rabbit was the only one that had a patent pending during the eSTREAM competition, but it was released into the public domain in October 2008. The original portfolio, published at the end of Phase 3, consisted of the above ciphers plus F-FCSR, which was in Profile 2. However, cryptanalysis of F-FCSR led to a revision of the portfolio in September 2008 which removed that cipher.

Phases
Phase 1
Phase 1 included a general analysis of all submissions with the purpose of selecting a subset of the submitted designs for further scrutiny. The designs were scrutinized based on criteria of security, performance (with respect to the block cipher AES—a US Government approved standard, as well as the other candidates), simplicity and flexibility, justification and supporting analysis, and clarity and completeness of the documentation. Submi
https://en.wikipedia.org/wiki/Mathematical%20joke
A mathematical joke is a form of humor which relies on aspects of mathematics or a stereotype of mathematicians. The humor may come from a pun, or from a double meaning of a mathematical term, or from a lay person's misunderstanding of a mathematical concept. Mathematician and author John Allen Paulos in his book Mathematics and Humor described several ways that mathematics, generally considered a dry, formal activity, overlaps with humor, a loose, irreverent activity: both are forms of "intellectual play"; both have "logic, pattern, rules, structure"; and both are "economical and explicit". Some performers combine mathematics and jokes to entertain and/or teach math. Humor of mathematicians may be classified into the esoteric and exoteric categories. Esoteric jokes rely on the intrinsic knowledge of mathematics and its terminology. Exoteric jokes are intelligible to the outsiders, and most of them compare mathematicians with representatives of other disciplines or with common folk. Pun-based jokes Some jokes use a mathematical term with a second non-technical meaning as the punchline of a joke. Occasionally, multiple mathematical puns appear in the same jest: This invokes four double meanings: adder (snake) vs. addition (algebraic operation); multiplication (biological reproduction) vs. multiplication (algebraic operation); log (a cut tree trunk) vs. log (logarithm); and table (set of facts) vs. table (piece of furniture). Other jokes create a double meaning from a direct calculation involving facetious variable names, such as this retold from Gravity's Rainbow: The first part of this joke relies on the fact that the primitive (formed when finding the antiderivative) of the function 1/x is log(x). The second part is then based on the fact that the antiderivative is actually a class of functions, requiring the inclusion of a constant of integration, usually denoted as C—something which calculus students may forget. Thus, the indefinite integral of 1/cabin i
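The calculus behind the "log cabin" punchline can be written out explicitly; this is my own LaTeX rendering of the standard fact the joke rests on:

```latex
% The fact the joke rests on: the antiderivative of 1/x is log(x),
% up to a constant of integration C.
\[
  \int \frac{d(\mathrm{cabin})}{\mathrm{cabin}} = \log(\mathrm{cabin}) + C
\]
% With the constant read as "sea", a log cabin plus C becomes a houseboat.
```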
https://en.wikipedia.org/wiki/Active%20shape%20model
Active shape models (ASMs) are statistical models of the shape of objects which iteratively deform to fit to an example of the object in a new image, developed by Tim Cootes and Chris Taylor in 1995. The shapes are constrained by the point distribution model (PDM), a statistical shape model, to vary only in ways seen in a training set of labelled examples. The shape of an object is represented by a set of points (controlled by the shape model). The ASM algorithm aims to match the model to a new image. The ASM works by alternating the following steps:
Generate a suggested shape by looking in the image around each point for a better position for the point. This is commonly done using what is called a "profile model", which looks for strong edges or uses the Mahalanobis distance to match a model template for the point.
Conform the suggested shape to the point distribution model, commonly called a "shape model" in this context.
The technique has been widely used to analyse images of faces, mechanical assemblies and medical images (in 2D and 3D). It is closely related to the active appearance model. It is also known as a "Smart Snakes" method, since it is an analog to an active contour model which would respect explicit shape constraints.

See also
Procrustes analysis
Point distribution model

References

External links
Matlab code open-source ASM implementation.
Description of AAMs from Manchester University.
Tim Cootes' home page (one of the original co-inventors of ASMs).
Source code for ASMs (the "stasm" library).
ASMlib-OpenCV, An open source C++/OpenCV implementation of ASM.

Computer vision
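A minimal sketch of the two alternating ASM steps in Python/NumPy, written by me as an illustration: the profile-search function find_best_position is hypothetical, and the names for the PCA mean shape, basis and eigenvalues are likewise placeholders.

```python
import numpy as np

def asm_fit(image, mean_shape, P, eigvals, find_best_position, n_iters=20):
    """Fit an active shape model.

    mean_shape: (2N,) mean of the training shapes
    P:          (2N, k) matrix of the first k PCA modes of shape variation
    eigvals:    (k,) variances of those modes
    find_best_position: profile-model search returning a suggested (2N,) shape
    """
    x = mean_shape.copy()
    for _ in range(n_iters):
        # Step 1: suggest a better position for each landmark (profile model).
        y = find_best_position(image, x)
        # Step 2: conform the suggestion to the shape model.
        b = P.T @ (y - mean_shape)       # project into model space
        limit = 3.0 * np.sqrt(eigvals)   # allow only plausible shape variation
        b = np.clip(b, -limit, limit)
        x = mean_shape + P @ b           # reconstruct a legal shape
    return x
```

A full implementation would also estimate a global pose (similarity) transform on each iteration; this sketch keeps only the shape-model projection, which is the step that enforces the PDM constraints.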
https://en.wikipedia.org/wiki/White%20box%20%28software%20engineering%29
A white box (or glass box, clear box, or open box) is a subsystem whose internals can be viewed but usually not altered. The term is used in systems engineering, software engineering, and in intelligent user interface design, where it is closely related to recent interest in explainable artificial intelligence. Having access to the subsystem internals in general makes the subsystem easier to understand, but also easier to hack; for example, if a programmer can examine source code, weaknesses in an algorithm are much easier to discover. That makes white-box testing much more effective than black-box testing, but also considerably more difficult because of the sophistication the tester needs in order to understand the subsystem. The notion of a "Black Box in a Glass Box" was originally used as a metaphor for teaching complex topics to computing novices.

See also
Black box
Gray-box testing

References

Software testing
https://en.wikipedia.org/wiki/White%20box%20%28computer%20hardware%29
In computer hardware, a white box is a personal computer or server without a well-known brand name. The term is usually applied to systems assembled by small system integrators and to homebuilt computer systems assembled by end users from parts purchased separately at retail. In this sense, building a white box system is part of the DIY movement. The term is also applied to high volume production of unbranded PCs that began in the mid-1980s with 8 MHz Turbo XT systems selling for just under $1000. In 2002, around 30% of personal computers sold annually were white box systems. Operating systems While PCs built by system manufacturers generally come with a pre-installed operating system, white boxes from both large and small system vendors and other VAR channels can be ordered with or without a pre-installed OS. Usually when ordered with an operating system, the system builder uses an OEM copy of the OS. Whitebook or Intel "Common Building Blocks" Intel defined form factor and interconnection standards for notebook computer components, including "Barebones" (chassis and motherboard), hard disk drive, optical disk drive, LCD, battery pack, keyboard, and AC/DC adapter. These building blocks are primarily marketed to computer building companies, rather than DIY users. Costs While saving money is a common motivation for building one's own PC, today it is generally more expensive to build a low-end PC than to buy a pre-built one from a well-known manufacturer, due to the build quality and the total cost of the parts being used. For these reasons, it is usually better to just buy a pre-assembled computer from a well-known manufacturer or brand name rather than just have people build it themselves (unless one has the talent, skills, budget, and the knowledge to do so). See also Beige box Enthusiast computing Homebuilt computer White-label product References Computer enclosure Electronics manufacturing Personal computers Computer jargon
https://en.wikipedia.org/wiki/Crystal%20earpiece
A crystal earpiece is a type of piezoelectric earphone, producing sound by using a piezoelectric crystal, a material that changes its shape when electricity is applied to it. It is usually designed to plug into the ear canal of the user. Operation A crystal earpiece typically consists of a piezoelectric crystal with metal electrodes attached to either side, glued to a conical plastic or metal foil diaphragm, enclosed in a plastic case. The piezoelectric material used in early crystal earphones was Rochelle salt, but modern earphones use barium titanate, or less often quartz. When the audio signal is applied to the electrodes, the crystal bends back and forth a little with the signal, vibrating the diaphragm. The diaphragm pushes on the air, creating sound waves. The plastic earpiece casing confines the sound waves and conducts them efficiently into the ear canal, to the eardrum. The diaphragm is generally fixed at its outer edge, relying on bending to operate. The air path in the earpiece is generally a horn shape, with a narrowing column of air which increases the air displacement at the eardrum, increasing the volume. Application Crystal earpieces are usually monaural devices with very low sound fidelity, but high sensitivity and impedance. Their peak use was probably with 1960s era transistor radios and hearing aids. They are not used with modern portable media players due to unacceptable sound quality. The main causes of poor performance with these earpieces are low diaphragm excursion, nonlinearity, in-band resonance and the very short horn shape of the earpiece casing. The resulting sound is very tinny and lacking in bass. Modern headphones use electromagnetic drivers that work similarly to speakers, with moving coils or moving iron cores in a magnetic field. One remaining use for crystal earpieces is in crystal radios. Their very high sensitivity enables them to use the very weak signals produced by crystal radios, and their high impedance (on the or
https://en.wikipedia.org/wiki/Concrete%20security
In cryptography, concrete security or exact security is a practice-oriented approach that aims to give more precise estimates of the computational complexities of adversarial tasks than polynomial equivalence would allow. It quantifies the security of a cryptosystem by bounding the probability of success for an adversary running for a fixed amount of time. Security proofs with precise analyses are referred to as concrete. Traditionally, provable security is asymptotic: it classifies the hardness of computational problems using polynomial-time reducibility. Secure schemes are defined to be those in which the advantage of any computationally bounded adversary is negligible. While such a theoretical guarantee is important, in practice one needs to know exactly how efficient a reduction is because of the need to instantiate the security parameter - it is not enough to know that "sufficiently large" security parameters will do. An inefficient reduction results either in the success probability for the adversary or the resource requirement of the scheme being greater than desired. Concrete security parametrizes all the resources available to the adversary, such as running time and memory, and other resources specific to the system in question, such as the number of plaintexts it can obtain or the number of queries it can make to any oracles available. Then the advantage of the adversary is upper bounded as a function of these resources and of the problem size. It is often possible to give a lower bound (i.e. an adversarial strategy) matching the upper bound, hence the name exact security. Examples Concrete security estimates have been applied to cryptographic algorithms: In 1996, schemes for digital signatures based on the RSA and Rabin cryptosystems were proposed, which were shown to be approximately as difficult to break as the original cryptosystems. In 1997, some notions of concrete security (left-or-right indistinguishability, real-or-random indistinguishabil
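A schematic example of the kind of statement concrete security produces (my own illustrative formulation, not drawn from the papers cited above): for a cipher with k-bit keys, exhaustive key search already bounds the advantage of any adversary making roughly t trial computations, so

```latex
% Advantage of an adversary A against cipher E with k-bit keys,
% running for time roughly t (one key trial per time unit):
\[
  \mathbf{Adv}_{E}(t) \;\lesssim\; \frac{t}{2^{k}}.
\]
% The asymptotic view would only call this "negligible"; the concrete bound
% says how large k must be for a given adversary budget t.
```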
https://en.wikipedia.org/wiki/Negative%20thermal%20expansion
Negative thermal expansion (NTE) is an unusual physicochemical process in which some materials contract upon heating, rather than expand as most other materials do. The most well-known material with NTE is water at 0 to 3.98 °C. Also, the density of water ice is smaller than the density of liquid water. Water's NTE is the reason why water ice floats, rather than sinks, in liquid water. Materials which undergo NTE have a range of potential engineering, photonic, electronic, and structural applications. For example, if one were to mix a negative thermal expansion material with a "normal" material which expands on heating, it could be possible to use it as a thermal expansion compensator that might allow for forming composites with tailored or even close to zero thermal expansion. Origin of negative thermal expansion There are a number of physical processes which may cause contraction with increasing temperature, including transverse vibrational modes, rigid unit modes and phase transitions. In 2011, Liu et al. showed that the NTE phenomenon originates from the existence of high pressure, small volume configurations with higher entropy, with their configurations present in the stable phase matrix through thermal fluctuations. They were able to predict both the colossal positive thermal expansion (In cerium) and zero and infinite negative thermal expansion (in ). Alternatively, large negative and positive thermal expansion may result from the design of internal microstructure. Negative thermal expansion in close-packed systems Negative thermal expansion is usually observed in non-close-packed systems with directional interactions (e.g. ice, graphene, etc.) and complex compounds (e.g. , , beta-quartz, some zeolites, etc.). However, in a paper, it was shown that negative thermal expansion (NTE) is also realized in single-component close-packed lattices with pair central force interactions. The following sufficient condition for potential giving rise to NTE behavio
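In terms of the volumetric thermal expansion coefficient (the standard definition, written out here by me), NTE over a temperature range simply means that the coefficient is negative there:

```latex
% Volumetric coefficient of thermal expansion at constant pressure:
\[
  \alpha_V \;=\; \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P},
  \qquad \alpha_V < 0 \;\;\text{(negative thermal expansion)}.
\]
% For water between 0 and 3.98 degrees Celsius the volume shrinks on heating,
% so alpha_V is negative over that interval.
```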
https://en.wikipedia.org/wiki/Magic%20angle
The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is a root of a second-order Legendre polynomial, P2(cos θm) = 0, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect.

Mathematical definition
The magic angle θm is given by
θm = arccos(1/√3) = arctan(√2) ≈ 54.7356°,
where arccos and arctan are the inverse cosine and tangent functions respectively. θm is the angle between the space diagonal of a cube and any of its three connecting edges.

Another representation of the magic angle is half of the opening angle formed when a cube is rotated from its space diagonal axis, which may be represented as arccos(−1/3) or 2 arctan(√2) radians ≈ 109.4712°. This double magic angle is directly related to tetrahedral molecular geometry and is the angle between two vertices and the exact center of a tetrahedron (i.e., the edge central angle also known as the tetrahedral angle).

Magic angle and nuclear magnetic resonance
In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling, chemical shift anisotropy (CSA), and first-order quadrupolar coupling, depend on the orientation of the interaction tensor with the external magnetic field. By spinning the sample around a given axis, their average angular dependence becomes:
⟨3 cos²θ − 1⟩ = ½ (3 cos²θr − 1)(3 cos²β − 1),
where θ is the angle between the principal axis of the interaction and the magnetic field, θr is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and principal axis of the interaction. For dipolar couplings, the principal axis corresponds to the internucl
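A quick numerical check of these identities (my own snippet, Python standard library only):

```python
import math

theta_m = math.degrees(math.acos(1 / math.sqrt(3)))
assert abs(theta_m - math.degrees(math.atan(math.sqrt(2)))) < 1e-12
print(theta_m)                          # 54.7356...
print(math.degrees(math.acos(-1 / 3)))  # 109.4712..., the double magic angle

# The second-order Legendre polynomial P2 vanishes at the magic angle:
c = math.cos(math.radians(theta_m))
print((3 * c**2 - 1) / 2)               # ~0.0
```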
https://en.wikipedia.org/wiki/Research%20Institute%20of%20Computer%20Science%20and%20Random%20Systems
The Institut de recherche en informatique et systèmes aléatoires (IRISA) is a joint computer science research center of CNRS, University of Rennes 1, ENS Rennes, INSA Rennes and Inria, in Rennes in Brittany. It is one of the eight Inria research centers. It was created in 1975 as a spin-off of the University of Rennes 1, merging the young computer science department and a few mathematicians, more specifically probabilists, among them Michel Métivier, who was to become the first president of IRISA. Research topics span from theoretical computer science (formal languages, formal methods, and more mathematically oriented topics such as information theory, optimization and complex systems) to application-driven topics like bioinformatics, image and video compression, handwriting recognition, computer graphics, medical imaging and content-based image retrieval.

See also
French space program
Space program of France
Aerospace engineering organizations
Computer science institutes in France
France
Research institutes in France
French National Centre for Scientific Research
1975 establishments in France
https://en.wikipedia.org/wiki/El%20Bulli
El Bulli () was a restaurant near the town of Roses, Spain, run by chef Ferran Adrià, later joined by Albert Adrià, and renowned for its modernist cuisine. Established in 1964, the restaurant overlooked Cala Montjoi, a bay on the Costa Brava of Catalonia. El Bulli held three Michelin stars and was described as "the most imaginative generator of haute cuisine on the planet" in 2006. The restaurant closed 30 July 2011 and relaunched as El Bulli Foundation, a center for culinary creativity. Restaurant The restaurant had a limited season: the PIXA season, for example, ran from 15 June to 20 December. Bookings for the next year were taken on a single day after the closing of the current season. It accommodated only 8,000 diners a season, but got more than two million requests. The average cost of a meal was €250 (US$325). The restaurant itself had operated at a loss since 2000, with operating profit coming from El Bulli-related books and lectures by Adrià. As of April 2008, the restaurant employed 42 chefs. Restaurant magazine judged El Bulli to be No. 1 on its Top 50 list of the world's best restaurants for a record five times—in 2002, 2006, 2007, 2008 and 2009, and No. 2 in 2010. History The restaurant's location was selected in 1961 by Hans Schilling, a German, and his Czech wife Marketa, who wanted a piece of land for a planned holiday resort. By the year 1963, the resulting holiday resort included a small makeshift bar of the type known in Spanish as a "chiringuito", which bar was variously called "El Bulli-bar" and "Hacienda El Bulli"; this little bar was the nucleus of the future restaurant El Bulli. The name "El Bulli" came from a colloquial term used to describe the French bulldogs the Schillings owned. The first restaurant was opened in 1964. The restaurant won its first Michelin star in 1976 while under French chef Jean-Louis Neichel. Ferran Adrià joined the staff in 1984, and was put in sole charge of the kitchen in 1987. In 1990 the restaurant gained i
https://en.wikipedia.org/wiki/Windows%20Vista
Windows Vista is a major release of Microsoft's Windows NT operating system. It was released to manufacturing on November 8, 2006, and became generally available on January 30, 2007, on the Windows Marketplace, the first release of Windows to be made available through a digital distribution platform. Vista succeeded Windows XP (2001); at the time, the five-year gap between the two was the longest time span between successive Windows releases. Microsoft began developing Vista under the codename "Longhorn" in 2001, shortly before the release of XP. It was intended as a small upgrade to bridge the gap between XP and the next major Windows version, codenamed Blackcomb. As development progressed, it assimilated many of Blackcomb's features and was repositioned as a major Windows release. Vista introduced the updated graphical user interface and visual style Aero, Windows Search, redesigned networking, audio, print, and display sub-systems, and new multimedia tools such as Windows DVD Maker among other changes. Vista aimed to increase the level of communication between machines on a home network, using peer-to-peer technology to simplify sharing files and media between computers and devices. Vista included version 3.0 of the .NET Framework, allowing software developers to write applications without traditional Windows APIs. It removed support for Itanium and devices without ACPI. While its new features and security improvements garnered praise, Vista was the target of significant criticism, such as its high system requirements, more restrictive licensing terms, lack of compatibility, longer boot time, and excessive authorization prompts from User Account Control. It saw lower adoption and satisfaction rates than XP, and it is generally considered a market failure. However, Vista usage did exceed Microsoft's pre-launch two-year-out expectations of achieving 200 million users, with an estimated 330 million internet users in January 2009. On October 22, 2010, Microsoft ce
https://en.wikipedia.org/wiki/179%20%28number%29
179 (one hundred [and] seventy-nine) is the natural number following 178 and preceding 180. In mathematics 179 is part of the Cunningham chain of prime numbers 89, 179, 359, 719, 1439, 2879, in which each successive number is two times the previous number, plus one. Among Cunningham chains of this length, this one has the smallest numbers. Because 179 is neither the start nor the end of this chain, it is both a safe prime and a Sophie Germain prime. It is also a super-prime number, because it is the 41st smallest prime and 41 is also prime. Since 971 (the digits of 179 reversed) is prime, 179 is an emirp. In other fields Astronomers have suggested that sunspot frequency undergoes a cycle of approximately 179 years in length. See also AD 179 and 179 BC List of highways numbered 179 External links References Integers
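The chain property is easy to verify mechanically; here is a short self-contained Python check (my own snippet):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

chain = [89, 179, 359, 719, 1439, 2879]
assert all(is_prime(p) for p in chain)
# Each successive term is twice the previous plus one (a Cunningham chain):
assert all(b == 2 * a + 1 for a, b in zip(chain, chain[1:]))
# 179 is a safe prime (89 is prime) and a Sophie Germain prime (359 is prime).
print("ok")
```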
https://en.wikipedia.org/wiki/181%20%28number%29
181 (one hundred [and] eighty-one) is the natural number following 180 and preceding 182.

In mathematics
181 is an odd number
181 is a centered number
181 is a centered pentagonal number
181 is a centered 12-gonal number
181 is a centered 18-gonal number
181 is a centered 30-gonal number
181 is a centered square number
181 is a star number that represents a centered hexagram (as in the game of Chinese checkers)
181 is a deficient number, as the sum of its proper divisors (1) is less than 181
181 is an odious number
181 is a prime number
181 is a Chen prime
181 is a dihedral prime
181 is a full reptend prime
181 is a palindromic prime
181 is a strobogrammatic prime, the same when viewed upside down
181 is a twin prime with 179
181 is a square-free number
181 is an undulating number, if written in the ternary, the negaternary, or the nonary numeral systems
181 is the difference of 2 square numbers: 91² − 90²
181 is the sum of 2 consecutive square numbers: 9² + 10²
181 is the sum of 5 consecutive prime numbers: 29 + 31 + 37 + 41 + 43

In geography
Langenburg No. 181, Saskatchewan, a rural municipality in Saskatchewan, Canada
181 Fremont Street, a proposed skyscraper in San Francisco, California
181 West Madison Street, Chicago

In the military
181st (Brandon) Battalion, CEF, a unit in the Canadian Expeditionary Force during World War I
181st Airlift Squadron, a unit of the Texas Air National Guard
181st Infantry Brigade of the United States Army, based at Fort McCoy, Wisconsin
181st Intelligence Wing, a unit of the United States Air Force located at Hulman Field, Terre Haute, Indiana
AN/APQ-181, an all-weather, low probability of intercept (LPI) radar system for the U.S. Air Force B-2A Spirit bomber aircraft
Bücker Bü 181 Bestmann, a single-engine trainer aircraft during World War II
was a ship scheduled to be acquired by the United States Navy, however, the program was cancelled
was a United States Navy troop transport during World War II
was a United States Navy
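Several of the representation claims above can be checked in a few lines of Python (my own snippet):

```python
assert 181 == 91**2 - 90**2           # difference of two consecutive squares
assert 181 == 9**2 + 10**2            # sum of two consecutive squares
assert 181 == 29 + 31 + 37 + 41 + 43  # sum of five consecutive primes
assert str(181) == str(181)[::-1]     # palindromic (and prime, so a palindromic prime)
print("ok")
```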
https://en.wikipedia.org/wiki/191%20%28number%29
191 (one hundred [and] ninety-one) is the natural number following 190 and preceding 192. In mathematics 191 is a prime number, part of a prime quadruplet of four primes: 191, 193, 197, and 199. Because doubling and adding one produces another prime number (383), 191 is a Sophie Germain prime. It is the smallest prime that is not a full reptend prime in any base from 2 to 10; in fact, the smallest base for which 191 is a full period prime is base 19. See also 191 (disambiguation) References Integers
https://en.wikipedia.org/wiki/193%20%28number%29
193 (one hundred [and] ninety-three) is the natural number following 192 and preceding 194.

In mathematics
193 is the number of compositions of 14 into distinct parts. In decimal, it is the seventeenth full repetend prime, or long prime. It is the only odd prime known for which 2 is not a primitive root of . It is the thirteenth Pierpont prime, which implies that a regular 193-gon can be constructed using a compass, straightedge, and angle trisector. It is part of the fourteenth pair of twin primes (191, 193), the seventh trio of prime triplets (193, 197, 199), and the fourth set of prime quadruplets (191, 193, 197, 199).

Aside from itself, the friendly giant (the largest sporadic group) holds a total of 193 conjugacy classes. It also holds at least 44 maximal subgroups aside from the double cover of the baby monster group (the forty-fourth prime number is 193).

193 is also the eighth numerator of convergents to Euler's number, correct to three decimal places: 193/71 = 2.71830..., while e = 2.71828.... The denominator is 71, which is the largest supersingular prime that uniquely divides the order of the friendly giant.

In other fields
193 is the telephone number of the 27 Brazilian Military Firefighters Corps.
193 is the number of member states of the United Nations.

See also
193 (disambiguation)

References

Integers
https://en.wikipedia.org/wiki/Security%20parameter
In cryptography, a security parameter is a way of measuring how "hard" it is for an adversary to break a cryptographic scheme. There are two main types of security parameter: computational and statistical, often denoted by κ and σ, respectively. Roughly speaking, the computational security parameter is a measure for the input size of the computational problem on which the cryptographic scheme is based, which determines its computational complexity, whereas the statistical security parameter is a measure of the probability with which an adversary can break the scheme (whatever that means for the protocol).

Security parameters are usually expressed in unary representation - i.e. a security parameter n is expressed as a string of n 1s, conventionally written as 1^n - so that the time complexity of the cryptographic algorithm is polynomial in the size of the input.

Computational security
The security of cryptographic primitives relies on the hardness of some hard problems. One sets the computational security parameter κ such that 2^κ computation is considered intractable.

Examples
If the security of a scheme depends on the secrecy of a key for a pseudorandom function (PRF), then we may specify that the PRF key should be sampled from the space {0,1}^κ so that a brute-force search requires 2^κ computational power.
In the RSA cryptosystem, the security parameter κ denotes the length in bits of the modulus n; the positive integer n must therefore be a number in the set {0, ..., 2^κ − 1}.

Statistical security
Security in cryptography often relies on the fact that the statistical distance between a distribution predicated on a secret, and a simulated distribution produced by an entity that does not know the secret, is small. We formalise this using the statistical security parameter σ by saying that the distributions are statistically close if the statistical distance between the distributions can be expressed as a negligible function in the security parameter. One sets the statistical security parameter σ such t
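For reference, the standard definition of "negligible" used above can be written out explicitly (my own formalization in LaTeX, following the usual textbook convention):

```latex
% A function f is negligible if it shrinks faster than any inverse polynomial:
\[
  f \text{ is negligible} \iff
  \forall c \in \mathbb{N} \;\, \exists n_0 \in \mathbb{N} \;\,
  \forall n > n_0 : \; |f(n)| < n^{-c}.
\]
```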
https://en.wikipedia.org/wiki/197%20%28number%29
197 (one hundred [and] ninety-seven) is the natural number following 196 and preceding 198. In mathematics 197 is a prime number, the third of a prime quadruplet: 191, 193, 197, 199 197 is the smallest prime number that is the sum of 7 consecutive primes: 17 + 19 + 23 + 29 + 31 + 37 + 41, and is the sum of the first twelve prime numbers: 2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 197 is a centered heptagonal number, a centered figurate number that represents a heptagon with a dot in the center and all other dots surrounding the center dot in successive heptagonal layers 197 is a Schröder–Hipparchus number, counting for instance the number of ways of subdividing a heptagon by a non-crossing set of its diagonals. In other fields 197 is also: A police emergency telephone number in Tunisia Number enquiry telephone number in Nepal a song by Norwegian alternative rock group Major Parkinson from their self-titled debut album See also The year AD 197 or 197 BC List of highways numbered 197 References Integers
https://en.wikipedia.org/wiki/199%20%28number%29
199 (one hundred [and] ninety-nine) is the natural number following 198 and preceding 200. In mathematics 199 is a centered triangular number. It is a prime number and the fourth part of a prime quadruplet: 191, 193, 197, 199. 199 is the smallest natural number that takes more than two iterations to compute its digital root as a repeated digit sum: Thus, its additive persistence is three, and it is the smallest number of persistence three. See also The year AD 199 or 199 BC List of highways numbered 199 References Integers
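The additive-persistence claim is easy to verify directly; a short self-contained Python check (my own snippet):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def additive_persistence(n):
    """Number of digit-sum iterations needed to reach a single digit."""
    steps = 0
    while n >= 10:
        n = digit_sum(n)
        steps += 1
    return steps

print(additive_persistence(199))  # 3, since 199 -> 19 -> 10 -> 1
# 199 is the smallest number with additive persistence three:
assert all(additive_persistence(m) < 3 for m in range(199))
```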
https://en.wikipedia.org/wiki/Interface%20Message%20Processor
The Interface Message Processor (IMP) was the packet switching node used to interconnect participant networks to the ARPANET from the late 1960s to 1989. It was the first generation of gateways, which are known today as routers. An IMP was a ruggedized Honeywell DDP-516 minicomputer with special-purpose interfaces and software. In later years the IMPs were made from the non-ruggedized Honeywell 316, which could handle two-thirds of the communication traffic at approximately one-half the cost. An IMP required a connection to a host computer via a special bit-serial interface, defined in BBN Report 1822. The IMP software and the ARPA network communications protocol running on the IMPs were discussed in RFC 1, the first of a series of standardization documents published by what later became the Internet Engineering Task Force (IETF).

History
The concept of an "interface computer" was first proposed in 1966 by Donald Davies for the NPL network in England. The same idea was independently developed in early 1967 at a meeting of principal investigators for the Department of Defense's Advanced Research Projects Agency (ARPA) to discuss interconnecting machines across the country. Larry Roberts, who led the ARPANET implementation, initially proposed a network of host computers. Wes Clark suggested inserting "a small computer between each host computer and the network of transmission lines", i.e. making the IMP a separate computer. The IMPs were built by the Massachusetts-based company Bolt Beranek and Newman (BBN) in 1969. BBN was contracted to build four IMPs, the first being due at UCLA by Labor Day; the remaining three were to be delivered in one-month intervals thereafter, completing the entire network in a total of twelve months. When Massachusetts Senator Edward Kennedy learned of BBN's accomplishment in signing this million-dollar agreement, he sent a telegram congratulating the company for being contracted to build the "Interfaith Message Processor". The team worki
https://en.wikipedia.org/wiki/Francis%20Heylighen
Francis Paul Heylighen (born 27 September 1960) is a Belgian cyberneticist investigating the emergence and evolution of intelligent organization. He presently works as a research professor at the Vrije Universiteit Brussel (the Dutch-speaking Free University of Brussels), where he directs the transdisciplinary "Center Leo Apostel" and the research group on "Evolution, Complexity and Cognition". He is best known for his work on the Principia Cybernetica Project, his model of the Internet as a global brain, and his contributions to the theories of memetics and self-organization. He is also known, albeit to a lesser extent, for his work on gifted people and their problems. Biography Heylighen was born on September 27, 1960, in Vilvoorde, Belgium. He received his high school education from the "Koninklijk Atheneum Pitzemburg" in Mechelen, in the section Latin-Mathematics. He received his MSc in mathematical physics in 1982 from the Vrije Universiteit Brussel (VUB), where he also received his PhD Summa cum Laude in Sciences in 1987 for his thesis, published in 1990, as "Representation and Change. A Metarepresentational Framework for the Foundations of Physical and Cognitive Science." In 1983 he started working as a researcher for the Belgian National Fund for Scientific Research (NFWO). In 1994 he became a tenured researcher at the NFWO and in 2001 a research professor at the VUB. Since 1995 he has been affiliated with the VUB's Center Leo Apostel for interdisciplinary studies. In 2004 he created the ECCO research group which he presently directs. Thanks to a grant from a private sponsor, in 2012 he additionally founded the Global Brain Institute at the Vrije Universiteit Brussel, becoming its first director. In 1989 Valentin Turchin and Cliff Joslyn founded the Principia Cybernetica Project, and Heylighen joined a year later. In 1993 he created the project's encyclopedic site, one of the first complex websites in the world. In 1996, Heylighen founded the "Global Brai
https://en.wikipedia.org/wiki/Common%20iliac%20vein
In human anatomy, the common iliac veins are formed by the external iliac veins and internal iliac veins. The left and right common iliac veins come together in the abdomen at the level of the fifth lumbar vertebra, forming the inferior vena cava. They drain blood from the pelvis and lower limbs. Both common iliac veins are accompanied along their course by common iliac arteries.

Structure
The external iliac vein and internal iliac vein unite in front of the sacroiliac joint to form the common iliac veins. Both common iliac veins ascend to form the inferior vena cava behind the right common iliac artery at the level of the fifth lumbar vertebra. The vena cava is to the right of the midline and therefore the left common iliac vein is longer than the right. The left common iliac vein occasionally travels upwards to the left of the aorta to the level of the kidney, where it receives the left renal vein and crosses in front of the aorta to join the inferior vena cava. The right common iliac vein is virtually vertical and lies behind and then lateral to its artery. Each common iliac vein receives iliolumbar veins, while the left also receives the median sacral vein, which lies on the right of the corresponding artery.

Clinical significance
Overlying arterial structures may cause compression of the upper part of the left common iliac vein. Compression of the left common iliac vein against the fifth lumbar vertebral body by the right common iliac artery, as the artery crosses in front of it, is the classic finding in May–Thurner syndrome. Continuous pulsation of the common iliac artery may trigger an inflammatory response within the common iliac vein. The resulting intraluminal elastin and collagen deposition can cause intimal fibrosis and the formation of venous spurs and webs. This can lead to narrowing of the vein and cause persistent unilateral leg swelling, contributing to venous thromboembolism.

References

Veins of the torso
Angiology
Anatomy
Veins
https://en.wikipedia.org/wiki/Genetically%20modified%20crops
Genetically modified crops (GM crops) are plants used in agriculture, the DNA of which has been modified using genetic engineering methods. Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. In most cases, the aim is to introduce a new trait to the plant which does not occur naturally in the species. Examples in food crops include resistance to certain pests, diseases, environmental conditions, reduction of spoilage, resistance to chemical treatments (e.g. resistance to a herbicide), or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.

Farmers have widely adopted GM technology. Acreage increased from 1.7 million hectares in 1996 to 185.1 million hectares in 2016, some 12% of global cropland. As of 2016, major crop (soybean, maize, canola and cotton) traits consist of herbicide tolerance (95.9 million hectares), insect resistance (25.2 million hectares), or both (58.5 million hectares). In 2015, 53.6 million ha of genetically modified maize were under cultivation (almost 1/3 of the maize crop). GM maize outperformed its predecessors: yield was 5.6 to 24.5% higher, with fewer mycotoxins (−28.8%), fumonisin (−30.6%) and trichothecenes (−36.5%). Non-target organisms were unaffected, except for Braconidae, represented by a parasitoid of the European corn borer, the target of Lepidoptera-active Bt maize. Biogeochemical parameters such as lignin content did not vary, while biomass decomposition was higher.

A 2014 meta-analysis concluded that GM technology adoption had reduced chemical pesticide use by 37%, increased crop yields by 22%, and increased farmer profits by 68%. This reduction in pesticide use has been ecologically beneficial, but benefits may be reduced by overuse. Yield gains and pesticide reductions are larger for insect-resistant crops
https://en.wikipedia.org/wiki/Predictive%20learning
Predictive learning is a technique of machine learning in which an agent tries to build a model of its environment by trying out different actions in various circumstances. It uses knowledge of the effects its actions appear to have, turning them into planning operators. These allow the agent to act purposefully in its world. Predictive learning is one attempt to learn with a minimum of pre-existing mental structure. It may have been inspired by Piaget's account of how children construct knowledge of the world by interacting with it. Gary Drescher's book 'Made-Up Minds' was seminal for the area. The idea that predictions and unconscious inference are used by the brain to construct a model of the world, in which it can identify causes of percepts, is however even older and goes back at least to Hermann von Helmholtz. Those ideas were later taken up in the field of predictive coding. Another related predictive learning theory is Jeff Hawkins' memory-prediction framework, which is laid out in his book On Intelligence.

See also
Reinforcement learning
Predictive coding

References

Machine learning
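A toy sketch of the idea in Python (entirely my own illustration, with a deliberately simplified one-dimensional world): the agent records the observed effect of each (state, action) pair and reuses those learned "planning operators" to pick an action it predicts will reach a goal.

```python
import random

class PredictiveAgent:
    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # (state, action) -> predicted next state

    def act(self, state, goal):
        # Prefer an action whose learned effect reaches the goal...
        for a in self.actions:
            if self.model.get((state, a)) == goal:
                return a
        return random.choice(self.actions)  # ...otherwise explore.

    def observe(self, state, action, next_state):
        # Learning step: remember the effect the action appeared to have.
        self.model[(state, action)] = next_state

# Toy 1-D world: actions move the agent left or right along a line.
agent = PredictiveAgent(actions=[-1, +1])
for s in range(-5, 6):              # exploration: try both actions everywhere
    for a in agent.actions:
        agent.observe(s, a, s + a)  # record the observed effect of a in state s
print(agent.act(2, goal=3))         # 1: the learned model predicts 2 + 1 == 3
```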
https://en.wikipedia.org/wiki/Global%20dimension
In ring theory and homological algebra, the global dimension (or global homological dimension; sometimes just called homological dimension) of a ring A, denoted gl dim A, is a non-negative integer or infinity which is a homological invariant of the ring. It is defined to be the supremum of the set of projective dimensions of all A-modules. Global dimension is an important technical notion in the dimension theory of Noetherian rings. By a theorem of Jean-Pierre Serre, global dimension can be used to characterize within the class of commutative Noetherian local rings those rings which are regular. Their global dimension coincides with the Krull dimension, whose definition is module-theoretic.

When the ring A is noncommutative, one initially has to consider two versions of this notion: right global dimension, which arises from consideration of the right A-modules, and left global dimension, which arises from consideration of the left A-modules. For an arbitrary ring A the right and left global dimensions may differ. However, if A is a Noetherian ring, both of these dimensions turn out to be equal to the weak global dimension, whose definition is left-right symmetric. Therefore, for noncommutative Noetherian rings, these two versions coincide and one is justified in talking about the global dimension.

Examples
Let A = K[x1,...,xn] be the ring of polynomials in n variables over a field K. Then the global dimension of A is equal to n. This statement goes back to David Hilbert's foundational work on homological properties of polynomial rings; see Hilbert's syzygy theorem. More generally, if R is a Noetherian ring of finite global dimension k and A = R[x] is a ring of polynomials in one variable over R, then the global dimension of A is equal to k + 1.
A ring has global dimension zero if and only if it is semisimple.
The global dimension of a ring A is less than or equal to one if and only if A is hereditary. In particular, a commutative principal ideal domain which is not a field has global
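As a concrete instance of the hereditary case just mentioned (a standard textbook example, written out here by me in LaTeX): over the integers, every subgroup of a free abelian group is free, so every module has a free resolution of length one.

```latex
% Every abelian group M admits a free resolution of length 1,
% because a subgroup of a free abelian group is again free:
\[
  0 \longrightarrow F_1 \longrightarrow F_0 \longrightarrow M \longrightarrow 0,
\]
% hence pd_Z(M) <= 1 for every M, and since Z/2Z is not projective,
% the global dimension of Z is exactly 1 (Z is hereditary but not a field).
```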
https://en.wikipedia.org/wiki/Principal%20variation%20search
Principal variation search (sometimes equated with the practically identical NegaScout) is a negamax algorithm that can be faster than alpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage. NegaScout works best when there is a good move ordering; in practice, the move ordering is often determined by previous, shallower searches. It produces more cutoffs than alpha–beta by assuming that the first explored node is the best. In other words, it supposes the first node is in the principal variation. It can then check whether that is true by searching the remaining nodes with a null window (also known as a scout window; when alpha and beta are equal), which is faster than searching with the regular alpha–beta window. If the proof fails, then the first node was not in the principal variation, and the search continues as normal alpha–beta. With a random move ordering, NegaScout will take more time than regular alpha–beta; although it will not explore any nodes alpha–beta did not, it will have to re-search many nodes. Alexander Reinefeld invented NegaScout several decades after the invention of alpha–beta pruning. He gives a proof of correctness of NegaScout in his book. Another search algorithm called SSS* can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage) and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored. This part of the tree has the same size as SSS*'s OPEN list would have. A reformulation of SSS* as a sequence of null-window alpha–beta calls backed by such a transposition table removed its reliance on a separate OPEN list.
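The scheme above translates directly into code. Below is a compact Python sketch; the node interface (children, is_terminal, evaluate) is a placeholder assumption, not part of any particular engine.

```python
def pvs(node, depth, alpha, beta, color):
    """Negamax principal variation search over a game tree.

    `color` is +1 or -1; `node.evaluate()` is assumed to score
    positions from the first player's point of view.
    """
    if depth == 0 or node.is_terminal():
        return color * node.evaluate()
    first = True
    for child in node.children():  # better ordering => more cutoffs
        if first:
            # Search the presumed principal variation move with the
            # full (alpha, beta) window.
            score = -pvs(child, depth - 1, -beta, -alpha, -color)
            first = False
        else:
            # Try to prove the remaining moves are worse using a
            # null (scout) window of width zero.
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:
                # Proof failed: the move may be better after all,
                # so re-search it with a full window.
                score = -pvs(child, depth - 1, -beta, -score, -color)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff, exactly as in plain alpha-beta
    return alpha
```

The null-window call can only tell whether a move is better or worse than the current best; only when it reports "better" (the proof fails) is the more expensive full-window re-search performed, which is why good move ordering makes the re-searches rare.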
https://en.wikipedia.org/wiki/Square%20principle
In mathematical set theory, a square principle is a combinatorial principle asserting the existence of a cohering sequence of short closed unbounded (club) sets so that no one (long) club set coheres with them all. As such they may be viewed as a kind of incompactness phenomenon. They were introduced by Ronald Jensen in his analysis of the fine structure of the constructible universe L. Definition Define Sing to be the class of all limit ordinals which are not regular. Global square states that there is a system $(C_\beta)_{\beta \in \mathrm{Sing}}$ satisfying: $C_\beta$ is a club set of $\beta$; $\operatorname{ot}(C_\beta) < \beta$; if $\gamma$ is a limit point of $C_\beta$ then $\gamma \in \mathrm{Sing}$ and $C_\gamma = C_\beta \cap \gamma$. Variant relative to a cardinal Jensen introduced also a local version of the principle. If $\kappa$ is an uncountable cardinal, then $\square_\kappa$ asserts that there is a sequence $(C_\beta)_{\beta \in \operatorname{Lim}(\kappa^+)}$ satisfying: $C_\beta$ is a club set of $\beta$; if $\operatorname{cf}(\beta) < \kappa$, then $|C_\beta| < \kappa$; if $\gamma$ is a limit point of $C_\beta$ then $C_\gamma = C_\beta \cap \gamma$. Jensen proved that this principle holds in the constructible universe for any uncountable cardinal $\kappa$.
https://en.wikipedia.org/wiki/ACARS
In aviation, ACARS (an acronym for Aircraft Communications Addressing and Reporting System) is a digital datalink system for transmission of short messages between aircraft and ground stations via airband radio or satellite. The protocol was designed by ARINC and deployed in 1978, using the Telex format. More ACARS radio stations were added subsequently by SITA. History of ACARS Prior to the introduction of datalink in aviation, all communication between the aircraft and ground personnel was performed by the flight crew using voice communication over either VHF or HF radio. In many cases, the voice-relayed information involved dedicated radio operators and digital messages sent to an airline teletype system or successor systems. Further, the hourly rates for flight and cabin crew salaries depended on whether the aircraft was airborne or not, and if on the ground whether it was at the gate or not. The flight crews reported these times by voice to geographically dispersed radio operators. Airlines wanted to eliminate self-reported times to preclude inaccuracies, whether accidental or deliberate. Doing so also reduced the need for human radio operators to receive the reports. In an effort to reduce crew workload and improve data integrity, the engineering department at ARINC introduced the ACARS system in July 1978, as an automated time clock system. Teledyne Controls produced the avionics and the launch customer was Piedmont Airlines. The original expansion of the abbreviation was "Arinc Communications Addressing and Reporting System". Later, it was changed to "Aircraft Communications, Addressing and Reporting System". The original avionics standard was ARINC 597, which defined an ACARS Management Unit consisting of discrete inputs for the door, parking brake and weight-on-wheels sensors, used to automatically determine the flight phase and to generate and send the corresponding reports as telex messages. It also contained an MSK modem, which was used to transmit the reports over the existing VHF voice radios.
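ACARS frames have a character-oriented layout defined in the ARINC specifications. The sketch below is a deliberately simplified, hypothetical decoder in Python: the fixed offsets, the example label, and the sample payload are illustrative assumptions, not a conformant implementation of the ARINC standards.

```python
from dataclasses import dataclass

@dataclass
class AcarsMessage:
    mode: str          # 1 char: network mode character
    registration: str  # 7 chars: aircraft registration, dot-padded
    ack: str           # 1 char: technical acknowledgement (NAK = none)
    label: str         # 2 chars: message label identifying the type
    block_id: str      # 1 char: block identifier
    text: str          # free-text payload

def parse_acars(frame: str) -> AcarsMessage:
    """Parse a simplified, already-demodulated ACARS character frame.

    Assumes the frame starts at the mode character and that the STX
    control character (0x02) separates the header from the text body.
    """
    header, _, body = frame.partition("\x02")  # split at STX
    return AcarsMessage(
        mode=header[0],
        registration=header[1:8].strip("."),
        ack=header[8],
        label=header[9:11],
        block_id=header[11],
        text=body.rstrip("\x03\x17"),  # strip ETX/ETB terminators
    )

# A made-up downlink reporting a gate-departure ("out") event.
msg = parse_acars("2.N123AB\x15QA1\x02OUT 1204 KPIT\x03")
print(msg.label, msg.registration, msg.text)
```

A real decoder would also verify the frame's error-check field and would parse the sequence-number and flight-identifier fields that follow STX in downlink messages; the sketch folds all of that into the free-text body for brevity.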
https://en.wikipedia.org/wiki/Sequence%20transformation
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as convolution with another sequence and resummation of a sequence; more generally, they are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods. Overview Classical examples of sequence transformations include the binomial transform, Möbius transform, Stirling transform and others. Definitions For a given sequence $(s_n)_{n\in\mathbb{N}}$, the transformed sequence is $(s'_n)_{n\in\mathbb{N}} = \mathbf{T}\big((s_n)_{n\in\mathbb{N}}\big)$, where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e. $s'_n = T(s_n, s_{n+1}, \dots, s_{n+k})$ for some $k$ which often depends on $n$ (cf. e.g. Binomial transform). In the simplest case, the $s_n$ and the $s'_n$ are real or complex numbers. More generally, they may be elements of some vector space or algebra. In the context of acceleration of convergence, the transformed sequence is said to converge faster than the original sequence if $\lim_{n\to\infty} \frac{s'_n - \ell}{s_n - \ell} = 0$, where $\ell$ is the limit of $(s_n)$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell$. If the mapping $T$ is linear in each of its arguments, i.e., $s'_n = \sum_{m=0}^{k} c_{n,m}\, s_{n+m}$ for some constants $c_{n,0}, \dots, c_{n,k}$ (which may depend on $n$), the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations. Examples The simplest examples of (linear) sequence transformations include shifting all elements, $s'_n = s_{n+k}$ (resp. $= 0$ if $n + k < 0$) for a fixed $k$, and scalar multiplication of the sequence. A less trivial example would be the discrete convolution with a fixed sequence. A particularly basic form is the difference operator, which is convolution with the sequence $(-1, 1, 0, \ldots)$ and is a discrete analogue of the derivative.
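As a concrete example of convergence acceleration, here is one classical nonlinear sequence transformation, Aitken's delta-squared process (not mentioned in the text above, but standard in this area), applied in Python to the slowly converging partial sums of the Leibniz series for π/4.

```python
import math

def aitken(s):
    """Aitken's delta-squared process: a nonlinear sequence
    transformation that accelerates linearly convergent sequences."""
    out = []
    for n in range(len(s) - 2):
        denom = s[n + 2] - 2 * s[n + 1] + s[n]
        if denom == 0:            # transformation undefined here
            out.append(s[n + 2])
            continue
        out.append(s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / denom)
    return out

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... for pi/4.
partial, total = [], 0.0
for n in range(12):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)

accelerated = aitken(partial)
print(abs(partial[-1] - math.pi / 4))      # raw error, roughly 4e-2
print(abs(accelerated[-1] - math.pi / 4))  # noticeably smaller error
```

With only a dozen terms, the transformed sequence is orders of magnitude closer to the limit than the raw partial sums of the same length, which is exactly the behaviour the definition of "converging faster" captures.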
https://en.wikipedia.org/wiki/211%20%28number%29
211 (two hundred [and] eleven) is the natural number following 210 and preceding 212. It is also a prime number. In mathematics 211 is an odd number. 211 is a primorial prime, the sum of three consecutive primes (67 + 71 + 73 = 211), a Chen prime, a centered decagonal prime, and a self prime. 211 is the smallest prime separated by eight or more from the nearest primes (199 and 223). It is thus a balanced prime and an isolated prime. 211 is a repdigit in tetradecimal (111). In decimal, multiplying 211's digits results in a prime (2 × 1 × 1 = 2); adding its digits results in a square (2 + 1 + 1 = 4 = 2²). Rearranging its digits, 211 becomes 121, which is also a square (121 = 11²). Adding any two of 211's digits will result in a prime (2 or 3). 211 is a super-prime: it is the 47th prime, and 47 is itself prime. In science and technology 2-1-1 is a special abbreviated telephone number reserved in Canada and the United States as an easy-to-remember three-digit telephone number. It is meant to provide quick information and referrals to health and human service organizations, covering both services from charities and from governmental agencies. In chemistry, 211 is also associated with E211, the preservative sodium benzoate. In religions In Islam, Sermon 211 is about the strength and greatness of Allah. In other fields 211 is also the California Penal Code section defining robbery. It is sometimes paired with 187, the California PC section for murder. 211 is also an EDI (Electronic Data Interchange) document known as an Electronic Bill of Lading. 211 is also a nickname for Steel Reserve, a malt liquor alcoholic beverage. 211 is also the SMTP status code for system status. +211 is the code for international direct-dial phone calls to South Sudan. See also: 211 Crew.
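The arithmetic claims above are easy to machine-check. Here is a quick verification sketch in Python, assuming the third-party SymPy library is available:

```python
from sympy import isprime, prime, prevprime, nextprime

assert isprime(211)
assert 67 + 71 + 73 == 211               # sum of three consecutive primes
assert int("111", 14) == 211             # repdigit (111) in tetradecimal
assert isprime(2 * 1 * 1)                # digit product 2 is prime
assert 2 + 1 + 1 == 2 ** 2               # digit sum 4 is a square
assert 121 == 11 ** 2                    # digit rearrangement is a square
assert prime(47) == 211 and isprime(47)  # super-prime: 47th prime, 47 prime
assert prevprime(211) == 199 and nextprime(211) == 223  # gaps of 12
assert (199 + 223) // 2 == 211           # balanced prime
```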
https://en.wikipedia.org/wiki/Pseudoholomorphic%20curve
In mathematics, specifically in topology and geometry, a pseudoholomorphic curve (or J-holomorphic curve) is a smooth map from a Riemann surface into an almost complex manifold that satisfies the Cauchy–Riemann equation. Introduced in 1985 by Mikhail Gromov, pseudoholomorphic curves have since revolutionized the study of symplectic manifolds. In particular, they lead to the Gromov–Witten invariants and Floer homology, and play a prominent role in string theory. Definition Let $M$ be an almost complex manifold with almost complex structure $J$. Let $\Sigma$ be a smooth Riemann surface (also called a complex curve) with complex structure $j$. A pseudoholomorphic curve in $M$ is a map $f : \Sigma \to M$ that satisfies the Cauchy–Riemann equation $\bar\partial_{j,J} f := \tfrac{1}{2}\,(df + J \circ df \circ j) = 0$. Since $J^2 = -1$, this condition is equivalent to $J \circ df = df \circ j$, which simply means that the differential $df$ is complex-linear, that is, at each point it maps the tangent space $(T_z\Sigma, j)$ complex-linearly to $(T_{f(z)}M, J)$. For technical reasons, it is often preferable to introduce some sort of inhomogeneous term $\nu$ and to study maps satisfying the perturbed Cauchy–Riemann equation $\bar\partial_{j,J} f = \nu$. A pseudoholomorphic curve satisfying this equation can be called, more specifically, a $(j, J, \nu)$-holomorphic curve. The perturbation $\nu$ is sometimes assumed to be generated by a Hamiltonian (particularly in Floer theory), but in general it need not be. A pseudoholomorphic curve is, by its definition, always parametrized. In applications one is often truly interested in unparametrized curves, meaning embedded (or immersed) two-dimensional submanifolds of $M$, so one mods out by reparametrizations of the domain that preserve the relevant structure. In the case of Gromov–Witten invariants, for example, we consider only closed domains $\Sigma$ of fixed genus and we introduce marked points (or punctures) on $\Sigma$. As soon as the punctured Euler characteristic is negative, there are only finitely many holomorphic reparametrizations of $\Sigma$ that preserve the marked points. The domain curve is an element of the Deligne–Mumford moduli space of curves. Analogy with the classical Cauchy–Riemann equations The classical case occurs when $M$ and $\Sigma$ are both simply the complex number plane.
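To see how the definition reduces to the classical equations, here is the routine coordinate computation for that classical case (standard complex structures on both source and target); it is added for illustration and is not part of the original text.

```latex
% Write f : C -> C in real coordinates, f(s + it) = u(s,t) + i v(s,t),
% with j and J both given by multiplication by i.
% Applying J o df = df o j to the coordinate vector field d/ds
% (and using j(d/ds) = d/dt) gives
\[
i\,(u_s + i v_s) \;=\; u_t + i v_t ,
\]
% and comparing real and imaginary parts yields
\[
u_s = v_t, \qquad u_t = -v_s ,
\]
% which are precisely the classical Cauchy--Riemann equations for u + iv.
```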