https://en.wikipedia.org/wiki/Electronic%20game
An electronic game is a game that uses electronics to create an interactive system with which a player can play. Video games are the most common form today, and for this reason the two terms are often used interchangeably. There are other common forms of electronic game including handheld electronic games, standalone systems (e.g. pinball, slot machines, or electro-mechanical arcade games), and exclusively non-visual products (e.g. audio games). Teletype games The earliest form of computer game to achieve any degree of mainstream use was the text-based Teletype game. Teletype games lack video display screens and instead present the game to the player by printing a series of characters on paper which the player reads as it emerges from the platen. Practically this means that each action taken will require a line of paper and thus a hard-copy record of the game remains after it has been played. This naturally tends to reduce the size of the gaming universe or alternatively to require a great amount of paper. As computer screens became standard during the rise of the third generation computer, text-based command line-driven language parsing Teletype games transitioned into visual interactive fiction allowing for greater depth of gameplay and reduced paper requirements. This transition was accompanied by a simultaneous shift from the mainframe environment to the personal computer. Several of these subsequently were ported to systems with video displays, eliminating the need for a teletype printer. Examples of text-based Teletype games include: The Oregon Trail (1971) Trek73 (1973) Dungeon (1975) Super Star Trek (1975) Colossal Cave Adventure (1976) Zork (1977) Electronic handhelds The earliest form of dedicated console, handheld electronic games are characterized by their size and portability. Used to play interactive games, handheld electronic games are often miniaturized versions of video games. The controls, display and speakers are all part of a single unit, an
https://en.wikipedia.org/wiki/Donald%20B.%20Gillies
Donald Bruce Gillies (October 15, 1928 – July 17, 1975) was a Canadian computer scientist and mathematician who worked in the fields of computer design, game theory, and minicomputer programming environments. Early life and education Donald B. Gillies was born in Toronto, Ontario, Canada, to John Zachariah Gillies (a Canadian) and Anne Isabelle Douglas MacQueen (an American). He attended the University of Toronto Schools, a laboratory school originally affiliated with the university. Gillies attended the University of Toronto from 1946 to 1950, majoring in mathematics. He began his graduate education at the University of Illinois and helped with the checkout of the ORDVAC computer in the summer of 1951. After one year he transferred to Princeton to work for John von Neumann and developed the first theorems on the core in game theory in his PhD thesis. Gillies ranked among the top ten participants in the William Lowell Putnam Mathematical Competition held in 1950. Career Gillies moved to England for two years to work for the National Research Development Corporation. He returned to the US in 1956, married Alice E. Dunkle, and began a job as a professor at the University of Illinois at Urbana-Champaign. Starting in 1957, Gillies designed the three-stage pipeline control of the ILLIAC II supercomputer at the University of Illinois. The pipelined stages were named "advanced control", "delayed control", and "interplay". This work competed with the IBM 7030 Stretch computer and was in the public domain. Gillies presented a talk on ILLIAC II at the University of Michigan Engineering Summer Conference in 1962. During checkout of ILLIAC II, Gillies found three new Mersenne primes, one of which was the largest prime number known at the time. Death and legacy Gillies died unexpectedly at age 46 on July 17, 1975, of a rare viral myocarditis. In 1975, the Donald B. Gillies Memorial lecture was established at the University of Illinois, with one leading researcher from compu
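As an aside on the Mersenne-prime work: Mersenne numbers M_p = 2^p − 1 are customarily tested with the Lucas–Lehmer test, and the three ILLIAC II discoveries correspond to the exponents 9689, 9941, and 11213. A minimal Python sketch of the test (an editorial illustration; the original ILLIAC II program is not reproduced here):

# Lucas–Lehmer test: M_p = 2**p - 1 is prime iff s_(p-2) == 0, where
# s_0 = 4 and s_(k+1) = s_k**2 - 2 (mod M_p). Valid for odd primes p.
def lucas_lehmer(p: int) -> bool:
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_11 = 2047 = 23 * 89 is the lone composite below; the ILLIAC II
# exponents 9689, 9941 and 11213 also pass, given a few seconds each.
for p in (3, 5, 7, 11, 13):
    print(p, lucas_lehmer(p))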
https://en.wikipedia.org/wiki/Legion%20%28demons%29
Legion means a large group; in another parlance it may mean "many". In the Christian Bible, it is used to refer to the group of demons, particularly those in two of three versions of the exorcism of the Gerasene demoniac, an account in the New Testament of an incident in which Jesus performs an exorcism. Development of the story The earliest version of this story exists in the Gospel of Mark, described as taking place in "the country of the Gerasenes". Jesus encounters a possessed man and calls on the demon to emerge, demanding to know its name – an important element of traditional exorcism practice. He finds the man is possessed by a multitude of demons who give the collective name of "Legion". Fearing that Jesus will drive them out of the world and into the abyss, they beg him instead to cast them into a herd of pigs on a nearby hill, which he does. The pigs then rush into the sea and are drowned. This story is also in the other two Synoptic Gospels. The Gospel of Luke shortens the story but retains most of the details including the name. The Gospel of Matthew shortens it more dramatically, changes the possessed man to two men (a particular stylistic device of this writer) and changes the location to "the country of the Gadarenes". This is probably because the author was aware that Gerasa is actually around 50 km away from the Sea of Galilee (although Gadara is still 10 km distant). In this version, the demons are unnamed. Cultural background According to Michael Willett Newheart, professor of New Testament Language and Literature at the Howard University School of Divinity (2004), the author of the Gospel of Mark could well have expected readers to associate the name Legion with the Roman military formation, active in the area at the time (around 70 AD). The intention may be to show that Jesus is stronger than the occupying force of the Romans. The Biblical scholar Seyoon Kim, however, points out that the Latin legio was commonly used as a loanwor
https://en.wikipedia.org/wiki/Fluorescence%20in%20situ%20hybridization
Fluorescence in situ hybridization (FISH) is a molecular cytogenetic technique that uses fluorescent probes that bind to only particular parts of a nucleic acid sequence with a high degree of sequence complementarity. It was developed by biomedical researchers in the early 1980s to detect and localize the presence or absence of specific DNA sequences on chromosomes. Fluorescence microscopy can be used to find out where the fluorescent probe is bound to the chromosomes. FISH is often used for finding specific features in DNA for use in genetic counseling, medicine, and species identification. FISH can also be used to detect and localize specific RNA targets (mRNA, lncRNA and miRNA) in cells, circulating tumor cells, and tissue samples. In this context, it can help define the spatial-temporal patterns of gene expression within cells and tissues. Probes – RNA and DNA In biology, a probe is a single strand of DNA or RNA that is complementary to a nucleotide sequence of interest. RNA probes can be designed for any gene or any sequence within a gene for visualization of mRNA, lncRNA and miRNA in tissues and cells. FISH is used to examine the cellular reproduction cycle, specifically interphase nuclei, for any chromosomal abnormalities. FISH makes it much easier to analyze a large series of archival cases and to identify the pinpointed chromosome, by creating a probe with an artificial chromosomal foundation that will attract similar chromosomes. Each probe produces a hybridization signal when the corresponding nucleic acid abnormality is present. Each probe for the detection of mRNA and lncRNA is composed of ~20–50 oligonucleotide pairs, each pair covering a space of 40–50 bp. The specifics depend on the specific FISH technique used. For miRNA detection, the probes use proprietary chemistry for specific detection of miRNA and cover the entire miRNA sequence. Probes are often derived from fragments of DNA that were isolated, purified, and amplified for use in the Human Genome Projec
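To make the notion of complementarity concrete, here is a toy Python model of a probe binding its target; the sequences are invented examples, not real probes, and real hybridization depends on chemistry well beyond exact string matching:

# A probe hybridizes to a target stretch when the two are base-complementary
# (A-T, G-C) in antiparallel orientation, modeled here as reverse complement.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def binds(probe: str, target: str) -> bool:
    # True if the probe is perfectly complementary to some stretch of target.
    return reverse_complement(probe) in target

target = "ATGGCGTACGTTAGCCGTA"
probe = reverse_complement("GCGTACGTT")  # designed against an internal stretch
print(binds(probe, target))              # True
print(binds("AAAAAA", target))           # False: no complementary site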
https://en.wikipedia.org/wiki/Steve%20Chen%20%28computer%20engineer%29
Steve Chen (pinyin: Chén Shìqīng) (born 1944 in Taiwan) is a Taiwanese computer engineer and internet entrepreneur. Chen was elected to the US National Academy of Engineering in 1991 for leadership in the development of super-computer architectures and their realization. Life Chen earned a BS from National Taiwan University in 1966, an MS from Villanova University in 1971, and a PhD under David Kuck from the University of Illinois at Urbana-Champaign in 1975. From 1975 through 1978 he worked for Burroughs Corporation on the design of the Burroughs large systems line of supercomputers. He is best known as the principal designer of the Cray X-MP and Cray Y-MP multiprocessor supercomputers. Chen left Cray Research in September 1987 after it dropped the MP line. With IBM's financial support, Chen founded Supercomputer Systems Incorporated (SSI) in January 1988. SSI was devoted to development of the SS-1 supercomputer, which was nearly completed before the estimated $150 million investment ran out. The Eau Claire, Wisconsin-based company went bankrupt in early 1993, leaving more than 300 employees jobless. An attempt to salvage the work was made by forming a new company, SuperComputer International (SCI), later that year. SCI was renamed Chen Systems in 1995. It was acquired by Sequent Computer Systems the following year. John Markoff, a technology journalist, wrote in the New York Times that Chen was considered "one of the nation's most brilliant supercomputer designers while working in this country for the technology pioneer Seymour Cray in the 1980s." In 1999, Chen became founder and CEO of Galactic Computing, a developer of supercomputing blade systems, based in Shenzhen, China. At Tonbu, Inc., his team designed and implemented the world's first fully scalable cloud computing system: a fully scalable dynamic process and application engine. By 2005 he started to focus on grid computing to model a human brain instead. By 2010, he was reported to be working on
https://en.wikipedia.org/wiki/Amazon%20Web%20Services
Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered, pay-as-you-go basis. Clients will often use this in combination with autoscaling (a process that allows a client to use more computing in times of high application usage, and then scale down to reduce costs when there is less traffic). These cloud computing web services provide various services related to networking, compute, storage, middleware, IoT and other processing capacity, as well as software tools via AWS server farms. This frees clients from managing, scaling, and patching hardware and operating systems. One of the foundational services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, with extremely high availability, which can be interacted with over the internet via REST APIs, a CLI or the AWS console. AWS's virtual computers emulate most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM). AWS services are delivered to customers via a network of AWS server farms located throughout the world. Fees are based on a combination of usage (known as a "pay-as-you-go" model) and the hardware, operating system, software, and networking features chosen by the subscriber, as well as the required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. Amazon provides select portions of security for subscribers (e.g. physical security of the data centers) while other aspects of security are the responsibility of the subscriber (e.g. account management, vulnerabil
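A minimal sketch of the EC2 workflow using boto3, the AWS SDK for Python (the AMI ID and key-pair name below are placeholders, and running this would create billable resources):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small virtual machine.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Pay-as-you-go: terminate the instance when done to stop the charges.
ec2.terminate_instances(InstanceIds=[instance_id])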
https://en.wikipedia.org/wiki/Group%20of%20Lie%20type
In mathematics, specifically in group theory, the phrase group of Lie type usually refers to finite groups that are closely related to the group of rational points of a reductive linear algebraic group with values in a finite field. The phrase group of Lie type does not have a widely accepted precise definition, but the important collection of finite simple groups of Lie type does have a precise definition, and they make up most of the groups in the classification of finite simple groups. The name "groups of Lie type" is due to the close relationship with the (infinite) Lie groups, since a compact Lie group may be viewed as the rational points of a reductive linear algebraic group over the field of real numbers. Classical groups An initial approach to this question was the definition and detailed study of the so-called classical groups over finite and other fields. These groups were studied by L. E. Dickson and Jean Dieudonné. Emil Artin investigated the orders of such groups, with a view to classifying cases of coincidence. A classical group is, roughly speaking, a special linear, orthogonal, symplectic, or unitary group. There are several minor variations of these, given by taking derived subgroups or central quotients, the latter yielding projective linear groups. They can be constructed over finite fields (or any other field) in much the same way that they are constructed over the real numbers. They correspond to the series An, Bn, Cn, Dn, 2An, 2Dn of Chevalley and Steinberg groups. Chevalley groups Chevalley groups can be thought of as Lie groups over finite fields. The theory was clarified by the theory of algebraic groups, and the work of Chevalley on Lie algebras, by means of which the Chevalley group concept was isolated. Chevalley constructed a Chevalley basis (a sort of integral form but over finite fields) for all the complex simple Lie algebras (or rather of their universal enveloping algebras), whic
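Artin's interest in the orders of classical groups can be illustrated with the standard counting formulas over a finite field F_q; a short Python sketch (the formulas for GL and SL are classical):

# |GL_n(F_q)| counts ordered bases of F_q^n: (q^n - 1)(q^n - q)...(q^n - q^(n-1));
# SL_n is the kernel of the determinant, so its order divides by q - 1.
def order_gl(n: int, q: int) -> int:
    order = 1
    for i in range(n):
        order *= q**n - q**i
    return order

def order_sl(n: int, q: int) -> int:
    return order_gl(n, q) // (q - 1)

print(order_gl(2, 3))  # 48
print(order_sl(2, 3))  # 24
print(order_sl(2, 4))  # 60; PSL_2(4) is A_5, the smallest nonabelian simple group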
https://en.wikipedia.org/wiki/HijackThis
HijackThis (also HiJackThis or HJT) is a free and open-source tool to detect malware and adware on Microsoft Windows. It was originally created by Merijn Bellekom, and later sold to Trend Micro. The program is notable for quickly scanning a user's computer to display the most common locations of malware, rather than relying on a database of known spyware. HijackThis is used primarily for diagnosis of malware, not to remove or detect spyware—as uninformed use of its removal facilities can cause significant software damage to a computer. Browser hijacking can cause malware to be installed on a computer. On February 16, 2012, Trend Micro released the HijackThis source code as open source and it is now available on the SourceForge site. Use HijackThis can generate a plain-text logfile detailing all entries it finds, and some entries can be fixed by HijackThis. Inexperienced users are advised to exercise caution or seek help when using the latter option. Except for a small whitelist of known safe entries, HijackThis does not discriminate between legitimate and unwanted items. HijackThis attempts to create backups of the files and registry entries that it fixes, which can be used to restore the system in the event of a mistake. A common use is to post the logfile to a forum where more experienced users can help decipher which entries need to be removed. Automated tools also exist that analyze saved logs and attempt to provide recommendations to the user, or to clean entries automatically. Use of such tools, however, is generally discouraged by those who specialize in manually dealing with HijackThis logs: they consider the tools dangerous for inexperienced users, and neither accurate nor reliable enough to substitute for consulting with a trained human analyst. Later versions of HijackThis include such additional tools as a task manager, a hosts-file editor, and an alternate-data-stream scanner. HijackThis reached end-of-life in 2013 and is no longer developed. Howe
https://en.wikipedia.org/wiki/Motoko%20Kusanagi
Major Motoko Kusanagi, or just "Major", is the main protagonist in Masamune Shirow's Ghost in the Shell manga and anime series. She is a synthetic "full-body prosthesis" augmented-cybernetic human employed as the field commander of Public Security Section 9, a fictional anti-cybercrime law-enforcement division of the Japanese National Public Safety Commission. A strong-willed, physically powerful, and highly intelligent cyberhero, she is well known for her skills in deduction, hacking and military tactics. Conception and creation Motoko Kusanagi's body was designed by the manga author and artist Masamune Shirow to be a mass production model so she would not be conspicuous. The electrical and mechanical systems within her body, however, are special, featuring parts unavailable on the civilian market. Shirow intentionally chose this appearance so Motoko would not be harvested for those parts. Character In the 1995 anime film adaptation, character designer and key animator supervisor Hiroyuki Okiura made her different from her original manga counterpart, stating, "Motoko Kusanagi is a cyborg. Therefore, her body is strong and youthful. However her human mentality is considerably older than she looks. I tried to depict this maturity in her character instead of the original girl created by Masamune Shirow." In nearly all portrayals, Kusanagi is depicted as a self-made woman. She is a fiercely independent and capable leader who has proven herself under fire countless times. Kenji Kamiyama had a difficult time identifying her and could not understand her motives during the first season of the anime series Ghost in the Shell: Stand Alone Complex. Due to this, he created an episode in the second season where he recounted her past. He was then able to describe her as a human who was chosen to gain this superhuman power; she probably believes that she has an obligation to use that ability for the benefit of others. English voice actor and director Mary Elizabeth McGlynn states she loved playing the role of
https://en.wikipedia.org/wiki/Tom%20Pepper
Tom Pepper (born August 24, 1975 in Des Moines, Iowa) is a computer programmer best known for his collaboration with Justin Frankel on the Gnutella peer-to-peer system. He and Frankel co-founded Nullsoft, whose most popular program is Winamp, which was sold to AOL in May 1999. He subsequently worked for AOL developing SHOUTcast, an Internet streaming audio service, with Frankel and Stephen "Tag" Loomis. After leaving AOL in 2004, he worked at RAZZ, Inc. He continues to collaborate with Frankel on independent projects like Ninjam. See also WASTE Friend-to-friend (F2F) File sharing Peer-to-peer (P2P) Gnutella Nullsoft Justin Frankel
https://en.wikipedia.org/wiki/DVD%20region%20code
DVD region codes are a digital rights management technique introduced in 1997. The technique is designed to allow rights holders to control the international distribution of a DVD release, including its content, release date, and price, all according to the appropriate region. This is achieved by way of region-locked DVD players, which will play back only DVDs encoded to their region (plus those without any region code). The American DVD Copy Control Association also requires that DVD player manufacturers incorporate the regional-playback control (RPC) system. However, region-free DVD players, which ignore region coding, are also commercially available, and many DVD players can be modified to be region-free, allowing playback of all discs. DVDs may use one code, multiple codes (multi-region), or all codes (region free). Region codes and countries Any combination of regions can be applied to a single disc. For example, a DVD designated Region 2/4 is suitable for playback in Europe, Latin America, Oceania, and any other Region 2 or Region 4 area. So-called "Region 0" and "ALL" discs are meant to be playable worldwide. The term "Region 0" also describes the DVD players designed or modified to incorporate Regions 1–8, thereby providing compatibility with most discs, regardless of region. This apparent solution was popular in the early days of the DVD format, but studios quickly responded by adjusting discs to refuse to play in such machines by implementing a system known as "Regional Coding Enhancement" (RCE). DVDs sold in the Baltic states use both region 2 and 5 codes, having previously been in region 5 (because of their history as part of the USSR), but EU single market law concerning the free movement of goods caused a switch to region 2. European region 2 DVDs may be sub-coded "D1" to "D4". "D1" are the UK only releases; "D2" and "D3" are not sold in the UK and Ireland; "D4" are distributed throughout Europe. Overseas territories of the United Kingdom and France (both in
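The playback rule can be modeled as a simple membership test; the Python sketch below treats the disc's region flags as a set (on real discs they are stored as a bit mask, and RCE discs add active checks beyond this model):

REGION_NAMES = {
    1: "US/Canada", 2: "Europe/Japan/Middle East", 3: "Southeast Asia",
    4: "Latin America/Oceania", 5: "Russia/Africa/South Asia", 6: "China",
    7: "reserved", 8: "international venues (aircraft, ships)",
}

def playable(disc_regions: set[int], player_region: int) -> bool:
    # An empty set models a "Region 0"/ALL disc with no restriction.
    return not disc_regions or player_region in disc_regions

print(playable({2, 4}, 4))  # True: a Region 2/4 disc in a Region 4 player
print(playable({1}, 2))     # False: Region 1 disc, Region 2 player
print(playable(set(), 6))   # True: region-free disc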
https://en.wikipedia.org/wiki/SUPER-UX
SUPER-UX was a version of the Unix operating system from NEC that was used on its SX series of supercomputers. History The initial version of SUPER-UX was based on UNIX System V version 3.1 with features from BSD 4.3. The version for the NEC SX-9 was based on SVR4.2MP with BSD enhancements. Features SUPER-UX is a 64-bit UNIX operating system. It supports the Supercomputer File System (SFS). Earth Simulator The Earth Simulator uses a custom OS called "ESOS" (Earth Simulator Operating System) based on SUPER-UX. It has many enhanced features custom designed for the Earth Simulator which are not in the regular SUPER-UX OS. See also EWS-UX
https://en.wikipedia.org/wiki/Generation%20of%20Animals
The Generation of Animals (or On the Generation of Animals; Greek: Περὶ ζῴων γενέσεως (Peri Zoion Geneseos); Latin: De Generatione Animalium) is one of the biological works of the Corpus Aristotelicum, the collection of texts traditionally attributed to Aristotle (384–322 BC). The work provides an account of animal reproduction, gestation and heredity. Content Generation of Animals consists of five books, which are themselves split into varying numbers of chapters. Most editions of this work categorise it with Bekker numbers. In general, each book covers a range of related topics; however, there is also a significant amount of overlap in the content of the books. For example, while one of the two principal topics covered in book I is the function of semen (gone, sperma), this account is not finalised until partway through book II. Book I (715a – 731b) Chapter 1 begins with Aristotle claiming to have already addressed the parts of animals, referencing the author's work of the same name. While this, and possibly his other biological works, have addressed three of the four causes pertaining to animals (the final, formal, and material), the efficient cause has yet to be spoken of. He argues that the efficient cause, or "that from which the source of movement comes", can be addressed with an inquiry into the generation of animals. Aristotle then provides a general overview of the processes of reproduction adopted by the various genera, for instance most 'blooded' animals reproduce by coition of a male and female of the same species, but cases vary for 'bloodless' animals. The reproductive organs of males and females are also investigated. Through chapters 2–5 Aristotle successively describes the general reproductive features common to each sex, the differences in reproductive parts among blooded animals, the causes of differences of testes in particular, and why some animals do not have external reproductive organs. The latter provides clear examples of Aristotle's te
https://en.wikipedia.org/wiki/Privilege%20%28computing%29
In computing, privilege is defined as the delegation of authority to perform security-relevant functions on a computer system. A privilege allows a user to perform an action with security consequences. Examples of various privileges include the ability to create a new user, install software, or change kernel functions. Users who have been delegated extra levels of control are called privileged. Users who lack most privileges are defined as unprivileged, regular, or normal users. Theory Privileges can either be automatic, granted, or applied for. An automatic privilege exists when there is no requirement to have permission to perform an action. For example, on systems where people are required to log into a system to use it, logging out will not require a privilege. Systems that do not implement file protection, such as MS-DOS, essentially give unlimited privilege to perform any action on a file. A granted privilege exists as a result of presenting some credential to the privilege granting authority. This is usually accomplished by logging on to a system with a username and password, and if the username and password supplied are correct, the user is granted additional privileges. A privilege is applied for by either an executed program issuing a request for advanced privileges, or by running some program to apply for the additional privileges. An example of a user applying for additional privileges is provided by the sudo command to run a command as superuser (root) user, or by the Kerberos authentication system. Modern processor architectures have multiple CPU modes that allow the OS to run at different privilege levels. Some processors have two levels (such as user and supervisor); i386+ processors have four levels (#0 with the most, #3 with the least privileges). Tasks are tagged with a privilege level. Resources (segments, pages, ports, etc.) and the privileged instructions are tagged with a demanded privilege level. When a task tries to use a re
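The ring-style check described above can be sketched in a few lines of Python; this is a simplification of the i386 rules (which also involve requested privilege levels and conforming code segments), with all names invented for illustration:

# Level 0 is most privileged, level 3 least. A task may use a resource only
# if its own level is numerically at or below the level the resource demands.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    level: int      # current privilege level of the running task

@dataclass
class Resource:
    name: str
    demanded: int   # demanded privilege level

def may_access(task: Task, res: Resource) -> bool:
    return task.level <= res.demanded

kernel = Task("scheduler", 0)
app = Task("editor", 3)
io_port = Resource("io-port", 0)      # a privileged resource
heap_page = Resource("heap-page", 3)  # ordinary user memory

print(may_access(kernel, io_port))    # True
print(may_access(app, io_port))       # False: real hardware raises a fault
print(may_access(app, heap_page))     # True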
https://en.wikipedia.org/wiki/Triple%20bar
The triple bar or tribar, ≡, is a symbol with multiple, context-dependent meanings indicating equivalence of two different things. Its main uses are in mathematics and logic. It has the appearance of an equals sign with a third line. Encoding The triple bar character in Unicode is code point U+2261 ≡ IDENTICAL TO. The closely related code point U+2262 ≢ NOT IDENTICAL TO is the same symbol with a slash through it, indicating the negation of its mathematical meaning. In LaTeX mathematical formulas, the code \equiv produces the triple bar symbol and \not\equiv produces the negated triple bar symbol as output. Uses Mathematics and philosophy In logic, it is used with two different but related meanings. It can refer to the if and only if connective, also called material equivalence. This is a binary operation whose value is true when its two arguments have the same value as each other. Alternatively, in some texts ⇔ is used with this meaning, while ≡ is used for the higher-level metalogical notion of logical equivalence, according to which two formulas are logically equivalent when all models give them the same value. Gottlob Frege used a triple bar for a more philosophical notion of identity, in which two statements (not necessarily in mathematics or formal logic) are identical if they can be freely substituted for each other without change of meaning. In mathematics, the triple bar is sometimes used as a symbol of identity or an equivalence relation (although not the only one; other common choices include ~ and ≈). Particularly, in geometry, it may be used either to show that two figures are congruent or that they are identical. In number theory, it has been used beginning with Carl Friedrich Gauss (who first used it with this meaning in 1801) to mean modular congruence: a ≡ b (mod N) if N divides a − b. In category theory, triple bars may be used to connect objects in a commutative diagram, indicating that they are actually the same object rather than being connected by an arrow of the category. This symbol is als
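The number-theoretic reading is easy to check computationally, as in this short Python sketch:

# a ≡ b (mod N) exactly when N divides a - b, i.e. when a and b leave the
# same remainder on division by N.
def congruent(a: int, b: int, n: int) -> bool:
    return (a - b) % n == 0

print(congruent(17, 5, 12))  # True: 17 ≡ 5 (mod 12), as on a clock face
print(congruent(17, 6, 12))  # False
print(17 % 12 == 5 % 12)     # True: the same check via remainders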
https://en.wikipedia.org/wiki/Genetic%20viability
Genetic viability is the ability of the genes present to allow a cell, organism or population to survive and reproduce. The term is generally used to mean the chance or ability of a population to avoid the problems of inbreeding. Less commonly, genetic viability can also be used in respect to a single cell or on an individual level. Inbreeding depletes heterozygosity of the genome, meaning there is a greater chance of identical alleles at a locus. When these alleles are non-beneficial, homozygosity could cause problems for genetic viability. These problems could include effects on the individual fitness (higher mortality, slower growth, more frequent developmental defects, reduced mating ability, lower fecundity, greater susceptibility to disease, lowered ability to withstand stress, reduced intra- and inter-specific competitive ability) or effects on the entire population fitness (depressed population growth rate, reduced regrowth ability, reduced ability to adapt to environmental change). See Inbreeding depression. When a population of plants or animals loses their genetic viability, their chance of going extinct increases. Necessary conditions To be genetically viable, a population of plants or animals requires a certain amount of genetic diversity and a certain population size. For long-term genetic viability, the population size should consist of enough breeding pairs to maintain genetic diversity. The precise effective population size can be calculated using a minimum viable population analysis. Higher genetic diversity and a larger population size will decrease the negative effects of genetic drift and inbreeding in a population. When adequate measures have been taken, the genetic viability of a population will increase. Causes for decrease The main cause of a decrease in genetic viability is loss of habitat. This loss can occur because of, for example, urbanization or deforestation causing habitat fragmentation. Natural events like earthquakes, floods
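One standard ingredient of such calculations is the effective population size under an unequal breeding sex ratio, Ne = 4·Nm·Nf/(Nm + Nf); the Python sketch below illustrates only this formula, not a full minimum viable population analysis:

# Effective population size with Nm breeding males and Nf breeding females.
def effective_size(n_males: int, n_females: int) -> float:
    return 4 * n_males * n_females / (n_males + n_females)

print(effective_size(50, 50))  # 100.0: an even sex ratio loses nothing
print(effective_size(10, 90))  # 36.0: a skewed ratio sharply lowers Ne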
https://en.wikipedia.org/wiki/Turmite
In computer science, a turmite is a Turing machine which has an orientation in addition to a current state and a "tape" that consists of an infinite two-dimensional grid of cells. The terms ant and vant are also used. Langton's ant is a well-known type of turmite defined on the cells of a square grid. Paterson's worms are a type of turmite defined on the edges of an isometric grid. It has been shown that turmites in general are exactly equivalent in power to one-dimensional Turing machines with an infinite tape, as either can simulate the other. History Langton's ants were invented in 1986 and declared "equivalent to Turing machines". Independently, in 1988, Allen H. Brady considered the idea of two-dimensional Turing machines with an orientation and called them "TurNing machines". Apparently independently of both of these, Greg Turk investigated the same kind of system and wrote to A. K. Dewdney about them. A. K. Dewdney named them "tur-mites" in his "Computer Recreations" column in Scientific American in 1989. Rudy Rucker relates the story as follows: Relative vs. absolute turmites Turmites can be categorised as being either relative or absolute. Relative turmites, alternatively known as "Turning machines", have an internal orientation. Langton's Ant is such an example. Relative turmites are, by definition, isotropic; rotating the turmite does not affect its outcome. Relative turmites are so named because the directions are encoded relative to the current orientation, equivalent to using the words "left" or "backwards". Absolute turmites, by comparison, encode their directions in absolute terms: a particular instruction may direct the turmite to move "North". Absolute turmites are two-dimensional analogues of conventional Turing machines, so are occasionally referred to as simply "Two-dimensional Turing machines". The remainder of this article is concerned with the relative case. Specification The following specification is specific to turmites on a two-
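A minimal simulation of Langton's ant, the best-known relative turmite, fits in a few lines of Python (one of the two mirror-image turning conventions is chosen arbitrarily here):

# On a white cell turn one way, on a black cell the other; flip the cell,
# then step forward. The grid is stored sparsely as the set of black cells.
def langtons_ant(steps: int) -> set[tuple[int, int]]:
    black: set[tuple[int, int]] = set()
    x = y = 0
    dx, dy = 0, -1                 # initial heading
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx       # turn one way on black
            black.remove((x, y))
        else:
            dx, dy = dy, -dx       # turn the other way on white
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

# After roughly 10,000 steps the famous recurrent "highway" pattern emerges.
print(len(langtons_ant(11000)), "black cells")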
https://en.wikipedia.org/wiki/Kiki%20Kaikai
Kiki Kaikai is a shoot 'em up video game developed and published by Taito for arcades in 1986. Set in Feudal Japan, the game has the player assume the role of a Shinto shrine maiden who must use her o-fuda scrolls and gohei wand to defeat renegade spirits and monsters from Japanese mythology. The game is noteworthy for using a traditional fantasy setting in a genre otherwise filled with science fiction motifs. The game received a number of home ports, both as a stand-alone title and as part of compilations. The original arcade game was only ever released in Japan, but a bootleg version called Knight Boy was released outside Japan. Kiki Kaikai was followed by a sequel for the Super NES in 1992 known as Pocky & Rocky outside Japan. The series, known as Kiki Kaikai in Japan and Pocky & Rocky outside Japan, has continued since then and includes several games. Plot The game follows the adventures of "Sayo-chan", a young Shinto shrine maiden living in Feudal Japan. One night, while Sayo-chan is fanning a ceremonial fire, she is visited by the Seven Lucky Gods, who warn her of a great, impending danger. Suddenly, a band of mischievous youkai appear and kidnap the gods, quickly retreating to a faraway mountain range. Sayo-chan, determined to help the gods, sets off on a journey across the countryside, where she confronts a number of strange creatures from Japanese mythology, including obake and yurei. Gameplay Kiki Kaikai is an overhead multi-directional shooter game that requires the player to move in four directions through various levels while attacking harmful enemies as they approach from off screen. As Sayo-chan, the player can attack by either throwing her special o-fuda scrolls in eight separate directions, or by swinging her purification rod directly in front of her. These techniques can be upgraded by finding special paper slips left by defeated enemies that will either enhance their power or improve their range. Sayo can be damaged by coming in contact with an enemy, and can only
https://en.wikipedia.org/wiki/Hilbert%27s%20ninth%20problem
Hilbert's ninth problem, from the list of 23 Hilbert's problems (1900), asked to find the most general reciprocity law for the norm residues of k-th order in a general algebraic number field, where k is a power of a prime. Progress made The problem was partially solved by Emil Artin by establishing the Artin reciprocity law which deals with abelian extensions of algebraic number fields. Together with the work of Teiji Takagi and Helmut Hasse (who established the more general Hasse reciprocity law), this led to the development of class field theory, realizing Hilbert's program in an abstract fashion. Certain explicit formulas for norm residues were later found by Igor Shafarevich (1948; 1949; 1950). The non-abelian generalization, also connected with Hilbert's twelfth problem, is one of the long-standing challenges in number theory and is far from being complete. See also List of unsolved problems in mathematics
https://en.wikipedia.org/wiki/Hilbert%27s%20fourteenth%20problem
In mathematics, Hilbert's fourteenth problem, that is, number 14 of Hilbert's problems proposed in 1900, asks whether certain algebras are finitely generated. The setting is as follows: Assume that k is a field and let K be a subfield of the field of rational functions in n variables, k(x1, ..., xn) over k. Consider now the k-algebra R defined as the intersection R = K ∩ k[x1, ..., xn]. Hilbert conjectured that all such algebras are finitely generated over k. Some results were obtained confirming Hilbert's conjecture in special cases and for certain classes of rings (in particular the conjecture was proved unconditionally for n = 1 and n = 2 by Zariski in 1954). Then in 1959 Masayoshi Nagata found a counterexample to Hilbert's conjecture. The counterexample of Nagata is a suitably constructed ring of invariants for the action of a linear algebraic group. History The problem originally arose in algebraic invariant theory. Here the ring R is given as a (suitably defined) ring of polynomial invariants of a linear algebraic group over a field k acting algebraically on a polynomial ring k[x1, ..., xn] (or more generally, on a finitely generated algebra defined over a field). In this situation the field K is the field of rational functions (quotients of polynomials) in the variables xi which are invariant under the given action of the algebraic group, the ring R is the ring of polynomials which are invariant under the action. A classical example in the nineteenth century was the extensive study (in particular by Cayley, Sylvester, Clebsch, Paul Gordan and also Hilbert) of invariants of binary forms in two variables with the natural action of the special linear group SL2(k) on it. Hilbert himself proved the finite generation of invariant rings in the case of the field of complex numbers for some classical semi-simple Lie groups (in particular the general linear group over the complex numbers) and specific linear actions on polynomial rings, i.e. actions coming from finite-dimensional repres
https://en.wikipedia.org/wiki/Intrinsic%20function
In computer software, in compiler theory, an intrinsic function (or built-in function) is a function (subroutine) available for use in a given programming language whose implementation is handled specially by the compiler. Typically, it may substitute a sequence of automatically generated instructions for the original function call, similar to an inline function. Unlike an inline function, the compiler has an intimate knowledge of an intrinsic function and can thus better integrate and optimize it for a given situation. Compilers that implement intrinsic functions generally enable them only when a program requests optimization, otherwise falling back to a default implementation provided by the language runtime system (environment). Intrinsic functions are often used to explicitly implement vectorization and parallelization in languages which do not address such constructs. Some application programming interfaces (API), for example, AltiVec and OpenMP, use intrinsic functions to declare, respectively, vectorizable and multiprocessing-aware operations during compiling. The compiler parses the intrinsic functions and converts them into vector math or multiprocessing object code appropriate for the target platform. Some intrinsics are used to provide additional constraints to the optimizer, such as values a variable cannot assume. C and C++ C and C++ compilers from Microsoft, Intel, and the GNU Compiler Collection (GCC) implement intrinsics that map directly to the x86 single instruction, multiple data (SIMD) instructions (MMX, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX512, FMA, ...). The Microsoft Visual C++ compiler of Microsoft Visual Studio does not support inline assembly for x86-64. To compensate for this, new intrinsics have been added that map to standard assembly instructions that are not normally accessible through C/C++, e.g., bit scan. Some C and C++ compilers provide non-portable platform-specific intrinsics. Other intr
https://en.wikipedia.org/wiki/Spring%20%28operating%20system%29
Spring is a discontinued project in building an experimental microkernel-based object-oriented operating system (OS) developed at Sun Microsystems in the early 1990s. Using technology substantially similar to concepts developed in the Mach kernel, Spring concentrated on providing a richer programming environment supporting multiple inheritance and other features. Spring was also more cleanly separated from the operating systems it would host, divorcing it from its Unix roots and even allowing several OSes to be run at the same time. Development faded out in the mid-1990s, but several ideas and some code from the project were later re-used in the Java programming language libraries and the Solaris operating system. History Spring started in a roundabout fashion in 1987, as part of Sun and AT&T's collaboration to create a merged UNIX. Both companies decided it was also a good opportunity to "reimplement UNIX in an object-oriented fashion". However, after only a few meetings, this part of the project died. Sun decided to keep their team together and instead explore a system on the leading edge. Along with combining Unix flavours, the new system would also be able to run almost any other system, and in a distributed fashion. The system was first running in a "complete" fashion in 1993, and produced a series of research papers. In 1994, a "research quality" release was made under a non-commercial license, but it is unclear how widely this was used. Described as a "clean slate" intended to help Sun improve its existing Unix products, the software was made available at a cost of $75, with Sun targeting universities and computer scientists. Commercial research institutions could obtain the software at a cost of $750. The team broke up and moved to other projects within Sun, using some of the Spring concepts on a variety of other projects. Background The Spring project began soon after the release of Mach 3. In earlier versions Mach was simply a modified version of existi
https://en.wikipedia.org/wiki/Authentication%20and%20Key%20Agreement
Authentication and Key Agreement (AKA) is a security protocol used in 3G networks. AKA is also used as a one-time password generation mechanism for digest access authentication. AKA is a challenge–response based mechanism that uses symmetric cryptography. AKA in CDMA AKA, also known as 3G Authentication or Enhanced Subscriber Authorization (ESA), is the basis for the 3G authentication mechanism. Defined as a successor to CAVE-based authentication, AKA provides procedures for mutual authentication of the Mobile Station (MS) and serving system. The successful execution of AKA results in the establishment of a security association (i.e., set of security data) between the MS and serving system that enables a set of security services to be provided. Major advantages of AKA over CAVE-based authentication include: Larger authentication keys (128-bit) Stronger hash function (SHA-1) Support for mutual authentication Support for signaling message data integrity Support for signaling information encryption Support for user data encryption Protection from rogue MS when dealing with R-UIM AKA is not yet implemented in CDMA2000 networks, although it is expected to be used for IMS. To ensure interoperability with current devices and partner networks, support for AKA in CDMA networks and handsets will likely be in addition to CAVE-based authentication. Air interface support for AKA is included in all releases following CDMA2000 Rev C. TIA-41 MAP support for AKA was defined in TIA-945 (3GPP2 X.S0006), which has been integrated into TIA-41 (3GPP2 X.S0004). For information on AKA in roaming, see CDG Reference Document #138. AKA in UMTS AKA is a mechanism that performs authentication and session key distribution in Universal Mobile Telecommunications System (UMTS) networks. AKA is a challenge–response based mechanism that uses symmetric cryptography. AKA is typically run in a UMTS IP Multimedia Services Identity Module (ISIM), which is an application on a
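The challenge–response shape of the protocol can be sketched in Python with a generic keyed MAC; real AKA derives several quantities (RES, CK, IK, AUTN) with standardized function sets such as MILENAGE and also protects against replay with sequence numbers, none of which is modeled here:

import hmac, hashlib, os

SHARED_KEY = os.urandom(16)    # long-term key K, held by both SIM and network

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Network side: issue a random challenge and precompute the expected response.
rand = os.urandom(16)
expected = respond(SHARED_KEY, rand)

# Subscriber side: the (I)SIM computes its response from the same key.
res = respond(SHARED_KEY, rand)

print(hmac.compare_digest(res, expected))  # True: challenge answered correctly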
https://en.wikipedia.org/wiki/Dot%20crawl
Dot crawl (also known as chroma crawl or cross-luma) is a visual defect of color analog video standards when signals are transmitted as composite video, as in terrestrial broadcast television. It consists of moving checkerboard patterns which appear along horizontal color transitions (vertical edges). It results from intermodulation or crosstalk between chrominance and luminance components of the signal, which are imperfectly multiplexed in the frequency domain. The term is more associated with the NTSC analog color TV system, but is also present in PAL (see Chroma dots). Although the interference patterns are slightly different depending on the system used, they have the same cause and the same general principles apply. A related effect, color bleed or rainbow artifacts, is discussed below. Description Intermodulation or crosstalk problems take two forms: chrominance interference in luminance (chrominance being interpreted as luminance), and luminance interference in chrominance. Dot crawl is most visible when the chrominance is transmitted with a high bandwidth, so that its spectrum reaches well into the band of frequencies used by the luminance signal in the composite video signal. This causes high-frequency chrominance detail at color transitions to be interpreted as luminance detail. Some (mostly older) video-game consoles and home computers use nonstandard colorburst phases, thereby producing dot crawl that appears quite different from that seen in broadcast NTSC or PAL. The effect is more noticeable in these cases due to the saturated colors and small pixel scale details normally present on computer graphics. The opposite problem, luminance interference in chroma, is the appearance of a colored noise in image areas with high levels of detail. This results from high-frequency luminance detail crossing into the frequencies used by the chrominance channel and producing false coloration, known as color bleed or rainbow artifacts. Bleed can also make narr
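A tiny numeric illustration of why the crosstalk happens: a sharp luminance edge has spectral energy at the color subcarrier frequency, so a decoder separating luma and chroma by frequency alone will hand some edge detail to the color decoder. The Python sketch below uses the NTSC subcarrier frequency; the sample rate is an arbitrary demo choice:

import numpy as np

fs = 13_500_000        # demo sample rate, Hz
f_sc = 3_579_545       # NTSC color subcarrier, Hz

luma = np.zeros(2048)
luma[1024:] = 1.0      # one sharp vertical edge
spectrum = np.abs(np.fft.rfft(luma))
freqs = np.fft.rfftfreq(luma.size, 1 / fs)

bin_sc = np.argmin(np.abs(freqs - f_sc))
print(f"edge energy near {freqs[bin_sc] / 1e6:.2f} MHz: {spectrum[bin_sc]:.3f}")
# The value is nonzero: that luminance energy falls inside the chroma band,
# where a frequency-only separator will decode it as spurious color.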
https://en.wikipedia.org/wiki/Marching%20cubes
Marching cubes is a computer graphics algorithm, published in the 1987 SIGGRAPH proceedings by Lorensen and Cline, for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (the elements of which are sometimes called voxels). The applications of this algorithm are mainly concerned with medical visualizations such as CT and MRI scan data images, and special effects or 3-D modelling with what is usually called metaballs or other metasurfaces. The marching cubes algorithm is meant to be used for 3-D; the 2-D version of this algorithm is called the marching squares algorithm. History The algorithm was developed by William E. Lorensen (1946-2019) and Harvey E. Cline as a result of their research for General Electric. At General Electric they worked on a way to efficiently visualize data from CT and MRI devices. The premise of the algorithm is to divide the input volume into a discrete set of cubes. By assuming linear reconstruction filtering, each cube, which contains a piece of a given isosurface, can easily be identified because the sample values at the cube vertices must span the target isosurface value. For each cube containing a section of the isosurface, a triangular mesh that approximates the behavior of the trilinear interpolant in the interior cube is generated. The first published version of the algorithm exploited rotational and reflective symmetry and also sign changes to build the table with 15 unique cases. However, due to the existence of ambiguities in the trilinear interpolant behavior in the cube faces and interior, the meshes extracted by the Marching Cubes presented discontinuities and topological issues. Given a cube of the grid, a face ambiguity occurs when its face vertices have alternating signs. That is, the vertices of one diagonal on this face are positive and the vertices on the other are negative. Observe that in this case, the signs of the face vertices are insufficient to determine the correct way t
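Since the text notes that marching squares is the 2-D version, here is a compact Python sketch of that simpler case; the two ambiguous saddle configurations are resolved arbitrarily, exactly the kind of ambiguity the cube version also faces:

import math

# Cell edges: 0=top, 1=right, 2=bottom, 3=left. For each of the 16 corner
# sign patterns (bit i set when corner i is above the isovalue; corners are
# 0=top-left, 1=top-right, 2=bottom-right, 3=bottom-left), list the edges
# joined by contour segments.
SEGMENTS = {
    0: [], 1: [(0, 3)], 2: [(0, 1)], 3: [(3, 1)], 4: [(1, 2)],
    5: [(0, 3), (1, 2)], 6: [(0, 2)], 7: [(3, 2)], 8: [(3, 2)],
    9: [(0, 2)], 10: [(0, 1), (3, 2)], 11: [(1, 2)], 12: [(3, 1)],
    13: [(0, 1)], 14: [(0, 3)], 15: [],
}
EDGE_CORNERS = {0: (0, 1), 1: (1, 2), 2: (2, 3), 3: (3, 0)}

def interp(p, q, fp, fq, iso):
    t = (iso - fp) / (fq - fp)   # linear interpolation along a crossed edge
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def marching_squares(field, iso):
    segs = []
    for y in range(len(field) - 1):
        for x in range(len(field[0]) - 1):
            pos = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
            val = [field[y][x], field[y][x + 1],
                   field[y + 1][x + 1], field[y + 1][x]]
            idx = sum(1 << i for i, v in enumerate(val) if v > iso)
            for e1, e2 in SEGMENTS[idx]:
                seg = []
                for e in (e1, e2):
                    i1, i2 = EDGE_CORNERS[e]
                    seg.append(interp(pos[i1], pos[i2], val[i1], val[i2], iso))
                segs.append(tuple(seg))
    return segs

# Contour a sampled circle of radius 2 (signed distance field) at iso = 0.
field = [[math.hypot(x - 4, y - 4) - 2 for x in range(9)] for y in range(9)]
print(len(marching_squares(field, 0)), "contour segments")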
https://en.wikipedia.org/wiki/Optical%20burst%20switching
Optical burst switching (OBS) is an optical networking technique that allows dynamic sub-wavelength switching of data. OBS is viewed as a compromise between the yet unfeasible full optical packet switching (OPS) and the mostly static optical circuit switching (OCS). It differs from these paradigms because OBS control information is sent separately in a reserved optical channel and in advance of the data payload. These control signals can then be processed electronically to allow the timely setup of an optical light path to transport the soon-to-arrive payload. This is known as delayed reservation. Purpose The purpose of optical burst switching (OBS) is to dynamically provision sub-wavelength granularity by optimally combining electronics and optics. OBS considers sets of packets with similar properties called bursts. Therefore, OBS granularity is finer than optical circuit switching (OCS). OBS provides more bandwidth flexibility than wavelength routing but requires faster switching and control technology. OBS can be used for realizing dynamic end-to-end all optical communications. Method In OBS, packets are aggregated into data bursts at the edge of the network to form the data payload. Various assembling schemes based on time and/or size exist (see burst switching). Edge router architectures have been proposed. OBS features the separation between the control plane and the data plane. A control signal (also termed burst header or control packet) is associated to each data burst. The control signal is transmitted in optical form in a separate wavelength termed the control channel, but signaled out of band and processed electronically at each OBS router, whereas the data burst is transmitted in all optical form from one end to the other end of the network. The data burst can cut through intermediate nodes, and data buffers such as fiber delay lines may be used. In OBS data is transmitted with full transparency to the intermediate nodes in the network. After
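The edge-node aggregation step can be sketched in Python as follows; the class name and the size/time thresholds are invented for illustration, as real assembly parameters are deployment-specific:

import time

class BurstAssembler:
    # Flush a burst when it reaches max_bytes, or when max_wait_s has elapsed
    # since the first packet was queued (hybrid size/time scheme).
    def __init__(self, max_bytes=9000, max_wait_s=0.005):
        self.max_bytes, self.max_wait_s = max_bytes, max_wait_s
        self.packets, self.size, self.first_arrival = [], 0, None

    def add(self, packet: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.packets.append(packet)
        self.size += len(packet)
        timed_out = time.monotonic() - self.first_arrival >= self.max_wait_s
        if self.size >= self.max_bytes or timed_out:
            burst = b"".join(self.packets)
            self.packets, self.size, self.first_arrival = [], 0, None
            return burst   # ready: send the control packet, then the burst
        return None

asm = BurstAssembler(max_bytes=3000)
for _ in range(4):
    burst = asm.add(b"x" * 1000)   # 1000-byte packets arriving at the edge
    if burst:
        print("assembled a burst of", len(burst), "bytes")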
https://en.wikipedia.org/wiki/Chip%20log
A chip log, also called common log, ship log, or just log, is a navigation tool mariners use to estimate the speed of a vessel through water. The word knot, to mean nautical mile per hour, derives from this measurement method. History All nautical instruments that measure the speed of a ship through water are known as logs. This nomenclature dates back to the days of sail, when sailors tossed a log attached to a rope knotted at regular intervals off the stern of a ship. Sailors counted the number of knots that passed through their hands in a given time to determine the ship's speed. Today, sailors and aircraft pilots still express speed in knots. Construction A chip log consists of a wooden board attached to a line (the log-line). The log-line has a number of knots at uniform intervals. The log-line is wound on a reel so the user can easily pay it out. Over time, log construction standardized. The shape is a quarter circle, or quadrant with a radius of or , and thick. The log-line attaches to the board with a bridle of three lines that connect to the vertex and to the two ends of the quadrant's arc. To ensure the log submerges and orients correctly in the water, the bottom of the log is weighted with lead. This provides more resistance in the water, and a more accurate and repeatable reading. The bridle attaches in such a way that a strong tug on the log-line makes one or two of the bridle's lines release, enabling a sailor to retrieve the log. Use A navigator who needed to know the speed of the vessel had a sailor drop the log over the ship's stern. The log acted as a drogue, remaining roughly in place while the vessel moved away. The sailor let the log-line run out for a fixed time while counting the knots that passed over. The length of log-line passing (the number of knots) determined the reading. Origins The first known device that measured speed is often claimed to be the Dutchman's log. This invention is attributed to the Portuguese Bartolomeu
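The arithmetic behind the knotted line is simple: if the knots are spaced (t/3600) of a nautical mile apart and the glass runs t seconds, each knot counted equals one nautical mile per hour. A quick Python check with the traditional 28-second glass (historical spacings varied slightly):

NM_IN_FEET = 6076.12     # one nautical mile in feet
GLASS_SECONDS = 28

spacing_ft = NM_IN_FEET * GLASS_SECONDS / 3600
print(f"knot spacing: {spacing_ft:.2f} ft")   # about 47.25 ft

def speed_knots(line_ft: float, seconds: float = GLASS_SECONDS) -> float:
    # Speed from line paid out during one glass, in nautical miles per hour.
    return (line_ft / NM_IN_FEET) / seconds * 3600

print(speed_knots(6 * spacing_ft))   # 6 knots counted -> 6.0 knots of speed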
https://en.wikipedia.org/wiki/ThinkCentre
The ThinkCentre is a line of business-oriented desktop computers designed, developed and marketed by Lenovo, and formerly by IBM from 2003 to 2005. ThinkCentre computers typically include mid-range to high-end processors, options for discrete graphics cards, and multi-monitor support. History Launch The ThinkCentre line of desktop computers was introduced by IBM in 2003. The first three models in this line were the S50, the M50, and A50p. All three desktops were equipped with Intel Pentium 4 processors. The chassis was made of steel and designed for easy component access without the use of tools. The hard disk was fixed in place by a 'caddy' without the use of screws. The caddy had rubber bumpers to reduce vibration and operational noise. Additional updates to the desktops included greater use of ThinkVantage technologies. All desktop models were made available with ImageUltra. The three desktop models also included an 'Access IBM' button, allowing access to onboard resources, diagnostic tools, automated software, and links to online updates and services. Select models featured IBM's Embedded Security Subsystem, with an integrated security chip and IBM Client Security Software. Acquisition by Lenovo In 2005, after completing its acquisition of IBM's personal computing business, which led to the IBM/Lenovo partnership, Lenovo announced the ThinkCentre E Series desktops, designed specifically for small businesses. The ThinkCentre E50 was made available in tower and small form factor, with a silver and black design. In 2005, Technology Business Research (TBR) observed an increase in the customer satisfaction rate for ThinkCentre desktops. According to TBR's "Corporate IT Buying Behavior and Customer Satisfaction Study" published in the second quarter of 2005, Lenovo was the only one of four surveyed companies that displayed a substantial increase in ratings. In May 2005, the ThinkCentre M52 and A52 desktops were announced by Lenovo. These desktops marked the
https://en.wikipedia.org/wiki/DVB-H
DVB-H (digital video broadcasting - handheld) is one of three prevalent mobile TV formats. It is a technical specification for bringing broadcast services to mobile handsets. DVB-H was formally adopted as ETSI standard EN 302 304 in November 2004. The DVB-H specification (EN 302 304) can be downloaded from the official DVB-H website. In March 2008, DVB-H was officially endorsed by the European Union as the "preferred technology for terrestrial mobile broadcasting". The major competitors of this technology are Qualcomm's MediaFLO system, the 3G cellular system based MBMS mobile-TV standard, and the ATSC-M/H format in the U.S. DVB-SH (Satellite to Handhelds), available now, and DVB-NGH (Next Generation Handheld), planned for the future, are possible enhancements to DVB-H, providing improved spectral efficiency and better modulation flexibility. DVB-H has been a commercial failure, and the service is no longer on-air. Ukraine was the last country with a nationwide broadcast in DVB-H, which began transitioning to DVB-T2 during 2019. Technical explanation DVB-H technology is a superset of the successful DVB-T (Digital Video Broadcasting - Terrestrial) system for digital terrestrial television, with additional features to meet the specific requirements of handheld, battery-powered receivers. In 2002 four main requirements of the DVB-H system were agreed: broadcast services for portable and mobile usage with 'acceptable quality'; a typical user environment, and so geographical coverage, as mobile radio; access to service while moving in a vehicle at high speed (as well as imperceptible handover when moving from one cell to another); and as much compatibility as possible with existing digital terrestrial television (DVB-T), to allow sharing of network and transmission equipment. DVB-H can offer a downstream channel at high data rates which can be used as standalone or as an enhancement of mobile telecommunication networks which many typical handheld terminals are able to access anyway. Time slicing techn
https://en.wikipedia.org/wiki/Center%20of%20population
In demographics, the center of population (or population center) of a region is a geographical point that describes a centerpoint of the region's population. There are several ways of defining such a "center point", leading to different geographical locations; these are often confused. Definitions Three commonly used (but different) center points are: the mean center, also known as the centroid or center of gravity; the median center, which is the intersection of the median longitude and median latitude; the geometric median, also known as Weber point, Fermat–Weber point, or point of minimum aggregate travel. A further complication is caused by the curved shape of the Earth. Different center points are obtained depending on whether the center is computed in three-dimensional space, or restricted to the curved surface, or computed using a flat map projection. Mean center The mean center, or centroid, is the point on which a rigid, weightless map would balance perfectly, if the population members are represented as points of equal mass. Mathematically, the centroid is the point to which the population has the smallest possible sum of squared distances. It is easily found by taking the arithmetic mean of each coordinate. If defined in the three-dimensional space, the centroid of points on the Earth's surface is actually inside the Earth. This point could then be projected back to the surface. Alternatively, one could define the centroid directly on a flat map projection; this is, for example, the definition that the US Census Bureau uses. Contrary to a common misconception, the centroid does not minimize the average distance to the population. That property belongs to the geometric median. Median center The median center is the intersection of two perpendicular lines, each of which divides the population into two equal halves. Typically these two lines are chosen to be a parallel (a line of latitude) and a meridian (a line of longitude). In that case, th
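The three definitions give different answers even for tiny examples, as the Python sketch below shows on flat-map coordinates (real population centers must also account for the Earth's curvature):

import math
import statistics

points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (10.0, 8.0)]

# Mean center: arithmetic mean per coordinate (minimizes summed squared distance).
mean_center = (statistics.mean(p[0] for p in points),
               statistics.mean(p[1] for p in points))

# Median center: median per coordinate (median "longitude" and "latitude").
median_center = (statistics.median(p[0] for p in points),
                 statistics.median(p[1] for p in points))

# Geometric median (minimizes summed distance): Weiszfeld's iteration.
def geometric_median(pts, iters=200):
    x, y = mean_center
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in pts:
            d = math.hypot(px - x, py - y) or 1e-12  # guard exact hits
            wsum += 1 / d
            wx += px / d
            wy += py / d
        x, y = wx / wsum, wy / wsum
    return x, y

print("mean center:     ", mean_center)        # pulled toward the outlier
print("median center:   ", median_center)
print("geometric median:", geometric_median(points))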
https://en.wikipedia.org/wiki/Nyquist%20stability%20criterion
In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer Felix Strecker at Siemens in 1930 and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932, is a graphical technique for determining the stability of a dynamical system. Because it only looks at the Nyquist plot of the open-loop system, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes. The Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator. Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems. Although Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool. Nyquist plot A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist p
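As a concrete illustration of the plot (not of the full criterion, which also requires counting encirclements of the critical point −1), the sketch below traces G(jω) for an arbitrary stable open-loop transfer function chosen purely for demonstration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Example open-loop transfer function G(s) = k / ((s + 1)(s + 2)),
# an arbitrary second-order system used only for illustration.
def G(s, k=5.0):
    return k / ((s + 1.0) * (s + 2.0))

# Evaluate along the imaginary axis; mirroring gives the negative frequencies.
omega = np.logspace(-2, 3, 2000)
resp = G(1j * omega)

plt.plot(resp.real, resp.imag, label="omega > 0")
plt.plot(resp.real, -resp.imag, linestyle="--", label="omega < 0")
plt.plot(-1, 0, "rx", label="critical point -1")  # encirclements of this point decide stability
plt.xlabel("Re G(j omega)")
plt.ylabel("Im G(j omega)")
plt.legend()
plt.show()
```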
https://en.wikipedia.org/wiki/Dynamic%20web%20page
A dynamic web page is a web page constructed at runtime (during software execution), as opposed to a static web page, delivered as it is stored. A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing. A client-side dynamic web page processes the web page using JavaScript running in the browser as it loads. JavaScript can interact with the page via the Document Object Model (DOM), to query page state and modify it. Even though a web page can be dynamic on the client-side, it can still be hosted on a static hosting service such as GitHub Pages or Amazon S3 as long as no server-side code is included. A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using AJAX technologies will neither create a page to go back to, nor truncate the web browsing history forward of the displayed page. Using AJAX, the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The AJAX engine sits in the browser and requests parts of the page's DOM from an application server. A particular application server could offer a standardized REST style interface to offer services to the web application. DHTML is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. Client-side-scripting, server-side scripting, or a combination of these
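For a feel of the server-side case, here is a minimal sketch using only Python's standard library; the page body is assembled at request time, so each request can produce different content. The port number and markup are illustrative only, not a production setup.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page body is constructed at runtime, so each request
        # can yield different content -- here, the current time.
        body = f"<html><body><p>Generated at {datetime.now():%H:%M:%S}</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```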
https://en.wikipedia.org/wiki/Abuse%20of%20notation
In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not entirely formally correct, but which might help simplify the exposition or suggest the correct intuition (while possibly minimizing errors and confusion at the same time). However, since the concept of formal/syntactical correctness depends on both time and context, certain notations in mathematics that are flagged as abuse in one context could be formally correct in one or more other contexts. Time-dependent abuses of notation may occur when novel notations are introduced to a theory some time before the theory is first formalized; these may be formally corrected by solidifying and/or otherwise improving the theory. Abuse of notation should be contrasted with misuse of notation, which does not have the presentational benefits of the former and should be avoided (such as the misuse of constants of integration). A related concept is abuse of language or abuse of terminology, where a term (rather than a notation) is misused. Abuse of language is an almost synonymous expression for abuses that are non-notational by nature. For example, while the word representation properly designates a group homomorphism from a group G to GL(V), where V is a vector space, it is common to call V "a representation of G". Another common abuse of language consists in identifying two mathematical objects that are different, but canonically isomorphic. Other examples include identifying a constant function with its value, identifying a group with a binary operation with the name of its underlying set, or identifying the set ℝ^3 of triples of real numbers with the Euclidean space of dimension three equipped with a Cartesian coordinate system. Examples Structured mathematical objects Many mathematical objects consist of a set, often called the underlying set, equipped with some additional structure, such as a mathematical operation or a topology. It is a common abuse of notation to use the same notation for the underlying
https://en.wikipedia.org/wiki/Base%20address
In computing, a base address is an address serving as a reference point ("base") for other addresses. Related addresses can be accessed using an addressing scheme. Under the relative addressing scheme, to obtain an absolute address, the relevant base address is taken and an offset (aka displacement) is added to it. Under this type of scheme, the base address is the lowest numbered address within a prescribed range, to facilitate adding related positive-valued offsets. In IBM System/360 architecture, the base address is a 24-bit value in a general register (extended in steps to 64 bits in z/Architecture), and the offset is a 12-bit value in the instruction (extended to 20 bits in z/Architecture). See also Index register Rebasing Computer memory
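The base-plus-offset arithmetic can be made concrete with a small sketch; the widths below loosely mirror the System/360 layout described above, and the names are illustrative.

```python
# Memory modelled as a flat array of byte values.
memory = [0] * 4096

BASE = 0x100          # base address held in a general register
OFFSET_BITS = 12      # e.g. System/360 uses a 12-bit displacement field

def effective_address(base, offset):
    """Absolute address = base + offset; the offset must fit its field."""
    if not 0 <= offset < (1 << OFFSET_BITS):
        raise ValueError("offset does not fit in the displacement field")
    return base + offset

memory[effective_address(BASE, 8)] = 42   # store relative to the base
print(memory[BASE + 8])                   # -> 42
```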
https://en.wikipedia.org/wiki/Composite%20monitor
A composite monitor or composite video monitor is any analog video display that receives input in the form of an analog composite video signal to a defined specification. A composite video signal encodes all information on a single conductor; a composite cable has a single live conductor plus earth. Other equipment with display functionality includes monitors with more advanced interfaces and connectors giving a better picture, including analog VGA, and digital DVI, HDMI, and DisplayPort; and television (TV) receivers which are self-contained, receiving and displaying video RF broadcasts received with an internal tuner. Video monitors are used for displaying computer output, closed-circuit television (e.g. security cameras) and other applications requiring a two-dimensional monochrome or colour image. Inputs Composite monitors usually use RCA jacks or BNC connectors for video input. Earlier equipment (1970s) often used UHF connectors. Typically simple composite monitors give a picture inferior to other interfaces. In principle a monitor can have one or several of multiple types of input, including composite—in addition to composite monitors as such, many monitors accept composite input among other standards. In practice computer monitors ceased to support composite input as other interfaces became predominant. A composite monitor must have a two-dimensional approximately flat display device with circuitry to accept a composite signal with picture and synchronisation information, process it into monochrome chrominance and luminance, or the red, green, and blue of RGB, plus synchronisation pulses, and display it on a screen, which was predominantly a CRT until the 21st century, and then a thin panel using LCD or other technology. A critical factor in the quality of this display is the type of encoding used in the TV camera to combine the signal together and the decoding used in the monitor to separate the signals back to RGB for display. Composite monitors can b
https://en.wikipedia.org/wiki/Homogeneous%20polynomial
In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree. For example, x^5 + 2x^3y^2 + 9xy^4 is a homogeneous polynomial of degree 5, in two variables; the sum of the exponents in each term is always 5. The polynomial x^3 + 3x^2y + z^7 is not homogeneous, because the sum of exponents does not match from term to term. The function defined by a homogeneous polynomial is always a homogeneous function. An algebraic form, or simply form, is a function defined by a homogeneous polynomial. A binary form is a form in two variables. A form is also a function defined on a vector space, which may be expressed as a homogeneous function of the coordinates over any basis. A polynomial of degree 0 is always homogeneous; it is simply an element of the field or ring of the coefficients, usually called a constant or a scalar. A form of degree 1 is a linear form. A form of degree 2 is a quadratic form. In geometry, the Euclidean distance is the square root of a quadratic form. Homogeneous polynomials are ubiquitous in mathematics and physics. They play a fundamental role in algebraic geometry, as a projective algebraic variety is defined as the set of the common zeros of a set of homogeneous polynomials. Properties A homogeneous polynomial defines a homogeneous function. This means that, if a multivariate polynomial P is homogeneous of degree d, then P(λx_1, ..., λx_n) = λ^d P(x_1, ..., x_n) for every λ in any field containing the coefficients of P. Conversely, if the above relation is true for infinitely many λ then the polynomial is homogeneous of degree d. In particular, if P is homogeneous then P(x) = 0 implies P(λx) = 0 for every λ. This property is fundamental in the definition of a projective variety. Any nonzero polynomial may be decomposed, in a unique way, as a sum of homogeneous polynomials of different degrees, which are called the homogeneous components of the polynomial. Given a polynomial ring over a field (or, more generally, a ring) K, the homogeneous polynomials of degree d
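The defining identity P(λx_1, ..., λx_n) = λ^d P(x_1, ..., x_n) is easy to spot-check numerically; the sketch below uses the two example polynomials from above, with arbitrarily chosen test values.

```python
def P(x, y):
    """The homogeneous degree-5 example: x^5 + 2*x^3*y^2 + 9*x*y^4."""
    return x**5 + 2 * x**3 * y**2 + 9 * x * y**4

def Q(x, y, z):
    """A non-homogeneous polynomial: its terms have degrees 3, 3 and 7."""
    return x**3 + 3 * x**2 * y + z**7

lam, x, y, z, d = 1.7, 0.3, -1.2, 0.8, 5
# Homogeneity holds for P (up to floating-point rounding)...
print(abs(P(lam * x, lam * y) - lam**d * P(x, y)) < 1e-9)            # True
# ...but fails for Q, whichever single degree we try.
print(abs(Q(lam * x, lam * y, lam * z) - lam**3 * Q(x, y, z)) < 1e-9)  # False
```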
https://en.wikipedia.org/wiki/Networking%20hardware
Networking hardware, also known as network equipment or computer networking devices, comprises the electronic devices that are required for communication and interaction between devices on a computer network. Specifically, these devices mediate data transmission in a computer network. Units which are the last receiver of data, or which generate data, are called hosts, end systems or data terminal equipment. Range Networking devices include a broad range of equipment which can be classified as core network components which interconnect other network components, hybrid components which can be found in the core or border of a network, and hardware or software components which typically sit on the connection point of different networks. The most common kind of networking hardware today is a copper-based Ethernet adapter, which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices. Other networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery. Taking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints. Specific devices Network hardware can be classified by its location and role in the network. Core Core network components interconnect other network components. Gateway: an interface providing compatibility between networks by converting transmission speeds, protocols, codes, or security measures. Router: a networking device that forwards data packets between computer networks. Routers perform the "traffic directing" fun
https://en.wikipedia.org/wiki/Z-order%20curve
In mathematical analysis and computer science, the Z-order, Lebesgue curve, Morton space-filling curve, Morton order or Morton code is a function which maps multidimensional data to one dimension while preserving locality of the data points. It is named in France after Henri Lebesgue, who studied it in 1904, and named in the United States after Guy Macdonald Morton, who first applied the order to file sequencing in 1966. The z-value of a point in multidimensions is simply calculated by interleaving the binary representations of its coordinate values. Once the data are sorted into this ordering, any one-dimensional data structure can be used, such as simple one dimensional arrays, binary search trees, B-trees, skip lists or (with low significant bits truncated) hash tables. The resulting ordering can equivalently be described as the order one would get from a depth-first traversal of a quadtree or octree. Coordinate values Consider the Z-values for the two-dimensional case with integer coordinates 0 ≤ x ≤ 7, 0 ≤ y ≤ 7. Interleaving the binary coordinate values (starting to the right with the x-bit and alternating to the left with the y-bit) yields the binary z-values; connecting the z-values in their numerical order produces the recursively Z-shaped curve. Two-dimensional Z-values are also known as quadkey values. The Z-values of the x coordinates are described as binary numbers from the Moser–de Bruijn sequence, having nonzero bits only in their even positions: x[] = {0b000000, 0b000001, 0b000100, 0b000101, 0b010000, 0b010001, 0b010100, 0b010101} The sum and difference of two x values are calculated by using bitwise operations: x[i+j] = ((x[i] | 0b10101010) + x[j]) & 0b01010101 x[i-j] = ((x[i] & 0b01010101) - x[j]) & 0b01010101 if i >= j This property can be used to offset a Z-value, for example in two dimensions the coordinates to the top (decreasing y), bottom (increasi
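The bit interleaving itself is a few lines in any language. Below is a simple bit-at-a-time sketch; production implementations usually use precomputed tables, magic-number bit tricks, or the BMI2 PDEP/PEXT instructions instead.

```python
def z_encode(x, y, bits=8):
    """Interleave the bits of x and y into a Morton code.

    Bit i of x lands at even position 2*i and bit i of y at odd
    position 2*i + 1, matching the convention described above (so
    z_encode(x, 0) reproduces the Moser-de Bruijn values).
    """
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def z_decode(z, bits=8):
    """Inverse of z_encode: de-interleave the even and odd bits."""
    x = y = 0
    for i in range(bits):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

# Sorting points by their z-value tends to keep nearby points together.
points = [(3, 5), (0, 0), (7, 7), (2, 1)]
print(sorted(points, key=lambda p: z_encode(*p)))  # [(0, 0), (2, 1), (3, 5), (7, 7)]
```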
https://en.wikipedia.org/wiki/Power-on%20self-test
A power-on self-test (POST) is a process performed by firmware or software routines immediately after a computer or other digital electronic device is powered on. This article mainly deals with POSTs on personal computers, but many other embedded systems such as those in major appliances, avionics, communications, or medical equipment also have self-test routines which are automatically invoked at power-on. The results of the POST may be displayed on a panel that is part of the device, output to an external device, or stored for future retrieval by a diagnostic tool. Since a self-test might detect that the system's usual human-readable display is non-functional, an indicator lamp or a speaker may be provided to show error codes as a sequence of flashes or beeps. In addition to running tests, the POST process may also set the initial state of the device from firmware. In the case of a computer, the POST routines are part of a device's pre-boot sequence; if they complete successfully, the bootstrap loader code is invoked to load an operating system. IBM-compatible PC POST In IBM PC compatible computers, the main duties of POST are handled by the BIOS/UEFI, which may hand some of these duties to other programs designed to initialize very specific peripheral devices, notably for video and SCSI initialization. These other duty-specific programs are generally known collectively as option ROMs or individually as the video BIOS, SCSI BIOS, etc. The principal duties of the main BIOS during POST are as follows: verify CPU registers verify the integrity of the BIOS code itself verify some basic components like DMA, timer, interrupt controller initialize, size, and verify system main memory initialize BIOS pass control to other specialized extension BIOSes (if installed) identify, organize, and select which devices are available for booting The functions above are served by the POST in all BIOS versions back to the very first. In later BIOS versions, POST will also
https://en.wikipedia.org/wiki/List%20of%20application%20servers
This list compares the features and functionality of application servers, grouped by the hosting environment that is offered by that particular application server. BASIC Run BASIC An all-in-one BASIC scriptable application server, which can automatically manage session and state. C Enduro/X A middleware platform for distributed transaction processing, based on XATMI and XA standards, open source, C API C++ Tuxedo Based on the ATMI standard, is one of the original application servers. Wt A web toolkit similar to Qt permitting GUI-application-like web development with built-in Ajax abilities. POCO C++ Libraries A set of open source class libraries, including an embeddable HTTP server (Poco::Net::HTTPServer). CppCMS Enduro/X A middleware platform for distributed transaction processing, based on XATMI and XA standards, open source Go Enduro/X ASG Application server for Go. This provides XATMI and XA facilities for Golang. Go applications can be built as normal Go executable files, which in turn provide stateless services that can be load balanced, clustered and reloaded on the fly without service interruption, by means of administrative work only. The framework provides a distributed transaction processing facility for Go. Java Apache MINA an abstract event-driven asynchronous API over various transports such as TCP/IP and UDP/IP via Java NIO Netty a non-blocking I/O client-server framework for the development of Java network applications, similar in spirit to Node.js JavaScript Broadvision Server-side JavaScript AS. One of the early entrants in the market during the eCommerce dot-com bubble, they have vertical solution packages catering to the eCommerce industry. Wakanda Server Server-side JavaScript application server integrating a NoSQL database engine (WakandaDB), a dedicated HTTP server, user and group management, and an optional client-side JavaScript framework. Node.js implements Google's V8 engine as a standalone (outside the browser) asynchronous Javascript inter
https://en.wikipedia.org/wiki/Dialysis%20tubing
Dialysis tubing, also known as Visking tubing, is an artificial semi-permeable membrane tubing used in separation techniques that facilitates the flow of tiny molecules in solution based on differential diffusion. In the context of life science research, dialysis tubing is typically used in the sample clean-up and processing of proteins and DNA samples or complex biological samples such as blood or serums. Dialysis tubing is also frequently used as a teaching aid to demonstrate the principles of diffusion, osmosis, Brownian motion and the movement of molecules across a restrictive membrane. For the principles and usage of dialysis in a research setting, see Dialysis (biochemistry). History, properties and composition Dialysis occurs throughout nature and the principles of dialysis have been exploited by humans for thousands of years using natural animal or plant-based membranes. The term dialysis was first routinely used for scientific or medical purposes in the late 1800s and early 1900s, pioneered by the work of Thomas Graham. The first mass-produced man-made membranes suitable for dialysis were not available until the 1930s, based on materials used in the food packaging industry such as cellophane. In the 1940s, Willem Kolff constructed the first dialyzer (artificial kidney), and successfully treated patients with kidney failure using dialysis across semi-permeable membranes. Today, dialysis tubing for laboratory applications comes in a variety of dimensions and molecular-weight cutoffs (MWCO). In addition to tubing, dialysis membranes are also found in a wide range of different preformatted devices, significantly improving the performance and ease of use of dialysis. Different dialysis tubing or flat membranes are produced and characterized by differing molecular-weight cutoffs (MWCO) ranging from 1–1,000,000 kDa. The MWCO determination is the result of the number and average size of the pores created during the production of the dialysis membrane. The MWC
https://en.wikipedia.org/wiki/Interactive%20programming
Interactive programming is the procedure of writing parts of a program while it is already active. This focuses on the program text as the main interface for a running process, rather than an interactive application, where the program is designed in development cycles and used thereafter (usually by a so-called "user", in distinction to the "developer"). Consequently, here, the activity of writing a program becomes part of the program itself. It thus forms a specific instance of interactive computation as an extreme opposite to batch processing, where neither writing the program nor its use happens in an interactive way. The principle of rapid feedback in extreme programming is radicalized and becomes more explicit. Synonyms: on-the-fly-programming, just in time programming, conversational programming Application fields Interactive programming techniques are especially useful in cases where no clear specification of the problem that is to be solved can be given in advance. In such situations (which are not unusual in research), the formal language provides the necessary environment for the development of an appropriate question or problem formulation. Interactive programming has also been used in applications that need to be rewritten without stopping them, a feature which the computer language Smalltalk is famous for. Generally, dynamic programming languages provide the environment for such an interaction, so that typically prototyping and iterative and incremental development is done while other parts of the program are running. As this feature is an apparent need in sound design and algorithmic composition, it has evolved significantly there. More recently, researchers have been using this method to develop sonification algorithms. Using dynamic programming languages for sound and graphics, interactive programming is also used as an improvisational performance style live coding, mainly in algorithmic music and video. Example code Live coding of 3D graphi
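The flavour of interactive programming can be imitated even in a plain Python session. In the toy sketch below, a running loop calls its behaviour through a name that is looked up on every iteration, so rebinding that name at the prompt changes the running program without a restart; this only hints at what dedicated live-coding environments provide, and all names are illustrative.

```python
import threading, time, itertools

def behaviour(n):
    return f"tick {n}"            # the part we will rewrite while running

def loop():
    for n in itertools.count():
        print(behaviour(n))       # the name is looked up on every call
        time.sleep(0.5)

threading.Thread(target=loop, daemon=True).start()
time.sleep(1.5)                   # ... the program is live and printing ...

# "Live coding": rebind the name; the running loop picks up the new
# definition on its very next iteration, without any restart.
def behaviour(n):                 # deliberate redefinition
    return f"TOCK {n}!"

time.sleep(1.5)
```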
https://en.wikipedia.org/wiki/Cognitive%20architecture
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R (Adaptive Control of Thought - Rational) and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990. The Institute for Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments." History Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that the 1960 thesis by his student Ed Feigenbaum, EPAM provided a possible "architecture for cognition" because it included some commitments for how more than one fundamental aspect of the human mind worked (in EPAM's case, human memory and human learning). John R. Anderson started research on human memory in the early 1970s and his 1973 thesis with Gordon H. Bower provided a theory of human associative memory. He included more aspects of his research on long-term memory and thinking processes into this research and eventually designed a cognitive architecture he eventually called ACT. He and his students were influenced by Allen Newell's use of the term "cognitive architecture". Anderson's lab used the term to refer to the ACT theory as embodied in a collection of papers and designs (there was not a complete implementation of ACT at the time). In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture o
https://en.wikipedia.org/wiki/Trigger%20%28horse%29
Trigger (July 4, 1934 – July 3, 1965) was a palomino horse made famous in American Western films with his owner and rider, cowboy star Roy Rogers. Pedigree The original Trigger, named Golden Cloud, was born in San Diego, California. Though often mistaken for a Tennessee Walking Horse, his sire was a Thoroughbred and his dam a grade (unregistered) mare that, like Trigger, was a palomino. Movie director William Witney, who directed Roy and Trigger in many of their movies, claimed a slightly different lineage, that his sire was a "registered" palomino stallion (though no known palomino registry existed at the time of Trigger's birth) and his dam was by a Thoroughbred and out of a "cold-blood" mare. Horses other than Golden Cloud also portrayed "Trigger" over the years, none of which was related to Golden Cloud; the two most prominent were palominos known as "Little Trigger" and "Trigger Jr." (a Tennessee Walking Horse listed as "Allen's Gold Zephyr" in the Tennessee Walking Horse registry). Though Trigger remained a stallion his entire life, he was never bred and has no descendants. Rogers used "Trigger Jr."/"Allen's Golden Zephyr", though, at stud for many years, and the horse named "Triggerson" that actor Val Kilmer led on stage as a tribute to Rogers and his cowboy peers during the Academy Awards show in March 1999 was reportedly a grandson of Trigger Jr. Film career Golden Cloud made an early appearance as the mount of Maid Marian, played by Olivia de Havilland in The Adventures of Robin Hood (1938). A short while later, when Roy Rogers was preparing to make his first movie in a starring role, he was offered a choice of five rented "movie" horses to ride and chose Golden Cloud. Rogers bought him eventually in 1943 and renamed him Trigger for his quickness of both foot and mind. Trigger learned 150 trick cues and could walk 50 ft (15 m) on his hind legs (according to sources close to Rogers). They were said to have run out of places to cue Trigger. Trigger bec
https://en.wikipedia.org/wiki/School%20Mathematics%20Study%20Group
The School Mathematics Study Group (SMSG) was an American academic think tank focused on the subject of reform in mathematics education. Directed by Edward G. Begle and financed by the National Science Foundation, the group was created in the wake of the Sputnik crisis in 1958 and tasked with creating and implementing mathematics curricula for primary and secondary education, which it did until its termination in 1977. The efforts of the SMSG yielded a reform in mathematics education known as New Math which was promulgated in a series of reports, culminating in a series published by Random House called the New Mathematical Library (Vol. 1 is Ivan Niven's Numbers: Rational and Irrational). In the early years, SMSG also produced a set of draft textbooks in typewritten paperback format for elementary, middle and high school students. Perhaps the most authoritative collection of materials from the School Mathematics Study Group is now housed in the Archives of American Mathematics in the University of Texas at Austin's Center for American History. See also Foundations of geometry Further reading 1958 Letter from Ralph A. Raimi to Fred Quigley concerning the New Math Whatever Happened to the New Math by Ralph A. Raimi Some Technical Commentaries on Mathematics Education and History by Ralph A. Raimi External links The SMSG Collection at The Center for American History at UT Archives of American Mathematics at the Center for American History at UT Mathematics education Curricula
https://en.wikipedia.org/wiki/Monoboard
A monoboard is a device or product that consists of a single printed circuit board (PCB). Benefits The primary benefit of a monoboard solution is cost savings. There are a number of ways that incorporating all parts on a single board can reduce costs. The main reason is that solutions with multiple boards require connections between the boards via edge connectors, and sometimes include a ribbon cable. By connecting devices directly together on the same PCB, there is no need for these additional connectors and cables. Additionally, PCB space may be optimized using electronic design automation (EDA) tools, resulting in a smaller device as well as further cost savings. Aesthetically enhanced micro branding is used as well. Disadvantages The disadvantage of using a monoboard solution is that it is inflexible to upgrades. For example, take a personal computer; in most personal computers, the video card is an external device that plugs into a dedicated slot in the motherboard and may be swapped out for a higher performing card. However, in small form-factor designs, a graphics processing unit (GPU) may be placed directly on the board. The typical reason is to reduce size and cost, but this removes the ability to later upgrade the device. Another common example is with data acquisition hardware. Many DAQ solutions involve the use of DAQ modules, which are cards that plug into a backplane. Different DAQ modules can be purchased with different functionalities depending on the speed and resolution of signals being acquired. Examples of this can be seen in products ranging from Fluke DAQs to Tektronix Logic Analyzer modules (such as the TLA7000 series frames, which support more than 4 different series of acquisition cards). Examples Personal computers, specifically small form-factor versions such as the UMPC or MacBook Air Data acquisition hardware Cameras Mobile phones Single-board computer Motherboard Printed circuit board manufacturing
https://en.wikipedia.org/wiki/Sol%E2%80%93gel%20process
In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides. Sol-gel process is used to produce ceramic nanoparticles. Stages In this chemical procedure, a "sol" (a colloidal solution) is formed that then gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and solid phase whose morphologies range from discrete particles to continuous polymer networks. In the case of the colloid, the volume fraction of particles (or particle density) may be so low that a significant amount of fluid may need to be removed initially for the gel-like properties to be recognized. This can be accomplished in any number of ways. The simplest method is to allow time for sedimentation to occur, and then pour off the remaining liquid. Centrifugation can also be used to accelerate the process of phase separation. Removal of the remaining liquid (solvent) phase requires a drying process, which is typically accompanied by a significant amount of shrinkage and densification. The rate at which the solvent can be removed is ultimately determined by the distribution of porosity in the gel. The ultimate microstructure of the final component will clearly be strongly influenced by changes imposed upon the structural template during this phase of processing. Afterwards, a thermal treatment, or firing process, is often necessary in order to favor further polycondensation and enhance mechanical properties and structural stability via final sintering, densification, and grain growth. One of the distinct advantages of using this methodology as opposed to the more traditional p
https://en.wikipedia.org/wiki/Sequence%20space
In functional analysis and related areas of mathematics, a sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K of real or complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space. The most important sequence spaces in analysis are the ℓ^p spaces, consisting of the p-power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called FK-space. Definition A sequence in a set X is just an X-valued map x : ℕ → X whose value at n is denoted by x_n instead of the usual parentheses notation x(n). Space of all sequences Let K denote the field either of real or complex numbers. The set K^ℕ of all sequences of elements of K is a vector space for componentwise addition, (x_n) + (y_n) = (x_n + y_n), and componentwise scalar multiplication, α(x_n) = (αx_n). A sequence space is any linear subspace of K^ℕ. As a topological space, K^ℕ is naturally endowed with the product topology. Under this topology, K^ℕ is Fréchet, meaning that it is a complete, metrizable, locally convex topological vector space (TVS). However, this topology is rather pathological: there are no continuous norms on K^ℕ (and thus the product topology cannot be defined by any norm). Among Fréchet spaces,
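To make the norms concrete, the sketch below approximates the ℓ^p norm and the sup norm on truncations of the harmonic sequence x_n = 1/n, which lies in ℓ^p exactly when p > 1; the function names are illustrative.

```python
def lp_norm(x, p):
    """The l^p norm of a finite (truncated) sequence: (sum |x_n|^p)^(1/p)."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def sup_norm(x):
    """The norm used on c and c0: the supremum of |x_n|."""
    return max(abs(t) for t in x)

# Truncation of the harmonic sequence x_n = 1/n.
x = [1.0 / n for n in range(1, 100_000)]

print(lp_norm(x, 2))   # stabilises near pi/sqrt(6) as the truncation grows
print(lp_norm(x, 1))   # keeps growing: the harmonic sequence is not in l^1
print(sup_norm(x))     # 1.0
```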
https://en.wikipedia.org/wiki/Table%20%28database%29
A table is a collection of related data held in a table format within a database. It consists of columns and rows. In relational databases and flat file databases, a table is a set of data elements (values) using a model of vertical columns (identifiable by name) and horizontal rows, the cell being the unit where a row and column intersect. A table has a specified number of columns, but can have any number of rows. Each row is identified by one or more values appearing in a particular column subset. A specific choice of columns which uniquely identify rows is called the primary key. "Table" is another term for "relation"; although there is a difference: a table is usually a multiset (bag) of rows, whereas a relation is a set and does not allow duplicates. Besides the actual data rows, tables generally have associated with them some metadata, such as constraints on the table or on the values within particular columns. The data in a table does not have to be physically stored in the database. Views also function as relational tables, but their data are calculated at query time. External tables (in Informix or Oracle, for example) can also be thought of as views. In many systems for computational statistics, such as R and Python's pandas, a data frame or data table is a data type supporting the table abstraction. Conceptually, it is a list of records or observations all containing the same fields or columns. The implementation consists of a list of arrays or vectors, each with a name. Tables versus relations In terms of the relational model of databases, a table can be considered a convenient representation of a relation, but the two are not strictly equivalent. For instance, a SQL table can potentially contain duplicate rows, whereas a true relation cannot contain duplicate tuples. Similarly, representation as a table implies a particular ordering to the rows and columns, whereas a relation is explicitly unordered. However, the database
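To make the data-frame analogy concrete, here is a minimal pandas sketch (the column names are invented for illustration): each column is a named array, and the frame as a whole behaves like a table of rows.

```python
import pandas as pd

# A data frame: a list of named columns (arrays) of equal length,
# conceptually a table whose rows are individual observations.
df = pd.DataFrame({
    "name": ["Ada", "Grace", "Edsger"],
    "born": [1815, 1906, 1930],
})

# Row selection plays the role of a query over the table.
print(df[df["born"] > 1900])
```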
https://en.wikipedia.org/wiki/Dream%20dictionary
A dream dictionary (also known as oneirocritic literature) is a tool made for interpreting images in a dream. Dream dictionaries tend to include specific images which are attached to specific interpretations. However, dream dictionaries are generally not considered scientifically viable by those within the psychology community. History Since the 19th century, the art of dream interpretation has been transferred to a scientific ground, making it a distinct part of psychology. However, the dream symbols of the "unscientific" days, the outcome of hearsay interpretations that differ around the world among different cultures, continued to mark the day of the average human being, who is most likely unfamiliar with Freudian analysis of dreams. The dream dictionary includes interpretations of dreams, giving each symbol in a dream a specific meaning. Views of what dreams represent have changed greatly over time, and with them the interpretation of dreams. Dream dictionaries have changed in content since they were first published. The Greeks and Romans saw dreams as having a religious meaning. This made them believe that their dreams were an insight into the future and held the key to the solutions of their problems. Aristotle's view on dreams was that they were merely a function of our physiological makeup. He did not believe dreams have a greater meaning, only that they are the result of how we sleep. In the Middle Ages, dreams were interpreted as manifestations of good or evil. Although the dream dictionary is not recognized in the psychology world, Freud is said to have revolutionized the interpretation and study of dreams. Freud came to the conclusion that dreams were a form of wish fulfillment. Dream dictionaries were first based upon Freudian thoughts and ancient interpretations of dreams. Some examples of dream interpretation are: dreaming you are on a beach means you are facing negativity in your life, or a lion may represent a need to control oth
https://en.wikipedia.org/wiki/Coiflet
Coiflets are discrete wavelets designed by Ingrid Daubechies, at the request of Ronald Coifman, to have scaling functions with vanishing moments. The wavelet is near symmetric: its wavelet function has N/3 vanishing moments and its scaling function N/3 − 1 vanishing moments, and it has been used in many applications using Calderón–Zygmund operators. Theory Some theorems about Coiflets: Theorem 1 For a wavelet system , the following three equations are equivalent: and similar equivalence holds between and Theorem 2 For a wavelet system , the following six equations are equivalent: and similar equivalence holds between and Theorem 3 For a biorthogonal wavelet system , if either or possesses a degree L of vanishing moments, then the following two equations are equivalent: for any such that Coiflet coefficients Both the scaling function (low-pass filter) and the wavelet function (high-pass filter) must be normalised by a factor of 1/√2. Below are the coefficients for the scaling functions for C6–30. The wavelet coefficients are derived by reversing the order of the scaling function coefficients and then reversing the sign of every second one (i.e. C6 wavelet = {−0.022140543057, 0.102859456942, 0.544281086116, −1.205718913884, 0.477859456942, 0.102859456942}). Mathematically, this looks like B_k = (−1)^k C_{N−1−k}, where k is the coefficient index, B is a wavelet coefficient, and C a scaling function coefficient. N is the wavelet index, i.e. 6 for C6. Matlab function F = coifwavf(W) returns the scaling filter associated with the Coiflet wavelet specified by the string W where W = "coifN". Possible values for N are 1, 2, 3, 4, or 5. References Orthogonal wavelets Wavelets
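The coefficient relation B_k = (−1)^k C_{N−1−k} is a one-liner to apply. In the sketch below, the C6 scaling coefficients are inferred back from the quoted C6 wavelet values (an assumption, since the coefficient table itself is not quoted), so the round trip reproduces exactly those wavelet values.

```python
def wavelet_from_scaling(c):
    """B_k = (-1)**k * C[N-1-k]: reverse the scaling coefficients,
    then flip the sign of every second one."""
    n = len(c)
    return [(-1) ** k * c[n - 1 - k] for k in range(n)]

# C6 scaling coefficients, inferred here from the quoted C6 wavelet.
c6 = [-0.102859456942, 0.477859456942, 1.205718913884,
      0.544281086116, -0.102859456942, -0.022140543057]

print(wavelet_from_scaling(c6))
# [-0.022140543057, 0.102859456942, 0.544281086116,
#  -1.205718913884, 0.477859456942, 0.102859456942]
```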
https://en.wikipedia.org/wiki/Sturmian%20word
In mathematics, a Sturmian word (Sturmian sequence or billiard sequence), named after Jacques Charles François Sturm, is a certain kind of infinitely long sequence of characters. Such a sequence can be generated by considering a game of English billiards on a square table. The struck ball will successively hit the vertical and horizontal edges labelled 0 and 1 generating a sequence of letters. This sequence is a Sturmian word. Definition Sturmian sequences can be defined strictly in terms of their combinatoric properties or geometrically as cutting sequences for lines of irrational slope or codings for irrational rotations. They are traditionally taken to be infinite sequences on the alphabet of the two symbols 0 and 1. Combinatorial definitions Sequences of low complexity For an infinite sequence of symbols w, let σ(n) be the complexity function of w; i.e., σ(n) = the number of distinct contiguous subwords (factors) in w of length n. Then w is Sturmian if σ(n) = n + 1 for all n. Balanced sequences A set X of binary strings is called balanced if the Hamming weight of elements of X takes at most two distinct values. That is, there are values k and k′ such that each s in X has |s|1 = k or |s|1 = k′, where |s|1 is the number of 1s in s. Let w be an infinite sequence of 0s and 1s and let L_n(w) denote the set of all length-n subwords of w. The sequence w is Sturmian if L_n(w) is balanced for all n and w is not eventually periodic. Geometric definitions Cutting sequence of irrational Let w be an infinite sequence of 0s and 1s. The sequence w is Sturmian if, for some real c and some irrational θ, w is realized as the cutting sequence of the line y = θx + c. Difference of Beatty sequences Let w = (w_n) be an infinite sequence of 0s and 1s. The sequence w is Sturmian if it is the difference of non-homogeneous Beatty sequences, that is, for some real ρ and some irrational θ, w_n = ⌊(n+1)θ + ρ⌋ − ⌊nθ + ρ⌋ for all n, or w_n = ⌈(n+1)θ + ρ⌉ − ⌈nθ + ρ⌉ for all n. Coding of irrational rotation For θ in [0,1), define the rotation R_θ of the unit interval by R_θ(x) = (x + θ) mod 1. For x in [0,1), define the θ-coding of x to be the sequence (x_n), where x_n = 1 if the nth iterate R_θ^n(x) lies in [0, θ) and x_n = 0 otherwise. Let w be an infinite se
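These definitions invite experimentation. The sketch below builds a prefix of a Sturmian word as a difference of Beatty sequences for an irrational slope, then checks the low-complexity condition σ(n) = n + 1 on that prefix; the slope, prefix length, and function names are arbitrary illustrative choices.

```python
import math

def sturmian_prefix(theta, rho=0.0, length=2000):
    """w_n = floor((n+1)*theta + rho) - floor(n*theta + rho)."""
    return [math.floor((n + 1) * theta + rho) - math.floor(n * theta + rho)
            for n in range(length)]

def complexity(w, n):
    """sigma(n): the number of distinct length-n factors of the prefix w."""
    return len({tuple(w[i:i + n]) for i in range(len(w) - n + 1)})

theta = 2 / (1 + math.sqrt(5))          # the irrational slope 1/phi
w = sturmian_prefix(theta)
print(all(complexity(w, n) == n + 1 for n in range(1, 15)))   # True
```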
https://en.wikipedia.org/wiki/ZbMATH%20Open
zbMATH Open, formerly Zentralblatt MATH, is a major reviewing service providing reviews and abstracts for articles in pure and applied mathematics, produced by the Berlin office of FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH. Editors are the European Mathematical Society, FIZ Karlsruhe, and the Heidelberg Academy of Sciences. zbMATH is distributed by Springer Science+Business Media. It uses the Mathematics Subject Classification codes for organising reviews by topic. History Mathematicians Richard Courant, Otto Neugebauer, and Harald Bohr, together with the publisher Ferdinand Springer, took the initiative for a new mathematical reviewing journal. Harald Bohr worked in Copenhagen. Courant and Neugebauer were professors at the University of Göttingen. At that time, Göttingen was considered one of the central places for mathematical research, having appointed mathematicians like David Hilbert, Hermann Minkowski, Carl Runge, and Felix Klein, the great organiser of mathematics and physics in Göttingen. His dream of a building for an independent mathematical institute with a spacious and rich reference library was realised four years after his death. The credit for this achievement is particularly due to Richard Courant, who convinced the Rockefeller Foundation to donate a large amount of money for the construction. The service was founded in 1931, by Otto Neugebauer, as Zentralblatt für Mathematik und ihre Grenzgebiete. It contained the bibliographical data of all recently published mathematical articles and books, together with peer reviews done by mathematicians from all over the world. In the preface to the first volume, the intentions of Zentralblatt are formulated as follows: Zentralblatt and the Jahrbuch über die Fortschritte der Mathematik had in essence the same agenda, but Zentralblatt published several issues per year. An issue was published as soon as sufficiently many reviews were available, at a frequency of three or four weeks. In the l
https://en.wikipedia.org/wiki/Fibonacci%20word
A Fibonacci word is a specific sequence of binary digits (or symbols from any two-letter alphabet). The Fibonacci word is formed by repeated concatenation in the same way that the Fibonacci numbers are formed by repeated addition. It is a paradigmatic example of a Sturmian word and specifically, a morphic word. The name "Fibonacci word" has also been used to refer to the members of a formal language L consisting of strings of zeros and ones with no two repeated ones. Any prefix of the specific Fibonacci word belongs to L, but so do many other strings. L has a Fibonacci number of members of each possible length. Definition Let S_0 be "0" and S_1 be "01". Now S_n = S_{n−1}S_{n−2} (the concatenation of the previous sequence and the one before that). The infinite Fibonacci word is the limit S_∞, that is, the (unique) infinite sequence that contains each S_n, for finite n, as a prefix. Enumerating items from the above definition produces:    0    01    010    01001    01001010    0100101001001 ... The first few elements of the infinite Fibonacci word are: 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, ... Closed-form expression for individual digits The nth digit of the word is 2 + ⌊nφ⌋ − ⌊(n+1)φ⌋, where φ is the golden ratio and ⌊·⌋ is the floor function. As a consequence, the infinite Fibonacci word can be characterized by a cutting sequence of a line of slope 1/φ (equivalently, φ − 1). Substitution rules Another way of going from S_n to S_{n+1} is to replace each symbol 0 in S_n with the pair of consecutive symbols 0, 1 in S_{n+1}, and to replace each symbol 1 in S_n with the single symbol 0 in S_{n+1}. Alternatively, one can imagine directly generating the entire infinite Fibonacci word by the following process: start with a cursor poi
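Both the concatenation definition and the closed-form digit formula are a few lines of code. The minimal sketch below builds a long prefix by concatenation and confirms that it agrees with the closed form; floating-point φ is adequate at these small indices.

```python
import math

def fib_word(n_iters):
    """Build S_n by repeated concatenation: S_0 = '0', S_1 = '01',
    S_n = S_{n-1} + S_{n-2}."""
    s_prev, s = "0", "01"
    for _ in range(n_iters):
        s_prev, s = s, s + s_prev
    return s

def digit(n):
    """Closed form for the nth digit (n >= 1):
    2 + floor(n*phi) - floor((n+1)*phi)."""
    phi = (1 + math.sqrt(5)) / 2
    return 2 + math.floor(n * phi) - math.floor((n + 1) * phi)

w = fib_word(20)
print(w[:16])                                                  # 0100101001001010
print(all(int(w[n - 1]) == digit(n) for n in range(1, 1000)))  # True
```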
https://en.wikipedia.org/wiki/Interconnection
In telecommunications, interconnection is the physical linking of a carrier's network with equipment or facilities not belonging to that network. The term may refer to a connection between a carrier's facilities and the equipment belonging to its customer, or to a connection between two or more carriers. In United States regulatory law, interconnection is specifically defined (47 C.F.R. 51.5) as "the linking of two or more networks for the mutual exchange of traffic." One of the primary tools used by regulators to introduce competition in telecommunications markets has been to impose interconnection requirements on dominant carriers. History United States Under the Bell System monopoly (post Communications Act of 1934), the Bell System owned the phones and did not allow interconnection, either of separate phones (or other terminal equipment) or of other networks; a popular saying was "Ma Bell has you by the calls". This began to change in the landmark case Hush-A-Phone v. United States [1956], which allowed some non-Bell owned equipment to be connected to the network, and was followed by a number of other cases, regulatory decisions, and legislation that led to the transformation of the American long distance telephone industry from a monopoly to a competitive business. This further changed in FCC's Carterfone decision in 1968, which required the Bell System companies to permit interconnection by radio-telephone operators. Today the standard electrical connector for interconnection in the US, and much of the world, is the registered jack family of standards, especially RJ11. This was introduced by the Bell System in the 1970s, following a 1976 FCC order. Since then, it has gained popularity worldwide, and is a de facto international standard. Europe Outside of the U.S., Interconnection or "Interconnect regimes" also take into account the associated commercial arrangements. As an example of the use of commercial arrangements, the focus by the EU has been on
https://en.wikipedia.org/wiki/777%20%28number%29
777 (seven hundred [and] seventy-seven) is the natural number following 776 and preceding 778. The number 777 is significant in numerous religious and political contexts. In mathematics 777 is an odd, composite, palindromic repdigit. It is also a sphenic number, with 3, 7, and 37 as its prime factors. Its largest prime factor is a concatenation of its smaller two; the only other number below 1000 with this property is 138. 777 is also: An extravagant number, a lucky number, a polite number, and an amenable number. A deficient number, since the sum of its divisors is less than 2n. A congruent number, as it is possible to make a right triangle with a rational number of sides whose area is 777. An arithmetic number, since the average of its positive divisors is also an integer (152). A repdigit in senary. Religious significance According to the Bible, Lamech, the father of Noah lived for 777 years. Some of the known religious connections to 777 are noted in the sections below. Judaism The numbers 3 and 7 both are considered "perfect numbers" under Hebrew tradition. Christianity According to the American publication, the Orthodox Study Bible, 777 represents the threefold perfection of the Trinity. Thelema 777 is also found in the title of the book 777 and Other Qabalistic Writings of Aleister Crowley pertaining to the law of thelema. Political significance Afrikaner Weerstandsbeweging The Afrikaner Resistance Movement (Afrikaner Weerstandsbeweging, AWB), a Boer-nationalist, neo-Nazi, and white supremacist movement in South Africa, used the number 777 as part of their emblem. The number refers to a triumph of "God's number" 7 over the Devil's number 666. On the AWB flag, the numbers are arranged in a triskelion shape, resembling the Nazi swastika. Computing In Unix's chmod, change-access-mode command, the octal value 777 grants all file-access permissions to all user types in a file. Commercial Aviation Boeing, the largest manufacturer of airliners in
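The same octal permission mask can also be applied programmatically. Below is a small hedged sketch using Python's standard library (the scratch file exists only for the demonstration, and the result is subject to platform behaviour, e.g. Windows honours little beyond a read-only bit):

```python
import os, stat, tempfile

# Create a scratch file, then grant read/write/execute to owner,
# group and others: the octal mask 0o777, i.e. "chmod 777".
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))   # 0o777 on POSIX systems
os.remove(path)
```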
https://en.wikipedia.org/wiki/Wireless%20router
A wireless router or Wi-Fi router is a device that performs the functions of a router and also includes the functions of a wireless access point. It is used to provide access to the Internet or a private computer network. Depending on the manufacturer and model, it can function in a wired local area network, in a wireless-only LAN, or in a mixed wired and wireless network. Features Wireless routers typically feature one or more network interface controllers supporting Fast Ethernet or Gigabit Ethernet ports integrated into the main system on a chip (SoC) around which the router is built. An Ethernet switch as described in IEEE 802.1Q may interconnect multiple ports. Some routers implement link aggregation, through which two or more ports may be used together, improving throughput and redundancy. All wireless routers feature one or more wireless network interface controllers. These are also integrated into the main SoC or may be separate chips on the printed circuit board. It also can be a distinct card connected over a MiniPCI or MiniPCIe interface. Some dual-band wireless routers operate the 2.4 GHz and 5 GHz bands simultaneously. Wireless controllers support a part of the IEEE 802.11-standard family, and many dual-band wireless routers have data transfer rates exceeding 300 Mbit/s (for the 2.4 GHz band) and 450 Mbit/s (for the 5 GHz band). Some wireless routers provide multiple streams, allowing multiples of data transfer rates (e.g. a three-stream wireless router allows transfers of up to 1.3 Gbit/s on the 5 GHz band). Some wireless routers have one or two USB ports. These can be used to connect a printer, or a desktop or mobile external hard disk drive, to be used as a shared resource on the network. A USB port may also be used for connecting a mobile broadband modem, aside from connecting the wireless router to an Ethernet with an xDSL or cable modem. A mobile broadband USB adapter can be connected to the router to share the mobile broadband Internet connection through the wire
https://en.wikipedia.org/wiki/MapReduce
MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster. A MapReduce program is composed of a map procedure, which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and a reduce method, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance. The model is a specialization of the split-apply-combine strategy for data analysis. It is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as in their original forms. The key contributions of the MapReduce framework are not the actual map and reduce functions (which, for example, resemble the 1995 Message Passing Interface standard's reduce and scatter operations), but the scalability and fault-tolerance achieved for a variety of applications due to parallelization. As such, a single-threaded implementation of MapReduce is usually not faster than a traditional (non-MapReduce) implementation; any gains are usually only seen with multi-threaded implementations on multi-processor hardware. The use of this model is beneficial only when the optimized distributed shuffle operation (which reduces network communication cost) and fault tolerance features of the MapReduce framework come into play. Optimizing the communication cost is essential to a good MapReduce algorithm. MapReduce libraries have been written in many programming languages, with different levels of optimization. A popular open-source imp
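The canonical small example is word counting. The single-process sketch below imitates the map, shuffle, and reduce phases in plain Python; it demonstrates only the programming model, not the distribution, redundancy, or fault tolerance that give the real frameworks their value, and all names are illustrative.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit an intermediate (key, value) pair per word."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: summarise each key's values -- here, a sum."""
    return key, sum(values)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = (pair for doc in docs for pair in map_phase(doc))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])   # 3
```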
https://en.wikipedia.org/wiki/Extraterrestrials%20in%20fiction
An extraterrestrial or alien is any extraterrestrial lifeform; a lifeform that did not originate on Earth. The word extraterrestrial means "outside Earth". The first published use of extraterrestrial as a noun occurred in 1956, during the Golden Age of Science Fiction. Extraterrestrials are a common theme in modern science-fiction, and also appeared in much earlier works such as the second-century parody True History by Lucian of Samosata. Gary Westfahl writes: History Pre-modern Cosmic pluralism, the assumption that there are many inhabited worlds beyond the human sphere predates modernity and the development of the heliocentric model and is common in mythologies worldwide. The 2nd century writer of satires, Lucian, in his True History claims to have visited the Moon when his ship was sent up by a fountain, which was peopled and at war with the people of the Sun over colonisation of the Morning Star. Other worlds are depicted in such early works as the 10th-century Japanese narrative, The Tale of the Bamboo Cutter, and the medieval Arabic The Adventures of Bulukiya (from the One Thousand and One Nights). Dante in his Divine Comedy sets the third part, Paradise amongst the planets where he meets the saints. Early modern The assumption of extraterrestrial life in the narrow sense (as opposed to generic cosmic pluralism) becomes possible with the development of the heliocentric understanding of the Solar System, and later the understanding of interstellar space, during the Early Modern period, and the topic was popular in the literature of the 17th and 18th centuries. In Johannes Kepler's Somnium, published in 1634, the character Duracotus is transported to the Moon by demons. Even if much of the story is fantasy, the scientific facts about the Moon and how the lunar environment has shaped its non-human inhabitants are science fiction. The didactic poet Henry More took up the classical theme of Cosmic pluralism of the Greek Democritus in "Democritus Platonissa
https://en.wikipedia.org/wiki/AOpen
AOPEN (stylized AOPEN) is a major electronics manufacturer from Taiwan that makes computers and parts for computers. AOPEN used to be the Open System Business Unit of Acer Computer Inc. which designed, manufactured and sold computer components. It was incorporated in December 1996 as a subsidiary of Acer Group with an initial public offering (IPO) at the Taiwan stock exchange in August 2002. It is also the first subsidiary that established the entrepreneurship paradigm in the pan-Acer Group. At that time, AOPEN's major shareholder was the Wistron Group. In 2018 AOPEN became a partner of the pan-Acer Group again as the business-to-business branch of the computing industry. They are perhaps most well known for their "Mobile on Desktop" (MoDT), which implements Intel's Pentium M platform on desktop motherboards. Because the Pentium 4 and other NetBurst CPUs proved less energy efficient than the Pentium M, in late 2004 and early 2005, many manufacturers introduced desktop motherboards for the mobile Pentium M, AOPEN being one of the first. AOPEN currently specializes in ultra small form factor (uSFF) platform applications; digital signage; and product development and designs characterized by miniaturization, standardization and modularization. Product position and strategies Since 2005 AOPEN has been developing energy-saving products. According to different types of customers, applications and contexts, AOPEN splits its product platforms into two major categories: media player platform and Panel PC platform, both of which have Windows, Linux, ChromeOS and Android devices. Digital Signage Platform There are six major parts in AOPEN's digital signage platform applications: media player, management, deployment, display, extension and software. AOPEN manufactures the digital signage media players with operating system and pre-imaging. This also includes a remote management option. See also List of companies of Taiwan References Taiwanese companies established in
https://en.wikipedia.org/wiki/Rowland%20Mason%20Ordish
Rowland Mason Ordish (11 April 1824 – 1886) was an English engineer. He is most noted for his design of the Winter Garden, Dublin (1865), for his detailed work on the single-span roof of London's St Pancras railway station, undertaken with William Henry Barlow (1868), and for the Albert Bridge, a crossing of the River Thames in London, completed in 1873. Biography Born in Melbourne, Derbyshire, Ordish was the son of a land agent and surveyor. He worked with Charles (later Sir Charles) Fox, who was responsible for the construction of Joseph Paxton's Crystal Palace in Hyde Park, London, in 1851. He subsequently supervised its re-erection in Sydenham, south London. His other projects included: Farringdon Street Bridge, London Holborn Viaduct, London (1863–69) Derby market hall (1866) Franz Joseph I Suspension Bridge, over the Vltava, Prague (1868, bombed 1941, demolished in 1947) Cavenagh Bridge, Singapore (1869) Esplanade Mansions, Mumbai, India (1869) dome of Royal Albert Hall, London (1871) He died in 1886 and was buried in a family grave on the western side of Highgate Cemetery. Bridge design In 1858 Ordish patented a bridge suspension system, which he later used in the design of bridges across several European rivers that include the Neva at St Petersburg. The system, which consists of a rigid girder suspended by inclined straight chains, was known as Ordish's straight-chain suspension system. The Ordish–Lefeuvre Principle is named after him and his partner William Henry Le Feuvre (1832 – 1896) from Jersey (together the pair submitted plans for the department store De Gruchy's in St Helier, Jersey). References 1824 births 1886 deaths Burials at Highgate Cemetery People from Melbourne, Derbyshire English engineers Bridge engineers
https://en.wikipedia.org/wiki/Cray%20Time%20Sharing%20System
The Cray Time Sharing System, also known in the Cray user community as CTSS, was developed as an operating system for the Cray-1 or Cray X-MP line of supercomputers in 1978. CTSS was developed by the Los Alamos Scientific Laboratory (LASL, now LANL) in conjunction with the Lawrence Livermore Laboratory (LLL, now LLNL). CTSS was popular with Cray sites in the United States Department of Energy (DOE), but was also used by several other Cray sites, such as the San Diego Supercomputer Center. Overview The predecessor of CTSS was the Livermore Time Sharing System (LTSS), which ran on the Control Data CDC 7600 line of supercomputers. The first compiler was known as LRLTRAN, for Lawrence Radiation Laboratory forTRAN, a FORTRAN 66 programming language but with dynamic memory and other features. The Cray version, including automatic vectorization, was known as CVC, pronounced "Civic" like the Honda car of the period, for Cray Vector Compiler. Some controversy existed at LASL over the first attempt to develop an operating system for the Cray-1: DEIMOS, a message-passing, Unix-like operating system, by Forest Baskett. DEIMOS had the initial "teething" problems common to all early operating systems. This left a bad taste for Unix-like systems at the National Laboratories and with the hardware manufacturer, Cray Research, Inc., which went on to develop its own batch-oriented operating system, COS (Cray Operating System), and its own vectorizing Fortran compiler named "CFT" (Cray ForTran), both written in the Cray Assembly Language (CAL). CTSS had the misfortune that certain constants and structures were optimized to be Cray-1 architecture-dependent, and that it lacked certain networking facilities (TCP/IP); these shortcomings could not be remedied without extensive rework when larger-memory supercomputers like the Cray-2 and the Cray Y-MP came into use. CTSS took its final breaths running on Cray instruction-set-compatible hardware developed by Scientific Computer Systems (SCS-40 and SCS-30) and Supert
https://en.wikipedia.org/wiki/KSAT-TV
KSAT-TV (channel 12) is a television station in San Antonio, Texas, United States, affiliated with ABC. Owned by Graham Media Group, the station maintains studios on North St. Mary's Street on the northern edge of downtown, and its transmitter is located off Route 181 in northwest Wilson County (northeast of Elmendorf). History KONO-TV Channel 12 was the last commercial VHF allocation in San Antonio to be awarded. The first applicant for the allocation came in June 1952, from Bexar County Television Corporation, a subsidiary of Alamo Broadcasting Company, owners of radio station KABC. Bexar County Television planned to operate channel 12 as an ABC Television affiliate, owing to the radio station's affiliation with the ABC radio network. Shortly thereafter, Mission Broadcasting Company, owners of KONO radio (860 AM and 92.9 FM), also applied for a channel 12 license. By 1953, both Bexar County Television and Mission Broadcasting proper had dropped out of the running for channel 12. However, two new applicants filed applications: Sunshine Broadcasting Company, then-owners of KTSA radio, and Mission Telecasting Company. Mission was half (50%) owned by Eugene J. Roth, principal owner of Mission Broadcasting Company, with the other half of the company split among seven individuals. Sunshine would later withdraw its application, although another player would throw their hat into the ring in January 1954: the Walmac Corporation, owners of KMAC radio. In an attempt to avoid long, drawn-out hearings for a license, Walmac and Mission met in May 1954 to work out an agreement between the two parties. On March 12, 1956, the FCC heard final oral arguments between Walmac and Mission, with an FCC examiner having already favored Mission's application the previous year. In May 1956, the FCC granted a license to Mission and denied Walmac's bid. Mission officials proceeded to construct a new studio building and tower on North St. Mary's Street, adjacent to the studios for K
https://en.wikipedia.org/wiki/Scale%20space
Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t. The main type of scale space is the linear (Gaussian) scale space, which has wide applicability as well as the attractive property of being possible to derive from a small set of scale-space axioms. The corresponding scale-space framework encompasses a theory for Gaussian derivative operators, which can be used as a basis for expressing a large class of visual operations for computerized systems that process visual information. This framework also allows visual operations to be made scale invariant, which is necessary for dealing with the size variations that may occur in image data, because real-world objects may be of different sizes and in addition the distance between the object and the camera may be unknown and may vary depending on the circumstances. Definition The notion of scale space applies to signals of arbitrary numbers of variables. The most common case in the literature applies to two-dimensional images, which is what is presented here. For a given image f(x, y), its linear (Gaussian) scale-space representation is a family of derived signals L(x, y; t) defined by the convolution of f(x, y) with the two-dimensional Gaussian kernel g(x, y; t) = (1/(2πt)) exp(−(x² + y²)/(2t)), such that L(x, y; t) = (g(·, ·; t) * f)(x, y), where the semicolon in the argument of L implies that the convolution is performed only over the variables x, y, while the scale paramete
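As a minimal sketch, the scale-space family can be computed with any off-the-shelf Gaussian filter; the example below uses SciPy and assumes the common convention sigma = sqrt(t) relating the standard deviation of the smoothing kernel to the scale parameter t. The function name gaussian_scale_space is illustrative, not from any particular library.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, scales):
    # L(., .; t) = g(., .; t) * f, computed for each scale t;
    # the kernel's standard deviation is sigma = sqrt(t)
    return [gaussian_filter(image.astype(float), sigma=np.sqrt(t)) for t in scales]

f = np.random.rand(128, 128)                  # a stand-in image
L = gaussian_scale_space(f, [1, 4, 16, 64])   # progressively coarser structure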
https://en.wikipedia.org/wiki/Escalation%20of%20commitment
Escalation of commitment is a human behavior pattern in which an individual or group facing increasingly negative outcomes from a decision, action, or investment nevertheless continues the behavior instead of altering course. The actor maintains behaviors that are irrational, but align with previous decisions and actions. Economists and behavioral scientists use a related term, sunk-cost fallacy, to describe the justification of increased investment of money or effort in a decision, based on the cumulative prior investment ("sunk cost") despite new evidence suggesting that the future cost of continuing the behavior outweighs the expected benefit. In sociology, irrational escalation of commitment or commitment bias describe similar behaviors. The phenomenon and the sentiment underlying it are reflected in such proverbial images as "throwing good money after bad", or "In for a penny, in for a pound", or "It's never the wrong time to make the right decision", or "If you find yourself in a hole, stop digging." Early use Escalation of commitment was first described by Barry M. Staw in his 1976 paper, "Knee deep in the big muddy: A study of escalating commitment to a chosen course of action". Researchers, inspired by the work of Staw, conducted studies that tested factors, situations and causes of escalation of commitment. The research introduced other analyses of situations and how people approach problems and make decisions. Some of the earliest work stemmed from events in which this phenomenon had an effect and helped explain the phenomenon. Research and analysis Over the past few decades, researchers have followed and analyzed many examples of the escalation of commitment to a situation. These heightened situations are explained in terms of three elements. Firstly, a costly amount of resources such as time, money and people has been invested in the project. Next, past behavior leads up to an apex in time where the project has not met expectations or could be in a c
https://en.wikipedia.org/wiki/WinSCP
WinSCP (Windows Secure Copy) is a free and open-source SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), WebDAV, Amazon S3, and secure copy protocol (SCP) client for Microsoft Windows. Its main function is secure file transfer between a local computer and a remote server. Beyond this, WinSCP offers basic file manager and file synchronization functionality. For secure transfers, it uses the Secure Shell protocol (SSH) and supports the SCP protocol in addition to SFTP. Development of WinSCP started around March 2000 and continues. Originally it was hosted by the University of Economics in Prague, where its author worked at the time. Since July 16, 2003, it has been licensed under the GNU GPL. It is hosted on SourceForge and GitHub. WinSCP is based on the implementation of the SSH protocol from PuTTY and the FTP protocol from FileZilla. It is also available as a plugin for the Altap Salamander file manager, and there exists a third-party plugin for the FAR file manager. Features Graphical user interface Translated into several languages Integration with Windows (drag and drop, URL, shortcut icons) All common operations with files Support for SFTP and SCP protocols over SSH-1 and SSH-2, FTP protocol, WebDAV protocol and Amazon S3 protocol. Batch file scripting, command-line interface, and .NET wrapper Can act as a remote text editor, either downloading a file to edit or passing it on to a local application, then uploading it again when updated. Directory synchronization in several semi or fully automatic ways Support for SSH password, keyboard-interactive, public key, and Kerberos (GSS) authentication Integrates with Pageant (PuTTY authentication agent) for full support of public key authentication with SSH Choice of Windows File Explorer-like or Norton Commander-like interfaces Optionally stores session information Optionally imports session information from PuTTY sessions in the registry Able to upload files and retain associated original date/timestamps, unlike F
https://en.wikipedia.org/wiki/Administration%20on%20Aging
The Administration on Aging (AoA) is an agency within the Administration for Community Living of the United States Department of Health and Human Services. AoA works to ensure that older Americans can stay independent in their communities, mostly by awarding grants to States, Native American tribal organizations, and local communities to support programs authorized by Congress in the Older Americans Act. AoA also awards discretionary grants to research organizations working on projects that support those goals. It conducts statistical activities in support of the research, analysis, and evaluation of programs to meet the needs of an aging population. AoA's FY 2013 budget proposal includes a total of $1.9 billion, $819 million of which funds senior nutrition programs like Meals on Wheels. The agency also funds $539 million in grants to programs to help seniors stay in their homes through services (such as accomplishing essential activities of daily living, like getting to the doctor's office, buying groceries, etc.) and through help given to caregivers. Some of these grants are for Cash & Counseling programs that provide Medicaid participants a monthly budget for home care and access to services that help them manage their finances. AoA is headed by the Assistant Secretary for Aging. From July 2016 to August 2017, Edwin Walker served as Acting Assistant Secretary for Aging. The Assistant Secretary reports directly to the Secretary of Health and Human Services. Lance Allen Robertson was confirmed in August 2017, and served until January 20, 2021. On January 20, 2021, Alison Barkoff was sworn in as Principal Deputy Assistant Secretary, and was named as Acting Assistant Secretary. On March 9, 2022, President Biden nominated Rita Landgraf, the former Secretary of the Delaware Department of Health and Social Services, to serve as his first Assistant Secretary. Confirmation is pending. See also Category:United States Assistant Secretaries for Aging Pension Rights C
https://en.wikipedia.org/wiki/Dephosphorylation
In biochemistry, dephosphorylation is the removal of a phosphate (PO₄³⁻) group from an organic compound by hydrolysis. It is a reversible post-translational modification. Dephosphorylation and its counterpart, phosphorylation, activate and deactivate enzymes by detaching or attaching phosphoric esters and anhydrides. A notable occurrence of dephosphorylation is the conversion of ATP to ADP and inorganic phosphate. Dephosphorylation employs a type of hydrolytic enzyme, or hydrolase, which cleaves ester bonds. The prominent hydrolase subclass used in dephosphorylation is phosphatase, which removes phosphate groups by hydrolysing phosphoric acid monoesters into a phosphate ion and a molecule with a free hydroxyl (-OH) group. The reversible phosphorylation-dephosphorylation reaction occurs in every physiological process, making proper function of protein phosphatases necessary for organism viability. Because protein dephosphorylation is a key process involved in cell signalling, protein phosphatases are implicated in conditions such as cardiac disease, diabetes, and Alzheimer's disease. History The discovery of dephosphorylation came from a series of experiments examining the enzyme phosphorylase isolated from rabbit skeletal muscle. In 1955, Edwin Krebs and Edmond Fischer used radiolabeled ATP to determine that phosphate is added to the serine residue of phosphorylase to convert it from its b to its a form via phosphorylation. Subsequently, Krebs and Fischer showed that this phosphorylation is part of a kinase cascade. Finally, after purifying the phosphorylated form of the enzyme, phosphorylase a, from rabbit liver, ion exchange chromatography was used to identify phosphoprotein phosphatase I and II. Since the discovery of these dephosphorylating proteins, the reversible nature of phosphorylation and dephosphorylation has been associated with a broad range of functional proteins, primarily enzymatic, but also including nonenzymatic proteins. Edwin Krebs and Edmond F
https://en.wikipedia.org/wiki/Fraction
A fraction (from Latin fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction (examples: 1/2 and 17/3) consists of an integer numerator, displayed above a line (or before a slash like 1/2), and a non-zero integer denominator, displayed below (or after) that line. If these integers are positive, then the numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator 3 indicates that the fraction represents 3 equal parts, and the denominator 4 indicates that 4 parts make up a whole. The picture to the right illustrates 3/4 of a cake. Other uses for fractions are to represent ratios and division. Thus the fraction 3/4 can also be used to represent the ratio 3:4 (the ratio of the part to the whole), and the division 3 ÷ 4 (three divided by four). We can also write negative fractions, which represent the opposite of a positive fraction. For example, if 1/2 represents a half-dollar profit, then −1/2 represents a half-dollar loss. Because of the rules of division of signed numbers (which states in part that negative divided by positive is negative), −(1/2), (−1)/2 and 1/(−2) all represent the same fraction, negative one-half. And because a negative divided by a negative produces a positive, (−1)/(−2) represents positive one-half. In mathematics the set of all numbers that can be expressed in the form a/b, where a and b are integers and b is not zero, is called the set of rational numbers and is represented by the symbol Q or ℚ, which stands for quotient. A number is a rational number precisely when it can be written in that form (i.e., as a common fraction). However, the word fraction can also be used to describe mathematical expressions that are not rational numbers. Examples of these usages include algebra
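The sign and equivalence rules above can be checked mechanically; here is a small sketch using Python's standard fractions module, which normalizes sign and reduces to lowest terms automatically:

from fractions import Fraction

# negative divided by positive is negative; all three spellings agree
assert Fraction(-1, 2) == -Fraction(1, 2) == Fraction(1, -2)

# negative divided by negative is positive one-half
assert Fraction(-1, -2) == Fraction(1, 2)

# a rational number is anything of the form a/b with integer a and b != 0;
# Fraction reduces to lowest terms, so 6/8 is the same number as 3/4
assert Fraction(6, 8) == Fraction(3, 4)

print(Fraction(3, 4) + Fraction(1, 2))  # 5/4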
https://en.wikipedia.org/wiki/Olive%20branch
The olive branch is a symbol of peace. It is associated with the customs of ancient Greece and ancient Rome, and is connected with supplication to gods and persons in power. Likewise, it is found in most cultures of the Mediterranean Basin and has become a near-universal peace symbol in the modern world. In the Greco-Roman world In Greek tradition, a hiketeria (ἱκετηρία) was an olive branch held by supplicants to show their status as such when approaching persons of power or in temples when supplicating the gods. In Greek mythology, Athena competed with Poseidon for possession of Athens. Poseidon claimed possession by thrusting his trident into the Acropolis, where a well of sea-water gushed out. Athena took possession by planting the first olive tree beside the well. The court of gods and goddesses ruled that Athena had the better right to the land because she had given it the better gift. Olive wreaths were worn by brides and awarded to Olympic victors. The olive branch was one of the attributes of Eirene on Roman Imperial coins. For example, the reverse of a tetradrachm of Vespasian from Alexandria, 70-71 AD, shows Eirene standing holding a branch upward in her right hand. The Roman poet Virgil (70–19 BC) associated "the plump olive" with the goddess Pax (the Roman Eirene) and he used the olive branch as a symbol of peace in his Aeneid: For the Romans, there was an intimate relationship between war and peace, and Mars, the god of war, had another aspect, Mars Pacifer, Mars the bringer of Peace, who is shown on coins of the later Roman Empire bearing an olive branch. Appian describes the use of the olive branch as a gesture of peace by the enemies of the Roman general Scipio Aemilianus in the Numantine War and by Hasdrubal the Boeotarch of Carthage. Although peace was associated with the olive branch during the time of the Greeks, the symbolism became even stronger under the Pax Romana when envoys used olive branches as tokens of peace. Early Christ
https://en.wikipedia.org/wiki/Mine%20Safety%20and%20Health%20Administration
The Mine Safety and Health Administration (MSHA) is a large agency of the United States Department of Labor which administers the provisions of the Federal Mine Safety and Health Act of 1977 (Mine Act) to enforce compliance with mandatory safety and health standards as a means to eliminate fatal accidents, to reduce the frequency and severity of nonfatal accidents, to minimize health hazards, and to promote improved safety and health conditions in the nation's mines. MSHA carries out the mandates of the Mine Act at all mining and mineral processing operations in the United States, regardless of size, number of employees, commodity mined, or method of extraction. David Zatezalo was sworn in as Assistant Secretary of Labor for Mine Safety and Health, and head of MSHA, on November 30, 2017. He served until January 20, 2021. Jeannette Galanais was appointed Acting Assistant Secretary by President Joe Biden on February 1, 2021, and served until Christopher Williamson took office on April 11, 2022. MSHA is organized into several divisions. The Coal Mine Safety and Health division is divided into 12 districts covering coal mining in different portions of the United States. The Metal-Nonmetal Mine Safety and Health division covers six regions of the United States. History Early legislation In 1891, Congress passed the first federal statute governing mine safety. The 1891 law was relatively modest legislation that applied only to mines in U.S. territories, and, among other things, established minimum ventilation requirements at underground coal mines and prohibited operators from employing children under 12 years of age. In 1910, Congress established the Bureau of Mines as a new agency in the Department of the Interior. The Bureau was charged with the responsibility to conduct research and to reduce accidents in the coal mining industry, but was given no inspection authority until 1941, when Congress empowered federal inspectors to enter mines. In 1947, Congress authorized the form
https://en.wikipedia.org/wiki/Numeral%20prefix
Numeral or number prefixes are prefixes derived from numerals or occasionally other numbers. In English and many other languages, they are used to coin numerous series of words. For example: unicycle, bicycle, tricycle (1-cycle, 2-cycle, 3-cycle) dyad, triad (2 parts, 3 parts) biped, quadruped (2 legs, 4 legs) September, October, November, December (month 7, month 8, month 9, month 10) decimal, hexadecimal (base-10, base-16) septuagenarian, octogenarian (70–79 years old, 80–89 years old) centipede, millipede (around 100 legs, around 1000 legs) In many European languages there are two principal systems, taken from Latin and Greek, each with several subsystems; in addition, Sanskrit occupies a marginal position. There is also an international set of metric prefixes, which are used in the metric system and which for the most part are either distorted from the forms below or not based on actual number words. Table of number prefixes in English In the following prefixes, a final vowel is normally dropped before a root that begins with a vowel, with the exceptions of bi-, which is bis- before a vowel, and of the other monosyllables, du-, di-, dvi-, tri-, which are invariable. The cardinal series are derived from cardinal numbers, such as the English one, two, three. The multiple series are based on adverbial numbers like the English once, twice, thrice. The distributive series originally meant one each, two each or one by one, two by two, etc., though that meaning is now frequently lost. The ordinal series are based on ordinal numbers such as the English first, second, third (for numbers higher than 2, the ordinal forms are also used for fractions; only the fraction 1/2 has special forms). For the hundreds, there are competing forms: those in -gent-, from the original Latin, and those in -cent-, derived from centi-, etc. plus the prefixes for 1–9. Many of the items in the following tables are not in general use, but may rather be regarded as coinages by individ
https://en.wikipedia.org/wiki/Trickle%20charging
Trickle charging means charging a fully charged battery at a rate equal to its self-discharge rate, thus enabling the battery to remain at its fully charged level; this state occurs almost exclusively when the battery is not loaded, as trickle charging will not keep a battery charged if current is being drawn by a load. A battery under continuous float voltage charging is said to be float-charging. For lead-acid batteries under no load float charging (such as in SLI batteries), trickle charging happens naturally at the end-of-charge, when the lead-acid battery internal resistance to the charging current increases enough to reduce additional charging current to a trickle, hence the name. In such cases, the trickle charging equals the energy expended by the lead-acid battery splitting the water in the electrolyte into hydrogen and oxygen gases. Other battery chemistries, such as lithium-ion battery technology, cannot be safely trickle charged. In that case, supervisory circuits (sometimes called battery management system) adjust electrical conditions during charging to match the requirements of the battery chemistry. For Li-ion batteries generally, and for some variants especially, failure to accommodate the limitations of the chemistry and electro-chemistry of a cell, with regard to trickle charging after reaching a fully charged state, can lead to overheating and, possibly to fire or explosion. References Battery charging
https://en.wikipedia.org/wiki/Data%20validation
In computer science, data validation is the process of ensuring data have undergone data cleansing to confirm that they have data quality, that is, that they are both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit application program validation logic of the computer and its application. This is distinct from formal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property. Overview Data validation is intended to provide certain well-defined guarantees for fitness and consistency of data in an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts. Their implementation can use declarative data integrity rules, or procedure-based business rules. The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system. Different kinds In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose. For example: Data type validation; Range and constraint validation; Code and cross-reference validation; Structured validation; and Consistency validation Data-type check Data type validation is customarily carried out on one or more simple data fields. The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defined in
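A minimal sketch of three of the rule kinds listed above (data-type, range, and code/cross-reference checks) in Python; the field names, limits, and reference table are invented for illustration and are not drawn from any particular system:

KNOWN_COUNTRY_CODES = {"US", "DE", "JP"}  # cross-reference table (illustrative)

def validate(record):
    errors = []
    # data-type check
    if not isinstance(record.get("age"), int):
        errors.append("age must be an integer")
    # range check (only meaningful once the type check has passed)
    elif not 0 <= record["age"] <= 130:
        errors.append("age out of range 0-130")
    # code / cross-reference check
    if record.get("country") not in KNOWN_COUNTRY_CODES:
        errors.append("unknown country code")
    return errors

print(validate({"age": 34, "country": "DE"}))    # []
print(validate({"age": "34", "country": "XX"}))  # both rules fire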
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20stability%20criterion
In control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the determinants of its leading principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial. The importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions e^(pt) of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly. The Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently. Using Euclid's algorithm The criterion is rela
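A minimal sketch of the Routh test in Python: it builds the Routh array for a polynomial given in descending order of powers and checks that the first column keeps one sign. It assumes a positive leading coefficient and deliberately omits the special-case rules needed when a zero appears in the first column.

def routh_stable(coeffs):
    # coeffs: polynomial coefficients, highest power first, leading coeff > 0
    c = [float(x) for x in coeffs]
    width = (len(c) + 1) // 2
    row0 = c[0::2] + [0.0] * (width - len(c[0::2]))
    row1 = c[1::2] + [0.0] * (width - len(c[1::2]))
    table = [row0, row1]
    for _ in range(len(c) - 2):
        a, b = table[-2], table[-1]
        if b[0] == 0.0:
            raise ValueError("zero pivot: special-case rules needed")
        new = [(b[0] * a[j + 1] - a[0] * b[j + 1]) / b[0] for j in range(width - 1)]
        table.append(new + [0.0])
    # stable iff every first-column entry is positive
    return all(row[0] > 0 for row in table)

print(routh_stable([1, 1, 2, 1]))   # True:  s^3 + s^2 + 2s + 1 is Hurwitz
print(routh_stable([1, -1, 2, 1]))  # False: a sign change appears in column one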
https://en.wikipedia.org/wiki/World%20Reference%20Base%20for%20Soil%20Resources
The World Reference Base for Soil Resources (WRB) is an international soil classification system for naming soils and creating legends for soil maps. The currently valid version is the fourth edition 2022. It is edited by a working group of the International Union of Soil Sciences (IUSS). Background History Since the 19th century, several countries developed national soil classification systems. During the 20th century, the need for an international soil classification system became more and more obvious. From 1971 to 1981, the Food and Agriculture Organization (FAO) and UNESCO published the Soil Map of the World (10 volumes, scale 1 : 5 M). The Legend for this map, published in 1974 under the leadership of Rudi Dudal, became the FAO soil classification. Many ideas from national soil classification systems were brought together in this worldwide-applicable system, among them the idea of diagnostic horizons as established in the '7th approximation to the USDA soil taxonomy' from 1960. The next step was the Revised Legend of the Soil Map of the World, published in 1988. In 1982, the International Soil Science Society (ISSS; now: International Union of Soil Sciences, IUSS) established a working group named International Reference Base for Soil Classification (IRB). Chair of this working group was Ernst Schlichting. Its mandate was to develop an international soil classification system that should better consider soil-forming processes than the FAO soil classification. Drafts were presented in 1982 and 1990. In 1992, the IRB working group decided to develop a new system named World Reference Base for Soil Resources (WRB) that should further develop the Revised Legend of the FAO soil classification and include some ideas of the more systematic IRB approach. Otto Spaargaren (International Soil Reference and Information Centre) and Freddy Nachtergaele (FAO) were nominated to prepare a draft. This draft was presented at the 15th World Congress of Soil Science in Acapu
https://en.wikipedia.org/wiki/Hut%208
Hut 8 was a section in the Government Code and Cypher School (GC&CS) at Bletchley Park (the British World War II codebreaking station, located in Buckinghamshire) tasked with solving German naval (Kriegsmarine) Enigma messages. The section was led initially by Alan Turing. He was succeeded in November 1942 by his deputy, Hugh Alexander. Patrick Mahon succeeded Alexander in September 1944. Hut 8 was partnered with Hut 4, which handled the translation and intelligence analysis of the raw decrypts provided by Hut 8. Located initially in one of the original single-story wooden huts, the name "Hut 8" was retained when Huts 3, 6 & 8 moved to a new brick building, Block D, in February 1943. After 2005, the first Hut 8 was restored to its wartime condition, and it now houses the "HMS Petard Exhibition". Operation In 1940, a few breaks were made into the naval "Dolphin" code, but Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. In February 1942, the German navy introduced "Triton", a version of Enigma with a fourth rotor for messages to and from Atlantic U-boats; these became unreadable for ten months during a crucial period (see Enigma in 1942). Britain produced modified bombes, but it was the success of the US Navy bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and from across the Atlantic by enciphered teleprinter links. Personnel In addition to the cryptanalysts, around 130 women worked in Hut 8 and provided essential clerical support including punching holes into the Banbury sheets. Hut 8 relied on Wrens to run the bombes housed elsewhere at Bletchley. Code breakers Alan Turing Conel Hugh O'Donel Alexander Michael Arbuthnot Ashcroft Joan Clarke Joseph Gillis Harry Golombek I. J. Good Peter Hilton, January 1942 to late 1942 Rosalind
https://en.wikipedia.org/wiki/Chudnovsky%20brothers
David Volfovich Chudnovsky (born January 22, 1947 in Kyiv) and Gregory Volfovich Chudnovsky (born April 17, 1952 in Kyiv) are Ukrainian-born American mathematicians and engineers known for their world-record mathematical calculations and developing the Chudnovsky algorithm used to calculate the digits of π with extreme precision. Careers in mathematics As a child, Gregory Chudnovsky was given a copy of What Is Mathematics? by his father (Volf Grigorovich Chudnovski, a Soviet-Ukrainian professor of technical sciences) and decided that he wanted to be a mathematician. As a high schooler, he solved Hilbert's tenth problem, shortly after Yuri Matiyasevich had solved it. He received a mathematics degree from Kyiv State University in 1974 and a PhD the following year from the Institute of Mathematics, National Academy of Sciences of Ukraine. In part to avoid religious persecution and in part to seek better medical care for Gregory, who had been diagnosed with myasthenia gravis, a neuromuscular disease, the Chudnovsky family applied in 1976 for permission to emigrate from the Soviet Union. Although the family was harassed by the KGB for attempting to leave the country, the brothers were eventually able to secure their emigration with the help of United States Senator Henry M. Jackson and mathematician Edwin Hewitt. A 1992 article in The New Yorker quoted the opinion of several mathematicians that Gregory Chudnovsky was one of the world's best living mathematicians. David Chudnovsky works closely with and assists his brother Gregory. Despite their accomplishments and the attention brought to them by their profile in The New Yorker, the Chudnovsky brothers largely worked alone for decades. A 1997 Karen Arenson article in The New York Times theorized that this was due to some combination of the brothers' lack of a specialization (they worked on topics including number theory, applied physics and computers), Gregory's medical condition, their refusal to leave New York City
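For illustration, a compact Python sketch of the Chudnovsky series (1/π as a rapidly converging sum, roughly 14 new digits per term) using the standard library's decimal module; this is a didactic sketch, not the brothers' actual record-setting code, which used far more elaborate arbitrary-precision machinery.

from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    getcontext().prec = digits + 10            # working precision with guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, S = 1, 13591409, 1, Decimal(0)    # running pieces of each term
    for k in range(digits // 14 + 2):          # ~14 digits gained per term
        S += Decimal(M * L) / X
        M = M * ((6*k+1)*(6*k+2)*(6*k+3)*(6*k+4)*(6*k+5)*(6*k+6)) \
              // ((3*k+1)*(3*k+2)*(3*k+3) * (k+1)**3)
        L += 545140134
        X *= -262537412640768000               # (-640320)^3
    return +(C / S)                            # unary plus rounds to the context

print(chudnovsky_pi(50))  # 3.14159265358979323846...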
https://en.wikipedia.org/wiki/QCP
The QCP file format is used by many cellular telephone manufacturers to provide ring tones and record voice. It is based on RIFF, a generic format for storing chunks of data identified by tags. The QCP format does not specify how voice data in the file is encoded. Rather, it is a container format. QCP files are typically encoded with QCELP or EVRC. Qualcomm, which originated the format, has removed an internal web page link from the page that formerly discussed QCP. "Out of an abundance of caution, due to the December 31st, 2007 injunction ordered against certain Qualcomm products, Qualcomm has temporarily removed certain web content until it can be reviewed and modified if necessary to ensure compliance with the injunction. It may be several more days or weeks before these pages are accessible again. Thank you for your patience." QCP files have the same signature as RIFF files: a SOF (start of file) header of 52494646 ("RIFF") and an EOF (end of file) of 0000. Playing QCP files Qualcomm previously offered downloads of the software and SDK for its PureVoice voice and audio enhancement products that could play and convert QCP files. References Digital audio Computer file formats
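A minimal Python sketch of inspecting the RIFF signature of a candidate QCP file; the b"QLCM" form type noted in the comment is an assumption based on common descriptions of QCP containers, not on a Qualcomm specification quoted here, and the file name is hypothetical.

def inspect_riff(path):
    with open(path, "rb") as fh:
        header = fh.read(12)
    if len(header) < 12 or header[:4] != b"RIFF":
        raise ValueError("not a RIFF container")
    size = int.from_bytes(header[4:8], "little")  # chunk size, little-endian
    form = header[8:12]                           # QCP files commonly use b"QLCM"
    return size, form

size, form = inspect_riff("ringtone.qcp")         # hypothetical file name
print(form, size)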
https://en.wikipedia.org/wiki/Battery%20room
A battery room is a room that houses batteries for backup or uninterruptible power systems. The rooms are found in telecommunication central offices, and provide standby power for computing equipment in datacenters. Batteries provide direct current (DC) electricity, which may be used directly by some types of equipment, or which may be converted to alternating current (AC) by uninterruptible power supply (UPS) equipment. The batteries may provide power for minutes, hours or days, depending on each system's design, although they are most commonly activated during brief electric utility outages lasting only seconds. Battery rooms were used to segregate the fumes and corrosive chemicals of wet cell batteries (often lead–acid) from the operating equipment, and for better control of temperature and ventilation. In 1890, the Western Union central telegraph office in New York City had 20,000 wet cells, mostly of the primary zinc-copper type. Telecommunications Telephone system central offices contain large battery systems to provide power for customer telephones, telephone switches, and related apparatus. Terrestrial microwave links, cellular telephone sites, fibre optic apparatus and satellite communications facilities also have standby battery systems, which may be large enough to occupy a separate room in the building. In normal operation power from the local commercial utility operates telecommunication equipment, and batteries provide power if the normal supply is interrupted. These can be sized for the expected full duration of an interruption, or may be required only to provide power while a standby generator set or other emergency power supply is started. Batteries often used in battery rooms are the flooded lead-acid battery, the valve regulated lead-acid battery or the nickel–cadmium battery. Batteries are installed in groups. Several batteries are wired together in a series circuit forming a group providing DC electric power at 12, 24, 48 or 60 volts (or
https://en.wikipedia.org/wiki/KIAH
KIAH (channel 39) is a television station in Houston, Texas, United States, serving as the local CW outlet. Owned and operated by network majority owner Nexstar Media Group, the station maintains studios adjacent to the Westpark Tollway on the southwest side of Houston, and its transmitter is located near Missouri City, in unincorporated Fort Bend County. History Origins The station first signed on the air on January 6, 1967, as an independent station under the callsign KHTV (standing for "Houston Television"). Prior to its debut, the channel 39 allocation in Houston belonged to the now-defunct DuMont affiliate KNUZ-TV, which existed during the mid-1950s. Channel 39 was originally owned by the WKY Television System, a subsidiary of the Oklahoma Publishing Company, publishers of Oklahoma City's major daily newspaper, The Daily Oklahoman. After the company's namesake station, WKY-TV, was sold in 1976, the WKY Television System became Gaylord Broadcasting, named for the family that owned Oklahoma Publishing. As Houston's first general entertainment independent station, KHTV ran a schedule of programs including children's shows, syndicated programs, movies, religious programs and some sporting events. One of its best known locally produced programs was Houston Wrestling, hosted by local promoter Paul Boesch, which aired on Saturday evenings (having been taped the night before at the weekly live shows in the Sam Houston Coliseum). From 1983 to 1985, the station was branded on-air as "KHTV 39 Gold". It was the leading independent station in Houston, even as competitors entered the market (including KVRL/KDOG (channel 26, now KRIV), when it launched in 1971). During this time, KHTV was distributed to cable providers as a regional superstation of sorts, with carriage on systems as far east as Baton Rouge, Louisiana. As a WB affiliate On November 2, 1993, the Warner Bros. Television division of Time Warner and the Tribune Company announced the formation of The WB Televis
https://en.wikipedia.org/wiki/Rado%27s%20theorem%20%28Ramsey%20theory%29
Rado's theorem is a theorem from the branch of mathematics known as Ramsey theory. It is named for the German mathematician Richard Rado. It was proved in his thesis, Studien zur Kombinatorik. Statement Let A x = 0 be a system of linear equations, where A is a matrix with integer entries. This system is said to be r-regular if, for every r-coloring of the natural numbers 1, 2, 3, ..., the system has a monochromatic solution. A system is regular if it is r-regular for all r ≥ 1. Rado's theorem states that a system is regular if and only if the matrix A satisfies the columns condition. Let ci denote the i-th column of A. The matrix A satisfies the columns condition provided that there exists a partition C1, C2, ..., Cn of the column indices such that if si = Σj∈Ci cj, then s1 = 0, and for all i ≥ 2, si can be written as a rational linear combination of the cj's in all the Ck with k < i. This means that si is in the linear subspace of Q^m spanned by the set of those cj's. Special cases Folkman's theorem, the statement that there exist arbitrarily large sets of integers all of whose nonempty sums are monochromatic, may be seen as a special case of Rado's theorem concerning the regularity of the system of equations xT = Σi∈T xi, where T ranges over each nonempty subset of the set {1, 2, ..., m}. Other special cases of Rado's theorem are Schur's theorem and Van der Waerden's theorem. For proving the former, apply Rado's theorem to the matrix (1 1 −1). For Van der Waerden's theorem, with m chosen to be the length of the monochromatic arithmetic progression, one can for example consider the following matrix: Computability Given a system of linear equations it is a priori unclear how to check computationally that it is regular. Fortunately, Rado's theorem provides a criterion which is testable in finite time. Instead of considering colourings (of infinitely many natural numbers), it must be checked that the given matrix satisfies the columns condition. Since the matrix consists only of finitely many columns, this property
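As a worked check of the columns condition (a sketch in LaTeX notation), take Schur's equation x + y - z = 0, i.e. the matrix A = (1 1 -1) with columns c1 = 1, c2 = 1, c3 = -1:

A = \begin{pmatrix} 1 & 1 & -1 \end{pmatrix}, \qquad
C_1 = \{1, 3\}, \quad C_2 = \{2\}.

s_1 = c_1 + c_3 = 1 + (-1) = 0, \qquad
s_2 = c_2 = 1 = 1 \cdot c_1 + 0 \cdot c_3 .

\text{Both requirements hold, so } x + y = z \text{ is regular, which is Schur's theorem.}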
https://en.wikipedia.org/wiki/Bayesian%20experimental%20design
Bayesian experimental design provides a general probability-theoretical framework from which other theories on experimental design can be derived. It is based on Bayesian inference to interpret the observations/data acquired during the experiment. This allows accounting for both prior knowledge of the parameters to be determined and uncertainties in observations. The theory of Bayesian experimental design is to a certain extent based on the theory for making optimal decisions under uncertainty. The aim when designing an experiment is to maximize the expected utility of the experiment outcome. The utility is most commonly defined in terms of a measure of the accuracy of the information provided by the experiment (e.g., the Shannon information or the negative of the variance) but may also involve factors such as the financial cost of performing the experiment. Which experimental design is optimal depends on the particular utility criterion chosen. Relations to more specialized optimal design theory Linear theory If the model is linear, the prior probability density function (PDF) is homogeneous and observational errors are normally distributed, the theory simplifies to the classical optimal experimental design theory. Approximate normality In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior probabilities will be approximately normal. This allows for the expected utility to be calculated using linear theory, averaging over the space of model parameters. Caution must however be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and uniform prior probability. Posterior distribution In many cases, the posterior distribution is not available in closed form and has to be approximated using numerical methods. The most common approach is to use Markov chain Monte Carlo methods to generate samp
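A minimal sketch of the expected-utility computation for a toy linear-Gaussian model (theta ~ N(0,1), y = d*theta + noise), using a nested Monte Carlo estimate of the expected Shannon information gain; the model, function name, and sample sizes are invented for illustration. For this particular model the estimate can be checked against the closed form 0.5*log(1 + d^2/sigma^2).

import numpy as np

def expected_information_gain(d, sigma=1.0, n_outer=4000, n_inner=400, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_outer)                  # draws from the prior
    y = d * theta + sigma * rng.normal(size=n_outer)  # simulated outcomes
    # log-likelihood of each y under its own theta (shared constants cancel below)
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2
    # marginal likelihood p(y|d), averaged over fresh prior draws
    theta_in = rng.normal(size=(n_outer, n_inner))
    log_ev = np.log(np.mean(
        np.exp(-0.5 * ((y[:, None] - d * theta_in) / sigma) ** 2), axis=1))
    return np.mean(log_lik - log_ev)                  # estimated utility U(d)

for d in (0.5, 1.0, 2.0):
    print(d, expected_information_gain(d), 0.5 * np.log(1 + d**2))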
https://en.wikipedia.org/wiki/Destination%20routing
In telecommunications, destination routing is a methodology for selecting sequential pathways that messages must pass through to reach a target destination, based on a single destination address. In electronic switching systems for circuit-based telephone calls, the destination stations are identified by a station address or, more commonly, a telephone number. Description The telephone network comprises various classes of switching systems. An end office switch connects directly to the stations. It knows which circuit to activate (ring) when given a destination number. Other switches in the network are for transport only. These are sometimes called tandem switches. In this case, the goal of destination routing would be to select an outbound span for a particular destination number. The objective is to get a continuous signal path from the starting location of the caller to the ending location of the called party. External links VoIP Telephone System CC Routes Management Telephone exchanges Telecommunications engineering
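One common way to pick an outbound span from a destination number is longest-prefix matching; here is a minimal Python sketch in which the prefixes and trunk names are invented for illustration, not taken from any real routing plan:

ROUTING_TABLE = {          # dialed-number prefix -> outbound span (illustrative)
    "1":    "trunk-nanp",
    "1212": "trunk-nyc-direct",
    "44":   "trunk-uk",
}

def select_span(dialed):
    # pick the longest prefix that matches the destination number
    matches = [p for p in ROUTING_TABLE if dialed.startswith(p)]
    if not matches:
        return None
    return ROUTING_TABLE[max(matches, key=len)]

print(select_span("12125551234"))   # trunk-nyc-direct (longest match wins)
print(select_span("14155551234"))   # trunk-nanp
print(select_span("442071234567"))  # trunk-uk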
https://en.wikipedia.org/wiki/UNIVAC%20418
The UNIVAC 418 was a transistorized, 18-bit word magnetic-core memory machine made by Sperry Univac. The name came from its 4-microsecond memory cycle time and 18-bit word. The assembly language for this class of computers was TRIM III and ART418. Over the three different models, more than 392 systems were manufactured. It evolved from the Control Unit Tester (CUT), a device used in the factory to test peripherals for larger systems. Architecture The instruction word had three formats: Format I - common Load, Store, and Arithmetic operations f - Function code (6 bits) u - Operand address (12 bits) Format II - Constant arithmetic and Boolean functions f - Function code (6 bits) z - Operand address or value (12 bits) Format III - Input/Output f - Function code (6 bits) m - Minor function code (6 bits) k - Designator (6 bits) used for channel number, shift count, etc. Numbers were represented in ones' complement, single and double precision. The TRIM assembly source code used octal numbers as opposed to the more common hexadecimal because the 18-bit word length is evenly divisible by 3, but not by 4. The machine had the following addressable registers: A - Register (Double precision Accumulator, 36 bits) composed of: AU - Register (Upper Accumulator, 18 bits) AL - Register (Lower Accumulator, 18 bits) ICR - Register (Index Control Register, 3 bits), also designated the "B-register" SR - Register ("Special Register", 4 bits), a paging register allowing direct access to memory banks other than the executing (P register) bank P - Register (Program address, 15 bits) All register values were displayed in real time on the front panel of the computer in binary, with the ability of the user to enter new values via push button (a function that was safe to perform only when the computer was not in run mode). UNIVAC 418-I The first UNIVAC 418-I was delivered in June 1963. It was available with 4,096 to 16,384 words of memory. UNIVAC 1218 Military Computer The 418-I was also av
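A minimal Python sketch of how an 18-bit Format I word splits into its fields, and how an 18-bit ones'-complement word reads as a signed integer; the field layout follows the description above, while the helper names are invented for illustration:

def decode_format_i(word):
    # Format I: f = top 6 bits (function code), u = low 12 bits (operand address)
    assert 0 <= word < 1 << 18
    return word >> 12, word & 0o7777

def ones_complement_value(word, bits=18):
    # in ones' complement, a set sign bit means value = -(bitwise complement)
    mask = (1 << bits) - 1
    return -((~word) & mask) if word >> (bits - 1) else word

f, u = decode_format_i(0o123456)        # octal, matching TRIM's conventions
print(f, oct(u))                        # 10 0o3456  (f = 0o12 = 10)
print(ones_complement_value(0o777776))  # -1 (all ones except the lowest bit)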
https://en.wikipedia.org/wiki/Chinese%20Software%20Developer%20Network
The "Chinese Software Developer Network" or "China Software Developer Network", (CSDN), operated by Bailian Midami Digital Technology Co., Ltd., is one of the biggest networks of software developers in China. CSDN provides Web forums, blog hosting, IT news, and other services. CSDN has about 10 million registered users and is the largest developer community in China. Services offered Web forums with a ranking system and similar topics Blog hosting , with 69,484 bloggers at April 7, 2005 Document Center , a selection of blog articles IT News IT job hunting and training services Online bookmark service Web Forum The CSDN community website is where Chinese software programmers seek advice. A poster describes a problem, posts it in the forum with a price in CSDN points, and then waits for replies. On some popular boards, a poster will get a response in a few hours, if not minutes. Most replies are short but enough to point out the mistake and give possible solutions. Some posts include code and may grow to several pages. The majority of posts are written in Simplified Chinese, although Traditional Chinese and English posts are not uncommon. . Topics are mainly IT related and focused on programming, but political and life topics are also active. The forums were closed for two weeks in June 2004. This was likely for political reasons, because many political words, such as the names of political leaders and organizations, have been banned in posts since then. However, political discussions with intentional misspellings are still active. Blog The site hosts many IT blogs, but the large number of bloggers makes the server slow. In December 2005, Baidu rated CSDN as one of the top Chinese blog service providers. Collaboration with Microsoft CSDN started cooperation with Microsoft in 2002, and several Microsoft technical support staff have provided their support in CSDN forums since then. CSDN is also a major source of Chinese Microsoft Most Valuable Professional
https://en.wikipedia.org/wiki/List%20of%20NP-complete%20problems
This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in Garey & Johnson (1979). Graphs and hypergraphs Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands and even billions of nodes in some cases (e.g. Facebook or LinkedIn). 1-planarity 3-dimensional matching Bandwidth problem Bipartite dimension Capacitated minimum spanning tree Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The problem is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem. Clique cover problem Clique problem Complete coloring, a.k.a. achromatic number Cycle rank Degree-constrained spanning tree Domatic number Dominating set, a.k.a. domination number NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem. Feedback vertex set Feedback arc set Graph coloring Graph homomorphism problem Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) is known to be NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal in terms of the number of edges between parts. Grundy number of a directed graph. Hamiltonian completion Hamiltonian path problem, directed and undirected. Graph intersection number Longest path problem Maximum bipartite subgraph or (especially with weighted edges) maximum cut. Maximum common subgraph isomorphism problem Maximum independent set Maximum Induced pat
https://en.wikipedia.org/wiki/Mapping%20cone%20%28topology%29
In mathematics, especially homotopy theory, the mapping cone is a construction of topology, analogous to a quotient space. It is also called the homotopy cofiber, and also notated Cf. Its dual, a fibration, is called the mapping fibre. The mapping cone can be understood to be a mapping cylinder, with one end of the cylinder collapsed to a point. Thus, mapping cones are frequently applied in the homotopy theory of pointed spaces. Definition Given a map f : X → Y, the mapping cone Cf is defined to be the quotient space of the mapping cylinder (X × [0, 1]) ⊔ Y with respect to the equivalence relations (x, 0) ∼ (x′, 0) and (x, 1) ∼ f(x). Here [0, 1] denotes the unit interval with its standard topology. Note that some authors (like J. Peter May) use the opposite convention, switching 0 and 1. Visually, one takes the cone on X (the cylinder X × [0, 1] with one end, the 0 end, identified to a point), and glues the other end onto Y via the map f (the identification of the 1 end). Coarsely, one is taking the quotient space by the image of X, so Cf = Y/f(X); this is not precisely correct because of point-set issues, but is the philosophy, and is made precise by such results as the homology of a pair and the notion of an n-connected map. The above is the definition for a map of unpointed spaces; for a map of pointed spaces f : (X, x0) → (Y, y0) (so f(x0) = y0), one also identifies all of {x0} × [0, 1]; formally, one takes the further quotient by (x0, t) ∼ (x0, 0) for all t. Thus one end and the "seam" are all identified with the basepoint. Example of circle If X is the circle S1, the mapping cone Cf can be considered as the quotient space of the disjoint union of Y with the disk D2 formed by identifying each point x on the boundary of D2 to the point f(x) in Y. Consider, for example, the case where Y is the disk D2, and f : S1 → D2 is the standard inclusion of the circle S1 as the boundary of D2. Then the mapping cone Cf is homeomorphic to two disks joined on their boundary, which is topologically the sphere S2. Double mapping cylinder The mapping cone is a special case of the double mapping cylinder. This is basically a cylinder X × [0, 1] joined on one end to a space Y1 via a map f1 : X → Y1 and joined on the other e
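In symbols (a sketch in LaTeX notation, using the 0/1 convention stated above):

C_f \;=\; \bigl( (X \times [0,1]) \sqcup Y \bigr) / \sim,
\qquad (x,0) \sim (x',0), \quad (x,1) \sim f(x),

\text{so that } X \times \{0\} \text{ is crushed to the cone point and } X \times \{1\} \text{ is glued to } Y \text{ along } f.

\text{Example: for the inclusion } f : S^1 \hookrightarrow D^2, \quad C_f \cong D^2 \cup_{S^1} D^2 \cong S^2.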
https://en.wikipedia.org/wiki/Datasaab%20D2
D2 was a concept and prototype computer designed by Datasaab in Linköping, Sweden. It was built with discrete transistors and completed in 1960. Its purpose was to investigate the feasibility of building a computer for use in an aircraft to assist with navigation, ultimately leading to the design of the CK37 computer used in the Saab 37 Viggen. This military side of the project was known as SANK, or Saabs Automatiska Navigations-Kalkylator (Saab's Automatic Navigational-Calculator), and D2 was the name for its civilian application. The D2 weighed approximately 200 kg, and could be placed on a desktop. It used words of 20 bits corresponding to 6 decimal digits. The memory capacity was 6K words, corresponding to 15 kilobytes. Programs and data were stored in separate memories. It could perform 100,000 integer additions per second. Paper tape was used for input. Experience from the D2 prototype was the foundation for Datasaab's continued development both of the civilian D21 computer and military aircraft models. The commercial D21, launched as early as 1962, used magnetic tape, 24-bit words, and unified program and data memory. Otherwise it was close to the D2 prototype, while a working airborne computer required a lot more miniaturization. The D2 is on exhibit at IT-ceum, the computer museum in Linköping, Sweden. References External links D2 presented at the Datasaab's Friends' Society website One-of-a-kind computers Science and technology in Sweden 20-bit computers Transistorized computers
https://en.wikipedia.org/wiki/Windows%20Presentation%20Foundation
Windows Presentation Foundation (WPF) is a free and open-source graphical subsystem (similar to WinForms) originally developed by Microsoft for rendering user interfaces in Windows-based applications. WPF, previously known as "Avalon", was initially released as part of .NET Framework 3.0 in 2006. WPF uses DirectX and attempts to provide a consistent programming model for building applications. It separates the user interface from business logic, and resembles similar XML-oriented object models, such as those implemented in XUL and SVG. Overview WPF employs XAML, an XML-based language, to define and link various interface elements. WPF applications can be deployed as standalone desktop programs or hosted as an embedded object in a website. WPF aims to unify a number of common user interface elements, such as 2D/3D rendering, fixed and adaptive documents, typography, vector graphics, runtime animation, and pre-rendered media. These elements can then be linked and manipulated based on various events, user interactions, and data bindings. WPF runtime libraries are included with all versions of Microsoft Windows since Windows Vista and Windows Server 2008. Users of Windows XP SP2/SP3 and Windows Server 2003 can optionally install the necessary libraries. Microsoft Silverlight provided functionality that is mostly a subset of WPF to provide embedded web controls comparable to Adobe Flash. 3D runtime rendering had been supported in Silverlight since Silverlight 5. At the Microsoft Connect event on December 4, 2018, Microsoft announced the release of WPF as an open-source project on GitHub. It is released under the MIT License. Windows Presentation Foundation has become available for projects targeting the .NET software framework; however, the system is not cross-platform and is still available only on Windows. Features Direct3D Graphics, including desktop items like windows, are rendered using Direct3D. This allows the display of more complex graphics and custom themes.
https://en.wikipedia.org/wiki/Pitch%20detection%20algorithm
A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both. PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems) and so there may be different demands placed upon the algorithm. There is as yet no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below. A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency. General approaches One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms composed of multiple sine waves with differing periods, or with noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed. The algorithm's simplicity makes it "cheap" to implement. More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (average squared mean difference function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and, in their basic implementations, do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches). Current time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more in line with a human assessment of pitch.
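To make the time-domain approach concrete, here is a minimal Python sketch (not from the article) of an autocorrelation-based estimator; the function name and parameters are illustrative, and the frame length, search band, and peak-picking rule are assumptions:

```python
import numpy as np

def estimate_pitch_autocorr(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame by picking the
    autocorrelation peak over plausible pitch periods.

    A naive sketch: real detectors (e.g. YIN) add normalization, peak
    interpolation, and voicing decisions to suppress octave errors.
    """
    frame = frame - np.mean(frame)              # remove DC offset
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]                # keep non-negative lags

    lag_min = int(sample_rate / fmax)           # shortest period searched
    lag_max = int(sample_rate / fmin)           # longest period searched
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag                    # period (samples) -> Hz

# Usage: a 440 Hz sine at 16 kHz should yield an estimate near 440 Hz
# (the lag is quantized to whole samples, so expect roughly 444 Hz here).
sr = 16000
t = np.arange(2048) / sr
print(round(estimate_pitch_autocorr(np.sin(2 * np.pi * 440.0 * t), sr), 1))
```

Note the lag quantization in the last step: refining the peak by parabolic interpolation between neighbouring lags is the usual cheap fix.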
https://en.wikipedia.org/wiki/Designer%20baby
A designer baby is a baby whose genetic makeup has been selected or altered, often to exclude a particular gene or to remove genes associated with disease. This process usually involves analysing a wide range of human embryos to identify genes associated with particular diseases and characteristics, and selecting embryos that have the desired genetic makeup, a process known as preimplantation genetic diagnosis. Screening for single genes is commonly practiced, and polygenic screening is offered by a few companies. Other methods by which a baby's genetic information can be altered involve directly editing the genome before birth, which is not routinely performed; as of 2019, only one instance of this is known to have occurred, in which the Chinese twins Lulu and Nana were edited as embryos, causing widespread criticism. Genetically altered embryos can be produced by introducing the desired genetic material into the embryo itself, or into the sperm and/or egg cells of the parents, either by delivering the desired genes directly into the cell or by using gene-editing technology. This process is known as germline engineering, and performing it on embryos that will be brought to term is typically prohibited by law. Editing embryos in this manner means that the genetic changes can be carried down to future generations, and since the technology concerns editing the genes of an unborn baby, it is considered controversial and is subject to ethical debate. While some scientists endorse the use of this technology to treat disease, concerns have been raised that this could be translated into using the technology for cosmetic purposes and enhancement of human traits. Pre-implantation genetic diagnosis Pre-implantation genetic diagnosis (PGD or PIGD) is a procedure in which embryos are screened prior to implantation. The technique is used alongside in vitro fertilisation (IVF) to obtain embryos for evaluation of the genome; alternatively, oocytes can be screened prior to fertilisation.
https://en.wikipedia.org/wiki/Sanger%20sequencing
Sanger sequencing is a method of DNA sequencing that involves electrophoresis and is based on the random incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. First developed by Frederick Sanger and colleagues in 1977, it became the most widely used sequencing method for approximately 40 years. It was first commercialized by Applied Biosystems in 1986. More recently, higher-volume Sanger sequencing has been replaced by next-generation sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use for smaller-scale projects and for validation of deep sequencing results. It still has the advantage over short-read sequencing technologies (like Illumina) in that it can produce DNA sequence reads of > 500 nucleotides and maintains a very low error rate, with accuracies around 99.99%. Sanger sequencing is still actively used in public health initiatives such as sequencing the spike protein from SARS-CoV-2, as well as for the surveillance of norovirus outbreaks through the Centers for Disease Control and Prevention's (CDC) CaliciNet surveillance network. Method The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified di-deoxynucleotide triphosphates (ddNTPs), the latter of which terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a modified ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in automated sequencing machines. The DNA sample is divided into four separate sequencing reactions, containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction is added only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP).
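To illustrate the chain-termination logic just described, here is a toy Python sketch (not a lab protocol; the template string and function names are invented for illustration). Each reaction tube yields fragments ending wherever its ddNTP happened to be incorporated, and reading all fragment lengths in ascending order recovers the synthesized complementary strand, as on a sequencing gel read bottom-up:

```python
# Toy simulation of Sanger chain termination (illustrative only):
# every prefix of the synthesized strand ending in base b appears in
# the reaction tube whose ddNTP is b, so sorting all fragment lengths
# reconstructs the synthesized (complementary) sequence.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def sanger_fragments(template):
    """Return, per ddNTP, the lengths of terminated fragments."""
    synthesized = "".join(COMPLEMENT[b] for b in template)
    reactions = {"A": [], "T": [], "G": [], "C": []}
    for length, base in enumerate(synthesized, start=1):
        # In the tube containing ddX, extension stops whenever base X
        # is incorporated, producing a fragment of this length.
        reactions[base].append(length)
    return reactions

def read_sequence(reactions):
    """Read the gel bottom-up: shortest fragment first."""
    by_length = {l: base for base, ls in reactions.items() for l in ls}
    return "".join(by_length[l] for l in sorted(by_length))

template = "ATGGCATT"
rxn = sanger_fragments(template)
print(rxn)                    # fragment lengths per ddNTP tube
print(read_sequence(rxn))     # "TACCGTAA", complement of the template
```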
https://en.wikipedia.org/wiki/Cryptobiosis
Cryptobiosis or anabiosis is a metabolic state entered by extremophilic organisms in response to adverse environmental conditions such as desiccation, freezing, and oxygen deficiency. In the cryptobiotic state, all measurable metabolic processes stop, preventing reproduction, development, and repair. When environmental conditions become hospitable again, the organism returns to the metabolic state of life it had prior to cryptobiosis. Forms Anhydrobiosis Anhydrobiosis is the most studied form of cryptobiosis and occurs in situations of extreme desiccation. The term anhydrobiosis derives from the Greek for "life without water" and is most commonly used for the desiccation tolerance observed in certain invertebrate animals such as bdelloid rotifers, tardigrades, brine shrimp, nematodes, and at least one insect, a species of chironomid (Polypedilum vanderplanki). However, other life forms exhibit desiccation tolerance. These include the resurrection plant Craterostigma plantagineum, the majority of plant seeds, and many microorganisms such as bakers' yeast. Studies have shown that some anhydrobiotic organisms can survive for decades, even centuries, in the dry state. Invertebrates undergoing anhydrobiosis often contract into a smaller shape and some synthesize a sugar called trehalose. Desiccation tolerance in plants is associated with the production of another sugar, sucrose. These sugars are thought to protect the organism from desiccation damage. In some creatures, such as bdelloid rotifers, no trehalose has been found, which has led scientists to propose other mechanisms of anhydrobiosis, possibly involving intrinsically disordered proteins. In 2011, Caenorhabditis elegans, a nematode that is also one of the best-studied model organisms, was shown to undergo anhydrobiosis in the dauer larva stage. Further research taking advantage of genetic and biochemical tools available for this organism revealed that, in addition to trehalose biosynthesis, a set of other molecular pathways is involved in anhydrobiosis.
https://en.wikipedia.org/wiki/Restriction%20digest
A restriction digest is a procedure used in molecular biology to prepare DNA for analysis or other processing. It is sometimes termed DNA fragmentation, though this term is used for other procedures as well. In a restriction digest, DNA molecules are cleaved at specific restriction sites of 4-12 nucleotides in length by use of restriction enzymes which recognize these sequences. The resulting digested DNA is very often selectively amplified using polymerase chain reaction (PCR), making it more suitable for analytical techniques such as agarose gel electrophoresis and chromatography. It is used in genetic fingerprinting, plasmid subcloning, and RFLP analysis. Restriction site A given restriction enzyme cuts DNA segments within a specific nucleotide sequence, at what is called a restriction site. These recognition sequences are typically four, six, eight, ten, or twelve nucleotides long and generally palindromic (i.e. reading the same in the 5'→3' direction on both strands). Because there are only so many ways to arrange the four nucleotides that compose DNA (adenine, thymine, guanine and cytosine) into a four- to twelve-nucleotide sequence, recognition sequences tend to occur by chance in any long sequence; a specific six-nucleotide site, for example, is expected on average once every 4^6 = 4,096 base pairs of random sequence. Restriction enzymes specific to hundreds of distinct sequences have been identified and synthesized for sale to laboratories, and as a result, several potential "restriction sites" appear in almost any gene or locus of interest on any chromosome. Furthermore, almost all artificial plasmids include an (often entirely synthetic) polylinker (also called a "multiple cloning site") that contains dozens of restriction enzyme recognition sequences within a very short segment of DNA. This allows the insertion of almost any specific fragment of DNA into plasmid vectors, which can be efficiently "cloned" by insertion into replicating bacterial cells. After restriction digest, DNA can then be analysed using agarose gel electrophoresis. In gel electrophoresis, a sample of the digested DNA is loaded into an agarose gel and an electric field draws the negatively charged fragments through it, separating them by size.
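As a concrete sketch of cutting at recognition sites, the following Python example (not from the article) computes the fragments produced by an EcoRI-style digest; the GAATTC recognition sequence is real, but the input sequence is invented and the model simplifies by cutting at the start of each site, ignoring the enzyme's actual cut offset and sticky ends:

```python
def digest(sequence, site):
    """Return fragment lengths after cutting at every occurrence of `site`.

    Simplified model: cuts at the start of each recognition site and
    ignores the enzyme's actual cut offset and overhangs.
    """
    cut_positions = []
    i = sequence.find(site)
    while i != -1:
        cut_positions.append(i)
        i = sequence.find(site, i + 1)

    # Fragment lengths are the gaps between successive cut positions.
    bounds = [0] + cut_positions + [len(sequence)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

# EcoRI recognizes the palindromic site GAATTC.
seq = "TTGAATTCCGATCGGAATTCAT"
print(digest(seq, "GAATTC"))   # [2, 12, 8] -> three fragments
```

The three fragment lengths are exactly what one would then size-separate on an agarose gel, as the article describes.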
https://en.wikipedia.org/wiki/Quantum%201/f%20noise
Quantum 1/f noise is an intrinsic and fundamental part of quantum mechanics. Accounting for quantum 1/f noise improves the quality of images and signals in applications ranging from avionics and photography to scientific instrumentation. Engineers have battled unwanted 1/f noise since 1925, coining names for it (such as flicker noise, Funkelrauschen, and bruit de scintillation) that reflect its then-mysterious nature. The quantum 1/f noise theory was developed about 50 years later, describing the nature of 1/f noise and allowing it to be explained and calculated via straightforward engineering formulas. It allows for the low-noise optimization of materials, devices and systems in most high-technology applications of modern industry and science. The theory includes the conventional and coherent quantum 1/f effects (Q1/fE). Both effects are combined in a general engineering formula and are present in quantum 1/f noise, which itself accounts for most fundamental 1/f noise. The latter is defined as the result of the simultaneous presence of nonlinearity and a certain type of homogeneity in a system, and can be quantum or classical. The conventional Q1/fE represents 1/f fluctuations caused by bremsstrahlung, decoherence and interference in the scattering of charged particles off one another, in tunneling, or in any other process in solid-state physics and in general. Other noise data sets It has also recently been claimed that 1/f noise has been seen in higher-order self-constructing functions, as well as in complex systems: biological, chemical, and physical. The theory The basic derivation of quantum 1/f noise was made by Peter Handel, a theoretical physicist at the University of Missouri–St. Louis, and published in Physical Review A in August 1980. Several hundred papers have been published by many authors on Handel's quantum theory of 1/f noise, which is a new aspect of quantum mechanics. They verified, applied, and further developed the quantum 1/f noise formulas. Aldert van der Ziel, a leading authority on electronic noise, was among those who tested and applied the theory.
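The property at issue throughout is a power spectral density proportional to 1/f. As a purely classical illustration (this demonstrates what a 1/f spectrum is, not Handel's quantum derivation), the following Python sketch synthesizes noise with an approximate 1/f spectrum by spectral shaping and then fits the spectral slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16

# Shape white Gaussian noise so its power spectral density ~ 1/f:
# amplitude must fall as 1/sqrt(f), since power goes as amplitude^2.
white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC bin untouched
pink = np.fft.irfft(spectrum * scale, n)

# Estimate the spectral slope: a log-log fit of PSD vs frequency
# should give an exponent near -1 for 1/f noise.
psd = np.abs(np.fft.rfft(pink))**2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
print(f"fitted spectral exponent: {slope:.2f}")   # close to -1
```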
https://en.wikipedia.org/wiki/Transverse%20isotropy
A transversely isotropic material is one with physical properties that are symmetric about an axis that is normal to a plane of isotropy. This transverse plane has infinite planes of symmetry and thus, within this plane, the material properties are the same in all directions. Hence, such materials are also known as "polar anisotropic" materials. In geophysics, vertically transverse isotropy (VTI) is also known as radial anisotropy. This type of material exhibits hexagonal symmetry (though technically this ceases to be true for tensors of rank 6 and higher), so the number of independent constants in the (fourth-rank) elasticity tensor is reduced to 5 (from a total of 21 independent constants in the case of a fully anisotropic solid). The (second-rank) tensors of electrical resistivity, permeability, etc. have two independent constants. Example of transversely isotropic materials An example of a transversely isotropic material is the so-called on-axis unidirectional fiber composite lamina where the fibers are circular in cross section. In a unidirectional composite, the plane normal to the fiber direction can be considered as the isotropic plane, at long wavelengths (low frequencies) of excitation. In the figure to the right, the fibers would be aligned with the axis of symmetry, which is normal to the plane of isotropy. In terms of effective properties, geological layers of rocks are often interpreted as being transversely isotropic. Calculating the effective elastic properties of such layers in petrology has been termed Backus upscaling, which is described below. Material symmetry matrix The material matrix $\boldsymbol{K}$ has a symmetry with respect to a given orthogonal transformation ($\boldsymbol{A}$) if it does not change when subjected to that transformation. For invariance of the material properties under such a transformation we require $\boldsymbol{A} \cdot \boldsymbol{K} \cdot \boldsymbol{A}^{T} = \boldsymbol{K}$. Hence the condition for material symmetry is (using the definition of an orthogonal transformation, $\boldsymbol{A}^{-1} = \boldsymbol{A}^{T}$) $\boldsymbol{K} = \boldsymbol{A}^{T} \cdot \boldsymbol{K} \cdot \boldsymbol{A}$. Orthogonal transformations can be represented in Cartesian coordinates by a $3 \times 3$ matrix $\boldsymbol{A}$.
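To make the count of five elastic constants concrete, here is the standard Voigt-notation stiffness matrix of a transversely isotropic solid, a sketch assuming the symmetry axis is $x_3$; the five independent constants are $C_{11}$, $C_{12}$, $C_{13}$, $C_{33}$, and $C_{44}$:

```latex
\mathbf{C} =
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{12} & C_{11} & C_{13} & 0 & 0 & 0 \\
C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{44} & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{2}\,(C_{11}-C_{12})
\end{bmatrix}
```

The sixth diagonal entry is not independent: in-plane isotropy forces the in-plane shear modulus to equal $(C_{11}-C_{12})/2$.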
https://en.wikipedia.org/wiki/CAST%20tool
CAST tools are software applications used in the process of software testing. The acronym stands for "Computer Aided Software Testing". Such tools are available from various vendors, and different tools exist for different types of testing as well as for test management. They can be cost-effective and time-saving because they reduce the incidence of human error and apply tests thoroughly and consistently. External links Cast Tools: some examples. Software testing tools
https://en.wikipedia.org/wiki/IBM%20Lightweight%20Third-Party%20Authentication
Lightweight Third-Party Authentication (LTPA) is an authentication technology used in IBM WebSphere and Lotus Domino products. When accessing web servers that use LTPA, a web user can re-use their login across physical servers. A Lotus Domino server or an IBM WebSphere server that is configured to use LTPA authentication will challenge the web user for a name and password. When the user has been authenticated, their browser receives a session cookie (a cookie that is only available for one browsing session) containing the LTPA token. If the user, after having received the LTPA token, accesses a server that is a member of the same authentication realm as the first server, and if the browsing session has not been terminated (the browser was not closed), then the user is automatically authenticated and will not be challenged for a name and password again. Such an environment is also called a single sign-on environment. See also Access control List of single sign-on implementations References DeveloperToolbox Technical Magazine: WebSphere and Domino single sign-on DominoTomcatSSO at OpenNTF.org: An open source implementation of LTPA for Tomcat Websphere Websphere Liberty Profile Lightweight Third-Party Authentication Computer access control
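The single sign-on flow described above can be sketched generically. The Python sketch below illustrates only the cookie-check logic; the real LTPA token is an encrypted, signed structure whose format is specific to IBM's products, and every name, field, and callback here is hypothetical:

```python
import time

# Hypothetical illustration of the SSO flow described above; not IBM's
# actual token format or API. `verify_token` stands in for the realm's
# shared-key signature check and decryption.
REALM = "example-realm"

def handle_request(cookies, verify_token, challenge_user):
    """Authenticate via an existing realm token, else challenge."""
    token = cookies.get("LtpaToken")
    if token is not None:
        claims = verify_token(token)          # signature/decryption check
        if (claims is not None
                and claims["realm"] == REALM
                and claims["expires"] > time.time()):
            return claims["user"]             # single sign-on succeeds
    # No valid token: fall back to a name-and-password challenge,
    # after which the server would set the session cookie.
    return challenge_user()
```

Because every server in the realm can verify the same token, the name-and-password challenge happens at most once per browsing session, which is precisely the behaviour the article describes.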
https://en.wikipedia.org/wiki/Edward%20Charles%20Titchmarsh
Edward Charles "Ted" Titchmarsh (June 1, 1899 – January 18, 1963) was a leading British mathematician. Education Titchmarsh was educated at King Edward VII School (Sheffield) and Balliol College, Oxford, where he began his studies in October 1917. Career Titchmarsh was known for work in analytic number theory, Fourier analysis and other parts of mathematical analysis. He wrote several classic books in these areas; his book on the Riemann zeta-function was reissued in an edition edited by Roger Heath-Brown. Titchmarsh was Savilian Professor of Geometry at the University of Oxford from 1932 to 1963. He was a Plenary Speaker at the ICM in 1954 in Amsterdam. He was on the governing body of Abingdon School from 1935 to 1947. Awards Fellow of the Royal Society, 1931 De Morgan Medal, 1953 Sylvester Medal, 1955 Berwick Prize winner, 1956 Publications The Zeta-Function of Riemann (1930); Introduction to the Theory of Fourier Integrals (1937); 2nd edition (1939); 2nd edition (1948); The Theory of Functions (1932); Mathematics for the General Reader (1948); The Theory of the Riemann Zeta-Function (1951); 2nd edition, revised by D. R. Heath-Brown (1986); Eigenfunction Expansions Associated with Second-order Differential Equations. Part I (1946); 2nd edition (1962); Eigenfunction Expansions Associated with Second-order Differential Equations. Part II (1958); References 1899 births 1963 deaths People from Newbury, Berkshire 20th-century British mathematicians Number theorists Mathematical analysts Fellows of the Royal Society People educated at King Edward VII School, Sheffield Savilian Professors of Geometry Alumni of Balliol College, Oxford Governors of Abingdon School