Columns: source (string, 31–203 characters) · text (string, 28–2k characters)
https://en.wikipedia.org/wiki/Virahanka
Virahanka (Devanagari: विरहाङ्क) was an Indian prosodist who is also known for his work on mathematics. He may have lived in the 6th century, but it is also possible that he worked as late as the 8th century. His work on prosody builds on the Chhanda-sutras of Pingala (4th century BCE), and was the basis for a 12th-century commentary by Gopala. He was the first to propose the so-called Fibonacci Sequence. See also Indian mathematicians References External links The So-called Fibonacci Numbers in Ancient and Medieval India by Parmanand Singh 8th-century Indian mathematicians Fibonacci numbers Medieval Sanskrit grammarians Ancient Indian mathematical works
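Virahanka's result is usually read as counting metres of a given duration built from short (1-beat) and long (2-beat) syllables, whose counts obey the Fibonacci recurrence. A minimal sketch of that counting argument (the function name and beat values are illustrative assumptions):

```python
def virahanka(n):
    """Count the patterns of short (1 beat) and long (2 beat) syllables
    that fill exactly n beats; the counts obey V(n) = V(n-1) + V(n-2),
    i.e. the Fibonacci recurrence."""
    counts = [1, 1]  # 0 beats: the empty pattern; 1 beat: a single short syllable
    for _ in range(2, n + 1):
        # a pattern of n beats ends either in a short syllable (n-1 beats before it)
        # or in a long syllable (n-2 beats before it)
        counts.append(counts[-1] + counts[-2])
    return counts[n]

print([virahanka(n) for n in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```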
https://en.wikipedia.org/wiki/Baudhayana%20sutras
The Baudhāyana sūtras (Sanskrit: बौधायन) are a group of Vedic Sanskrit texts which cover dharma, daily ritual and mathematics, and are among the oldest Dharma-related texts of Hinduism that have survived into the modern age from the 1st-millennium BCE. They belong to the Taittiriya branch of the Krishna Yajurveda school and are among the earliest texts of the genre. The Baudhayana sūtras consist of six texts: the Śrautasūtra, probably in 19 praśnas (questions), the Karmāntasūtra in 20 adhyāyas (chapters), the Dvaidhasūtra in 4 praśnas, the Grihyasutra in 4 praśnas, the Dharmasūtra in 4 praśnas and the Śulbasūtra in 3 adhyāyas. The Śulbasūtra is noted for containing several early mathematical results, including an approximation of the square root of 2 and the statement of the Pythagorean theorem. Baudhāyana Shrautasūtra His Śrauta sūtras related to performing Vedic sacrifices have followers in some Smārta brāhmaṇas (Iyers) and some Iyengars of Tamil Nadu, Yajurvedis or Namboothiris of Kerala, Gurukkal Brahmins (Aadi Saivas) and Kongu Vellalars. The followers of this sūtra follow a different method and do 24 Tila-tarpaṇa, as Lord Krishna had done tarpaṇa on the day before amāvāsyā; they call themselves Baudhāyana Amavasya. Baudhāyana Dharmasūtra The Dharmasūtra of Baudhāyana, like that of Apastamba, also forms a part of the larger Kalpasutra. Likewise, it is composed of praśnas which literally means 'questions' or books. The structure of this Dharmasūtra is not very clear because it came down in an incomplete manner. Moreover, the text has undergone alterations in the form of additions and explanations over a period of time. The praśnas consist of the Srautasutra and other ritual treatises, the Sulvasutra which deals with vedic geometry, and the Grhyasutra which deals with domestic rituals. There are no commentaries on this Dharmasūtra with the exception of Govindasvāmin's Vivaraṇa. The date of the commentary is uncertain but according to Olivelle it is not very ancient. Also the commentary is inferior in comparison to that of Haradatta on Āpastamba and Gautama. This Dharmasūtr
https://en.wikipedia.org/wiki/Thomson%20EF936x
The Thomson EF936x series is a type of Graphic Display Processor (GDP) by Thomson-EFCIS. The chip could draw at 1 million pixels per second, which was relatively advanced for the time of its release (1982 or earlier). There are various versions of the chip (EF9364, EF9365, EF9366, EF9367, SFF96364, EF9369), with slightly different capabilities. In 1982 Commodore released a "High Resolution Graphics" board for the PET based on the EF9365 and EF9366 chips, allowing it to display 512 x 512 or 512 x 256 resolution graphics. The EF9366 was also used on the SMP-E353 graphic card for the Siemens computer series and on the NDR-Klein-Computer introduced in 1984. Version EF9369 was used on computers such as the Thomson MO5NR, MO6, TO8, TO9 and TO9+, and from 1985 to 1989 on the DAI Personal Computer. Versions Based on the 1989 data book published by the company, the EF936x series was split into Graphics Controllers and Color Palette models. Graphics Controllers EF9364 CRT Processor introduced in 1981 EF9365 512×512 (interlaced), 256×256, 128×128, 64×64; 50 Hz EF9366 512×256 (noninterlaced); 50/60 Hz EF9367 1024×512 (interlaced), 1024x416 (interlaced); 50/60 Hz (capable of SECAM system output for the French market). SFF96364 Color Palette EF9369 - 4-bit DACs (16 out of 4096 colors - 12-bit RGB), generating gamma corrected (gamma 2.8) voltages. TS9370 - 4-bit DACs (16 out of 4096 colors) Capabilities Integrated DRAM controller Line drawing, with delta-x and delta-y limited to 255 each. Support for solid, dotted, dashed and dotted/dashed lines. Built-in 5×8 pixel ASCII font. Support for rendering tilted characters, and scaling by integer factors (no antialiasing) Clear screen Light Pen support The GPUs did not support direct access to the graphics memory, although a special command was provided to aid in implementing access to individual memory words. See also Thomson EF934x Thomson MO5NR Thomson MO6 Thomson TO8 Thomson TO9 Thomson TO9+ NDR-Klein-Computer Commodore P
https://en.wikipedia.org/wiki/Protein%20skimmer
A protein skimmer or foam fractionator is a device used to remove organic compounds such as food and waste particles from water. It is most commonly used in commercial applications like municipal water treatment facilities, public aquariums, and aquaculture facilities. Smaller protein skimmers are also used for filtration of home saltwater aquariums and even freshwater aquariums and ponds. Function Protein skimming removes certain organic compounds, including proteins and amino acids found in food particles and fish waste, by using the polarity of the protein itself. Due to their intrinsic charge, water-borne proteins are either repelled or attracted by the air/water interface and these molecules can be described as hydrophobic (such as fats or oils) or hydrophilic (such as salt, sugar, ammonia, most amino acids, and most inorganic compounds). However, some larger organic molecules can have both hydrophobic and hydrophilic portions. These molecules are called amphipathic or amphiphilic. Commercial protein skimmers work by generating a large air/water interface, specifically by injecting large numbers of bubbles into the water column. In general, the smaller the bubbles, the more effective the protein skimming, because the combined surface area of many small bubbles is much greater than that of fewer large bubbles occupying the same total volume. Large numbers of small bubbles present an enormous air/water interface for hydrophobic organic molecules and amphipathic organic molecules to collect on the bubble surface (the air/water interface). Water movement hastens diffusion of organic molecules, which effectively brings more organic molecules to the air/water interface and lets the organic molecules accumulate on the surface of the air bubbles. This process continues until the interface is saturated, unless the bubble is removed from the water or it bursts, in which case the accumulated molecules release back into the water column. However, it is important to note that further
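The claim about bubble size can be made concrete with a quick calculation: for spherical bubbles of radius r filling a fixed total volume V, the combined surface area is 3V/r, so halving the radius doubles the available air/water interface. A small sketch (the radii and the one-litre figure are illustrative assumptions):

```python
import math

def total_surface_area(total_volume, bubble_radius):
    """Total surface area of identical spherical bubbles of a given radius
    that together occupy total_volume; algebraically this is 3*V/r."""
    volume_per_bubble = (4.0 / 3.0) * math.pi * bubble_radius ** 3
    n_bubbles = total_volume / volume_per_bubble
    return n_bubbles * 4.0 * math.pi * bubble_radius ** 2

one_litre = 1e-3  # m^3 of injected air
for r_mm in (5.0, 1.0, 0.1):
    area = total_surface_area(one_litre, r_mm / 1000.0)
    print(f"bubble radius {r_mm} mm -> {area:.2f} m^2 of air/water interface")
# the interface scales as 1/r: ten-times-smaller bubbles give ten times more area
```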
https://en.wikipedia.org/wiki/Boule%20%28crystal%29
A boule is a single-crystal ingot produced by synthetic means. A boule of silicon is the starting material for most of the integrated circuits used today. In the semiconductor industry synthetic boules can be made by a number of methods, such as the Bridgman technique and the Czochralski process, which result in a cylindrical rod of material. In the Czochralski process a seed crystal is required to create a larger crystal, or ingot. This seed crystal is dipped into the pure molten silicon and slowly extracted. The molten silicon grows on the seed crystal in a crystalline fashion. As the seed is extracted the silicon solidifies and eventually a large, cylindrical boule is produced. A semiconductor crystal boule is normally cut into circular wafers using an inside hole diamond saw or diamond wire saw, and each wafer is lapped and polished to provide substrates suitable for the fabrication of semiconductor devices on its surface. The process is also used to create sapphires, which are used for substrates in the production of blue and white LEDs, optical windows in special applications and as the protective covers for watches. References Crystals Semiconductor growth
https://en.wikipedia.org/wiki/Bridgman%E2%80%93Stockbarger%20method
The Bridgman–Stockbarger method, or Bridgman–Stockbarger technique, is named after Harvard physicist Percy Williams Bridgman (1882–1961) and MIT physicist Donald C. Stockbarger (1895–1952). The method includes two similar but distinct techniques primarily used for growing boules (single-crystal ingots), but which can be used for solidifying polycrystalline ingots as well. Overview The methods involve heating polycrystalline material above its melting point and slowly cooling it from one end of its container, where a seed crystal is located. A single crystal of the same crystallographic orientation as the seed material is grown on the seed and is progressively formed along the length of the container. The process can be carried out in a horizontal or vertical orientation, and usually involves a rotating crucible/ampoule to stir the melt. The Bridgman method is a popular way of producing certain semiconductor crystals such as gallium arsenide, for which the Czochralski method is more difficult. The process can reliably produce single-crystal ingots, but does not necessarily result in uniform properties through the crystal. The difference between the Bridgman technique and Stockbarger technique is subtle: While both methods utilize a temperature gradient and a moving crucible, the Bridgman technique utilizes the relatively uncontrolled gradient produced at the exit of the furnace; the Stockbarger technique introduces a baffle, or shelf, separating two coupled furnaces with temperatures above and below the freezing point. Stockbarger's modification of the Bridgman technique allows for better control over the temperature gradient at the melt/crystal interface. When seed crystals are not employed as described above, polycrystalline ingots can be produced from a feedstock consisting of rods, chunks, or any irregularly shaped pieces once they are melted and allowed to re-solidify. The resultant microstructure of the ingots so obtained are characteristic of direction
https://en.wikipedia.org/wiki/Web%20Services%20Invocation%20Framework
The Web Services Invocation Framework (WSIF) supports a simple and flexible Java API for invoking any Web Services Description Language (WSDL)-described service. Using WSIF, WSDL can become the centerpiece of an integration framework for accessing software running on diverse platforms which uses different protocols. The software needs to be described using WSDL and have a binding included in its description that the client's WSIF framework has a provider for. WSIF defines and comes packaged with providers for local Java, Enterprise JavaBeans (EJB), Java Message Service (JMS), and Java EE Connector Architecture (JCA) protocols, which means that a client can define an EJB or a Java Message Service-accessible service directly as a WSDL binding and access it transparently using WSIF, using the same API one would use for a SOAP service or a local Java class. Structure In WSDL, a binding defines how to map between the abstract PortType and a real service format and protocol. For example, the SOAP binding defines the encoding style, the SOAPAction header, the namespace of the body (the targetURI), and so forth. WSDL allows multiple implementations for a Web service and multiple ports that share the same PortType. In other words, WSDL allows the same interface to have bindings to services like SOAP and IIOP. WSIF provides an API to allow the same client code to access any available binding. As the client code can be written to the PortType, it can be a deployment or configuration setting (or a code choice) which port and binding it uses. WSIF uses providers to support these multiple WSDL bindings. A provider is a piece of code that supports a WSDL extension and allows invocation of the service through that particular implementation. WSIF providers use the J2SE JAR service provider specification, making them discoverable at runtime. Clients can utilize new implementations and delegate the choice of port to the infrastructure and runtime, which allows the implementatio
https://en.wikipedia.org/wiki/Aluminium%20nitride
Aluminium nitride (AlN) is a solid nitride of aluminium. It has a high thermal conductivity of up to 321 W/(m·K) and is an electrical insulator. Its wurtzite phase (w-AlN) has a band gap of ~6 eV at room temperature and has a potential application in optoelectronics operating at deep ultraviolet frequencies. History and physical properties AlN was first synthesized in 1862 by F. Briegleb and A. Geuther. AlN, in the pure (undoped) state has an electrical conductivity of 10−11–10−13 Ω−1⋅cm−1, rising to 10−5–10−6 Ω−1⋅cm−1 when doped. Electrical breakdown occurs at a field of 1.2–1.8 V/mm (dielectric strength). The material exists primarily in the hexagonal wurtzite crystal structure, but also has a metastable cubic zincblende phase, which is synthesized primarily in the form of thin films. It is predicted that the cubic phase of AlN (zb-AlN) can exhibit superconductivity at high pressures. In AlN wurtzite crystal structure, Al and N alternate along the c-axis, and each bond is tetrahedrally coordinated with four atoms per unit cell. One of the unique intrinsic properties of wurtzite AlN is its spontaneous polarization. The origin of spontaneous polarization is the strong ionic character of chemical bonds in wurtzite AlN due to the large difference in electronegativity between aluminium and nitrogen atoms. Furthermore, the non-centrosymmetric wurtzite crystal structure gives rise to a net polarization along the c-axis. Compared with other III-nitride materials, AlN has a larger spontaneous polarization due to the higher nonideality of its crystal structure (Psp: AlN 0.081 C/m2 > InN 0.032 C/m2 > GaN 0.029 C/m2). Moreover, the piezoelectric nature of AlN gives rise to internal piezoelectric polarization charges under strain. These polarization effects can be utilized to induce a high density of free carriers at III-nitride semiconductor heterostructure interfaces completely dispensing with the need of intentional doping. Owing to the broken inversion symmetry along
https://en.wikipedia.org/wiki/Ultima%20I%3A%20The%20First%20Age%20of%20Darkness
Ultima, later known as Ultima I: The First Age of Darkness or simply Ultima I, is the first game in the Ultima series of role-playing video games created by Richard Garriott, originally released for the Apple II. It was first published in the United States by California Pacific Computer Company, which registered a copyright for the game on September 2, 1980 and officially released it in June 1981. Since its release, the game has been completely re-coded and ported to many different platforms. The 1986 re-code of Ultima is the most commonly known and available version of the game. Ultima revolves around a quest to find and destroy the Gem of Immortality, which is being used by the evil wizard Mondain to enslave the lands of Sosaria. With the gem in his possession, he cannot be killed, and his minions roam and terrorize the countryside. The player takes on the role of "The Stranger", an individual summoned from another world to end the rule of Mondain. The game follows the endeavors of the stranger in this task, which involves progressing through many aspects of game play, including dungeon crawling and space travel. The game was one of the first definitive commercial computer RPGs, and is considered an important and influential turning point for the development of the genre throughout years to come. In addition to its influences on the RPG genre, it is also the first open-world computer game. Gameplay The world of Ultima is presented in a variety of different ways. The overworld is projected in a topdown, third-person view, while dungeons are displayed in first-person, one-point perspective. In both scenarios the player character is controlled with the keyboard directional arrows, and shortcut keys are used for other commands, such as A for attack and B for board. Character creation at the start of Ultima is not unlike a simplified version of traditional tabletop role-playing games. The player is presented with a number of points to distribute between various
https://en.wikipedia.org/wiki/Ultima%20II%3A%20The%20Revenge%20of%20the%20Enchantress
Ultima II: The Revenge of the Enchantress, released on August 24, 1982, for the Apple II (USCO# PA-317-502), is the second role-playing video game in the Ultima series, and the second installment in Ultima's "Age of Darkness" trilogy. It was also the only official Ultima game published by Sierra On-Line. Conflict with Sierra over royalties for the IBM port of this game led the series creator Richard Garriott to start his own company, Origin Systems. The plot of Ultima II revolves around the evil enchantress Minax, taking over the world and exacting revenge on the player for the player's defeat of Mondain in Ultima. The player travels through time to acquire the means to defeat Minax and restore the world to peace. Ultima II has a larger game world than Ultima I, and hosts advances in graphics and in gameplay. Gameplay The gameplay is very similar to the previous game in the series, Ultima I: The First Age of Darkness. The scope of the game is bigger, in that there are several more places to explore, even though some of them (like most of the Solar System planets and the dungeons and towers) are not required to complete the game. In the game, the player has to travel to several different time periods of Earth, using time doors. The periods are the Time of Legends (a mythological period), Pangea (about 300 to 250 million years ago), B.C. (1423, "before the dawn of civilization"), A.D. (1990), and the Aftermath (after 2112). The player also has to travel to space, where all the planets in the Solar System can be visited. Plot From the game's story, the player learns that the lover of the dark wizard Mondain, the enchantress Minax, is threatening Earth through disturbances in the space-time continuum. The player must guide a hero through time and the Solar System to defeat her evil plot. The young Minax survived her mentor's and lover's death at the hands of the Stranger (in Ultima I: The First Age of Darkness) and went into hiding. Several years later, Minax got
https://en.wikipedia.org/wiki/Ultima%20III%3A%20Exodus
Ultima III: Exodus is the third game in the series of Ultima role-playing video games. Exodus is also the name of the game's principal antagonist. It is the final installment in the "Age of Darkness" trilogy. Released in 1983, it was the first Ultima game published by Origin Systems. Originally developed for the Apple II, Exodus was eventually ported to 13 other platforms, including a NES/Famicom remake. Ultima III revolves around Exodus, the spawn of Mondain and Minax (from Ultima I and Ultima II, respectively), threatening the world of Sosaria. The player character travels to Sosaria to defeat Exodus and restore the world to peace. Ultima III hosts further advances in graphics, particularly in animation, adds a musical score, and increases the player's options in gameplay with a larger party and more interactivity with the game world. Ultima III was followed by Ultima IV: Quest of the Avatar in 1985. Gameplay Exodus featured revolutionary graphics for its time, as one of the first computer RPGs to display animated characters. Also, Exodus differs from previous games in that players now direct the actions of a party of four characters rather than just one. During regular play the characters are represented as a single player icon and move as one. However, in battle mode, each character is represented separately on a tactical battle screen, and the player alternates commands between each character in order, followed by each enemy character having a turn. This differs from the two previous games in the Ultima series in which the player is simply depicted as trading blows with one opponent on the main map until either is defeated. Enemies on the overworld map can be seen and at least temporarily avoided, while enemies in dungeons appear randomly without any forewarning. The party of four that a player uses can be chosen at the beginning of the game. There is a choice between 11 classes: Fighter, Paladin, Cleric, Wizard, Ranger, Thief, Barbarian, Lark, Illusionist,
https://en.wikipedia.org/wiki/Ultima%20IV%3A%20Quest%20of%20the%20Avatar
Ultima IV: Quest of the Avatar, first released in 1985 for the Apple II, is the fourth in the series of Ultima role-playing video games. It is the first in the "Age of Enlightenment" trilogy, shifting the series from the hack and slash, dungeon crawl gameplay of its "Age of Darkness" predecessors towards an ethically-nuanced, story-driven approach. Ultima IV has a much larger game world than its predecessors, with an overworld map sixteen times the size of Ultima III and puzzle-filled dungeon rooms to explore. Ultima IV further advances the franchise with dialog improvements, new means of travel and exploration, and world interactivity. In 1996 Computer Gaming World named Ultima IV as #2 on its Best Games of All Time list on the PC. Designer Richard Garriott considers this game to be among his favorites from the Ultima series. Ultima IV was followed by the release of Ultima V: Warriors of Destiny in 1988. Plot Ultima IV is among the few role-playing games, and perhaps the first, in which the game's story does not center on asking a player character to overcome a tangible ultimate evil. The story instead focuses on the player character's moral self-improvement. After the defeat of each of the members of the Triad of Evil in the previous three Ultima games, the world of Sosaria underwent some radical changes in geography: Three quarters of the world disappeared, continents rose and sank, and new cities were built to replace the ones that were lost. Eventually the world, now unified under Lord British's rule, was renamed Britannia. Lord British felt the people lacked purpose after their great struggles against the Triad were over, and he was concerned with their spiritual well-being in this unfamiliar new age of relative peace, so he proclaimed the Quest of the Avatar: He needed someone to step forth and become the shining example for others to follow. Unlike most other RPGs the game is not set in an "age of darkness"; prosperous Britannia resembles Renaissance I
https://en.wikipedia.org/wiki/Mesochronous%20network
A mesochronous network is a telecommunications network in which the clocks run with the same frequency but unknown phases. Compare synchronous network. See also Synchronization in telecommunications Isochronous signal Plesiochronous system Asynchronous system Network architecture Synchronization
https://en.wikipedia.org/wiki/Manava
Manava (c. 750 BC – 690 BC) was the author of one of the Hindu geometric texts known as the Sulba Sutras. The Manava Sulbasutra is not the oldest (the one by Baudhayana is older), nor is it one of the most important, there being at least three Sulbasutras which are considered more important. Historians place his lifetime at around 750 BC. Manava would not have been a mathematician in the sense that we would understand it today. Nor was he a scribe who simply copied manuscripts like Ahmes. He would certainly have been a man of very considerable learning but probably not interested in mathematics for its own sake, merely interested in using it for religious purposes. Undoubtedly he wrote the Sulbasutra to provide rules for religious rites and it appears almost certain that Manava himself would have been a Hindu priest. The mathematics given in the Sulbasutras is there to enable accurate construction of altars needed for sacrifices. It is clear from the writing that Manava, as well as being a priest, must have been a skilled craftsman. Manava's Sulbasutra, like all the Sulbasutras, contained approximate constructions of circles from rectangles, and squares from circles, which can be thought of as giving approximate values of π. Different values of π therefore appear throughout the Sulbasutra; essentially every construction involving circles leads to a different such approximation. The paper of R.C. Gupta is concerned with an interpretation of verses 11.14 and 11.15 of Manava's work which give π = 25/8 = 3.125. External links References Ancient Indian mathematicians Geometers 750s BC births 690 BC deaths 7th-century BC mathematicians Ancient Indian mathematical works
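The value Gupta reads out of verses 11.14 and 11.15, π = 25/8, can be compared with the modern value in a couple of lines (purely illustrative):

```python
import math

manava_pi = 25 / 8  # value attributed to Manava Sulbasutra 11.14-11.15
error = abs(manava_pi - math.pi)
print(f"25/8 = {manava_pi}")                        # 3.125
print(f"absolute error = {error:.5f}")              # ~0.01659
print(f"relative error = {error / math.pi:.3%}")    # ~0.528%
```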
https://en.wikipedia.org/wiki/Global%20Information%20Assurance%20Certification
Global Information Assurance Certification (GIAC) is an information security certification entity that specializes in technical and practical certification as well as new research in the form of its GIAC Gold program. SANS Institute founded the certification entity in 1999 and the term GIAC is trademarked by The Escal Institute of Advanced Technologies. GIAC provides a set of vendor-neutral computer security certifications linked to the training courses provided by the SANS. GIAC is specific to the leading edge technological advancement of IT security in order to keep ahead of "black hat" techniques. Papers written by individuals pursuing GIAC certifications are presented at the SANS Reading Room on GIAC's website. Initially all SANS GIAC certifications required a written paper or "practical" on a specific area of the certification in order to achieve the certification. In April 2005, the SANS organization changed the format of the certification by breaking it into two separate levels. The "silver" level certification is achieved upon completion of a multiple choice exam. The "gold" level certification can be obtained by completing a research paper and has the silver level as a prerequisite. As of August 27, 2022, GIAC has granted 173,822 certifications worldwide. SANS GIAC Certifications Certifications listed as 'unavailable' are not listed in official SANS or GIAC sources, and are found elsewhere. They are not the same as retired courses. Cyber Defense Penetration Testing Management, Audit, Legal Operations Developer Incident Response and Forensics Industrial Control Systems GSE Unobtainable Certifications The following certifications are no longer issued. External links Notes Computer security qualifications Digital forensics certification
https://en.wikipedia.org/wiki/Power%20of%2010
A power of 10 is any of the integer powers of the number ten; in other words, ten multiplied by itself a certain number of times (when the power is a positive integer). By definition, the number one is a power (the zeroth power) of ten. The first few non-negative powers of ten are: 1, 10, 100, 1,000, 10,000, 100,000, 1,000,000, 10,000,000. ... Positive powers In decimal notation the nth power of ten is written as '1' followed by n zeroes. It can also be written as 10^n or as 1En in E notation. See order of magnitude and orders of magnitude (numbers) for named powers of ten. There are two conventions for naming positive powers of ten, beginning with 10^9, called the long and short scales. Where a power of ten has different names in the two conventions, the long scale name is shown in parentheses. The positive power of 10 related to a short scale name can be determined based on its Latin name-prefix using the following formula: 10^[(prefix-number + 1) × 3] Examples: billion = 10^[(2 + 1) × 3] = 10^9 octillion = 10^[(8 + 1) × 3] = 10^27 Negative powers The sequence of powers of ten can also be extended to negative powers. Similar to the positive powers, the negative power of 10 related to a short scale name can be determined based on its Latin name-prefix using the following formula: 10^−[(prefix-number + 1) × 3] Examples: billionth = 10^−[(2 + 1) × 3] = 10^−9 quintillionth = 10^−[(5 + 1) × 3] = 10^−18 Googol The number googol is 10^100. The term was coined by 9-year-old Milton Sirotta, nephew of American mathematician Edward Kasner. It was popularized in Kasner's 1940 book Mathematics and the Imagination, where it was used to compare and illustrate very large numbers. Googolplex, a much larger power of ten (10 to the googol power, or 10^(10^100)), was also introduced in that book. (Read below) Googolplex The number googolplex is 10^googol, or 10^10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,00
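The prefix formula can be written directly as code; the small prefix table below lists only a few short-scale names for illustration:

```python
# Short-scale Latin prefix numbers: million -> 1, billion -> 2, ..., octillion -> 8.
PREFIX_NUMBER = {"million": 1, "billion": 2, "trillion": 3, "quadrillion": 4,
                 "quintillion": 5, "sextillion": 6, "septillion": 7, "octillion": 8}

def short_scale_exponent(name, negative=False):
    """Exponent of 10 for a short-scale name: (prefix-number + 1) * 3,
    negated for the corresponding -th fraction."""
    exponent = (PREFIX_NUMBER[name] + 1) * 3
    return -exponent if negative else exponent

print(short_scale_exponent("billion"))                    # 9   -> 10^9
print(short_scale_exponent("octillion"))                  # 27  -> 10^27
print(short_scale_exponent("quintillion", negative=True)) # -18 -> 10^-18
```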
https://en.wikipedia.org/wiki/Byte%20addressing
Byte addressing in hardware architectures supports accessing individual bytes. Computers with byte addressing are sometimes called byte machines, in contrast to word-addressable architectures, word machines, that access data by word. Background The basic unit of digital storage is a bit, storing a single 0 or 1. Many common instruction set architectures can address more than 8 bits of data at a time. For example, 32-bit x86 processors have 32-bit general-purpose registers and can handle 32-bit (4-byte) data in single instructions. However, data in memory may be of various lengths. Instruction sets that support byte addressing support accessing data in units that are narrower than the word length. An eight-bit processor like the Intel 8008 addresses eight bits, but as this is the full width of the accumulator and other registers, it could be considered either byte-addressable or word-addressable. 32-bit x86 processors, which address memory in 8-bit units but have 32-bit general-purpose registers and can operate on 32-bit items with a single instruction, are byte-addressable. The advantage of word addressing is that more memory can be addressed in the same number of bits. The IBM 7094 has 15-bit addresses, so could address 32,768 words of 36 bits. The machines were often built with a full complement of addressable memory. Addressing 32,768 bytes of 6 bits would have been much less useful for scientific and engineering users. Or consider 32-bit x86 processors. Their 32-bit linear addresses can address 4 billion different items. Using word addressing, a 32-bit processor could address 4 Gigawords, or 16 Gigabytes using the modern 8-bit byte. If the 386 and its successors had used word addressing, scientists, engineers, and gamers could all have run programs that were 4x larger on 32-bit machines. However, word processing, rendering HTML, and all other text applications would have run more slowly. When computers were so costly that they were only or mainly used
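The trade-off described above can be made concrete: for a fixed number of address bits, word addressing reaches more total storage than byte addressing. A small sketch using the machines mentioned in the text (the helper name is an illustrative assumption):

```python
def addressable_bytes(address_bits, unit_bits):
    """Total addressable storage, in 8-bit bytes, when each of 2**address_bits
    addresses names a unit of unit_bits bits."""
    return (2 ** address_bits) * unit_bits / 8

# IBM 7094: 15-bit addresses, 36-bit words
print(addressable_bytes(15, 36))           # 147456.0 bytes (32,768 words of 36 bits)

# 32-bit processor: byte addressing (8-bit units) vs. word addressing (32-bit units)
print(addressable_bytes(32, 8) / 2**30)    # 4.0  GiB
print(addressable_bytes(32, 32) / 2**30)   # 16.0 GiB
```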
https://en.wikipedia.org/wiki/Parts%20stress%20modelling
Parts stress modelling is a method in engineering and especially electronics to find an expected value for the rate of failure of the mechanical and electronic components of a system. It is based upon the idea that the more components that there are in the system, and the greater stress that they undergo in operation, the more often they will fail. Parts count modelling is a simpler variant of the method, with component stress not taken into account. Various organisations have published standards specifying how parts stress modelling should be carried out. Some from electronics are: MIL-HDBK-217 (US Department of Defense) SR-332, Reliability Prediction Procedure for Electronic Equipment HRD-4 (British Telecom) SR-1171, Methods and Procedures for System Reliability Analysis and many others These "standards" produce different results, often by a factor of more than two, for the same modelled system. The differences illustrate the fact that this modelling is not an exact science. System designers often have to do the modelling using a standard specified by a customer, so that the customer can compare the results with other systems modelled in the same way. All of these standards compute an expected overall failure rate for all the components in the system, which is not necessarily the rate at which the system as a whole fails. Systems often incorporate redundancy or fault tolerance so that they do not fail when an individual component fails. Several companies provide programs for performing parts stress modelling calculations. It's also possible to do the modelling with a spreadsheet. All these models implicitly assume the idea of "random failure". Individual components fail at random times but at a predictable rate, analogous to the process of nuclear decay. One justification for this idea is that components fail by a process of wearout, a predictable decay after manufacture, but that the wearout life of individual components is scattered widely about some
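In its simplest "parts count" form the predicted rate is just the sum of the component failure rates; parts stress modelling scales each base rate by stress factors for temperature, electrical load, quality and environment. A minimal sketch of that idea (the component names, base rates and factor values are invented for illustration and are not taken from MIL-HDBK-217 or any other standard):

```python
def parts_stress_failure_rate(parts):
    """Sum of component failure rates, each base rate scaled by its stress
    (pi) factors. Rates are in FITs, i.e. failures per 10^9 hours."""
    return sum(p["base_fit"] * p.get("pi_temp", 1.0) * p.get("pi_stress", 1.0)
               * p.get("pi_quality", 1.0) * p.get("pi_env", 1.0)
               for p in parts)

bill_of_materials = [  # hypothetical components and factors
    {"name": "electrolytic capacitor", "base_fit": 5.0, "pi_temp": 2.0, "pi_stress": 1.5},
    {"name": "ceramic capacitor",      "base_fit": 1.0, "pi_temp": 1.2},
    {"name": "resistor",               "base_fit": 0.5},
]

total_fit = parts_stress_failure_rate(bill_of_materials)
print(f"predicted overall failure rate: {total_fit:.1f} FIT")
print(f"mean time between component failures: {1e9 / total_fit:,.0f} hours")
```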
https://en.wikipedia.org/wiki/UK%20Museum%20of%20Ordure
The UK Museum of Ordure (UKMO) was an online arts project initiated with the intention to explore the curatorial value of ordure, or human waste. It consisted of a website which initially collected public submissions of ordure, as well as documenting various installations undertaken by the museum. It has since been renamed Museum of Ordure in recognition that its remit is international in scope. Founded by Stuart Brisley, Geoff Cox and Adrian Ward in 2001, it now consists of the members Rosse Yael Sirb as acting director, as well as Maya Balgioglu, Stuart Brisley, Geoff Cox, and Les Liens Invisibles. It currently defines itself as follows: See also Stuart Brisley References External links Museum of Ordure website UK Museum of Ordure website Year of establishment missing Arts in the United Kingdom Virtual museums British websites Excretion
https://en.wikipedia.org/wiki/Ciphertext%20stealing
In cryptography, ciphertext stealing (CTS) is a general method of using a block cipher mode of operation that allows for processing of messages that are not evenly divisible into blocks without resulting in any expansion of the ciphertext, at the cost of slightly increased complexity. General characteristics Ciphertext stealing is a technique for encrypting plaintext using a block cipher, without padding the message to a multiple of the block size, so the ciphertext is the same size as the plaintext. It does this by altering processing of the last two blocks of the message. The processing of all but the last two blocks is unchanged, but a portion of the second-to-last block's ciphertext is "stolen" to pad the last plaintext block. The padded final block is then encrypted as usual. The final ciphertext, for the last two blocks, consists of the partial penultimate block (with the "stolen" portion omitted) plus the full final block, which are the same size as the original plaintext. Decryption requires decrypting the final block first, then restoring the stolen ciphertext to the penultimate block, which can then be decrypted as usual. In principle any block-oriented block cipher mode of operation can be used, but stream-cipher-like modes can already be applied to messages of arbitrary length without padding, so they do not benefit from this technique. The common modes of operation that are coupled with ciphertext stealing are Electronic Codebook (ECB) and Cipher Block Chaining (CBC). Ciphertext stealing for ECB mode requires the plaintext to be longer than one block. A possible workaround is to use a stream cipher-like block cipher mode of operation when the plaintext length is one block or less, such as the CTR, CFB or OFB modes. Ciphertext stealing for CBC mode doesn't necessarily require the plaintext to be longer than one block. In the case where the plaintext is one block long or less, the Initialization vector (IV) can act as the prior block of ciphert
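The stealing step for CBC mode can be sketched with a toy block cipher (a keyed byte-wise XOR, used only to keep the example self-contained; a real implementation would use AES or another block cipher). The final partial plaintext block is zero-padded and run through the normal CBC step, and the penultimate ciphertext block is truncated to the partial length and emitted last; the block size, function names and final-block ordering are illustrative assumptions, not a reference implementation:

```python
BLOCK = 8  # toy block size in bytes

def toy_encrypt(block, key):        # stand-in for a real block cipher such as AES
    return bytes(b ^ k for b, k in zip(block, key))

toy_decrypt = toy_encrypt           # XOR is its own inverse

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_cts_encrypt(plaintext, key, iv):
    """CBC with ciphertext stealing: no padding, ciphertext length equals
    plaintext length. Assumes len(plaintext) > BLOCK."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    tail_len = len(blocks[-1])                        # length of the final, partial block
    out, prev = [], iv
    for block in blocks[:-2]:                         # ordinary CBC for all but the last two blocks
        prev = toy_encrypt(xor(block, prev), key)
        out.append(prev)
    e = toy_encrypt(xor(blocks[-2], prev), key)       # penultimate ciphertext block (full)
    padded = blocks[-1] + bytes(BLOCK - tail_len)     # zero-pad the final partial plaintext block
    last_full = toy_encrypt(xor(padded, e), key)      # standard CBC step on the padded block
    return b"".join(out + [last_full, e[:tail_len]])  # e is truncated: its tail was "stolen"

def cbc_cts_decrypt(ciphertext, key, iv):
    blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]
    tail_len = len(blocks[-1])
    out, prev = [], iv
    for block in blocks[:-2]:                         # ordinary CBC for all but the last two blocks
        out.append(xor(toy_decrypt(block, key), prev))
        prev = block
    d_last = toy_decrypt(blocks[-2], key)             # decrypt the full final block first
    e = blocks[-1] + d_last[tail_len:]                # restore the stolen ciphertext tail
    out.append(xor(toy_decrypt(e, key), prev))        # penultimate plaintext block
    out.append(xor(d_last, e)[:tail_len])             # final partial plaintext block
    return b"".join(out)

key, iv = bytes(range(BLOCK)), bytes(BLOCK)
msg = b"ATTACK AT DAWN!"                              # 15 bytes: one full block + a 7-byte partial block
ct = cbc_cts_encrypt(msg, key, iv)
assert len(ct) == len(msg)                            # no ciphertext expansion
assert cbc_cts_decrypt(ct, key, iv) == msg
```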
https://en.wikipedia.org/wiki/Boolean%20data%20type
In computer science, the Boolean (sometimes shortened to Bool) is a data type that has one of two possible values (usually denoted true and false) which is intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type—logic does not always need to be Boolean (see probabilistic logic). Generalities In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test Boolean-valued expressions. Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data type. Common Lisp uses an empty list for false, and any other value for true. The C programming language uses an integer type, where relational expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false, whereas the test parts of if, while, for, etc., treat any non-zero value as true. Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with one binary digit (bit), or as a bit string of length one, which can store only two values. Booleans in computers are most often represented as a full word rather than a single bit; this is usually due to the ways computers transfer blocks of information. Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-eq
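A short illustration of these conventions in Python, whose bool type behaves both as a genuine Boolean and as a one-bit integer (the specific values are arbitrary):

```python
# Comparison operators return a Boolean value.
i, j = 3, 5
result = i > j
print(result, type(result))        # False <class 'bool'>

# Conditionals test "truthiness": any non-zero or non-empty value counts as true,
# echoing C's treatment of non-zero integers and Lisp's treatment of non-empty lists.
for value in (0, 1, [], [42], ""):
    print(repr(value), "->", "true" if value else "false")

# Booleans also behave as one-bit numbers: True == 1 and False == 0,
# so they can be summed like the numerical encoding described above.
print(True + True)                       # 2
print(sum(x > 2 for x in [1, 2, 3, 4]))  # 2 values are greater than 2
```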
https://en.wikipedia.org/wiki/Raj%20Chandra%20Bose
Raj Chandra Bose (or Basu) (19 June 1901 – 31 October 1987) was an Indian American mathematician and statistician best known for his work in design theory, finite geometry and the theory of error-correcting codes in which the class of BCH codes is partly named after him. He also invented the notions of partial geometry, association scheme, and strongly regular graph and started a systematic study of difference sets to construct symmetric block designs. He was notable for his work along with S. S. Shrikhande and E. T. Parker in their disproof of the famous conjecture made by Leonhard Euler dated 1782 that for no n do there exist two mutually orthogonal Latin squares of order 4n + 2. Early life Bose was born in Hoshangabad, India; he was the first of five children. His father was a physician and life was good until 1918 when his mother died in the influenza pandemic. His father died of a stroke the following year. Despite difficult circumstances, Bose continued to study securing first class in both the Masters examinations in Pure and Applied mathematics in 1925 and 1927 respectively at the Rajabazar Science College campus of University of Calcutta. His research was performed under the supervision of the geometry Professor Syamadas Mukhopadhyaya from Calcutta. Bose worked as a lecturer at Asutosh College, Calcutta. He published his work on the differential geometry of convex curves. Academic life Bose's course changed in December 1932 when P. C. Mahalanobis, director of the new (1931) Indian Statistical Institute, offered Bose a part-time job. Mahalanobis had seen Bose's geometrical work and wanted him to work on statistics. The day after Bose moved in, the secretary brought him all the volumes of Biometrika with a list of 50 papers to read and also Ronald Fisher's Statistical Methods for Research Workers. Mahalanobis told him, "You were saying that you do not know much statistics. You master the 50 papers ... and Fisher's book. This will suffice for your statistic
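Two Latin squares of order n are mutually orthogonal when superimposing them yields every ordered pair of symbols exactly once; Euler's conjecture asserted that no such pair exists for orders 4n + 2, and Bose, Shrikhande and Parker showed it fails for every such order greater than 6. A small checker (the order-3 squares are a standard illustrative example, not taken from their paper):

```python
def is_latin_square(sq):
    """Every row and every column contains each symbol 0..n-1 exactly once."""
    n = len(sq)
    symbols = set(range(n))
    return all(set(row) == symbols for row in sq) and \
           all({sq[r][c] for r in range(n)} == symbols for c in range(n))

def mutually_orthogonal(a, b):
    """True if superimposing a and b produces every ordered pair exactly once."""
    n = len(a)
    pairs = {(a[r][c], b[r][c]) for r in range(n) for c in range(n)}
    return len(pairs) == n * n

A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]
print(is_latin_square(A), is_latin_square(B))  # True True
print(mutually_orthogonal(A, B))               # True: order 3 admits an orthogonal pair
```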
https://en.wikipedia.org/wiki/Alexis%20Rockman
Alexis Rockman (born 1962) is an American contemporary artist known for his paintings that provide depictions of future landscapes as they might exist with impacts of climate change and evolution influenced by genetic engineering. He has exhibited his work in the United States since 1985, including a 2004 exhibition at the Brooklyn Museum, and internationally since 1989. He lives with his wife, Dorothy Spears in Warren, CT and NYC. Life Rockman was born and raised in New York City. Rockman's stepfather, Russell Rockman, an Australian jazz musician, brought the family to Australia frequently. As a child, Rockman frequented the American Museum of Natural History in New York City, where his mother, Diana Wall, worked briefly for anthropologist Margaret Mead. Growing up, Rockman had an interest in natural history and science, and developed fascination for film, animation, and the arts. From 1980 to 1982, Rockman studied animation at the Rhode Island School of Design, and continued studies at the School of Visual Arts in Manhattan, receiving a BFA in fine arts in 1985. Aside from his art career, Rockman has taken on requests from conservation groups, including the Riverkeeper project and the Rainforest Alliance. He lives with his wife, Dorothy Spears in Warren, CT and NYC. Career Early career 1985–1993 Rockman began exhibiting his work at the Jay Gorney Modern Art gallery in New York City in 1986 and was represented by the gallery from 1986 to 2005. Rockman also had exhibitions at galleries in Los Angeles, Boston, and Philadelphia in the late 1980s. Early work was inspired by natural history iconography. In Phylum, Rockman draws upon the work of Ernst Haeckel, an artist and proponent of Darwinism. A series of works by Rockman in the early 1990s, including Barnyard Scene (1990), Jungle Fever (1991), and The Trough (1992), use dark humor in depicting different species mating with one another. In Barnyard Scene, Rockman depicts a raccoon mating with a rooster, an
https://en.wikipedia.org/wiki/Network%20automaton
A network automaton (plural network automata) is a mathematical system consisting of a network of nodes that evolves over time according to predetermined rules. It is similar in concept to a cellular automaton, but much less studied. Stephen Wolfram's book A New Kind of Science, which is primarily concerned with cellular automata, briefly discusses network automata, and suggests (without positive evidence) that the universe might at the very lowest level be a network automaton. Networks Cellular automata
https://en.wikipedia.org/wiki/Quantum%20efficiency
The term quantum efficiency (QE) may apply to incident photon to converted electron (IPCE) ratio of a photosensitive device, or it may refer to the TMR effect of a Magnetic Tunnel Junction. This article deals with the term as a measurement of a device's electrical sensitivity to light. In a charge-coupled device (CCD) or other photodetector, it is the ratio between the number of charge carriers collected at either terminal and the number of photons hitting the device's photoreactive surface. As a ratio, QE is dimensionless, but it is closely related to the responsivity, which is expressed in amps per watt. Since the energy of a photon is inversely proportional to its wavelength, QE is often measured over a range of different wavelengths to characterize a device's efficiency at each photon energy level. For typical semiconductor photodetectors, QE drops to zero for photons whose energy is below the band gap. A photographic film typically has a QE of much less than 10%, while CCDs can have a QE of well over 90% at some wavelengths. Quantum efficiency of solar cells A solar cell's quantum efficiency value indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. If the cell's quantum efficiency is integrated over the whole solar electromagnetic spectrum, one can evaluate the amount of current that the cell will produce when exposed to sunlight. The ratio between this energy-production value and the highest possible energy-production value for the cell (i.e., if the QE were 100% over the whole spectrum) gives the cell's overall energy conversion efficiency value. Note that in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon. Types Two types of quantum efficiency of a solar cell are often considered: External Quantum Effic
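The link between QE and responsivity can be made explicit: QE = R·h·c/(q·λ), i.e. electrons collected per incident photon. A small sketch (the 0.5 A/W at 900 nm input is an illustrative figure, not a quoted device specification):

```python
# Physical constants
H = 6.626_070_15e-34   # Planck constant, J*s
C = 2.997_924_58e8     # speed of light, m/s
Q = 1.602_176_634e-19  # elementary charge, C

def quantum_efficiency(responsivity_a_per_w, wavelength_m):
    """QE (electrons per incident photon) from responsivity R in A/W:
    electrons per second = R*P/q, photons per second = P*lambda/(h*c)."""
    return responsivity_a_per_w * H * C / (wavelength_m * Q)

# e.g. a detector with 0.5 A/W responsivity at 900 nm
print(f"QE = {quantum_efficiency(0.5, 900e-9):.1%}")   # about 69%
```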
https://en.wikipedia.org/wiki/Maintenance%20mode
In the world of software development, maintenance mode refers to a point in a computer program's life when it has reached all of its goals and is generally considered to be "complete" and bug-free. The term can also refer to the point in a software product's evolution when it is no longer competitive with other products or current with regard to the technology environment it operates within. In both cases, continued development is deemed unnecessary or ill-advised, but occasional bug fixes and security patches are still issued, hence the term maintenance mode. Maintenance mode often transitions to abandonware. Sometimes, when a popular free software project undergoes a major overhaul, the pre-overhaul version is kept active and put into maintenance mode because it will still be widely used in production for the foreseeable future. Project forks can also spawn from programs that go into maintenance mode too soon or have enough developer support for a more advanced version. A good example of this is the vi editor, which was in maintenance mode and forked into Vi IMproved. The Vim fork has many useful features that vi does not, such as syntax highlighting and the ability to have multiple open buffers. See also Steady state Software maintenance
https://en.wikipedia.org/wiki/Ferredoxin
Ferredoxins (from Latin ferrum: iron + redox, often abbreviated "fd") are iron–sulfur proteins that mediate electron transfer in a range of metabolic reactions. The term "ferredoxin" was coined by D.C. Wharton of the DuPont Co. and applied to the "iron protein" first purified in 1962 by Mortenson, Valentine, and Carnahan from the anaerobic bacterium Clostridium pasteurianum. Another redox protein, isolated from spinach chloroplasts, was termed "chloroplast ferredoxin". The chloroplast ferredoxin is involved in both cyclic and non-cyclic photophosphorylation reactions of photosynthesis. In non-cyclic photophosphorylation, ferredoxin is the last electron acceptor, thus reducing the enzyme NADP+ reductase. It accepts electrons produced from sunlight-excited chlorophyll and transfers them to the enzyme ferredoxin:NADP+ oxidoreductase. Ferredoxins are small proteins containing iron and sulfur atoms organized as iron–sulfur clusters. These biological "capacitors" can accept or discharge electrons, with the effect of a change in the oxidation state of the iron atoms between +2 and +3. In this way, ferredoxin acts as an electron transfer agent in biological redox reactions. Other bioinorganic electron transport systems include rubredoxins, cytochromes, blue copper proteins, and the structurally related Rieske proteins. Ferredoxins can be classified according to the nature of their iron–sulfur clusters and by sequence similarity. Bioenergetics of ferredoxins Ferredoxins typically carry out a single electron transfer: Fd(ox) + e− <=> Fd(red). However a few bacterial ferredoxins (of the 2[4Fe4S] type) have two iron sulfur clusters and can carry out two electron transfer reactions. Depending on the sequence of the protein, the two transfers can have nearly identical reduction potentials or they may be significantly different: Fd(ox) + e− <=> Fd(semi-reduced), Fd(semi-reduced) + e− <=> Fd(red). Ferredoxins are one of the most reducing biological electron carriers. They typically have a midpoint potential of −420 mV. The reduction p
https://en.wikipedia.org/wiki/People%20meter
A people meter is an audience measurement tool used to measure the viewing habits of TV and cable audiences. Meter The People Meter is a 'box', about the size of a paperback book. The box is hooked up to each television set and is accompanied by a remote control unit. Each family member in a sample household is assigned a personal 'viewing button'. It identifies each household member's age and sex. If the TV is turned on and the viewer doesn't identify themselves, the meter flashes to remind them. Additional buttons on the People Meter enable guests to participate in the sample by recording their age, sex and viewing status into the system. Another version of the device is small, about the size of a beeper, that plugs into the wall below or near each TV set in household. It monitors anything that comes on the TV and relays the information with the small Portable People Meter to narrow down who is watching what and when. The device, known as a 'frequency-based meter', was invented by a British company called Audits of Great Britain (AGB). The successor company to AGB is TNS, which is active in 34 countries around the globe. Originally, these meters identified the frequency of the channels - VHF or UHF - watched on the viewer's TV set. This system became obsolete when Direct to Home (DTH) satellite dish became popular and viewers started to get their own satellite decoders. In addition, this system doesn't measure digital broadcasts. Before the People Meter advances, Nielsen used the diary method, which consisted of viewers physically recording the shows they watched. However, there were setbacks with the system. Lower-rated stations claimed the diary method was inaccurate and biased. They argued that because they had lower ratings, those who depended on memory for the diary method may only remember to track their favorite shows. Stations also argued that if it wasn’t low ratings that skewed the diary method, it might also be the new variety of channels for view
https://en.wikipedia.org/wiki/Powerset%20construction
In the theory of computation and automata theory, the powerset construction or subset construction is a standard method for converting a nondeterministic finite automaton (NFA) into a deterministic finite automaton (DFA) which recognizes the same formal language. It is important in theory because it establishes that NFAs, despite their additional flexibility, are unable to recognize any language that cannot be recognized by some DFA. It is also important in practice for converting easier-to-construct NFAs into more efficiently executable DFAs. However, if the NFA has n states, the resulting DFA may have up to 2^n states, an exponentially larger number, which sometimes makes the construction impractical for large NFAs. The construction, sometimes called the Rabin–Scott powerset construction (or subset construction) to distinguish it from similar constructions for other types of automata, was first published by Michael O. Rabin and Dana Scott in 1959. Intuition To simulate the operation of a DFA on a given input string, one needs to keep track of a single state at any time: the state that the automaton will reach after seeing a prefix of the input. In contrast, to simulate an NFA, one needs to keep track of a set of states: all of the states that the automaton could reach after seeing the same prefix of the input, according to the nondeterministic choices made by the automaton. If, after a certain prefix of the input, a set S of states can be reached, then after the next input symbol x the set of reachable states is a deterministic function of S and x. Therefore, the sets of reachable NFA states play the same role in the NFA simulation as single DFA states play in the DFA simulation, and in fact the sets of NFA states appearing in this simulation may be re-interpreted as being states of a DFA. Construction The powerset construction applies most directly to an NFA that does not allow state transitions without consuming input symbols (aka: "ε-moves"). Such an automa
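A compact version of the construction for an NFA without ε-moves, following the intuition above: each DFA state is a frozenset of NFA states, and only subsets actually reachable from the start state are generated, which is why the result is often far smaller than 2^n. The example NFA and all names are illustrative assumptions:

```python
from collections import deque

def powerset_construction(nfa_delta, start, accepting, alphabet):
    """Convert an NFA (without epsilon-moves) to a DFA.
    nfa_delta maps (state, symbol) -> set of successor states."""
    dfa_start = frozenset([start])
    dfa_delta, dfa_accepting = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        subset = queue.popleft()
        if subset & accepting:                 # a subset is accepting if it contains an accepting NFA state
            dfa_accepting.add(subset)
        for symbol in alphabet:
            # union of NFA moves from every state in the current subset
            target = frozenset().union(*(nfa_delta.get((q, symbol), set()) for q in subset))
            dfa_delta[(subset, symbol)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return dfa_start, dfa_delta, dfa_accepting

# NFA over {0,1} accepting strings whose second-to-last symbol is 1 (illustrative)
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"},
         ("b", "0"): {"c"}, ("b", "1"): {"c"}}
start, dfa_delta, acc = powerset_construction(delta, "a", {"c"}, ["0", "1"])
print(len({s for s, _ in dfa_delta}), "reachable DFA states")  # 4, not 2**3 = 8
```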
https://en.wikipedia.org/wiki/Negotiation%20theory
The foundations of negotiation theory are decision analysis, behavioral decision-making, game theory, and negotiation analysis. Another classification of theories distinguishes between Structural Analysis, Strategic Analysis, Process Analysis, Integrative Analysis and behavioral analysis of negotiations. Negotiation is a strategic discussion that resolves an issue in a way that both parties find acceptable. Individuals should make separate, interactive decisions; and negotiation analysis considers how groups of reasonably bright individuals should and could make joint, collaborative decisions. These theories are interleaved and should be approached from the synthetic perspective. Common assumptions of most theories Negotiation is a specialized and formal version of conflict resolution, most frequently employed when important issues must be agreed upon. Negotiation is necessary when one party requires the other party's agreement to achieve its aim. The aim of negotiating is to build a shared environment leading to long-term trust, and it often involves a third, neutral party to extract the issues from the emotions and keep the individuals concerned focused. It is a powerful method for resolving conflict and requires skill and experience. Henry Kissinger (1969) defines negotiation as "a process of combining conflicting positions into a common position under a decision rule of unanimity, a phenomenon in which the outcome is determined by the process." Druckman (1986) adds that negotiations pass through stages that consist of agenda-setting, a search for guiding principles, defining the issues, bargaining for favorable concession exchanges, and a search for implementing details. Transitions between stages are referred to as turning points. Most theories of negotiations share the notion of negotiations as a process, but they differ in their description of the process. Structural, strategic and procedural analysis builds on rational actors, who are able to prioritize
https://en.wikipedia.org/wiki/DLNA
Digital Living Network Alliance (DLNA) is a set of interoperability standards for sharing home digital media among multimedia devices. It allows users to share or stream stored media files to various certified devices on the same network like PCs, smartphones, TV sets, game consoles, stereo systems, and NASs. DLNA incorporates several existing public standards, including Universal Plug and Play (UPnP) for media management and device discovery and control, wired and wireless networking standards, and widely used digital media formats. DLNA was created by Sony and Intel and the consortium soon included various PC and consumer electronics companies, publishing its first set of guidelines in June 2004. The Digital Living Network Alliance developed and promoted it under the auspices of a certification standard, with a claimed membership of "more than 200 companies" before dissolving in 2017. By September 2014 over 25,000 device models had obtained "DLNA Certified" status, indicated by a logo on their packaging and confirming their interoperability with other devices. In many cases DLNA protocols are in use by services or software without openly stating the name: examples include Nokia's Home Network functionality, Samsung's All Share, the Play To functionality in Windows 8.1, and in applications such as VLC media player or Roku Media Player. Specification The DLNA Certified Device Classes are separated as follows: Home network devices Digital Media Server (DMS): store content and make it available to networked digital media players (DMP) and digital media renderers (DMR). Examples include PCs and network-attached storage (NAS) devices. Digital Media Player (DMP): find content on digital media servers (DMS) and provide playback and rendering capabilities. Examples include TVs, stereos and home theaters, wireless monitors and game consoles. Digital Media Renderer (DMR): play content as instructed by a digital media controller (DMC), which will find content from a dig
https://en.wikipedia.org/wiki/Injector
An injector is a system of ducting and nozzles used to direct the flow of a high-pressure fluid in such a way that a lower pressure fluid is entrained in the jet and carried through a duct to a region of higher pressure. It is a fluid-dynamic pump with no moving parts except a valve to control inlet flow. Steam injector The steam injector is a common device used for delivering water to steam boilers, especially in steam locomotives. It is a typical application of the injector principle used to deliver cold water to a boiler against its own pressure, using its own live or exhaust steam, replacing any mechanical pump. When first developed, its operation was intriguing because it seemed paradoxical, almost like perpetual motion, but it was later explained using thermodynamics. Other types of injector may use other pressurised motive fluids such as air. Depending on the application, an injector can also take the form of an eductor-jet pump, a water eductor or an aspirator. An ejector operates on similar principles to create a vacuum feed connection for braking systems etc. History Giffard The injector was invented by Henri Giffard in early 1850s and patented in France in 1858, for use on steam locomotives. It was patented in the United Kingdom by Sharp, Stewart and Company of Glasgow. After some initial scepticism resulting from the unfamiliar and superficially paradoxical mode of operation, the injector became widely adopted for steam locomotives as an alternative to mechanical pumps. Kneass Strickland Landis Kneass was a civil engineer, experimenter, and author who became president of the Pennsylvania Railroad in 1880, following many other accomplishments involving railroading. Kneass began publishing a mathematical model of the physics of the injector, which he had verified by experimenting with steam. A steam injector has three primary sections: Steam nozzle, a diverging duct, which converts high pressure steam to low pressure, high velocity steam Combini
https://en.wikipedia.org/wiki/Torus%20knot
In knot theory, a torus knot is a special kind of knot that lies on the surface of an unknotted torus in R3. Similarly, a torus link is a link which lies on the surface of a torus in the same way. Each torus knot is specified by a pair of coprime integers p and q. A torus link arises if p and q are not coprime (in which case the number of components is gcd(p, q)). A torus knot is trivial (equivalent to the unknot) if and only if either p or q is equal to 1 or −1. The simplest nontrivial example is the (2,3)-torus knot, also known as the trefoil knot. Geometrical representation A torus knot can be rendered geometrically in multiple ways which are topologically equivalent (see Properties below) but geometrically distinct. The convention used in this article and its figures is the following. The (p,q)-torus knot winds q times around a circle in the interior of the torus, and p times around its axis of rotational symmetry. If p and q are not relatively prime, then we have a torus link with more than one component. The direction in which the strands of the knot wrap around the torus is also subject to differing conventions. The most common is to have the strands form a right-handed screw for pq > 0. The (p,q)-torus knot can be given by the parametrization x = r cos(pφ), y = r sin(pφ), z = −sin(qφ), where r = cos(qφ) + 2 and 0 < φ < 2π. This lies on the surface of the torus given by (r − 2)^2 + z^2 = 1 (in cylindrical coordinates). Other parameterizations are also possible, because knots are defined up to continuous deformation. The illustrations for the (2,3)- and (3,8)-torus knots can be obtained by taking , and in the case of the (2,3)-torus knot by furthermore subtracting respectively and from the above parameterizations of x and y. The latter generalizes smoothly to any coprime p,q satisfying . Properties A torus knot is trivial iff either p or q is equal to 1 or −1. Each nontrivial torus knot is prime and chiral. The (p,q) torus knot is equivalent to the (q,p) torus knot. This can be proved by moving the strands on the surface of the to
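The parametrization above lends itself to a direct numerical sketch; the following assumes the convention given here (radius offset 2, so the curve lies on the torus (r − 2)^2 + z^2 = 1) and simply samples points of a (p,q)-torus knot.

# Sample points on a (p,q)-torus knot using the parametrization
# x = r cos(p*phi), y = r sin(p*phi), z = -sin(q*phi), with r = cos(q*phi) + 2.
import math

def torus_knot_points(p, q, n=1000):
    points = []
    for i in range(n):
        phi = 2 * math.pi * i / n
        r = math.cos(q * phi) + 2
        points.append((r * math.cos(p * phi),
                       r * math.sin(p * phi),
                       -math.sin(q * phi)))
    return points

trefoil = torus_knot_points(2, 3)   # the (2,3)-torus knot, i.e. the trefoil
print(len(trefoil), trefoil[0])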
https://en.wikipedia.org/wiki/Active%20Desktop
Active Desktop was a feature of Microsoft Internet Explorer 4.0's optional Windows Desktop Update that allowed users to add HTML content to the desktop, along with some other features. This function was intended to be installed on the then-current Windows 95 operating system. It was also included in Windows 98 and later Windows operating systems up through 32-bit XP, but was absent from XP Professional x64 Edition (for AMD64) and all subsequent versions of Windows. Its status on XP 64-bit edition (for Itanium) and on both 32-bit and 64-bit versions of Windows Server 2003 is not widely known. This corresponded to Internet Explorer versions 4.0 to 6.x, but not Internet Explorer 7. HTML could be added both in place of the regular wallpaper and as independent resizable desktop items. Items available on-line could be regularly updated and synchronized so users could stay updated without visiting the website in their browser. Active Desktop worked much like desktop widget technology in that it allowed users to place customized information on their desktop. History The introduction of the Active Desktop marked Microsoft's attempt to capitalize on the push technology trend led by PointCast. Active Desktop allowed embedding a number of "channels" on the user's computer desktop that could provide continually-updated information such as web pages, without requiring the user to open dedicated programs such as a web browser. Example uses include overviews of news headlines and stock quotes. However, its most notable feature was that it allowed Motion JPEGs and animated GIFs to animate correctly when set as the desktop wallpaper. Active Desktop debuted as part of an Internet Explorer 4.0 preview release in July 1997, and came out with the launch of the 4.0 browser in September that year for Windows 95 and Windows NT 4.0, as a feature of the optional Windows Desktop Update offered to users during the upgrade installation. While the Windows Desktop Update is commonly ref
https://en.wikipedia.org/wiki/Viewing%20angle
In display technology parlance, viewing angle is the angle at which a display can be viewed with acceptable visual performance. In a technical context, the angular range is called the viewing cone, defined by a multitude of viewing directions. The viewing angle can be an angular range over which the display view is acceptable, or it can be the angle of generally acceptable viewing, such as a twelve o'clock viewing angle for a display optimized for viewing from the top. The image may seem garbled, poorly saturated, of poor contrast, blurry, or too faint outside the stated viewing angle range; the exact mode of "failure" depends on the display type in question. For example, some projection screens reflect more light perpendicular to the screen and less light to the sides, making the screen appear much darker (and sometimes with distorted colors) if the viewer is not in front of the screen. Many manufacturers of projection screens thus define the viewing angle as the angle at which the luminance of the image is exactly half of the maximum. With LCD screens, some manufacturers have opted to measure the contrast ratio and report the viewing angle as the angle where the contrast ratio exceeds 5:1 or 10:1, giving minimally acceptable viewing conditions. The viewing angle is measured from one direction to the opposite, giving a maximum of 180° for a flat, one-sided screen. A display may exhibit different behavior in horizontal and vertical axes, requiring users and manufacturers to specify maximum usable viewing angles in both directions. Usually, screens are designed to facilitate greater viewing angles horizontally and smaller angles vertically, when the two differ in magnitude. The viewing angle for some displays is specified in only a general direction, such as 6 o'clock or 12 o'clock. Early LCDs had strikingly narrow viewing cones, a situation that has been improved with current technology. Narrow viewing cones of some types of display
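As a rough illustration of the contrast-ratio definition mentioned above, the following sketch estimates a horizontal viewing angle as the angular range over which a purely hypothetical set of contrast measurements stays above 10:1, using linear interpolation between samples.

# Illustrative only: estimate a horizontal viewing angle as the range over which
# measured contrast ratio stays above a 10:1 threshold (hypothetical data:
# angle in degrees, contrast ratio).  Assumes the threshold is crossed once per side.
measurements = [(-80, 4), (-60, 9), (-40, 25), (0, 1000), (40, 25), (60, 9), (80, 4)]

def crossing(a, b, threshold):
    (x0, y0), (x1, y1) = a, b
    return x0 + (threshold - y0) * (x1 - x0) / (y1 - y0)

def viewing_angle(samples, threshold=10.0):
    left = right = None
    for a, b in zip(samples, samples[1:]):
        if a[1] < threshold <= b[1]:          # rising through the threshold
            left = crossing(a, b, threshold)
        if a[1] >= threshold > b[1]:          # falling through the threshold
            right = crossing(a, b, threshold)
    return right - left                       # total included angle

print(round(viewing_angle(measurements), 1))  # about 117.5 degrees for the data above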
https://en.wikipedia.org/wiki/Non-recurring%20engineering
Non-recurring engineering (NRE) cost refers to the one-time cost to research, design, develop and test a new product or product enhancement. When budgeting for a new product, NRE must be considered to determine whether the product will be profitable. Even though a company will pay for NRE on a project only once, NRE costs can be prohibitively high and the product will need to sell well enough to produce a return on the initial investment. NRE is unlike production costs, which must be paid constantly to maintain production of a product. It is a form of fixed cost in economics terms. Once a system is designed, any number of units can be manufactured without increasing NRE cost. NRE can also be formulated and paid via another commercial term, a royalty fee. The royalty fee could be a percentage of sales revenue or profit, or a combination of the two, which has to be incorporated in a mid- to long-term agreement between the technology supplier and the OEM. In a project-type (manufacturing) company, large parts (possibly all) of the project represent NRE. In this case the NRE costs are likely to be included in the first project's costs; this can also be called research and development (R&D). If the firm cannot recover these costs, it must consider funding part of them from reserves, or possibly taking a project loss, in the hope that the investment can be recovered from further profit on future projects. The concept of full product NRE as described above may lead readers to believe that NRE expenses are unnecessarily high. However, focused NRE, wherein small amounts of NRE money yield large returns by making changes to existing products, is an option to consider as well. A small adjustment to an existing assembly may be considered, in order to use a less expensive or improved subcomponent or to replace a subcomponent which is no longer available. In the world of embedded firmware, NRE may be invested in code development to fix problems or to add features where the costs to implemen
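As a back-of-the-envelope illustration of why NRE must be weighed against expected sales, the following sketch computes the break-even unit count for a one-time NRE outlay; all figures, and the optional royalty term, are hypothetical.

# A simple break-even sketch: how many units must sell before a one-time NRE
# outlay is recovered.  All figures are hypothetical.
def breakeven_units(nre_cost, unit_price, unit_cost, royalty_rate=0.0):
    # royalty_rate models paying part of the technology cost as a share of revenue
    margin = unit_price * (1.0 - royalty_rate) - unit_cost
    if margin <= 0:
        raise ValueError("no positive contribution margin; NRE can never be recovered")
    return nre_cost / margin

# e.g. $2M of NRE, a $50 unit sold at $120, and a 5% royalty to a technology supplier
print(round(breakeven_units(2_000_000, 120.0, 50.0, royalty_rate=0.05)))  # 31250 units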
https://en.wikipedia.org/wiki/Crime%20prevention%20through%20environmental%20design
Crime prevention through environmental design (CPTED) is an agenda for manipulating the built environment to create safer neighborhoods. It originated in the contiguous United States around 1960, when urban renewal strategies were felt to be destroying the social framework needed for self-policing. Architect Oscar Newman created the concept of "defensible space", developed further by criminologist C. Ray Jeffery, who coined the term CPTED. Growing interest in environmental criminology led to detailed study of specific topics such as natural surveillance, access control and territoriality. The "broken window" principle, that neglected zones invite crime, reinforced the need for good property maintenance to assert visible ownership of space. Appropriate environmental design can also increase the perceived likelihood of detection and apprehension, known to be the biggest single deterrent to crime. There has also been new interest in the interior design of prisons as an environment that significantly affects decisions to offend. Wide-ranging recommendations to architects include the planting of trees and shrubs, the elimination of escape routes, the correct use of lighting, and the encouragement of pedestrian and bicycle traffic in streets. Tests show that the application of CPTED measures overwhelmingly reduces criminal activity. History CPTED was originally coined and formulated by criminologist C. Ray Jeffery. A more limited approach, termed defensible space, was developed concurrently by architect Oscar Newman. Both men built on the previous work of Elizabeth Wood, Jane Jacobs and Schlomo Angel. Jeffery's book, "Crime Prevention Through Environmental Design" came out in 1971, but his work was ignored throughout the 1970s. Newman's book, "Defensible Space: – Crime Prevention through Urban Design" came out in 1972. His principles were widely adopted but with mixed success. The defensible space approach was subsequently revised with additional built environment a
https://en.wikipedia.org/wiki/Boltzmann%20machine
A Boltzmann machine (also called Sherrington–Kirkpatrick model with external field or stochastic Ising–Lenz–Little model) is a stochastic spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model, that is a stochastic Ising model. It is a statistical physics technique applied in the context of cognitive science. It is also classified as a Markov random field. Boltzmann machines are theoretically intriguing because of the locality and Hebbian nature of their training algorithm (being trained by Hebb's rule), and because of their parallelism and the resemblance of their dynamics to simple physical processes. Boltzmann machines with unconstrained connectivity have not been proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems. They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. They were heavily popularized and promoted by Geoffrey Hinton, Terry Sejnowski and Yann LeCun in cognitive sciences communities and in machine learning. As a more general class within machine learning these models are called "energy based models" (EBM), because Hamiltonians of spin glasses are used as a starting point to define the learning task. Structure A Boltzmann machine, like a Sherrington–Kirkpatrick model, is a network of units with a total "energy" (Hamiltonian) defined for the overall network. Its units produce binary results. Boltzmann machine weights are stochastic. The global energy E in a Boltzmann machine is identical in form to that of Hopfield networks and Ising models: E = −(Σ_{i<j} w_{ij} s_i s_j + Σ_i θ_i s_i), where: w_{ij} is the connection strength between unit j and unit i. s_i is the state, s_i ∈ {0, 1}, of unit i. θ_i is the bias of unit i in the global energy function. (−θ_i is the activation threshold for the unit.) Often the weights are represented as a symmetric matrix with zeros along the diagonal. Uni
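The energy function and the stochastic unit updates can be sketched directly; the following assumes the standard formulation (symmetric weights with a zero diagonal, binary 0/1 states) and uses Gibbs-style resampling of one unit at a time, with arbitrary illustrative numbers.

# A small sketch of the global energy and the stochastic unit update of a
# Boltzmann machine.  W is symmetric with a zero diagonal, theta holds the
# biases, and states are 0/1; the numbers here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def energy(s, W, theta):
    # E = -(sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i); 0.5 accounts for symmetry
    return -(0.5 * s @ W @ s + theta @ s)

def gibbs_update(s, W, theta, T=1.0):
    # Resample each unit given the others: p(s_i = 1) = sigmoid((W s + theta)_i / T)
    for i in range(len(s)):
        activation = W[i] @ s + theta[i]
        p_on = 1.0 / (1.0 + np.exp(-activation / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

W = np.array([[0.0, 1.0, -2.0], [1.0, 0.0, 0.5], [-2.0, 0.5, 0.0]])
theta = np.array([0.1, -0.2, 0.3])
s = rng.integers(0, 2, size=3).astype(float)
for _ in range(100):
    s = gibbs_update(s, W, theta)
print(s, energy(s, W, theta))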
https://en.wikipedia.org/wiki/Principle%20of%20distributivity
The principle of distributivity states that the algebraic distributive law is valid, where both logical conjunction and logical disjunction are distributive over each other, so that for any propositions A, B and C the equivalences A ∧ (B ∨ C) ↔ (A ∧ B) ∨ (A ∧ C) and A ∨ (B ∧ C) ↔ (A ∨ B) ∧ (A ∨ C) hold. The principle of distributivity is valid in classical logic, but invalid in quantum logic. The article "Is Logic Empirical?" discusses the case that quantum logic is the correct, empirical logic, on the grounds that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena. References Abstract algebra Principles Propositional calculus
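Because the propositions range over only two truth values, the classical validity of both equivalences can be checked exhaustively; the following is a trivial sketch of that check.

# Brute-force check of the two distributive equivalences over all classical
# truth assignments of A, B and C.
from itertools import product

def distributes():
    for a, b, c in product([False, True], repeat=3):
        conj_over_disj = (a and (b or c)) == ((a and b) or (a and c))
        disj_over_conj = (a or (b and c)) == ((a or b) and (a or c))
        if not (conj_over_disj and disj_over_conj):
            return False
    return True

print(distributes())  # True in classical (two-valued) logic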
https://en.wikipedia.org/wiki/Gretl
gretl is an open-source statistical package, mainly for econometrics. The name is an acronym for Gnu Regression, Econometrics and Time-series Library. It has both a graphical user interface (GUI) and a command-line interface. It is written in C, uses GTK+ as widget toolkit for creating its GUI, and calls gnuplot for generating graphs. The native scripting language of gretl is known as hansl (see below); it can also be used together with TRAMO/SEATS, R, Stata, Python, Octave, Ox and Julia. It includes natively all the basic statistical techniques employed in contemporary Econometrics and Time-Series Analysis. Additional estimators and tests are available via user-contributed function packages, which are written in hansl. gretl can output models as LaTeX files. Besides English, gretl is also available in Albanian, Basque, Bulgarian, Catalan, Chinese, Czech, French, Galician, German, Greek, Italian, Polish, Portuguese (both varieties), Romanian, Russian, Spanish, Turkish and Ukrainian. Gretl has been reviewed several times in the Journal of Applied Econometrics and, more recently, in the Australian Economic Review. A review also appeared in the Journal of Statistical Software in 2008. Since then, the journal has featured several articles in which gretl is used to implement various statistical techniques. Supported data formats gretl offers its own fully documented, XML-based data format. It can also import ASCII, CSV, databank, EViews, Excel, Gnumeric, GNU Octave, JMulTi, OpenDocument spreadsheets, PcGive, RATS 4, SAS xport, SPSS, and Stata files. Since version 2020c, the GeoJSON and Shapefile formats are also supported, for thematic map creation. It can export to Stata, GNU Octave, R, CSV, JMulTi, and PcGive file formats. hansl Gretl has its own scripting language, called hansl (which is a recursive acronym for Hansl's A Neat Scripting Language). Hansl is a Turing-complete, interpreted programming language, featuring loops, conditionals, user-defined funct
https://en.wikipedia.org/wiki/SmartLink%20%28television%29
SmartLink was the trademark name for a proprietary technology by Sharp Corporation for wireless transmission of television signals. The system involved two devices, each a little bit bigger than a paperback book: one attached to a television screen and the other hooked up to a TV tuner, DVD player, or any playback device. The video information was transmitted wirelessly using the 802.11b wireless standard, allowing the playback device to be up to away from the television screen. The SmartLink system was to go on sale in Japan on August 3, 2001, at a list price of $400. SmartLink was incorporated into Sharp's Wireless AQUOS television, model LC-15L1U-S. References Television technology
https://en.wikipedia.org/wiki/Electroreception%20and%20electrogenesis
Electroreception and electrogenesis are the closely related biological abilities to perceive electrical stimuli and to generate electric fields. Both are used to locate prey; stronger electric discharges are used in a few groups of fishes (most famously the electric eel, which is not actually an eel but a knifefish) to stun prey. The capabilities are found almost exclusively in aquatic or amphibious animals, since water is a much better conductor of electricity than air. In passive electrolocation, objects such as prey are detected by sensing the electric fields they create. In active electrolocation, fish generate a weak electric field and sense the different distortions of that field created by objects that conduct or resist electricity. Active electrolocation is practised by two groups of weakly electric fish, the Gymnotiformes (knifefishes) and the Mormyridae (elephantfishes), and by Gymnarchus niloticus, the African knifefish. An electric fish generates an electric field using an electric organ, modified from muscles in its tail. The field is called weak if it is only enough to detect prey, and strong if it is powerful enough to stun or kill. The field may be in brief pulses, as in the elephantfishes, or a continuous wave, as in the knifefishes. Some strongly electric fish, such as the electric eel, locate prey by generating a weak electric field, and then discharge their electric organs strongly to stun the prey; other strongly electric fish, such as the electric ray, electrolocate passively. The stargazers are unique in being strongly electric but not using electrolocation. The electroreceptive ampullae of Lorenzini evolved early in the history of the vertebrates; they are found in both cartilaginous fishes such as sharks, and in bony fishes such as coelacanths and sturgeons, and must therefore be ancient. Most bony fishes have secondarily lost their ampullae of Lorenzini, but other non-homologous electroreceptors have repeatedly evolved, including in two gr
https://en.wikipedia.org/wiki/Agricultural%20biodiversity
Agricultural biodiversity or agrobiodiversity is a subset of general biodiversity pertaining to agriculture. It can be defined as "the variety and variability of animals, plants and micro-organisms at the genetic, species and ecosystem levels that sustain the ecosystem structures, functions and processes in and around production systems, and that provide food and non-food agricultural products." Managed by farmers, pastoralists, fishers and forest dwellers, agrobiodiversity provides stability, adaptability and resilience, and constitutes a key element of the livelihood strategies of rural communities throughout the world. Agrobiodiversity is central to sustainable food systems and sustainable diets. The use of agricultural biodiversity can contribute to food security, nutrition security, and livelihood security, and it is critical for climate adaptation and climate mitigation. Etymology It is not clear when exactly the term agrobiodiversity was coined nor by whom. The 1990 annual report of the International Board for Plant Genetic Resources (IBPGR, now Bioversity International) is one of the earliest references to biodiversity in the context of agriculture. Most references to agricultural biodiversity date from the late 1990s onwards. While similar, different definitions are used by different bodies to describe biodiversity in connection with food production. CGIAR tends to use agricultural biodiversity or agrobiodiversity, while the Food and Agriculture Organization of the UN (FAO) uses 'biodiversity for food and agriculture' and the Convention on Biological Diversity (CBD) uses the term 'agricultural diversity'. The CBD more or less (but not entirely) excludes marine aquatic organisms and forestry in its usage because they have their own groups and international frameworks for discussion of international policies and actions. Decision V/5 of the CBD provides the framing description. Types Crop biodiversity Livestock biodiversity Levels Genetic diversit
https://en.wikipedia.org/wiki/Exponential%20formula
In combinatorial mathematics, the exponential formula (called the polymer expansion in physics) states that the exponential generating function for structures on finite sets is the exponential of the exponential generating function for connected structures. The exponential formula is a power-series version of a special case of Faà di Bruno's formula. Algebraic statement Here is a purely algebraic statement, as a first introduction to the combinatorial use of the formula. For any formal power series of the form we have where and the index runs through the list of all partitions of the set . (When the product is empty and by definition equals .) Formula in other expressions One can write the formula in the following form: and thus where is the th complete Bell polynomial. Alternatively, the exponential formula can also be written using the cycle index of the symmetric group, as follows:where stands for the cycle index polynomial, for the symmetric group defined as:and denotes the number of cycles of of size . This is a consequence of the general relation between and Bell polynomials: The combinatorial formula In applications, the numbers count the number of some sort of "connected" structure on an -point set, and the numbers count the number of (possibly disconnected) structures. The numbers count the number of isomorphism classes of structures on points, with each structure being weighted by the reciprocal of its automorphism group, and the numbers count isomorphism classes of connected structures in the same way. Examples because there is one partition of the set that has a single block of size , there are three partitions of that split it into a block of size and a block of size , and there is one partition of that splits it into three blocks of size . This also follows from , since one can write the group as , using cyclic notation for permutations. If is the number of graphs whose vertices are a given -point set, then is the nu
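A small computational sketch of the coefficient relation: if a_k counts connected structures on a k-point set, the coefficients b_n of the exponential of the exponential generating function can be computed with the standard recurrence obtained by differentiating exp(A(x)). Taking every a_k = 1 recovers the Bell numbers, since the structures counted are then exactly the set partitions.

# Coefficients b_n of exp(A(x)) where A(x) = sum_{k>=1} a_k x^k / k!,
# via the recurrence b_n = sum_{k=1..n} C(n-1, k-1) * a_k * b_{n-k}, with b_0 = 1.
from math import comb

def exp_egf_coefficients(a, n_max):
    # a[k] is the number of "connected" structures on a k-point set (a[0] unused)
    b = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        b[n] = sum(comb(n - 1, k - 1) * a[k] * b[n - k] for k in range(1, n + 1))
    return b

# One connected structure of every size: b_n counts set partitions (Bell numbers).
a = [0] + [1] * 6
print(exp_egf_coefficients(a, 6))  # [1, 1, 2, 5, 15, 52, 203]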
https://en.wikipedia.org/wiki/Arbroath%20smokie
The Arbroath smokie is a type of smoked haddock, and is a speciality of the town of Arbroath in Angus, Scotland. History The Arbroath smokie is said to have originated in the small fishing village of Auchmithie, three miles northeast of Arbroath. Local legend has it a store caught fire one night, destroying barrels of haddock preserved in salt. The following morning, the people found some of the barrels had caught fire, cooking the haddock inside. Inspection revealed the haddock to be quite tasty. It is much more likely the villagers were of Scandinavian descent, as the 'Smokie making' process is similar to smoking methods which are still employed in areas of Scandinavia. Towards the end of the 19th century, as Arbroath's fishing industry died, the Town Council offered the fisherfolk from Auchmithie land in an area of the town known as the fit o' the toon. It also offered them use of the modern harbour. Much of the Auchmithie population then relocated, bringing the Arbroath Smokie recipe with them. Today, 15 local businesses produce Arbroath smokies, selling them in major supermarkets in the UK and online. In 2004, the European Commission registered the designation "Arbroath smokies" as a Protected Geographical Indication under the EU's Protected Food Name Scheme, acknowledging its unique status. Preparation Arbroath smokies are prepared using traditional methods dating back to the late 1800s. The fish are first salted overnight. They are then tied in pairs using hemp twine, and left overnight to dry. Once they have been salted, tied and dried, they are hung over a triangular length of wood to smoke. This "kiln stick" fits between the two tied smokies, one fish on either side. The sticks are then used to hang the dried fish in a special barrel containing a hardwood fire. When the fish are hung over the fire, the top of the barrel is covered with a lid and sealed around the edges with wet jute sacks (the water prevents the jute sacks from catching fire). All
https://en.wikipedia.org/wiki/Embryonic%20diapause
Embryonic diapause (delayed implantation in mammals) is a reproductive strategy used by a number of animal species across different biological classes. In more than 130 types of mammals where this takes place, the process occurs at the blastocyst stage of embryonic development, and is characterized by a dramatic reduction or complete cessation of mitotic activity, arresting most often in the G0 or G1 phase of division. In placental embryonic diapause, the blastocyst does not immediately implant in the uterus after sexual reproduction has resulted in the zygote, but rather remains in this non-dividing state of dormancy until conditions allow for attachment to the uterine wall to proceed as normal. As a result, the normal gestation period is extended for a species-specific time. Diapause provides a survival advantage to offspring, because birth or emergence of young can be timed to coincide with the most hospitable conditions, regardless of when mating occurs or length of gestation; any such gain in survival rates of progeny confers an evolutionary advantage. Evolutionary significance Organisms which undergo embryonic diapause are able to synchronize the birth of offspring to the most favorable conditions for reproductive success, irrespective of when mating took place. Many different factors can induce embryonic diapause, such as the time of year, temperature, lactation and supply of food. Embryonic diapause is a relatively widespread phenomenon outside of mammals, with known occurrence in the reproductive cycles of many insects, nematodes, fish, and other non-mammalian vertebrates. It has been observed in approximately 130 mammalian species, which is less than two percent of all species of mammals. These include certain rodents, bears, armadillos, mustelids (e.g. weasels and badgers), and marsupials (e.g. kangaroos). Some groups only have one species that undergoes embryonic diapause, such as the roe deer in the order Artiodactyla. Experimental induction of emb
https://en.wikipedia.org/wiki/Robot%20locomotion
Robot locomotion is the collective name for the various methods that robots use to transport themselves from place to place. Wheeled robots are typically quite energy efficient and simple to control. However, other forms of locomotion may be more appropriate for a number of reasons, for example traversing rough terrain, as well as moving and interacting in human environments. Furthermore, studying bipedal and insect-like robots may have a beneficial impact on biomechanics. A major goal in this field is developing capabilities for robots to autonomously decide how, when, and where to move. However, coordinating numerous robot joints for even simple matters, like negotiating stairs, is difficult. Autonomous robot locomotion is a major technological obstacle for many areas of robotics, such as humanoids (like Honda's Asimo). Types of locomotion Walking See Passive dynamics See Zero Moment Point See Leg mechanism See Hexapod (robotics) Walking robots simulate human or animal gait, as a replacement for wheeled motion. Legged motion makes it possible to negotiate uneven surfaces, steps, and other areas that would be difficult for a wheeled robot to reach, and causes less damage to the terrain than wheeled robots, which would erode it. Hexapod robots are based on insect locomotion, most popularly the cockroach and stick insect, whose neurological and sensory output is less complex than that of other animals. Multiple legs allow several different gaits, even if a leg is damaged, making their movements more useful in robots transporting objects. Examples of advanced running robots include ASIMO, BigDog, HUBO 2, RunBot, and Toyota Partner Robot. Rolling In terms of energy efficiency on flat surfaces, wheeled robots are the most efficient. This is because an ideal rolling (but not slipping) wheel loses no energy. A wheel rolling at a given velocity needs no input to maintain its motion. This is in contrast to legged robots which suffer an impact with the gr
https://en.wikipedia.org/wiki/Volumetric%20display
A volumetric display device is a display device that forms a visual representation of an object in three physical dimensions, as opposed to the planar image of traditional screens that simulate depth through a number of different visual effects. One definition offered by pioneers in the field is that volumetric displays create 3D imagery via the emission, scattering, or relaying of illumination from well-defined regions in (x,y,z) space. A true volumetric display produces in the observer a visual experience of a material object in three-dimensional space, even though no such object is present. The perceived object displays characteristics similar to an actual material object by allowing the observer to view it from any direction, to focus a camera on a specific detail, and to see perspective – meaning that the parts of the image closer to the viewer appear larger than those further away. Volumetric 3D displays are technically not autostereoscopic, even though they create three-dimensional imagery visible to the unaided eye. This is because the displays do not generate stereoscopic images; They naturally provide focally-accurate holographic wavefronts to the eyes. Due to this, they have accurate characteristics of material objects such as focal depth, motion parallax, and vergence. Volumetric displays are one of several kinds of 3D displays. Other types are stereoscopes, view-sequential displays, electro-holographic displays, "two view" displays, and panoramagrams. Although first postulated in 1912, and a staple of science fiction, volumetric displays are still not widely used in everyday life. There are numerous potential markets for volumetric displays with use cases including medical imaging, mining, education, advertising, simulation, video games, communication and geophysical visualisation. When compared to other 3D visualisation tools such as virtual reality, volumetric displays offer an inherently different mode of interaction, providing the opport
https://en.wikipedia.org/wiki/Nilpotent%20matrix
In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 for some positive integer k. The smallest such k is called the index of N, sometimes the degree of N. More generally, a nilpotent transformation is a linear transformation L of a vector space such that L^k = 0 for some positive integer k (and thus, L^j = 0 for all j ≥ k). Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings. Examples Example 1 The matrix is nilpotent with index 2, since . Example 2 More generally, any n-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index ≤ n. For example, the matrix is nilpotent, with The index of is therefore 4. Example 3 Although the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example, although the matrix has no zero entries. Example 4 Additionally, any matrices of the form such as or square to zero. Example 5 Perhaps some of the most striking examples of nilpotent matrices are square matrices of the form: The first few of which are: These matrices are nilpotent but there are no zero entries in any powers of them less than the index. Example 6 Consider the linear space of polynomials of a bounded degree. The derivative operator is a linear map. We know that applying the derivative to a polynomial decreases its degree by one, so when applying it iteratively, we will eventually obtain zero. Therefore, on such a space, the derivative is representable by a nilpotent matrix. Characterization For an n × n square matrix N with real (or complex) entries, the following are equivalent: N is nilpotent. The characteristic polynomial for N is det(xI − N) = x^n. The minimal polynomial for N is x^k for some positive integer k ≤ n. The only complex eigenvalue for N is 0. The last theorem holds true for matrices over any field of characteristic 0 or sufficiently large characteristic. (cf. Newton's identities) This theorem has several consequences, including: The i
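Since the index of an n × n nilpotent matrix never exceeds n, nilpotency can be tested by computing at most n powers; the following sketch does just that (the example matrices are generic illustrations, not the specific ones omitted above).

# Check whether a square matrix is nilpotent and, if so, return its index.
# For an n x n nilpotent matrix the index never exceeds n, so n powers suffice.
import numpy as np

def nilpotency_index(N):
    N = np.asarray(N, dtype=float)
    n = N.shape[0]
    power = np.eye(n)
    for k in range(1, n + 1):
        power = power @ N
        if np.allclose(power, 0):
            return k          # smallest k with N^k = 0
    return None               # not nilpotent

# A strictly upper-triangular matrix of ones is nilpotent with index n.
shift = np.triu(np.ones((4, 4)), k=1)
print(nilpotency_index(shift))        # 4
print(nilpotency_index(np.eye(2)))    # None (the identity is not nilpotent)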
https://en.wikipedia.org/wiki/List%20of%20mathematical%20knots%20and%20links
This article contains a list of mathematical knots and links. See also list of knots, list of geometric topology topics. Knots Prime knots 01 knot/Unknot - a simple un-knotted closed loop 31 knot/Trefoil knot - (2,3)-torus knot, the two loose ends of a common overhand knot joined together 41 knot/Figure-eight knot (mathematics) - a prime knot with a crossing number four 51 knot/Cinquefoil knot, (5,2)-torus knot, Solomon's seal knot, pentafoil knot - a prime knot with crossing number five which can be arranged as a {5/2} star polygon (pentagram) 52 knot/Three-twist knot - the twist knot with three-half twists 61 knot/Stevedore knot (mathematics) - a prime knot with crossing number six, it can also be described as a twist knot with four twists 62 knot - a prime knot with crossing number six 63 knot - a prime knot with crossing number six 71 knot, septafoil knot, (7,2)-torus knot - a prime knot with crossing number seven, which can be arranged as a {7/2} star polygon (heptagram) 74 knot, "endless knot" 818 knot, "carrick mat" 10161/10162, known as the Perko pair; this was a single knot listed twice in Dale Rolfsen's knot table; the duplication was discovered by Kenneth Perko 12n242/(−2,3,7) pretzel knot (p, q)-torus knot - a special kind of knot that lies on the surface of an unknotted torus in R3 Composite Square knot (mathematics) - a composite knot obtained by taking the connected sum of a trefoil knot with its reflection Granny knot (mathematics) - a composite knot obtained by taking the connected sum of two identical trefoil knots Links 0 link/Unlink - equivalent under ambient isotopy to finitely many disjoint circles in the plane 2 link/Hopf link - the simplest nontrivial link with more than one component; it consists of two circles linked together exactly once (L2a1) 4 link/Solomon's knot (a two component "link" rather than a one component "knot") - a traditional decorative motif used since ancient times (L4a1) 5 link/Whitehead link - two projections of the
https://en.wikipedia.org/wiki/NeuRFon
The neuRFon project (named for a combination of "neuron" and "RF") was a research program begun in 1999 at Motorola Labs to develop ad hoc wireless networking for wireless sensor network applications. The biological analogy was that, while individual neurons were not very useful, in a large network they became very powerful; the same was thought to hold true for simple, low power wireless devices. Much of the technology developed in the neuRFon program was placed in the IEEE 802.15.4 standard and in the Zigbee specification; examples are the 2.4 GHz physical layer of the IEEE 802.15.4 standard and significant portions of the Zigbee multi-hop routing protocol. References External links IEEE 802.15.4 ZigBee Alliance Wireless sensor network
https://en.wikipedia.org/wiki/Pl%C3%BCcker%20coordinates
In geometry, Plücker coordinates, introduced by Julius Plücker in the 19th century, are a way to assign six homogeneous coordinates to each line in projective 3-space, . Because they satisfy a quadratic constraint, they establish a one-to-one correspondence between the 4-dimensional space of lines in and points on a quadric in (projective 5-space). A predecessor and special case of Grassmann coordinates (which describe -dimensional linear subspaces, or flats, in an -dimensional Euclidean space), Plücker coordinates arise naturally in geometric algebra. They have proved useful for computer graphics, and also can be extended to coordinates for the screws and wrenches in the theory of kinematics used for robot control. Geometric intuition A line in 3-dimensional Euclidean space is determined by two distinct points that it contains, or by two distinct planes that contain it. Consider the first case, with points and The vector displacement from to is nonzero because the points are distinct, and represents the direction of the line. That is, every displacement between points on is a scalar multiple of . If a physical particle of unit mass were to move from to , it would have a moment about the origin. The geometric equivalent to this moment, is a vector whose direction is perpendicular to the plane containing and the origin, and whose length equals twice the area of the triangle formed by the displacement and the origin. Treating the points as displacements from the origin, the moment is , where "×" denotes the vector cross product. For a fixed line, , the area of the triangle is proportional to the length of the segment between and , considered as the base of the triangle; it is not changed by sliding the base along the line, parallel to itself. By definition the moment vector is perpendicular to every displacement along the line, so , where "⋅" denotes the vector dot product. Although neither nor alone is sufficient to determine , together the pair doe
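The point-pair description above translates directly into a short sketch: take d as the displacement between the two points and m as the moment about the origin, check the quadratic (Plücker) relation d · m = 0, and test whether two lines are coplanar via the reciprocal product. The example lines are arbitrary.

# Plücker coordinates (d : m) of the line through two points, and the standard tests.
import numpy as np

def plucker(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = y - x            # direction of the line
    m = np.cross(x, y)   # moment of the line about the origin
    return d, m

def on_quadric(d, m, tol=1e-9):
    # Every line satisfies the Plücker relation d . m = 0 (a point of the quadric).
    return abs(np.dot(d, m)) < tol

def coplanar(line1, line2, tol=1e-9):
    # Two lines meet (or are parallel) exactly when the reciprocal product vanishes.
    d1, m1 = line1
    d2, m2 = line2
    return abs(np.dot(d1, m2) + np.dot(d2, m1)) < tol

L1 = plucker([0, 0, 0], [1, 0, 0])   # the x-axis
L2 = plucker([0, 0, 1], [0, 1, 1])   # a line parallel to the y-axis at height z = 1
print(on_quadric(*L1), on_quadric(*L2), coplanar(L1, L2))  # True True False (skew lines)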
https://en.wikipedia.org/wiki/Metatable
A metatable is the section of a database or other data holding structure that is designated to hold data that will act as source code or metadata. In most cases, specific software has been written to read the data from the metatables and perform different actions depending on the data it finds. See also Magic number (programming) Virtual method table External links Binding With Metatable And Closures Metadata Programming constructs Software architecture
https://en.wikipedia.org/wiki/List%20of%20algebraic%20number%20theory%20topics
This is a list of algebraic number theory topics. Basic topics These topics are basic to the field, either as prototypical examples, or as basic objects of study. Algebraic number field Gaussian integer, Gaussian rational Quadratic field Cyclotomic field Cubic field Biquadratic field Quadratic reciprocity Ideal class group Dirichlet's unit theorem Discriminant of an algebraic number field Ramification (mathematics) Root of unity Gaussian period Important problems Fermat's Last Theorem Class number problem for imaginary quadratic fields Stark–Heegner theorem Heegner number Langlands program General aspects Different ideal Dedekind domain Splitting of prime ideals in Galois extensions Decomposition group Inertia group Frobenius automorphism Chebotarev's density theorem Totally real field Local field p-adic number p-adic analysis Adele ring Idele group Idele class group Adelic algebraic group Global field Hasse principle Hasse–Minkowski theorem Galois module Galois cohomology Brauer group Class field theory Class field theory Abelian extension Kronecker–Weber theorem Hilbert class field Takagi existence theorem Hasse norm theorem Artin reciprocity Local class field theory Iwasawa theory Iwasawa theory Herbrand–Ribet theorem Vandiver's conjecture Stickelberger's theorem Euler system p-adic L-function Arithmetic geometry Arithmetic geometry Complex multiplication Abelian variety of CM-type Chowla–Selberg formula Hasse–Weil zeta function Mathematics-related lists
https://en.wikipedia.org/wiki/Suricata%20%28software%29
Suricata is an open-source based intrusion detection system (IDS) and intrusion prevention system (IPS). It was developed by the Open Information Security Foundation (OISF). A beta version was released in December 2009, with the first standard release following in July 2010. Free intrusion detection systems OSSEC HIDS Prelude Hybrid IDS Sagan Snort Zeek NIDS See also Aanval References External links Open Information Security Foundation Computer security software Free security software Free network-related software Intrusion detection systems Linux security software Unix security-related software
https://en.wikipedia.org/wiki/Zero%20matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of matrices, and is denoted by the symbol or followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are Properties The set of matrices with entries in a ring K forms a ring . The zero matrix in is the matrix with all entries equal to , where is the additive identity in K. The zero matrix is the additive identity in . That is, for all it satisfies the equation There is exactly one zero matrix of any given dimension m×n (with entries from a given ring), so when the context is clear, one often refers to the zero matrix. In general, the zero element of a ring is unique, and is typically denoted by 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring. The zero matrix also represents the linear transformation which sends all the vectors to the zero vector. It is idempotent, meaning that when it is multiplied by itself, the result is itself. The zero matrix is the only matrix whose rank is 0. Occurrences The mortal matrix problem is the problem of determining, given a finite set of n × n matrices with integer entries, whether they can be multiplied in some order, possibly with repetition, to yield the zero matrix. This is known to be undecidable for a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices. In ordinary least squares regression, if there is a perfect fit to the data, the annihilator matrix is the zero matrix. See also Identity matrix, the multiplicative identity for matrices Matrix of ones, a matrix where all elements are one Nilpotent matrix Single-entry matrix, a matrix where all but one element is zero References Matrices 0 (number) Sparse matrices
https://en.wikipedia.org/wiki/Roll-away%20computer
A roll-away computer is an idea introduced as part of a series by Toshiba in 2000, which aimed to predict the trends in personal computing five years into the future. Since its announcement, the roll-away computer has remained a theoretical device. A roll-away computer is a computer with a flexible polymer-based display technology, measuring 1 mm thick and weighing around 200 grams. Flexible and rollable displays started entering the market in 2006 (see electronic paper). The R&D department of Seiko Epson has demonstrated a flexible active-matrix LCD panel (including the pixel thin film transistors and the peripheral TFT drivers), a flexible active-matrix OLED panel, the world's first flexible 8-bit asynchronous CPU (ACT11)—which uses the world's first flexible SRAM. University of Tokyo researchers have demonstrated flexible flash memory. LG Corporation has demonstrated an 18-inch high-definition video display panel that can roll up into a 3 cm diameter tube. See also Tablet PC Roll-up keyboard References External links http://www.toshiba-europe.com/computers/tnt/visions2000/7/ "Foldable, Stretchable Circuits" by Kate Greene 2008 Classes of computers
https://en.wikipedia.org/wiki/Hybridoma%20technology
Hybridoma technology is a method for producing large numbers of identical antibodies (also called monoclonal antibodies). This process starts by injecting a mouse (or other mammal) with an antigen that provokes an immune response. A type of white blood cell, the B cell, produces antibodies that bind to the injected antigen. These antibody producing B-cells are then harvested from the mouse and, in turn, fused with immortal B cell cancer cells, a myeloma, to produce a hybrid cell line called a hybridoma, which has both the antibody-producing ability of the B-cell and the longevity and reproductivity of the myeloma. The hybridomas can be grown in culture, each culture starting with one viable hybridoma cell, producing cultures each of which consists of genetically identical hybridomas which produce one antibody per culture (monoclonal) rather than mixtures of different antibodies (polyclonal). The myeloma cell line that is used in this process is selected for its ability to grow in tissue culture and for an absence of antibody synthesis. In contrast to polyclonal antibodies, which are mixtures of many different antibody molecules, the monoclonal antibodies produced by each hybridoma line are all chemically identical. The production of monoclonal antibodies was invented by César Milstein and Georges J. F. Köhler in 1975. They shared the Nobel Prize of 1984 for Medicine and Physiology with Niels Kaj Jerne, who made other contributions to immunology. The term hybridoma was coined by Leonard Herzenberg during his sabbatical in César Milstein's laboratory in 1976–1977. Method Laboratory animals (mammals, e.g. mice) are first exposed to the antigen against which an antibody is to be generated. Usually this is done by a series of injections of the antigen in question, over the course of several weeks. These injections are typically followed by the use of in vivo electroporation, which significantly enhances the immune response. Once splenocytes are isolated from the m
https://en.wikipedia.org/wiki/Advice%20%28programming%29
In aspect and functional programming, advice describes a class of functions which modify other functions when the latter are run; it is a certain function, method or procedure that is to be applied at a given join point of a program. Use The practical use of advice functions is generally to modify or otherwise extend the behavior of functions which cannot be easily modified or extended. The Emacspeak Emacs-addon makes extensive use of advice: it must modify thousands of existing Emacs modules and functions such that it can produce audio output for the blind corresponding to the visual presentation, but it would be infeasible to copy all of them and redefine them to produce audio output in addition to their normal outputs; so, the Emacspeak programmers define advice functions which run before and after the original functions. Another Emacs example: suppose that after one corrected a misspelled word through ispell, one wanted to re-spellcheck the entire buffer. ispell-word offers no such functionality, even if the spellchecked word is used a thousand times. One could track down the definition of ispell-word, copy it into one's Emacs, and write the additional functionality, but this is tedious, prone to breakage (the Emacs version will get out of sync with the actual Ispell Elisp module, if it even works out of its home). What one wants is fairly simple: just to run another command after ispell-word runs. Using advice functions, it can be done as simply as this: (defadvice ispell (after advice) (flyspell-buffer)) (ad-activate 'ispell t) Implementations A form of advice was part of C with Classes in the late 1970s and early 1980s, namely functions called call and return defined in a class, which were called before (respectively, after) member functions of the class. However, these were dropped from C++. Advice is part of the Common Lisp Object System (CLOS), as :before, :after, and :around methods, which are combined with the primary method under "standard method combination".
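For comparison, the same before/after idea can be approximated in other languages by wrapping functions. The following Python sketch is only an analogy to defadvice and CLOS method combination, not an implementation of either; the function names are made up for illustration.

# A rough Python analogue of "before"/"after" advice: wrap an existing function
# without editing its definition.  (Emacs Lisp's defadvice and CLOS method
# combination do this natively; here it is just a decorator-style wrapper.)
import functools

def advise(func, before=None, after=None):
    @functools.wraps(func)
    def advised(*args, **kwargs):
        if before:
            before(*args, **kwargs)
        result = func(*args, **kwargs)
        if after:
            after(result)
        return result
    return advised

def spell_check_word(word):            # stands in for something like ispell-word
    print("checking", word)
    return word

# Re-check "the whole buffer" after every word check, echoing the Emacs example.
spell_check_word = advise(spell_check_word,
                          after=lambda result: print("re-checking whole buffer"))
spell_check_word("recieve")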
https://en.wikipedia.org/wiki/Chirality%20%28mathematics%29
In geometry, a figure is chiral (and said to have chirality) if it is not identical to its mirror image, or, more precisely, if it cannot be mapped to its mirror image by rotations and translations alone. An object that is not chiral is said to be achiral. A chiral object and its mirror image are said to be enantiomorphs. The word chirality is derived from the Greek (cheir), the hand, the most familiar chiral object; the word enantiomorph stems from the Greek (enantios) 'opposite' + (morphe) 'form'. Examples Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness, according to the right-hand rule. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves and shoes. Right shoes differ from left shoes only by being mirror images of each other. In contrast, thin gloves may not be considered chiral if you can wear them inside-out. The J, L, S and Z-shaped tetrominoes of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space. Individually they contain no mirror symmetry in the plane. Chirality and symmetry group A figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry. (In Euclidean geometry any isometry can be written as v ↦ Av + b with an orthogonal matrix A and a vector b. The determinant of A is then either 1 or −1. If it is −1 the isometry is orientation-reversing, otherwise it is orientation-preserving.) A general definition of chirality based on group theory exists. It does not refer to any orientation concept: an isometry is direct if and only if it is a product of squares of isometries, and if not, it is an indirect isometry. The resulting chirality definition works in spacetime. Chirality in two dimensions In two dimensions, every figure which possesses an axis of symmetry is achiral, and it can be shown that every bounded achiral figure must have an axis of symmetry. (An axis of symmetry of a figure
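The Tetris example can be checked mechanically: a planar polyomino is chiral exactly when no rotation of its mirror image reproduces the original shape. The following sketch applies that test to an S-tetromino (chiral) and a T-tetromino (achiral); the cell coordinates are just one way of drawing the pieces.

# Check the planar chirality of a polyomino (e.g. a Tetris piece): it is chiral
# iff no rotation of its mirror image reproduces the original shape.
def normalise(cells):
    xs, ys = zip(*cells)
    return frozenset((x - min(xs), y - min(ys)) for x, y in cells)

def rotations(cells):
    shapes = []
    current = cells
    for _ in range(4):
        shapes.append(normalise(current))
        current = {(y, -x) for x, y in current}   # rotate 90 degrees
    return shapes

def is_chiral(cells):
    mirror = {(-x, y) for x, y in cells}          # reflect across the y-axis
    return normalise(cells) not in rotations(mirror)

S_piece = {(0, 0), (1, 0), (1, 1), (2, 1)}        # S-tetromino: chiral
T_piece = {(0, 0), (1, 0), (2, 0), (1, 1)}        # T-tetromino: achiral
print(is_chiral(S_piece), is_chiral(T_piece))     # True False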
https://en.wikipedia.org/wiki/Chirality%20%28chemistry%29
In chemistry, a molecule or ion is called chiral () if it cannot be superposed on its mirror image by any combination of rotations, translations, and some conformational changes. This geometric property is called chirality (). The terms are derived from Ancient Greek (cheir) 'hand'; which is the canonical example of an object with this property. A chiral molecule or ion exists in two stereoisomers that are mirror images of each other, called enantiomers; they are often distinguished as either "right-handed" or "left-handed" by their absolute configuration or some other criterion. The two enantiomers have the same chemical properties, except when reacting with other chiral compounds. They also have the same physical properties, except that they often have opposite optical activities. A homogeneous mixture of the two enantiomers in equal parts is said to be racemic, and it usually differs chemically and physically from the pure enantiomers. Chiral molecules will usually have a stereogenic element from which chirality arises. The most common type of stereogenic element is a stereogenic center, or stereocenter. In the case of organic compounds, stereocenters most frequently take the form of a carbon atom with four distinct groups attached to it in a tetrahedral geometry. A given stereocenter has two possible configurations, which give rise to stereoisomers (diastereomers and enantiomers) in molecules with one or more stereocenter. For a chiral molecule with one or more stereocenter, the enantiomer corresponds to the stereoisomer in which every stereocenter has the opposite configuration. An organic compound with only one stereogenic carbon is always chiral. On the other hand, an organic compound with multiple stereogenic carbons is typically, but not always, chiral. In particular, if the stereocenters are configured in such a way that the molecule can take a conformation having a plane of symmetry or an inversion point, then the molecule is achiral and is known as a
https://en.wikipedia.org/wiki/Chirality%20%28physics%29
A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. Chirality and helicity The helicity of a particle is positive (“right-handed”) if the direction of its spin is the same as the direction of its motion. It is negative (“left-handed”) if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: “left” is negative, “right” is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of point of view of the observer. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame moving faster than the spinning particle, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as “apparent chirality”) will be reversed. That is, helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: A massive left-handed spinor, when propagating, will evolve into a right handed spinor over ti
https://en.wikipedia.org/wiki/Disphenocingulum
In geometry, the disphenocingulum or pentakis elongated gyrobifastigium is one of the Johnson solids (). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. Cartesian coordinates Let a ≈ 0.76713 be the second smallest positive root of the polynomial and and . Then, Cartesian coordinates of a disphenocingulum with edge length 2 are given by the union of the orbits of the points under the action of the group generated by reflections about the xz-plane and the yz-plane. References External links Johnson solids
https://en.wikipedia.org/wiki/Bilunabirotunda
In geometry, the bilunabirotunda is one of the Johnson solids (). Geometry It is one of the elementary Johnson solids, which do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. However, it does have a strong relationship to the icosidodecahedron, an Archimedean solid. Either one of the two clusters of two pentagons and two triangles can be aligned with a congruent patch of faces on the icosidodecahedron. If two bilunabirotundae are aligned this way on opposite sides of the icosidodecahedron, then two vertices of the bilunabirotundae meet in the very center of the icosidodecahedron. The other two clusters of faces of the bilunabirotunda, the lunes (each lune featuring two triangles adjacent to opposite sides of one square), can be aligned with a congruent patch of faces on the rhombicosidodecahedron. If two bilunabirotundae are aligned this way on opposite sides of the rhombicosidodecahedron, then a cube can be put between the bilunabirotundae at the very center of the rhombicosidodecahedron. Each of the two pairs of adjacent pentagons (each pair of pentagons sharing an edge) can be aligned with the pentagonal faces of a metabidiminished icosahedron as well. The bilunabirotunda has a weak relationship with the cuboctahedron, as it may be created by replacing four square faces of the cuboctahedron with pentagons. Cartesian coordinates The following define the vertices of a bilunabirotunda centered at the origin with edge length 1: where is the golden ratio. Related polyhedra and honeycombs Six bilunabirotundae can be augmented around a cube with pyritohedral symmetry. B. M. Stewart labeled this six-bilunabirotunda model as 6J91(P4). The bilunabirotunda can be used with the regular dodecahedron and cube as a space-filling honeycomb. External links Miracle Spacefilling (Dodecahedron&Cube&Johnson solid No.91) Johnson solids
https://en.wikipedia.org/wiki/Triangular%20hebesphenorotunda
In geometry, the triangular hebesphenorotunda is one of the Johnson solids (). It is one of the elementary Johnson solids, which do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. However, it does have a strong relationship to the icosidodecahedron, an Archimedean solid. Most evident is the cluster of three pentagons and four triangles on one side of the solid. If these faces are aligned with a congruent patch of faces on the icosidodecahedron, then the hexagonal face will lie in the plane midway between two opposing triangular faces of the icosidodecahedron. The triangular hebesphenorotunda also has clusters of faces that can be aligned with corresponding faces of the rhombicosidodecahedron: the three lunes, each lune consisting of a square and two antipodal triangles adjacent to the square. The faces around each vertex can also be aligned with the corresponding faces of various diminished icosahedra. Johnson uses the prefix hebespheno- to refer to a blunt wedge-like complex formed by three adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. The suffix (triangular) -rotunda refers to the complex of three equilateral triangles and three regular pentagons surrounding another equilateral triangle, which bears structural resemblance to the pentagonal rotunda. The triangular hebesphenorotunda is the only Johnson solid with faces of 3, 4, 5 and 6 sides. Cartesian coordinates Cartesian coordinates for the triangular hebesphenorotunda with edge length – 1 are given by the union of the orbits of the points under the action of the group generated by rotation by 120° around the z-axis and the reflection about the yz-plane. Here, = (sometimes written φ) is the golden ratio. The first point generates the triangle opposite the hexagon, the second point generates the bases of the triangles surrounding the previous triangle, the third point generates the tips of the pentagons opposite the fir
https://en.wikipedia.org/wiki/Wingtip%20vortices
Wingtip vortices are circular patterns of rotating air left behind a wing as it generates lift. The name is a misnomer because the cores of the vortices are slightly inboard of the wing tips. Wingtip vortices are sometimes named trailing or lift-induced vortices because they also occur at points other than at the wing tips. Indeed, vorticity is trailed at any point on the wing where the lift varies span-wise (a fact described and quantified by the lifting-line theory); it eventually rolls up into large vortices near the wingtip, at the edge of flap devices, or at other abrupt changes in wing planform. Wingtip vortices are associated with induced drag, the imparting of downwash, and are a fundamental consequence of three-dimensional lift generation. Careful selection of wing geometry (in particular, wingspan), as well as of cruise conditions, is a design and operational method to minimize induced drag. Wingtip vortices form the primary component of wake turbulence. Depending on ambient atmospheric humidity as well as the geometry and wing loading of aircraft, water may condense or freeze in the core of the vortices, making the vortices visible. Generation of trailing vortices When a wing generates aerodynamic lift, it results in a region of downwash between the two vortices. Three-dimensional lift and the occurrence of wingtip vortices can be approached with the concept of horseshoe vortex and described accurately with the Lanchester–Prandtl theory. In this view, the trailing vortex is a continuation of the wing-bound vortex inherent to the lift generation. Effects and mitigation Wingtip vortices are associated with induced drag, an unavoidable consequence of three-dimensional lift generation. The rotary motion of the air within the shed wingtip vortices (sometimes described as a "leakage") reduces the effective angle of attack of the air on the wing. The lifting-line theory describes the shedding of trailing vortices as span-wise changes in lift distributi
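As a brief illustration of the wingspan point above, the standard lifting-line estimate for induced drag is CD_i = CL² / (π · e · AR), where AR is the wing aspect ratio and e the span efficiency factor. The sketch below evaluates that formula with arbitrary example values; the lift coefficient, efficiency factor, and aspect ratios are assumptions for illustration, not data from any particular aircraft.

```python
# Illustrative use of the standard lifting-line result CD_i = CL^2 / (pi * e * AR).
# Higher aspect ratio (longer span for a given area) lowers induced drag.
import math

def induced_drag_coefficient(cl, aspect_ratio, oswald_e=0.85):
    """Induced drag coefficient for a wing with lift coefficient `cl`."""
    return cl ** 2 / (math.pi * oswald_e * aspect_ratio)

for ar in (6, 10, 14):   # example aspect ratios
    print(ar, round(induced_drag_coefficient(0.5, ar), 4))
```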
https://en.wikipedia.org/wiki/Augmented%20sphenocorona
In geometry, the augmented sphenocorona is one of the Johnson solids (), and is obtained by adding a square pyramid to one of the square faces of the sphenocorona. It is the only Johnson solid arising from "cut and paste" manipulations where the components are not all prisms, antiprisms or sections of Platonic or Archimedean solids. Johnson uses the prefix spheno- to refer to a wedge-like complex formed by two adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -corona refers to a crownlike complex of 8 equilateral triangles. Finally, the descriptor augmented implies that another polyhedron, in this case a pyramid, is adjoined. Joining both complexes together with the pyramid results in the augmented sphenocorona. Cartesian coordinates To calculate Cartesian coordinates for the augmented sphenocorona, one may start by calculating the coordinates of the sphenocorona. Let k ≈ 0.85273 be the smallest positive root of the quartic polynomial Then, Cartesian coordinates of a sphenocorona with edge length 2 are given by the union of the orbits of the points under the action of the group generated by reflections about the xz-plane and the yz-plane. Calculating the centroid and the normal unit vector of one of the square faces gives the location of its last vertex as One may then calculate the surface area of an augmented sphenocorona of edge length a as and its volume as References External links Johnson solids
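The quartic polynomial itself is not reproduced in the text above, so the sketch below only illustrates the numerical step it describes, finding the smallest positive real root of a quartic. The coefficients used (x⁴ − 5x² + 4, whose smallest positive root is 1) are placeholders for the example, not the actual polynomial that defines k.

```python
# Sketch: smallest positive real root of a quartic, as used to obtain k above.
# The coefficients below are placeholders, since the article's polynomial is
# not reproduced here.
import numpy as np

def smallest_positive_real_root(coeffs, tol=1e-9):
    roots = np.roots(coeffs)                      # all (complex) roots
    real = roots[np.abs(roots.imag) < tol].real   # keep the real ones
    positive = real[real > tol]
    return positive.min()

coeffs = [1, 0, -5, 0, 4]                         # placeholder: x^4 - 5x^2 + 4
print(smallest_positive_real_root(coeffs))        # prints 1.0 for this example
```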
https://en.wikipedia.org/wiki/Hitsville%20U.S.A.
"Hitsville U.S.A." is the nickname given to Motown's first headquarters and recording studio. The house (formerly a photographers' studio) is located at 2648 West Grand Boulevard in Detroit, Michigan, near the New Center area. The house was purchased by Motown founder Berry Gordy in 1959. After purchasing the house, Gordy converted it for use as the record label's administrative building and recording studio. Following mainstream success in the mid 1960s through mid 1970s, Gordy moved the label to Los Angeles and established the Hitsville West studio there, as a part of his focus on television and film production as well as music production. Today, the “Hitsville U.S.A” property operates as the Motown Museum, which is dedicated to the legacy of the record label, its artists, and its music. The museum occupies the original house and an adjacent former residence. West Grand Boulevard In 1959, Gordy formed his first label, Tamla Records, and purchased the property that would become Motown's Hitsville U.S.A. studio. The photography studio located in the back of the property was modified into a small recording studio, which was open 22 hours a day (closing from 8 a.m. to 10 a.m. for maintenance), and the Gordys moved into the second-floor living quarters. Within seven years, Motown would occupy seven additional neighboring houses: Hitsville U.S.A., 1959: (ground floor) administrative office, tape library, control room, Studio A; (upper floor) Gordy living quarters (1959–1962), artists and repertoire (1962–1972) Jobete Publishing office, 1961: sales, billing, collections, shipping, and public relations Berry Gordy Jr. Enterprises, 1962: offices for Berry Gordy, Jr. and his sister Esther Gordy Edwards Finance department, 1965: royalties and payroll Artist personal development, 1966: Harvey Fuqua (head of artist development and producer of stage performances), Maxine Powell (instructor in grooming, poise, and social graces for Motown artists), Maurice King (vocal c
https://en.wikipedia.org/wiki/Pasch%27s%20theorem
In geometry, Pasch's theorem, stated in 1882 by the German mathematician Moritz Pasch, is a result in plane geometry which cannot be derived from Euclid's postulates. Statement The statement is as follows: [Here, for example, (a, b, c) means that point b lies between points a and c.] See also Ordered geometry Pasch's axiom Notes References External links Euclidean plane geometry Foundations of geometry Order theory Theorems in plane geometry
https://en.wikipedia.org/wiki/Radiosurgery
Radiosurgery is surgery using radiation, that is, the destruction of precisely selected areas of tissue using ionizing radiation rather than excision with a blade. Like other forms of radiation therapy (also called radiotherapy), it is usually used to treat cancer. Radiosurgery was originally defined by the Swedish neurosurgeon Lars Leksell as "a single high dose fraction of radiation, stereotactically directed to an intracranial region of interest". In stereotactic radiosurgery (SRS), the word "stereotactic" refers to a three-dimensional coordinate system that enables accurate correlation of a virtual target seen in the patient's diagnostic images with the actual target position in the patient. Stereotactic radiosurgery may also be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR) when used outside the central nervous system (CNS). History Stereotactic radiosurgery was first developed in 1949 by the Swedish neurosurgeon Lars Leksell to treat small targets in the brain that were not amenable to conventional surgery. The initial stereotactic instrument he conceived used probes and electrodes. The first attempt to supplant the electrodes with radiation was made in the early fifties, with x-rays. The principle of this instrument was to hit the intra-cranial target with narrow beams of radiation from multiple directions. The beam paths converge in the target volume, delivering a lethal cumulative dose of radiation there, while limiting the dose to the adjacent healthy tissue. Ten years later significant progress had been made, due in considerable measure to the contribution of the physicists Kurt Liden and Börje Larsson. At this time, stereotactic proton beams had replaced the x-rays. The heavy particle beam presented as an excellent replacement for the surgical knife, but the synchrocyclotron was too clumsy. Leksell proceeded to develop a practical, compact, precise and simple tool which could be handled by the surgeon hi
https://en.wikipedia.org/wiki/WTTO
WTTO (channel 21) is a television station licensed to Homewood, Alabama, United States, serving the Birmingham area as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside MyNetworkTV affiliate WABM (channel 68) and ABC affiliate WBMA-LD (channel 58). The stations share studios at the Riverchase office park on Concourse Parkway in Hoover (with a Birmingham mailing address), while WTTO's transmitter is located atop Red Mountain, near the Goldencrest neighborhood of southwestern Birmingham. In Tuscaloosa, west Alabama, and the western portions of the Birmingham area, WTTO's CW channel and two subchannels of WBMA-LD are rebroadcast on WDBB (channel 17), which is licensed to Bessemer. It is owned by Cunningham Broadcasting and managed by Sinclair under a local marketing agreement (LMA); however, Sinclair effectively owns WDBB, as the majority of Cunningham's stock is owned by the family of deceased group founder Julian Smith. WTTO had a tortuous history prior to starting operations. It took nearly two decades for the station to be approved and built. Once on air, the station was a successful independent for the Birmingham area. It served as the Fox affiliate for the market from 1990 to 1996, when an affiliation shuffle resulted in the loss of the affiliation. History Early history of UHF channel 21 in central Alabama The UHF channel 21 allocation in Central Alabama was originally allocated to Gadsden. The first television station in the region to occupy the allocation was WTVS, which operated during the 1950s as an affiliate of the DuMont Television Network, and was one of the earliest UHF television stations in the United States. However, it was never able to gain a viewership foothold against the region's other stations; its owners ceased the operations of WTVS in 1957, as it had suffered from severely limited viewership due to the lack of television sets in Central Alabama that were capable of receiving stations on the UHF band (electronics
https://en.wikipedia.org/wiki/Oscillator%20sync
Oscillator sync is a feature in some synthesizers with two or more VCOs, DCOs, or "virtual" oscillators. As one oscillator finishes a cycle, it resets the period of another oscillator, forcing the latter to have the same base frequency. This can produce a harmonically rich sound, the timbre of which can be altered by varying the synced oscillator's frequency. A synced oscillator that resets other oscillator(s) is called the master; the oscillators which it resets are called slaves. There are two common forms of oscillator sync which appear on synthesizers: Hard Sync and Soft Sync. According to Sound on Sound journalist Gordon Reid, oscillator sync is "one of the least understood facilities on any synthesizer". Hard Sync The leader oscillator's pitch is generated by user input (typically the synthesizer's keyboard), and is arbitrary. The follower oscillator's pitch may be tuned to (or detuned from) this frequency, or may remain constant. Every time the leader oscillator's cycle repeats, the follower is retriggered, regardless of its position. If the follower is tuned to a lower frequency than the leader it will be forced to repeat before it completes an entire cycle, and if it is tuned to a higher frequency it will be forced to repeat partway through a second or third cycle. This technique ensures that the oscillators are technically playing at the same frequency, but the irregular cycle of the follower oscillator often causes complex timbres and the impression of harmony. If the tuning of the follower oscillator is swept, one may discern a harmonic sequence. This effect may be achieved by measuring the zero axis crossings of the leader oscillator and retriggering the follower oscillator after every other crossing. This form of oscillator sync is more common than soft sync, but is prone to generating aliasing in naive digital implementations. Soft Sync There are several other kinds of sync which may also be called Soft Sync. In a Hard Sync setup, the follower os
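A minimal sketch of hard sync as described above: a follower sawtooth is reset whenever the leader oscillator completes a cycle. This is a deliberately naive, aliasing-prone implementation of the kind the text warns about; the frequencies and sample rate are arbitrary example values.

```python
# Minimal "hard sync" sketch: the follower's phase is reset every time the
# leader finishes a cycle, producing the characteristic synced-sawtooth timbre.
import math

def hard_sync_saw(leader_hz, follower_hz, sample_rate=48000, n_samples=1000):
    out = []
    leader_phase = 0.0
    follower_phase = 0.0
    for _ in range(n_samples):
        out.append(2.0 * follower_phase - 1.0)     # sawtooth in [-1, 1)
        leader_phase += leader_hz / sample_rate
        follower_phase += follower_hz / sample_rate
        if leader_phase >= 1.0:                    # leader finished a cycle:
            leader_phase -= 1.0
            follower_phase = 0.0                   # ...reset the follower
        elif follower_phase >= 1.0:
            follower_phase -= 1.0
    return out

samples = hard_sync_saw(leader_hz=220.0, follower_hz=347.0)
print(len(samples), min(samples), max(samples))
```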
https://en.wikipedia.org/wiki/JavaBeans
In computing based on the Java Platform, JavaBeans is a technology developed by Sun Microsystems and released in 1996, as part of JDK 1.1. The 'beans' of JavaBeans are classes that encapsulate one or more objects into a single standardized object (the bean). This standardization allows the beans to be handled in a more generic fashion, allowing easier code reuse and introspection. This in turn allows the beans to be treated as software components, and to be manipulated visually by editors and IDEs without needing any initial configuration, or to know any internal implementation details. As part of the standardization, all beans must be serializable, have a zero-argument constructor, and allow access to properties using getter and setter methods. Features Introspection Introspection is a process of analyzing a Bean to determine its capabilities. This is an essential feature of the Java Beans specification because it allows another application, such as a design tool, to obtain information about a component. Properties A property is a subset of a Bean's state. The values assigned to the properties determine the behaviour and appearance of that component. They are set through a setter method and can be obtained by a getter method. Customization A customizer can provide a step-by-step guide through the process that must be followed to use the component in a specific context. Events Beans may interact using the EventObject/EventListener model. Persistence Persistence is the ability to save the current state of a Bean, including the values of a Bean's properties and instance variables, to nonvolatile storage and to retrieve them at a later time. Methods A Bean should use accessor methods to encapsulate the properties. A Bean can provide other methods for business logic not related to the access to the properties. Advantages The properties, events, and methods of a bean can be exposed to another application. A bean may register to receive events from other objects and can gen
https://en.wikipedia.org/wiki/Network%20access%20server
A network access server (NAS) is a group of components that provides remote users with a point of access to a network. Overview A NAS concentrates dial-in and dial-out user communications. An access server may have a mixture of analog and digital interfaces and support hundreds of simultaneous users. A NAS consists of a communications processor that connects asynchronous devices to a LAN or WAN through network and terminal emulation software. It performs both synchronous and asynchronous routing of supported protocols. The NAS is meant to act as a gateway to guard access to a protected resource. This can be anything from a telephone network, to printers, to the Internet. A client connects to the NAS. The NAS then connects to another resource asking whether the client's supplied credentials are valid. Based on that answer the NAS then allows or disallows access to the protected resource. Examples The above translates into different implementations for different uses. Here are some examples. An Internet service provider which provides network access via common modem or modem-like devices (be it PSTN, DSL, cable or GPRS/UMTS) can have one or more NAS (network access server) devices which accept PPP, PPPoE or PPTP connections, checking credentials and recording accounting data via back-end RADIUS servers, and allowing users access through that connection. The captive portal mechanism used by many WiFi providers: a user wants to access the Internet and opens a browser. The NAS detects that the user is not currently authorized to have access to the Internet, so the NAS prompts the user for their username and password. The user supplies them and sends them back to the NAS. The NAS then uses the RADIUS protocol to connect to an AAA server and passes off the username and password. The RADIUS server searches through its resources and finds that the credentials are valid and notifies the NAS that it should grant the access. The NAS then grants the user access to the Inter
https://en.wikipedia.org/wiki/Runlevel
A runlevel is a mode of operation in computer operating systems that implement Unix System V-style initialization. Conventionally, seven runlevels exist, numbered from zero to six. S is sometimes used as a synonym for one of the levels. Only one runlevel is executed on startup; run levels are not executed one after another (i.e. only runlevel 2, 3, or 4 is executed, not more of them sequentially or in any other order). A runlevel defines the state of the machine after boot. Different runlevels are typically assigned (not necessarily in any particular order) to the single-user mode, multi-user mode without network services started, multi-user mode with network services started, system shutdown, and system reboot system states. The exact setup of these configurations varies between operating systems and Linux distributions. For example, runlevel 4 might be a multi-user GUI no-server configuration on one distribution, and nothing on another. Runlevels commonly follow the general patterns described in this article; however, some distributions employ certain specific configurations. In standard practice, when a computer enters runlevel zero, it shuts off, and when it enters runlevel six, it reboots. The intermediate runlevels (1–5) differ in terms of which drives are mounted and which network services are started. Default runlevels are typically 3, 4, or 5. Lower runlevels are useful for maintenance or emergency repairs, since they usually offer no network services at all. The particular details of runlevel configuration differ widely among operating systems, and also among system administrators. In various Linux distributions, the traditional script used in Version 7 Unix was first replaced by runlevels and then by systemd states on most major distributions. Standard runlevels Linux Although systemd is now used by default in most major Linux distributions, runlevels can still be used through the means provided by the sysvinit project. After the Linux ker
https://en.wikipedia.org/wiki/Pediatric%20endocrinology
Pediatric endocrinology (British: Paediatric) is a medical subspecialty dealing with disorders of the endocrine glands, such as variations of physical growth and sexual development in childhood, diabetes and many more. Depending upon the age range of the patients they treat, pediatric endocrinologists care for patients from infancy to late adolescence and young adulthood. The most common disease of the specialty is type 1 diabetes, which usually accounts for at least 50% of a typical clinical practice. The next most common problem is growth disorders, especially those amenable to growth hormone treatment. Pediatric endocrinologists are usually the primary physicians involved in the medical care of infants and children with intersex disorders. The specialty also deals with hypoglycemia and other forms of hyperglycemia in childhood, variations of puberty, as well as other adrenal, thyroid, and pituitary problems. Many pediatric endocrinologists have interests and expertise in bone metabolism, lipid metabolism, adolescent gynecology, or inborn errors of metabolism. In the United States and Canada, pediatric endocrinology is a subspecialty of the American Board of Pediatrics or the American Osteopathic Board of Pediatrics, with board certification following fellowship training. It is a relatively small and primarily cognitive specialty, with few procedures and an emphasis on diagnostic evaluation. Most pediatric endocrinologists in North America and many from around the world can trace their professional genealogy to Lawson Wilkins, who pioneered the specialty in the pediatrics department of Johns Hopkins School of Medicine and the Harriet Lane Home in Baltimore between the late 1940s and the mid-1960s. The principal North American professional association was originally named the Lawson Wilkins Pediatric Endocrine Society, now renamed the Pediatric Endocrine Society. Other longstanding pediatric endocrine associations include the European Society for Pae
https://en.wikipedia.org/wiki/Current%20loop
In electrical signalling an analog current loop is used where a device must be monitored or controlled remotely over a pair of conductors. Only one current level can be present at any time. A major application of current loops is the industry de facto standard 4–20 mA current loop for process control applications, where they are extensively used to carry signals from process instrumentation to proportional–integral–derivative (PID) controllers, supervisory control and data acquisition (SCADA) systems, and programmable logic controllers (PLCs). They are also used to transmit controller outputs to the modulating field devices such as control valves. These loops have the advantages of simplicity and noise immunity, and have a large international user and equipment supplier base. Some 4–20 mA field devices can be powered by the current loop itself, removing the need for separate power supplies, and the "smart" Highway Addressable Remote Transducer (HART) Protocol uses the loop for communications between field devices and controllers. Various automation protocols may replace analog current loops, but 4–20 mA is still a principal industrial standard. Process control 4–20 mA loops In industrial process control, analog 4–20 mA current loops are commonly used for electronic signalling, with the two values of 4 and 20 mA representing 0–100% of the range of measurement or control. These loops are used both for carrying sensor information from field instrumentation and carrying control signals to the process modulating devices, such as a valve. The key advantages of the current loop are: The loop can often power the remote device, with power supplied by the controller, thus removing need for power cabling. Many instrumentation manufacturers produce 4–20 mA sensors which are "loop powered". The "live" or "elevated" zero of 4 mA allows powering of the device even with no process signal output from the field transmitter. The accuracy of the signal is not affected by volt
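A minimal sketch of the 4–20 mA scaling described above: 4 mA represents 0% of the measured range and 20 mA represents 100%, so the conversion is a simple linear map. The 0–200 °C temperature range and the under-range fault threshold used here are assumptions for illustration, not values from any particular standard.

```python
# Linear scaling on a 4-20 mA loop: 4 mA -> 0% of range, 20 mA -> 100%.
def current_to_value(milliamps, lo=0.0, hi=200.0):
    """Convert a 4-20 mA loop current to an engineering value in [lo, hi]."""
    if milliamps < 3.8:               # well below 4 mA usually signals a fault
        raise ValueError("loop current under-range; possible broken loop")
    return lo + (hi - lo) * (milliamps - 4.0) / 16.0

def value_to_current(value, lo=0.0, hi=200.0):
    """Convert an engineering value back to the corresponding loop current."""
    return 4.0 + 16.0 * (value - lo) / (hi - lo)

print(current_to_value(12.0))   # 12 mA -> 100.0 (mid-scale)
print(value_to_current(50.0))   # 50 degC -> 8.0 mA
```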
https://en.wikipedia.org/wiki/Primary%20battery
A primary battery or primary cell is a battery (a galvanic cell) that is designed to be used once and discarded, and not recharged with electricity and reused like a secondary cell (rechargeable battery). In general, the electrochemical reaction occurring in the cell is not reversible, rendering the cell unrechargeable. As a primary cell is used, chemical reactions in the battery use up the chemicals that generate the power; when they are gone, the battery stops producing electricity. In contrast, in a secondary cell, the reaction can be reversed by running a current into the cell with a battery charger to recharge it, regenerating the chemical reactants. Primary cells are made in a range of standard sizes to power small household appliances such as flashlights and portable radios. Primary batteries make up about 90% of the $50 billion battery market, but secondary batteries have been gaining market share. About 15 billion primary batteries are thrown away worldwide every year, virtually all ending up in landfills. Due to the toxic heavy metals and strong acids and alkalis they contain, batteries are hazardous waste. Most municipalities classify them as such and require separate disposal. The energy needed to manufacture a battery is about 50 times greater than the energy it contains. Due to their high pollutant content compared to their small energy content, the primary battery is considered a wasteful, environmentally unfriendly technology. Due mainly to increasing sales of wireless devices and cordless tools which cannot be economically powered by primary batteries and come with integral rechargeable batteries, the secondary battery industry has high growth and has slowly been replacing the primary battery in high end products. Usage trend In the early twenty-first century, primary cells began losing market share to secondary cells, as relative costs declined for the latter. Flashlight power demands were reduced by the switch from incandescent bulbs to light-em
https://en.wikipedia.org/wiki/Performance%20indicator
A performance indicator or key performance indicator (KPI) is a type of performance measurement. KPIs evaluate the success of an organization or of a particular activity (such as projects, programs, products and other initiatives) in which it engages. KPIs provide a focus for strategic and operational improvement, create an analytical basis for decision making and help focus attention on what matters most. Often success is simply the repeated, periodic achievement of some levels of operational goal (e.g. zero defects, 10/10 customer satisfaction), and sometimes success is defined in terms of making progress toward strategic goals. Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization. What is deemed important often depends on the department measuring the performance – e.g. the KPIs useful to finance will differ from the KPIs assigned to sales. Since there is a need to understand well what is important, various techniques to assess the present state of the business, and its key activities, are associated with the selection of performance indicators. These assessments often lead to the identification of potential improvements, so performance indicators are routinely associated with 'performance improvement' initiatives. A very common way to choose KPIs is to apply a management framework such as the balanced scorecard. The importance of such performance indicators is evident in the typical decision-making process (e.g. in management of organisations). When a decision-maker considers several options, they must be equipped to properly analyse the status quo to predict the consequences of future actions. Should they make their analysis on the basis of faulty or incomplete information, the predictions will not be reliable and consequently the decision made might yield an unexpected result. Therefore, the proper usage of performance indicators is vital to avoid such mistakes and minimise the risk. Categorization o
https://en.wikipedia.org/wiki/Lichen%20sclerosus
Lichen sclerosus (LS) is a chronic, inflammatory skin disease of unknown cause which can affect any body part of any person but has a strong preference for the genitals (penis, vulva) and is also known as balanitis xerotica obliterans (BXO) when it affects the penis. Lichen sclerosus is not contagious. There is a well-documented increase of skin cancer risk in LS, potentially improvable with treatment. LS in adult age women is normally incurable, but improvable with treatment, and often gets progressively worse if not treated properly. Most males with mild or intermediate disease restricted to foreskin or glans can be cured by either medical or surgical treatment. Signs and symptoms LS can occur without symptoms. White patches on the LS body area, itching, pain, dyspareunia (in genital LS), easier bruising, cracking, tearing and peeling, and hyperkeratosis are common symptoms in both men and women. In women, the condition most commonly occurs on the vulva and around the anus with ivory-white elevations that may be flat and glistening. In males, the disease may take the form of whitish patches on the foreskin and its narrowing (preputial stenosis), forming an "indurated ring", which can make retraction more difficult or impossible (phimosis). In addition there can be lesions, white patches or reddening on the glans. In contrast to women, anal involvement is less frequent. Meatal stenosis, making it more difficult or even impossible to urinate, may also occur. On the non-genital skin, the disease may manifest as porcelain-white spots with small visible plugs inside the orifices of hair follicles or sweat glands on the surface. Thinning of the skin may also occur. Psychological effect Distress due to the discomfort and pain of lichen sclerosus is normal, as are concerns with self-esteem and sex. Counseling can help. According to the National Vulvodynia Association, which also supports women with lichen sclerosus, vulvo-vaginal conditions can cause feelings of iso
https://en.wikipedia.org/wiki/General%20Motors%20Local%20Area%20Network
General Motors Local Area Network (GMLAN) is an application- and transport-layer protocol using controller area network for lower layer services. It was standardized as SAE J2411 for use in OBD-II vehicle networks. Transport-layer services Transport-layer services include the transmission of multi-CAN-frame messages based on the ISO 15765-2 multi-frame messaging scheme. It was developed and is used primarily by General Motors for in-vehicle communication and diagnostics. GM's Tech2 uses the CANdi (Controller Area Network diagnostic interface) adapter to communicate over GMLAN. Applications Some software applications that allow interfacing to GMLAN are Intrepid Control Systems, Inc.'s Vehicle Spy 3; Vector's CANoe; Dearborn Group's Hercules, ETAS' ES-1222, ES590, ES715, and ES580; ScanTool.net's OBDLink MX; EControls by Enovation Controls' CANCapture; and GMLAN vehicle universal remote control GMRC for Android devices Tesla uses J2411 (single-wire CAN over the Control Pilot) for their DC Supercharger (newer units are also capable of PLC over the control pilot) and AC Destination Charging. References General Motors Serial buses
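As a rough illustration of the ISO 15765-2 multi-frame scheme that the transport layer above is based on, the sketch below shows only the segmentation side: payloads of up to 7 bytes go in a Single Frame, while longer payloads use a First Frame followed by Consecutive Frames. Flow-control frames, addressing modes, and CAN FD are omitted, and the framing shown is the commonly documented ISO-TP layout rather than anything GM-specific.

```python
# Simplified ISO 15765-2 (ISO-TP) segmentation over classic 8-byte CAN frames.
# Not a full implementation: no flow control, no extended addressing,
# lengths above 4095 bytes (which need an escape sequence) are not handled.
def isotp_segment(payload: bytes):
    frames = []
    if len(payload) <= 7:
        # Single Frame: high nibble 0x0, low nibble = payload length
        frames.append(bytes([len(payload)]) + payload)
    else:
        length = len(payload)
        # First Frame: 0x1 in high nibble, 12-bit length, then 6 data bytes
        frames.append(bytes([0x10 | (length >> 8), length & 0xFF]) + payload[:6])
        seq, rest = 1, payload[6:]
        while rest:
            # Consecutive Frame: 0x2 in high nibble, 4-bit sequence number
            frames.append(bytes([0x20 | (seq & 0x0F)]) + rest[:7])
            seq, rest = seq + 1, rest[7:]
    return frames

for f in isotp_segment(bytes(range(20))):
    print(f.hex())
```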
https://en.wikipedia.org/wiki/VC-1
SMPTE 421, informally known as VC-1, is a video coding format. Most of it was initially developed as Microsoft's proprietary video format Windows Media Video 9 in 2003. With some enhancements including the development of a new Advanced Profile, it was officially approved as a SMPTE standard on April 3, 2006. It was primarily marketed as a lower-complexity competitor to the H.264/MPEG-4 AVC standard. After its development, several companies other than Microsoft asserted that they held patents that applied to the technology, including Panasonic, LG Electronics and Samsung Electronics. VC-1 is supported in the now-deprecated Microsoft Silverlight, the briefly-offered HD DVD disc format, and the Blu-ray Disc format. Format VC-1 is an evolution of the conventional block-based motion-compensated hybrid video coding design also found in H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, and MPEG-4 Part 2. It was widely characterized as an alternative to the ITU-T and MPEG video codec standard known as H.264/MPEG-4 AVC. The Advanced Profile of VC-1 contains tools designed for coding interlaced video sequences as well as progressive scan video. The main goal of the development and standardization of the VC-1 Advanced Profile was to support interlace-optimized compression of interlaced content without first converting it to progressive scan, making it more attractive to broadcast and video industry professionals using the 1080i format. Both HD DVD and Blu-ray Disc adopted VC-1 as a supported video format, meaning their video playback devices are required to be capable of decoding and playing video-content compressed using VC-1. Windows Vista partially supports HD DVD playback by including the VC-1 decoder and some related components needed for playback of VC-1 encoded HD DVD movies. Microsoft designated VC-1 as the Xbox 360 video game console's official video format, and game developers could use VC-1 for full motion video included with games. By means of an October 31, 20
https://en.wikipedia.org/wiki/Aurora%20programme
The Aurora programme (sometimes called Aurora Exploration Programme, or simply Exploration Programme) was a human spaceflight programme of the European Space Agency (ESA) established in 2001. The objective was to formulate and then to implement a European long-term plan for exploration of the Solar System using robotic spacecraft and human spaceflight to investigate bodies holding promise for traces of life beyond the Earth. Overview Member states commit to participation in the Aurora programme for five-year periods, after which they can change their level of participation or pull out entirely. In the early years the Aurora programme planned for flagship missions and arrow missions for key technology demonstrations, such as Earth re-entry vehicle/capsule and Mars aerocapture demonstrator. Although human spaceflight has remained a long-term goal of the programme, with some basic technology development in this area, the thrust has been on implementation of the ExoMars mission and preparations for an international Mars sample return mission. The Aurora programme was a response to Europe's Strategy for space which was endorsed by European Union Council of Research and the ESA Council. Europe strategy for space had three main points including:"explore the solar system and the Universe", "stimulate new technology", and "inspire the young people of Europe to take a greater interest in science and technology". One of the foundational principles of the Aurora program is recognising the interdependence of technology and exploration;. Missions The first decade is planned to focus on robotic missions. Flagship missions ESA describes some Aurora programme missions as "Flagship" missions. The first Flagship mission is ExoMars, a dual robotic mission to Mars made in cooperation with the Russian Federal Space Agency (Roskosmos). It will involve development of a Mars orbiter (ExoMars Trace Gas Orbiter), a technology demonstrator descent module (Schiaparelli lander) and the Ro
https://en.wikipedia.org/wiki/Copper%28II%29%20oxide
Copper(II) oxide or cupric oxide is an inorganic compound with the formula CuO. A black solid, it is one of the two stable oxides of copper, the other being Cu2O or copper(I) oxide (cuprous oxide). As a mineral, it is known as tenorite. It is a product of copper mining and the precursor to many other copper-containing products and chemical compounds. Production It is produced on a large scale by pyrometallurgy, as one stage in extracting copper from its ores. The ores are treated with an aqueous mixture of ammonium carbonate, ammonia, and oxygen to give copper(I) and copper(II) ammine complexes, which are extracted from the solids. These complexes are decomposed with steam to give CuO. It can be formed by heating copper in air at around 300–800°C: 2 Cu + O2 → 2 CuO For laboratory uses, pure copper(II) oxide is better prepared by heating copper(II) nitrate, copper(II) hydroxide, or basic copper(II) carbonate: 2 Cu(NO3)2(s) → 2 CuO(s) + 4 NO2(g) + O2(g) (180°C) Cu2(OH)2CO3(s) → 2 CuO(s) + CO2(g) + H2O(g) Cu(OH)2(s) → CuO(s) + H2O(g) Reactions Copper(II) oxide dissolves in mineral acids such as hydrochloric acid, sulfuric acid or nitric acid to give the corresponding copper(II) salts: CuO + 2 HNO3 → Cu(NO3)2 + H2O CuO + 2 HCl → CuCl2 + H2O CuO + H2SO4 → CuSO4 + H2O In the presence of water, it reacts with concentrated alkali to form the corresponding cuprate salts: 2 MOH + CuO + H2O → M2[Cu(OH)4] 2 NaOH + CuO + H2O → Na2[Cu(OH)4] It can also be reduced to copper metal using hydrogen, carbon monoxide, or carbon: CuO + H2 → Cu + H2O CuO + CO → Cu + CO2 2 CuO + C → 2 Cu + CO2 When cupric oxide is substituted for iron oxide in thermite the resulting mixture is a low explosive, not an incendiary. Structure and physical properties Copper(II) oxide belongs to the monoclinic crystal system. The copper atom is coordinated by 4 oxygen atoms in an approximately square planar configuration. The work function of bulk CuO is 5.3 eV. Uses As a signi
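As a worked example of the copper(II) nitrate route above, 2 Cu(NO3)2 → 2 CuO + 4 NO2 + O2, the sketch below converts a mass of anhydrous Cu(NO3)2 into the theoretical mass of CuO using the 1:1 molar ratio. The molar masses are rounded textbook values and the 10 g input is an arbitrary example.

```python
# Stoichiometry sketch for 2 Cu(NO3)2 -> 2 CuO + 4 NO2 + O2 (1:1 Cu(NO3)2:CuO).
M_CU_NITRATE = 187.56   # g/mol, anhydrous Cu(NO3)2 (rounded)
M_CUO = 79.55           # g/mol, CuO (rounded)

def cuo_yield(grams_cu_nitrate: float) -> float:
    """Theoretical mass of CuO from decomposing anhydrous Cu(NO3)2."""
    moles = grams_cu_nitrate / M_CU_NITRATE
    return moles * M_CUO

print(round(cuo_yield(10.0), 2))   # about 4.24 g of CuO from 10 g of Cu(NO3)2
```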
https://en.wikipedia.org/wiki/Voodoo%205
The Voodoo 5 was the last and most powerful graphics card line that 3dfx Interactive released. All members of the family were based upon the VSA-100 graphics processor. Only the single-chip Voodoo 4 4500 and dual-chip Voodoo 5 5500 made it to market. Architecture and performance The VSA-100 graphics chip is a direct descendant of "Avenger", more commonly known as Voodoo3. It was built on a 250 nm semiconductor manufacturing process, as with Voodoo3. However, the process was tweaked with a sixth metal layer to allow for better density and speed, and the transistors have a slightly shorter gate length and thinner gate oxide. VSA-100 has a transistor count of roughly 14 million, compared to Voodoo3's ~8 million. The chip has a larger texture cache than its predecessors and the data paths are 32 bits wide rather than 16-bit. Rendering calculations are 40 bits wide in VSA-100 but the operands and results are stored as 32-bit. One of the design goals for the VSA-100 was scalability. The name of the chip is an abbreviation for "Voodoo Scalable Architecture." By using one or more VSA-100 chips on a board, the various market segments for graphics cards are satisfied with just a single graphics chip design. Theoretically, anywhere from 1 to 32 VSA-100 GPUs could be run in parallel on a single graphics card, and the fillrate of the card would increase proportionally. On cards with more than one VSA-100, the chips are linked using 3dfx's Scan-Line Interleave (SLI) technology. A major drawback to this method of performance scaling is that various parts of hardware are needlessly duplicated on the cards and board complexity increases with each additional processor. 3dfx changed the rendering pipeline from one pixel pipeline with twin texture mapping units (Voodoo2/3) to a dual pixel pipeline design with one texture mapping unit on each. This design, commonly referred to as a 2×1 configuration, has an advantage over the prior 1×2 design with the ability to always output 2 pix
https://en.wikipedia.org/wiki/Zech%27s%20logarithm
Zech logarithms are used to implement addition in finite fields when elements are represented as powers of a generator . Zech logarithms are named after Julius Zech, and are also called Jacobi logarithms, after Carl G. J. Jacobi who used them for number theoretic investigations. Definition Given a primitive element of a finite field, the Zech logarithm relative to the base is defined by the equation which is often rewritten as The choice of base is usually dropped from the notation when it is clear from the context. To be more precise, is a function on the integers modulo the multiplicative order of , and takes values in the same set. In order to describe every element, it is convenient to formally add a new symbol , along with the definitions where is an integer satisfying , that is for a field of characteristic 2, and for a field of odd characteristic with elements. Using the Zech logarithm, finite field arithmetic can be done in the exponential representation: These formulas remain true with our conventions with the symbol , with the caveat that subtraction of is undefined. In particular, the addition and subtraction formulas need to treat as a special case. This can be extended to arithmetic of the projective line by introducing another symbol satisfying and other rules as appropriate. For fields of characteristic two, . Uses For sufficiently small finite fields, a table of Zech logarithms allows an especially efficient implementation of all finite field arithmetic in terms of a small number of integer addition/subtractions and table look-ups. The utility of this method diminishes for large fields where one cannot efficiently store the table. This method is also inefficient when doing very few operations in the finite field, because one spends more time computing the table than one does in actual calculation. Examples Let be a root of the primitive polynomial . The traditional representation of elements of this field is as polynom
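As a concrete illustration of the table-based arithmetic described above, the sketch below builds the power table and the Zech-logarithm table for GF(2⁴) using the primitive polynomial x⁴ + x + 1; that polynomial is chosen for this example, since the article's own polynomial is not reproduced in the text. Addition of αᵐ and αⁿ is then computed in the exponential representation as α^(m + Z(n − m)), working modulo 2⁴ − 1 = 15.

```python
# Zech-logarithm sketch for GF(2^4) with primitive polynomial x^4 + x + 1.
POLY, ORDER = 0b10011, 15          # x^4 + x + 1; multiplicative order of alpha

# power table: alpha^i as a 4-bit integer in the polynomial basis
power = [1]
for _ in range(ORDER - 1):
    v = power[-1] << 1             # multiply by alpha
    if v & 0b10000:
        v ^= POLY                  # reduce modulo the primitive polynomial
    power.append(v)
log = {v: i for i, v in enumerate(power)}

# Zech table: Z(n) with alpha^Z(n) = 1 + alpha^n (None encodes "minus infinity")
zech = {n: log.get(1 ^ power[n]) for n in range(ORDER)}

def add_exponents(m, n):
    """Return e with alpha^e = alpha^m + alpha^n, or None if the sum is zero."""
    if m == n:
        return None                # characteristic 2: x + x = 0
    z = zech[(n - m) % ORDER]
    return None if z is None else (m + z) % ORDER

e = add_exponents(3, 5)
print(e, (power[3] ^ power[5]) == power[e])   # consistency check: prints 11 True
```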
https://en.wikipedia.org/wiki/Elongated%20triangular%20pyramid
In geometry, the elongated triangular pyramid is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a tetrahedron by attaching a triangular prism to its base. Like any elongated pyramid, the resulting solid is topologically (but not geometrically) self-dual. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: The height is given by If the edges are not the same length, use the individual formulae for the tetrahedron and triangular prism separately, and add the results together. Dual polyhedron Topologically, the elongated triangular pyramid is its own dual. Geometrically, the dual has seven irregular faces: one equilateral triangle, three isosceles triangles and three isosceles trapezoids. Related polyhedra and honeycombs The elongated triangular pyramid can form a tessellation of space with square pyramids and/or octahedra. References External links Johnson solids Self-dual polyhedra Pyramids and bipyramids
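Following the additive approach just described, a minimal sketch for the unit-edge case adds the volume of a regular tetrahedron to that of a triangular prism of the same edge length; the closed-form expressions used are the standard ones for those two component solids.

```python
# Volume of an elongated triangular pyramid with unit edges, computed as
# (regular tetrahedron) + (triangular prism with equilateral cross-section).
import math

def tetrahedron_volume(a: float) -> float:
    return a ** 3 / (6 * math.sqrt(2))

def triangular_prism_volume(a: float, height: float) -> float:
    return (math.sqrt(3) / 4) * a ** 2 * height

a = 1.0
volume = tetrahedron_volume(a) + triangular_prism_volume(a, a)
print(round(volume, 6))   # ~0.550864 for unit edge length
```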
https://en.wikipedia.org/wiki/Elongated%20square%20pyramid
In geometry, the elongated square pyramid is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a square pyramid () by attaching a cube to its square base. Like any elongated pyramid, it is topologically (but not geometrically) self-dual. Formulae The following formulae for the height (), surface area () and volume () can be used if all faces are regular, with edge length : Dual polyhedron The dual of the elongated square pyramid has 9 faces: 4 triangular, 1 square and 4 trapezoidal. Related polyhedra and honeycombs The elongated square pyramid can form a tessellation of space with tetrahedra, similar to a modified tetrahedral-octahedral honeycomb. See also Elongated square bipyramid References External links Johnson solids Self-dual polyhedra Pyramids and bipyramids
https://en.wikipedia.org/wiki/Elongated%20triangular%20bipyramid
In geometry, the elongated triangular bipyramid (or dipyramid) or triakis triangular prism is one of the Johnson solids (), convex polyhedra whose faces are regular polygons. As the name suggests, it can be constructed by elongating a triangular bipyramid () by inserting a triangular prism between its congruent halves. The nirrosula, an African musical instrument woven out of strips of plant leaves, is made in the form of a series of elongated bipyramids with non-equilateral triangles as the faces of their end caps. Formulae The following formulae for volume (), surface area () and height () can be used if all faces are regular, with edge length a: Dual polyhedron The dual of the elongated triangular bipyramid is called a triangular bifrustum and has 8 faces: 6 trapezoidal and 2 triangular. References External links Johnson solids Pyramids and bipyramids
https://en.wikipedia.org/wiki/Elongated%20square%20bipyramid
In geometry, the elongated square bipyramid (or elongated octahedron) is one of the Johnson solids (). As the name suggests, it can be constructed by elongating an octahedron by inserting a cube between its congruent halves. It has been named the pencil cube or 12-faced pencil cube due to its shape. A zircon crystal is an example of an elongated square bipyramid. Formulae The following formulae for volume (), surface area () and height () can be used if all faces are regular, with edge length : Dual polyhedron The dual of the elongated square bipyramid is called a square bifrustum and has 10 faces: 8 trapezoidal and 2 square. Related polyhedra and honeycombs A special kind of elongated square bipyramid without all regular faces allows a self-tessellation of Euclidean space. The triangles of this elongated square bipyramid are not regular; they have edges in the ratio 2::. It can be considered a transitional phase between the cubic and rhombic dodecahedral honeycombs. The cells are here colored white, red, and blue based on their orientation in space. The square pyramid caps have shortened isosceles triangle faces, with six of these pyramids meeting together to form a cube. The dual of this honeycomb is composed of two kinds of octahedra (regular octahedra and triangular antiprisms), formed by superimposing octahedra into the cuboctahedra of the rectified cubic honeycomb. Both honeycombs have a symmetry of [[4,3,4]]. Cross-sections of the honeycomb, through cell centers produces a chamfered square tiling, with flattened horizontal and vertical hexagons, and squares on the perpendicular polyhedra. With regular faces, the elongated square bipyramid can form a tessellation of space with tetrahedra and octahedra. (The octahedra can be further decomposed into square pyramids.) This honeycomb can be considered an elongated version of the tetrahedral-octahedral honeycomb. See also Elongated square pyramid References External links Johnson solids Pyramid
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20bipyramid
In geometry, the elongated pentagonal bipyramid or pentakis pentagonal prism is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal bipyramid () by inserting a pentagonal prism between its congruent halves. Dual polyhedron The dual of the elongated pentagonal bipyramid is a pentagonal bifrustum. See also Elongated pentagonal pyramid External links Johnson solids
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20cupola
In geometry, the elongated pentagonal cupola is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal cupola () by attaching a decagonal prism to its base. The solid can also be seen as an elongated pentagonal orthobicupola () with its "lid" (another pentagonal cupola) removed. Formulas The following formulas for the volume and surface area can be used if all faces are regular, with edge length a: Dual polyhedron The dual of the elongated pentagonal cupola has 25 faces: 10 isosceles triangles, 5 kites, and 10 quadrilaterals. References External links Johnson solids
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20cupola
In geometry, the gyroelongated pentagonal cupola is one of the Johnson solids (J24). As the name suggests, it can be constructed by gyroelongating a pentagonal cupola (J5) by attaching a decagonal antiprism to its base. It can also be seen as a gyroelongated pentagonal bicupola (J46) with one pentagonal cupola removed. Area and Volume With edge length a, the surface area is and the volume is Dual polyhedron The dual of the gyroelongated pentagonal cupola has 25 faces: 10 kites, 5 rhombi, and 10 pentagons. External links Johnson solids
https://en.wikipedia.org/wiki/Gyrobifastigium
In geometry, the gyrobifastigium is the 26th Johnson solid (). It can be constructed by joining two face-regular triangular prisms along corresponding square faces, giving a quarter-turn to one prism. It is the only Johnson solid that can tile three-dimensional space. It is also the vertex figure of the nonuniform duoantiprism (if and are greater than 2). Despite the fact that would yield a geometrically identical equivalent to the Johnson solid, it lacks a circumscribed sphere that touches all vertices, except for the case which represents a uniform great duoantiprism. Its dual, the elongated tetragonal disphenoid, can be found as cells of the duals of the duoantiprisms. History and name The name of the gyrobifastigium comes from the Latin fastigium, meaning a sloping roof. In the standard naming convention of the Johnson solids, bi- means two solids connected at their bases, and gyro- means the two halves are twisted with respect to each other. The gyrobifastigium's place in the list of Johnson solids, immediately before the bicupolas, is explained by viewing it as a digonal gyrobicupola. Just as the other regular cupolas have an alternating sequence of squares and triangles surrounding a single polygon at the top (triangle, square or pentagon), each half of the gyrobifastigium consists of just alternating squares and triangles, connected at the top only by a ridge. Honeycomb The gyrated triangular prismatic honeycomb can be constructed by packing together large numbers of identical gyrobifastigiums. The gyrobifastigium is one of five convex polyhedra with regular faces capable of space-filling (the others being the cube, truncated octahedron, triangular prism, and hexagonal prism) and it is the only Johnson solid capable of doing so. Cartesian coordinates Cartesian coordinates for the gyrobifastigium with regular faces and unit edge lengths may easily be derived from the formula of the height of unit edge length as follows: To calculate formulae f
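Since the coordinate derivation above is cut off, the sketch below simply uses one convenient unit-edge placement, which is an assumption of this example rather than a quotation of the article's formulae: the shared square in the z = 0 plane and the two prism ridges at heights ±√3/2, with the lower ridge given the quarter-turn. The check confirms that the shortest vertex-to-vertex distance equals the edge length.

```python
# One convenient unit-edge coordinate set for a gyrobifastigium, plus a check
# that the minimum pairwise vertex distance is the edge length 1.
import itertools, math

h = math.sqrt(3) / 2
vertices = [(sx * 0.5, sy * 0.5, 0.0) for sx in (-1, 1) for sy in (-1, 1)]  # shared square
vertices += [(sx * 0.5, 0.0, h) for sx in (-1, 1)]       # top ridge along x
vertices += [(0.0, sy * 0.5, -h) for sy in (-1, 1)]      # bottom ridge along y (quarter-turn)

dists = {round(math.dist(p, q), 9) for p, q in itertools.combinations(vertices, 2)}
print(len(vertices), min(dists))    # 8 vertices, shortest distance 1.0
```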
https://en.wikipedia.org/wiki/Pentagonal%20orthobicupola
In geometry, the pentagonal orthobicupola is one of the Johnson solids (). As the name suggests, it can be constructed by joining two pentagonal cupolae () along their decagonal bases, matching like faces. A 36-degree rotation of one cupola before the joining yields a pentagonal gyrobicupola (). The pentagonal orthobicupola is the third in an infinite set of orthobicupolae. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Pentagonal%20gyrobicupola
In geometry, the pentagonal gyrobicupola is one of the Johnson solids (). Like the pentagonal orthobicupola (), it can be obtained by joining two pentagonal cupolae () along their bases. The difference is that in this solid, the two halves are rotated 36 degrees with respect to one another. The pentagonal gyrobicupola is the third in an infinite set of gyrobicupolae. The pentagonal gyrobicupola can be obtained by taking a rhombicosidodecahedron, removing the middle parabidiminished rhombicosidodecahedron (), and joining the two opposing cupolae back together. Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20orthobicupola
In geometry, the elongated pentagonal orthobicupola or cantellated pentagonal prism is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal orthobicupola () by inserting a decagonal prism between its two congruent halves. Rotating one of the cupolae through 36 degrees before inserting the prism yields an elongated pentagonal gyrobicupola (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20gyrobicupola
In geometry, the elongated pentagonal gyrobicupola is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal gyrobicupola () by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal cupolae () through 36 degrees before inserting the prism yields an elongated pentagonal orthobicupola (). Formulae The following formulae for volume and surface area can be used if all faces are regular, with edge length a: References External links Johnson solids
https://en.wikipedia.org/wiki/Augmented%20triangular%20prism
In geometry, the augmented triangular prism is one of the Johnson solids (). As the name suggests, it can be constructed by augmenting a triangular prism by attaching a square pyramid () to one of its equatorial faces. The resulting solid bears a superficial resemblance to the gyrobifastigium (), the difference being that the latter is constructed by attaching a second triangular prism, rather than a square pyramid. It is also the vertex figure of the nonuniform duoantiprism (if ). Despite the fact that would yield a geometrically identical equivalent to the Johnson solid, it lacks a circumscribed sphere that touches all vertices. Its dual, a triangular bipyramid with one of its 4-valence vertices truncated, can be found as cells of the duoantitegums (duals of the duoantiprisms). External links Johnson solids
https://en.wikipedia.org/wiki/Biaugmented%20triangular%20prism
In geometry, the biaugmented triangular prism is one of the Johnson solids (). As the name suggests, it can be constructed by augmenting a triangular prism by attaching square pyramids () to two of its equatorial faces. It is related to the augmented triangular prism () and the triaugmented triangular prism (). External links Johnson solids