https://en.wikipedia.org/wiki/192%20%28number%29
192 (one hundred [and] ninety-two) is the natural number following 191 and preceding 193. In mathematics 192 has the prime factorization $2^6 \times 3$. Because it has so many small prime factors, it is the smallest number with exactly 14 divisors, namely 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, and 192 itself. Because its only prime factors are 2 and 3, it is a 3-smooth number. 192 is the sum of ten consecutive primes (5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37). 192 is a Leyland number of the second kind, since $192 = 2^8 - 8^2$. See also 192 (disambiguation) References Integers
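A quick brute-force check of these arithmetic claims (an illustrative sketch, not part of the article; plain Python, no libraries assumed):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# 14 divisors, and no smaller number has exactly 14
assert len(divisors(192)) == 14
assert all(len(divisors(m)) != 14 for m in range(1, 192))
# sum of ten consecutive primes
assert sum([5, 7, 11, 13, 17, 19, 23, 29, 31, 37]) == 192
# Leyland number of the second kind: x**y - y**x with x = 2, y = 8
assert 2**8 - 8**2 == 192
print("all properties verified")
```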
https://en.wikipedia.org/wiki/Maldivian%20writing%20systems
Several Dhivehi scripts have been used by Maldivians during their history. The early Dhivehi scripts fell into the abugida category, while the more recent Thaana has characteristics of both an abugida and a true alphabet. An ancient form of Nagari script, as well as the Arabic and Devanagari scripts, have also been extensively used in the Maldives, but with a more restricted function. Latin was official only during a very brief period of the islands' history. The first Dhivehi script likely appeared in association with the expansion of Buddhism throughout South Asia. This was over two millennia ago, in the Mauryan period, during emperor Ashoka's time. Manuscripts used by Maldivian Buddhist monks were probably written in a script that slowly evolved into a characteristic Dhivehi form. Few of those ancient documents have been discovered, and the early forms of the Maldivian script are found only etched on a few coral rocks and copper plates. Ancient scripts (Evēla Akuru) Dhivehi Akuru "island letters" is a script formerly used to write the Dhivehi language. Unlike the modern Thaana script, Dhivehi Akuru has its origins in the Brahmi script and thus was written from left to right. Dhivehi Akuru was separated into two variants, a more recent and an ancient one, christened "Dives Akuru" and "Evēla Akuru" respectively by Harry Charles Purvis Bell in the early 20th century. Bell was British and studied Maldivian epigraphy after he retired from the colonial government service in Colombo. Bell wrote a monograph on the archaeology, history and epigraphy of the Maldives. He was the first modern scholar to study these ancient writings, and he undertook extensive and serious research on the available epigraphy. The division that Bell made, based on the differences he perceived between the two types of Dhivehi scripts, is convenient for the study of old Dhivehi documents. Dives Akuru developed from Brahmi. The oldest attested inscription bears a clear resemblance to South
https://en.wikipedia.org/wiki/Ovi%20%28magazine%29
Ovi (meaning Door in English) is a multilingual non-profit daily publication that carries articles about ideas and opinion. It is based in Helsinki. History and profile Launched in December 2004 by two immigrants to Finland, Asa Butcher and Thanos Kalamidas, Ovi carries contributions on society, politics and culture in a number of different languages. In 2006 Ovi was chosen as a Uranus.fi Success Story. Since 4 September 2006 the site has had daily updates and continues to cover various global issues, including discrimination, inequality, poverty, human rights and children's rights. In January 2007, Ovi came second in Newropeans Magazine's Grands Prix 2006 awards, having been nominated as one of the three finalists in its 'Citizenship - Information' section. The awards recognise people active in the democratisation of the EU. A registered jury of around 1,000 people voted online, awarding Ovi 29% of the vote. See also List of Finnish magazines References External links Ovi magazine website 2004 establishments in Finland Cultural magazines Magazines established in 2004 Magazines published in Helsinki Online magazines Political magazines published in Finland
https://en.wikipedia.org/wiki/Microsoft%20Software%20Assurance
Microsoft Software Assurance (SA) is a Microsoft maintenance program aimed at business users who use Microsoft Windows, Microsoft Office, and other server and desktop applications. The core premise behind SA is to give users the ability to spread payments over several years, while offering "free" upgrades to newer versions during that time period. Overview Microsoft differentiates License and Software Assurance. Customers may purchase (depending on the program) a license without Software Assurance, Software Assurance only (but only to be used in combination with an existing license), or both a license and Software Assurance together. The three possibilities are not always available, depending on the program (single license or volume license). Features The full list of benefits, effective March 2006, is as follows: Free upgrades: Subscribers may upgrade to newer versions of their Microsoft software Access to exclusive software products: Windows Fundamentals for Legacy PCs, Windows Vista Enterprise Edition, Windows 7 Enterprise Edition, Windows 8 Enterprise Edition and Microsoft Desktop Optimization Pack are only available to Software Assurance customers Training: Free training from Microsoft and access to Microsoft E-Learning, a series of interactive online training tools for users. This training can only be taken at a Microsoft Certified Partner for Learning Solutions and can only be redeemed for training that is categorized as Microsoft Official Curriculum. Home use: Employees of a company with SA can use an additional copy of Microsoft software Access to source code for larger companies (1,500+ desktops) 24x7 telephone and web support Additional error reporting tools Free licenses for additional servers provisioned as "Cold backups" of live servers Access to Microsoft TechNet managed newsgroups Access to Microsoft TechNet downloads for 1 user Extended Hotfix support: Typically Microsoft charges for non-security hotfixes after mainstream support for
https://en.wikipedia.org/wiki/Daedalian%20Opus
Daedalian Opus is a puzzle game for the Game Boy and was released in July 1990. Gameplay The game is essentially a series of 36 jigsaw puzzles with pentominoes that must be assembled into a specific shape. The puzzles start off with rectangular shapes and simple solutions, but the puzzles quickly grow more complex, with odder shapes like a rocket ship, a gun, and even enlarged versions of some of the pentominoes themselves. Each level is timed, and once the timer is started it cannot be stopped until the level is finished. The player starts the game with only three pentomino pieces, and at the completion of each early level, a new piece is awarded. At the final level, the player is given the 2x2 square O tetromino and must complete an 8x8 square puzzle. After completing each level, the player is given a password to access that level at a later time. Each password is a common English four-letter word, so that by guessing common four-letter words, players could potentially access levels they have not actually reached by playing the game. Development and ports The name of the game was inspired by Daedalus, the mythical character of Greek legend who created the labyrinth. A faithful fan version was later coded for the MSX computer system by Karoshi Corporation in 2006 for the game development contest MSXdev'06. The game has been ported to different platforms, such as PC and GP2X. References Daedalian Opus at GameFAQs 1990 video games Game Boy games GP2X games MSX games Puzzle video games Vic Tokai games Video games developed in Japan Windows games
https://en.wikipedia.org/wiki/Porosimetry
Porosimetry is an analytical technique used to determine various quantifiable aspects of a material's porous structure, such as pore diameter, total pore volume, surface area, and bulk and absolute densities. The technique involves the intrusion of a non-wetting liquid (often mercury) at high pressure into a material through the use of a porosimeter. The pore size can be determined based on the external pressure needed to force the liquid into a pore against the opposing force of the liquid's surface tension. A force balance equation known as Washburn's equation for the above material having cylindrical pores is given as:

$$P_L - P_G = \frac{-4\sigma \cos\theta}{D_P}$$

where $P_L$ = pressure of liquid, $P_G$ = pressure of gas, $\sigma$ = surface tension of liquid, $\theta$ = contact angle of intrusion liquid, and $D_P$ = pore diameter.

Since the technique is usually performed within a vacuum, the initial gas pressure is zero. The contact angle of mercury with most solids is between 135° and 142°, so an average of 140° can be taken without much error. The surface tension of mercury at 20 °C under vacuum is 480 mN/m. With the various substitutions, the equation becomes:

$$D_P = \frac{1470\ \text{kPa} \cdot \mu\text{m}}{P_L}$$

(the pore diameter in micrometres for an intrusion pressure in kilopascals). As pressure increases, so does the cumulative pore volume. From the cumulative pore volume, one can find the pressure and pore diameter where 50% of the total volume has been added to give the median pore diameter. See also BET theory, measurement of specific surface Evapoporometry Porosity Wood's metal, also injected for pore structure impregnation and replica References Measurement Scientific techniques Porous media
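A small sketch of the substituted relation (the function name and defaults are ours; the constants are those quoted above):

```python
import math

def pore_diameter_um(pressure_kpa,
                     surface_tension_n_per_m=0.480,  # mercury under vacuum at 20 degC
                     contact_angle_deg=140.0):       # average mercury contact angle
    """Washburn equation for cylindrical pores with gas pressure ~ 0 (vacuum).

    Returns the pore diameter in micrometres intruded at the given
    absolute mercury pressure in kilopascals.
    """
    sigma = surface_tension_n_per_m              # N/m == Pa*m
    theta = math.radians(contact_angle_deg)
    d_m = -4.0 * sigma * math.cos(theta) / (pressure_kpa * 1e3)  # metres
    return d_m * 1e6                             # micrometres

print(pore_diameter_um(1470.0))  # ~1.0 um at ~1470 kPa
```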
https://en.wikipedia.org/wiki/Glossary%20of%20Unified%20Modeling%20Language%20terms
Glossary of Unified Modeling Language (UML) terms provides a compilation of terminology used in all versions of UML, along with their definitions. Any notable distinctions that may exist between versions are noted with the individual entry it applies to. A Abstract - An indicator applied to a classifier (e.g., actor, class, use case) or to some features of a classifier (e.g., a class's operations) showing that the feature is incomplete and is intended not to be instantiated, but to be specialized by other definitions. Abstract class - A class that does not provide a complete declaration, perhaps because it has no implementation method identified for an operation. By declaring a class as abstract, one intends to prohibit direct instantiation of the class. An abstract class cannot directly instantiate objects; it must be inherited from before it can be used. Abstract data type Abstract operation - Unlike attributes, class operations can be abstract, meaning that there is no provided implementation. Generally, a class containing an abstract operation should be marked as an abstract class. An Operation must have a method supplied in some specialized Class before it can be used. Abstraction is the process of picking out common features and deriving essential characteristics from objects and procedural entities that distinguish them from other kinds of entities. Action - An action is the fundamental unit of behaviour specification and represents some transformation or processing in the modeled system, such as invoking a method of a class or a sub-activity Action sequence - Action state - Action steps - Activation - the time during which an object has a method executing. It is often indicated by a thin box or bar superimposed on the Object's lifeline in a Sequence Diagram Activity diagram - a diagram that describes procedural logic, business process or work flow. An activity diagram contains a number of Activities connected by Control Flows and Object Flo
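As an illustration of the abstract class and abstract operation entries above, a minimal Python sketch (UML itself is language-neutral; the class names here are ours):

```python
from abc import ABC, abstractmethod

class Shape(ABC):                    # abstract class: not intended to be instantiated
    @abstractmethod
    def area(self) -> float:         # abstract operation: no implementation provided
        ...

class Circle(Shape):                 # specialization supplies the method
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.141592653589793 * self.radius ** 2

# Shape() raises TypeError; Circle(2.0).area() returns ~12.566
```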
https://en.wikipedia.org/wiki/Motorboating%20%28electronics%29
In electronics, motorboating is a type of low frequency parasitic oscillation (unwanted cyclic variation of the output voltage) that sometimes occurs in audio and radio equipment and often manifests itself as a sound similar to an idling motorboat engine, a "put-put-put", in audio output from speakers or earphones. It is a problem encountered particularly in radio transceivers, older vacuum tube audio systems, guitar amplifiers, and PA systems, and is caused by some type of unwanted feedback in the circuit. The amplifying devices in audio and radio equipment are vulnerable to a variety of feedback problems, which can cause distinctive noise in the output. The term motorboating is applied to oscillations whose frequency is below the range of hearing, from 1 to 10 hertz, so the individual oscillations are heard as pulses. Sometimes the oscillations can even be seen visually as the woofer cones in speakers slowly moving in and out. Besides sounding annoying, motorboating can cause clipping of the audio output waveform, and thus distortion in the output. Occurrence Although low frequency parasitic oscillations in audio equipment may be due to a range of causes, there are a few types of equipment in which it is frequently seen: Older audio amplifiers with capacitive (RC) or inductive (transformer) coupling between stages. This design is mostly used in vacuum tube (valve) equipment. Motorboating was a problem throughout the era of vacuum tube electronics but became rare as vacuum tube gear was replaced in the 1970s with modern solid state designs, which are direct-coupled. The recent resurgence in popularity of traditional tube-type audio equipment in guitar amplifiers and home audio systems has led to a reappearance of motorboating problems. The problem is sometimes caused in older equipment by the evaporation of the electrolyte from old-style "wet" electrolytic capacitors used in the power circuits of legacy equipment, or in equipment of any age where an ampli
https://en.wikipedia.org/wiki/Blotto%20%28biology%29
In biology, BLOTTO is a blocking reagent made from nonfat dry milk, phosphate buffered saline, and sodium azide. Its name is an almost-acronym of bovine lacto transfer technique optimizer. It constitutes an inexpensive source of nonspecific protein (milk casein) which blocks protein binding sites in a variety of experimental paradigms, notably Southern blots, Western blots, and ELISA. Its use was first reported in 1984 by Johnson and Elder's lab at Scripps. Prior to 1984, partially purified proteins such as bovine serum albumin, ovalbumin, or gelatin from various species had been used as blocking reagents but had the disadvantage of being expensive. References Immunology
https://en.wikipedia.org/wiki/Invention%20of%20radio
The invention of radio communication was preceded by many decades of establishing theoretical underpinnings, discovery and experimental investigation of radio waves, and engineering and technical developments related to their transmission and detection. These developments allowed Guglielmo Marconi to turn radio waves into a wireless communication system. The idea that the wires needed for electrical telegraph could be eliminated, creating a wireless telegraph, had been around for a while before the establishment of radio-based communication. Inventors attempted to build systems based on electric conduction, electromagnetic induction, or on other theoretical ideas. Several inventors/experimenters came across the phenomenon of radio waves before its existence was proven; it was written off as electromagnetic induction at the time. The discovery of electromagnetic waves, including radio waves, by Heinrich Rudolf Hertz in the 1880s came after theoretical development on the connection between electricity and magnetism that started in the early 1800s. This work culminated in a theory of electromagnetic radiation developed by James Clerk Maxwell by 1873, which Hertz demonstrated experimentally. Hertz considered electromagnetic waves to be of little practical value. Other experimenters, such as Oliver Lodge and Jagadish Chandra Bose, explored the physical properties of electromagnetic waves, and they developed electric devices and methods to improve the transmission and detection of electromagnetic waves. But they did not apparently see the value in developing a communication system based on electromagnetic waves. In the mid-1890s, building on techniques physicists were using to study electromagnetic waves, Guglielmo Marconi developed the first apparatus for long-distance radio communication. On 23 December 1900, the Canadian inventor Reginald A. Fessenden became the first person to send audio (wireless telephony) by means of electromagnetic waves, successfully transmitt
https://en.wikipedia.org/wiki/Inverse%20tangent%20integral
The inverse tangent integral is a special function, defined by:

$$\operatorname{Ti}_2(x) = \int_0^x \frac{\arctan t}{t}\, dt$$

Equivalently, it can be defined by a power series, or in terms of the dilogarithm, a closely related special function. Definition The inverse tangent integral is defined by:

$$\operatorname{Ti}_2(x) = \int_0^x \frac{\arctan t}{t}\, dt$$

The arctangent is taken to be the principal branch; that is, $-\pi/2 < \arctan(t) < \pi/2$ for all real $t$. Its power series representation is

$$\operatorname{Ti}_2(x) = x - \frac{x^3}{3^2} + \frac{x^5}{5^2} - \frac{x^7}{7^2} + \cdots$$

which is absolutely convergent for $|x| \le 1$. The inverse tangent integral is closely related to the dilogarithm $\operatorname{Li}_2$ and can be expressed simply in terms of it:

$$\operatorname{Ti}_2(x) = \frac{1}{2i} \left( \operatorname{Li}_2(ix) - \operatorname{Li}_2(-ix) \right)$$

That is, $\operatorname{Ti}_2(x) = \operatorname{Im}(\operatorname{Li}_2(ix))$ for all real $x$. Properties The inverse tangent integral is an odd function: $\operatorname{Ti}_2(-x) = -\operatorname{Ti}_2(x)$. The values of $\operatorname{Ti}_2(x)$ and $\operatorname{Ti}_2(1/x)$ are related by the identity

$$\operatorname{Ti}_2(x) - \operatorname{Ti}_2\!\left(\frac{1}{x}\right) = \frac{\pi}{2} \ln x,$$

valid for all $x > 0$ (or, more generally, for $\operatorname{Re}(x) > 0$). This can be proven by differentiating and using the identity $\arctan(t) + \arctan(1/t) = \pi/2$. The special value $\operatorname{Ti}_2(1)$ is Catalan's constant $G \approx 0.915966$. Generalizations Similar to the polylogarithm $\operatorname{Li}_n$, the function

$$\operatorname{Ti}_n(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)^n}$$

is defined analogously. This satisfies the recurrence relation:

$$\operatorname{Ti}_{n+1}(x) = \int_0^x \frac{\operatorname{Ti}_n(t)}{t}\, dt$$

Relation to other special functions The inverse tangent integral is related to the Legendre chi function $\chi_2$ by:

$$\operatorname{Ti}_2(x) = -i\, \chi_2(ix)$$

Note that $\chi_2(x)$ can be expressed as $\int_0^x \frac{\operatorname{artanh} t}{t}\, dt$, similar to the inverse tangent integral but with the inverse hyperbolic tangent instead. The inverse tangent integral can also be written in terms of the Lerch transcendent $\Phi$:

$$\operatorname{Ti}_2(x) = \frac{x}{4} \Phi\!\left(-x^2, 2, \tfrac{1}{2}\right)$$

History The notation $\operatorname{Ti}_2$ and $\operatorname{Ti}_n$ is due to Lewin. Spence (1809) studied the function under a different notation. The function was also studied by Ramanujan. References Special functions
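A numerical sanity check of the series and integral definitions (an illustrative sketch in plain Python; names are ours):

```python
import math

def ti2_series(x, terms=100000):
    # power series: sum of (-1)^k x^(2k+1) / (2k+1)^2
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1)**2 for k in range(terms))

def ti2_quad(x, steps=100000):
    # midpoint rule for the integral of arctan(t)/t from 0 to x
    h = x / steps
    return sum(math.atan((i + 0.5) * h) / ((i + 0.5) * h) * h for i in range(steps))

print(ti2_series(1.0))  # ~0.9159655, Catalan's constant G
print(ti2_quad(1.0))    # ~0.9159656
```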
https://en.wikipedia.org/wiki/Asynchronous%20system
The primary focus of this article is asynchronous control in digital electronic systems. In a synchronous system, operations (instructions, calculations, logic, etc.) are coordinated by one, or more, centralized clock signals. An asynchronous system, in contrast, has no global clock. Asynchronous systems do not depend on strict arrival times of signals or messages for reliable operation. Coordination is achieved using event-driven architecture triggered by network packet arrival, changes (transitions) of signals, handshake protocols, and other methods. Modularity Asynchronous systems – much like object-oriented software – are typically constructed out of modular 'hardware objects', each with well-defined communication interfaces. These modules may operate at variable speeds, whether due to data-dependent processing, dynamic voltage scaling, or process variation. The modules can then be combined to form a correct working system, without reference to a global clock signal. Typically, low power is obtained since components are activated only on demand. Furthermore, several asynchronous styles have been shown to accommodate clocked interfaces, and thereby support mixed-timing design. Hence, asynchronous systems match well the need for correct-by-construction methodologies in assembling large-scale heterogeneous and scalable systems. Design styles There is a large spectrum of asynchronous design styles, with tradeoffs between robustness and performance (and other parameters such as power). The choice of design style depends on the application target: reliability/ease-of-design vs. speed. The most robust designs use 'delay-insensitive circuits', whose operation is correct regardless of gate and wire delays; however, only limited useful systems can be designed with this style. Slightly less robust, but much more useful, are quasi-delay-insensitive circuits (also known as speed-independent circuits), such as delay-insensitive minterm synthesis, which opera
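As a software analogy for handshake-based, clockless coordination (an illustrative Python sketch only; the article concerns digital hardware, and all names here are ours):

```python
import asyncio

async def producer(req: asyncio.Queue, ack: asyncio.Queue):
    for item in range(3):
        await req.put(item)      # raise a request carrying data
        await ack.get()          # block until acknowledged, however long that takes
        print(f"item {item} acknowledged")

async def consumer(req: asyncio.Queue, ack: asyncio.Queue):
    for _ in range(3):
        item = await req.get()   # activated only when an event arrives
        await asyncio.sleep(0.01 * (item + 1))  # data-dependent processing time
        await ack.put(None)      # acknowledge completion

async def main():
    req, ack = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(producer(req, ack), consumer(req, ack))

asyncio.run(main())
```

Neither side references a shared clock; progress is driven entirely by the request/acknowledge events, mirroring the modular, on-demand activation described above.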
https://en.wikipedia.org/wiki/Internet%20Fibre%20Channel%20Protocol
Internet Fibre Channel Protocol (iFCP) is a gateway-to-gateway network protocol standard that provides Fibre Channel fabric functionality to Fibre Channel devices over an IP network. It is officially ratified by the Internet Engineering Task Force. Its most common forms are 1 Gbit/s, 2 Gbit/s, 4 Gbit/s, 8 Gbit/s, and 10 Gbit/s. Technical overview The iFCP protocol enables the implementation of Fibre Channel functionality over an IP network, within which the Fibre Channel switching and routing infrastructure is replaced by IP components and technology. Congestion control, error detection and recovery are provided through the use of TCP (Transmission Control Protocol). The primary objective of iFCP is to allow existing Fibre Channel devices to be networked and interconnected over an IP-based network at wire speeds. The protocol and its defined method of address translation permit Fibre Channel storage devices and host adapters to be attached to an IP-based fabric using transparent gateways. The iFCP protocol layer's main function is to transport Fibre Channel frame images between Fibre Channel ports attached both locally and remotely. When transporting frames to a remote Fibre Channel port, iFCP encapsulates and routes the Fibre Channel frames that make up each Fibre Channel information unit via a predetermined TCP connection for transport across the IP network. See also Fibre Channel over Ethernet (FCoE) Fibre Channel over IP (FCIP) Internet SCSI (iSCSI) References Ethernet Fibre Channel Network protocols Internet protocols
https://en.wikipedia.org/wiki/Certified%20ethical%20hacker
Certified Ethical Hacker (CEH) is a qualification given by EC-Council and obtained by demonstrating knowledge of assessing the security of computer systems by looking for weaknesses and vulnerabilities in target systems, using the same knowledge and tools as a malicious hacker, but in a lawful and legitimate manner to assess the security posture of a target system. This knowledge is assessed by answering multiple choice questions regarding various ethical hacking techniques and tools. The code for the CEH exam is 312-50. This certification has now been made a baseline with a progression to the CEH (Practical), launched in March 2018, a test of penetration testing skills in a lab environment where the candidate must demonstrate the ability to apply techniques and use penetration testing tools to compromise various simulated systems within a virtual environment. Ethical hackers are employed by organizations to penetrate networks and computer systems with the purpose of finding and fixing security vulnerabilities. The EC-Council offers another certification, known as Certified Network Defense Architect (CNDA). This certification is designed for United States government agencies and is available only to members of selected agencies, including some private government contractors, primarily in compliance with DoD Directive 8570.01-M. It is also ANSI accredited and is recognized as GCHQ Certified Training (GCT). Examination Certification is achieved by taking the CEH examination after having either attended training at an Accredited Training Center (ATC) or completed it through EC-Council's learning portal, iClass. If a candidate opts to self-study, an application must be filled out and proof submitted of two years of relevant information security work experience. Those without the required two years of information security related work experience can request consideration of educational background. The current version of the CEH is V12, released in September 2022. The exa
https://en.wikipedia.org/wiki/BitTorrent%20protocol%20encryption
Protocol encryption (PE), message stream encryption (MSE) or protocol header encrypt (PHE) are related features of some peer-to-peer file-sharing clients, including BitTorrent clients. They attempt to enhance privacy and confidentiality. In addition, they attempt to make traffic harder to identify by third parties including internet service providers (ISPs). However, encryption will not protect one from DMCA notices for sharing illegal content, as one is still uploading material and the monitoring firms can merely connect to the swarm. MSE/PE is implemented in BitComet, BitTornado, Deluge, Flashget, KTorrent, libtorrent (used by various BitTorrent clients, including qBittorrent), Mainline, μTorrent, qBittorrent, rTorrent, Transmission, Tixati and Vuze. PHE was implemented in old versions of BitComet. Similar protocol obfuscation is supported in up-to-date versions of some other (non-BitTorrent) systems including eMule. Purpose As of January 2005, BitTorrent traffic made up more than a third of total residential internet traffic, although this dropped to less than 20% as of 2009. Some ISPs deal with this traffic by increasing their capacity whilst others use specialised systems to slow peer-to-peer traffic to cut costs. Obfuscation and encryption make traffic harder to detect and therefore harder to throttle. These systems were designed initially to provide anonymity or confidentiality, but became required in countries where Internet Service Providers were granted the power to throttle BitTorrent users and even ban those they believed were guilty of illegal file sharing. History Early approach Protocol header encryption (PHE) was conceived by RnySmile and first implemented in BitComet version 0.60 on 8 September 2005. Some software like IPP2P claims BitComet traffic is detectable even with PHE. PHE is detectable because only part of the stream is encrypted. Since there are no open specifications to this protocol implementation, the only possibility to suppo
https://en.wikipedia.org/wiki/Bigraph
A bigraph can be modelled as the superposition of a graph (the link graph) and a set of trees (the place graph). Each node of the bigraph is part of a graph and also part of some tree that describes how the nodes are nested. Bigraphs can be conveniently and formally displayed as diagrams. They have applications in the modelling of distributed systems for ubiquitous computing and can be used to describe mobile interactions. They have also been used by Robin Milner in an attempt to subsume Calculus of Communicating Systems (CCS) and π-calculus. They have been studied in the context of category theory. Anatomy of a bigraph Aside from nodes and (hyper-)edges, a bigraph may have associated with it one or more regions which are roots in the place forest, and zero or more holes in the place graph, into which other bigraph regions may be inserted. Similarly, to nodes we may assign controls that define identities and an arity (the number of ports for a given node to which link-graph edges may connect). These controls are drawn from a bigraph signature. In the link graph we define inner and outer names, which define the connection points at which coincident names may be fused to form a single link. Foundations A bigraph is a 5-tuple:

$$(V, E, \mathrm{ctrl}, \mathrm{prnt}, \mathrm{link}) : \langle k, X \rangle \to \langle m, Y \rangle$$

where $V$ is a set of nodes, $E$ is a set of edges, $\mathrm{ctrl}$ is the control map that assigns controls to nodes, $\mathrm{prnt}$ is the parent map that defines the nesting of nodes, and $\mathrm{link}$ is the link map that defines the link structure. The notation $\langle k, X \rangle \to \langle m, Y \rangle$ indicates that the bigraph has $k$ holes (sites) and a set of inner names $X$, and $m$ regions, with a set of outer names $Y$. These are respectively known as the inner and outer interfaces of the bigraph. Formally speaking, each bigraph is an arrow in a symmetric partial monoidal category (usually abbreviated spm-category) in which the objects are these interfaces. As a result, the composition of bigraphs is definable in terms of the composition of arrows in the category. Extensions and variants Directed Bigraphs Directed Bigr
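A minimal data-structure sketch of this 5-tuple (illustrative only, ours, and far from Milner's full formalism):

```python
from dataclasses import dataclass

@dataclass
class Bigraph:
    nodes: set    # V
    edges: set    # E
    ctrl: dict    # node -> (control name, arity)
    prnt: dict    # node/site -> parent node or region  (place graph)
    link: dict    # port or inner name -> edge or outer name  (link graph)

g = Bigraph(
    nodes={"a", "b"},
    edges={"e0"},
    ctrl={"a": ("Room", 1), "b": ("Agent", 1)},
    prnt={"b": "a", "a": "region0"},          # Agent nested inside Room
    link={("a", 0): "e0", ("b", 0): "e0"},    # both ports joined by one edge
)
```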
https://en.wikipedia.org/wiki/Floating%20body%20effect
The floating body effect is the effect of dependence of the body potential of a transistor realized by the silicon on insulator (SOI) technology on the history of its biasing and the carrier recombination processes. The transistor's body forms a capacitor against the insulated substrate. The charge accumulates on this capacitor and may cause adverse effects, for example turning on parasitic transistors in the structure and causing off-state leakage, resulting in higher current consumption and, in the case of DRAM, in loss of information from the memory cells. It also causes the history effect, the dependence of the threshold voltage of the transistor on its previous states. In analog devices, the floating body effect is known as the kink effect. One countermeasure to floating body effect involves use of fully depleted (FD) devices. The insulator layer in FD devices is significantly thinner than the channel depletion width. The charge and thus also the body potential of the transistors is therefore fixed. However, the short-channel effect is worsened in the FD devices, the body may still charge up if both source and drain are high, and the architecture is unsuitable for some analog devices that require contact with the body. Hybrid trench isolation is another approach. While floating body effect presents a problem in SOI DRAM chips, it is exploited as the underlying principle for Z-RAM and T-RAM technologies. For this reason, the effect is sometimes called the Cinderella effect in the context of these technologies, because it transforms a disadvantage into an advantage. AMD and Hynix licensed Z-RAM, but as of 2008 had not put it into production. Another similar technology (and Z-RAM competitor) developed at Toshiba and refined at Intel is Floating Body Cell (FBC). References Further reading Semiconductors
https://en.wikipedia.org/wiki/Location%20identifier
A location identifier is a symbolic representation for the name and the location of an airport, navigation aid, or weather station, and is used for staffed air traffic control facilities in air traffic control, telecommunications, computer programming, weather reports, and related services. ICAO location indicator The International Civil Aviation Organization establishes sets of four-letter location indicators which are published in ICAO Publication 7910. These are used by air traffic control agencies to identify airports and by weather agencies to produce METAR weather reports. The first letter indicates the region; for example, K for the contiguous United States, C for Canada, E for northern Europe, R for the Asian Far East, and Y for Australia. Examples of ICAO location indicators are RPLL for Manila Ninoy Aquino Airport and KCEF for Westover Joint Air Reserve Base. IATA identifier The International Air Transport Association uses sets of three-letter IATA identifiers which are used for airline operations, baggage routing, and ticketing. There is no specific organization scheme to IATA identifiers; typically they take on the abbreviation of the airport or city such as MNL for Manila Ninoy Aquino Airport. In the United States, the IATA identifier usually equals the FAA identifier, but this is not always the case. A prominent example is Sawyer International Airport in Marquette, Michigan, which uses the FAA identifier SAW and the IATA identifier MQT. FAA identifier The Federal Aviation Administration location identifier (FAA LID) is a three- to five-character alphanumeric code identifying aviation-related facilities inside the United States, though some codes are reserved for, and managed by, other entities. For nearly all major airports, the assigned identifiers are alphabetic three-letter codes, such as ORD for Chicago O’Hare International Airport. Minor airfields are typically assigned a mix of alphanumeric characters, such as 8N2 for Skydive Chica
https://en.wikipedia.org/wiki/Patrick%20Flanagan
Patrick Flanagan (October 11, 1944 - December 19, 2019) was an American New Age author and inventor. Flanagan wrote books focused on Egyptian sacred geometry and Pyramidology. In 1958, at the age of 14, while living in Bellaire, Texas, Flanagan invented the neurophone, an electronic device that purportedly transmits sound through the body’s nervous system directly to the brain. It was patented in the United States in 1968 (Patent #3,393,279). The invention earned him a profile in Life magazine, which called him a "unique, mature and inquisitive scientist." Pyramid power During the 1970s, Flanagan was a proponent of pyramid power. He wrote several books and promoted it with lectures and seminars. According to Flanagan, pyramids with the exact relative dimensions of Egyptian pyramids act as "an effective resonator of randomly polarized microwave signals which can be converted into electrical energy." One of his first books, Pyramid Power, was featured in the lyrics of The Alan Parsons Project album, Pyramid. Inventions and discoveries In 1958, at the age of 14, Flanagan invented a device which he called a Neurophone, which he claimed transmitted sound via the nervous system to the brain. Bibliography References External links PhiSciences Patrick Flanagan Official site 1944 births 2019 deaths American inventors Pyramidologists Sacred geometry
https://en.wikipedia.org/wiki/Edy
Edy, provided by Rakuten, Inc. in Japan is a prepaid rechargeable contactless smart card. While the name derives from euro, dollar, and yen, it works with yen only. History Edy was launched on January 18, 2001, by bitWallet, with financing primarily from Sony along with other companies, including NTT Docomo and the Sumitomo Group. NTT Docomo's i-mode mobile payment service Osaifu-Keitai, which launched on 10 July 2004, included support for bitWallet's Edy. By 2005, over a million payments had been made with the service. On 18 April 2006, Intel announced a five billion yen (approx. US$45 million, or 35 million euros as of May 20, 2006) investment in bitWallet, aimed at furthering its usage on computers. On 1 June 2012, Rakuten acquired Edy, changing the official name to Rakuten Edy and the parent company from bitWallet to Rakuten Edy, Inc. The three-oval blue-tone logo was changed to the Rakuten logo and the font of the word 'Edy' was altered. Mobile phones Edy can be used on Osaifu-Keitai featured cellphones. Makers of these phones include major cell phone carriers such as docomo, au and SoftBank. The phones can be used physically like an Edy card, and online Edy features can be accessed from the phones as well, such as the ability to charge an Edy account. References External links Edy official homepage Rakuten Products introduced in 2001 2001 establishments in Japan 2012 mergers and acquisitions Japanese brands E-commerce in Japan Online payments Contactless smart cards
https://en.wikipedia.org/wiki/Finnix
Finnix is a Debian-based Live CD operating system, developed by Ryan Finnie and intended for system administrators for tasks such as filesystem recovery, network monitoring and OS installation. Finnix is a relatively small distribution, with an ISO download size of approximately 100 MiB, and is available for the x86 and PowerPC architectures, and paravirtualized (User Mode Linux and Xen) systems. Finnix can be run off a bootable CD, a USB flash drive, a hard drive, or network boot (PXE). History Finnix development first began in 1999, making it one of the oldest Linux distributions released with the intent of being run completely from a bootable CD (the other Live CD around at the time was the Linuxcare Bootable Business Card CD, first released in 1999). Finnix 0.01 was based on Red Hat Linux 6.0, and was created to help with administration and recovery of other Linux workstations around Finnie's office. The first public release of Finnix was 0.03, and was released in early 2000, based on an updated Red Hat Linux 6.1. Despite its 300 MiB ISO size and requirement of 32 MiB RAM (which, given RAM prices and lack of high-speed Internet proliferation at the time, was prohibitive for many), Finnix enjoyed moderate success, with over 10,000 downloads. After version 0.03, development ceased, and Finnix was left unmaintained until 2005. On 23 October 2005, Finnix 86.0 was released. Earlier unreleased versions (84, and 85.0 through 85.3) were "Knoppix remasters", with support for Linux LVM and dm-crypt being the main reason for creation. However, 86.0 was a departure from Knoppix, and was derived directly from the Debian "testing" tree. Usage Finnix is released as a small bootable CD ISO. A user can download the ISO, burn the image to CD, and boot into a text mode Linux environment. Finnix requires at least 32 MiB RAM to run properly, but can use more if present. Most hardware devices are detected and dealt with automatically, such as hard drives, network cards and U
https://en.wikipedia.org/wiki/Supplicant%20%28computer%29
In computer networking, a supplicant is an entity at one end of a point-to-point LAN segment that seeks to be authenticated by an authenticator attached to the other end of that link. The IEEE 802.1X standard uses the term "supplicant" to refer either to hardware or to software. In practice, a supplicant is a software application installed on an end-user's computer. The user invokes the supplicant and submits credentials to connect the computer to a secure network. If the authentication succeeds, the authenticator typically allows the computer to connect to the network. In some contexts, "supplicant" refers to the user or the client in a network environment seeking to access network resources secured by the IEEE 802.1X authentication mechanism. But saying "user" or "client" over-generalizes; in reality, the interaction takes place through a personal computer, an Internet Protocol (IP) phone, or a similar network device, each of which must run supplicant software that initiates or reacts to IEEE 802.1X authentication requests for association. Overview Businesses, campuses, governments and other organizations that need security may use IEEE 802.1X authentication to regulate users' access to their network infrastructure; to enable this, client devices must run software meeting the definition of a supplicant in order to gain access. In businesses, for example, it is very common for employees to receive a new computer with all the settings needed for IEEE 802.1X authentication already configured, in particular when connecting wirelessly to the network. Access For a supplicant-capable device to gain access to the secured resources on a network, some preconditions must be met to make this feasible. The network to which the supplicant needs access must have a RADIUS server (the authentication server, reached through the authenticator), a Dynamic Host Configuration Protocol (DHCP) server if aut
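For illustration, a minimal configuration for wpa_supplicant, a widely used open-source 802.1X supplicant (the SSID, credentials, and EAP method below are placeholder assumptions, not from the article):

```
# wpa_supplicant.conf -- illustrative 802.1X (WPA-Enterprise) settings
network={
    ssid="corp-wlan"            # placeholder network name
    key_mgmt=WPA-EAP            # authenticate via 802.1X / EAP
    eap=PEAP                    # outer EAP method (one common choice)
    identity="jdoe"             # placeholder user credentials
    password="s3cret"
    phase2="auth=MSCHAPV2"      # inner authentication inside the TLS tunnel
}
```

The supplicant submits these credentials to the authenticator (e.g. a switch or access point), which relays them to the authentication server.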
https://en.wikipedia.org/wiki/Vital%20theory
According to the vital force theory, the conduction of water up the xylem vessel is a result of the vital action of living cells in the xylem tissue. These living cells are involved in the ascent of sap. The relay pump theory and the pulsation theory support this active theory of the ascent of sap. Emil Godlewski (senior) (1884) proposed the relay pump or clambering force theory (through xylem parenchyma) and Jagadish Chandra Bose (1923) proposed the pulsation theory (due to pulsatory activities of the innermost cortical cells just outside the endodermis). Jagadish Chandra Bose suggested a mechanism for the ascent of sap in 1927. He investigated it with the help of a galvanometer and electric probes. He found electrical ‘pulsations’ or oscillations in electric potentials, and came to believe these were coupled with rhythmic movements in the telegraph plant Codariocalyx motorius (then Desmodium). On the basis of this, Bose theorized that regular wave-like ‘pulsations’ in cell electric potential and turgor pressure were an endogenous form of cell signaling. According to him, the living cells in the inner lining of the xylem tissue pump water by contractive and expulsive movements, similar to the animal heart circulating blood. This mechanism has not been well supported, and in spite of some ongoing debate, the evidence overwhelmingly supports the cohesion-tension theory for the ascent of sap. See also Cohesion-tension theory External links Bioelectricity and the rhythms of sensitive plants – The biophysical research of Jagadis Chandra Bose Botany
https://en.wikipedia.org/wiki/Driver%20circuit
In electronics, a driver is a circuit or component used to control another circuit or component, such as a high-power transistor, a liquid crystal display (LCD), a stepper motor, or SRAM memory, among many others. Drivers are usually used to regulate the current flowing through a circuit or to control other components or devices in the circuit. The term is often used, for example, for a specialized integrated circuit that controls high-power switches in switched-mode power converters. An amplifier can also be considered a driver for loudspeakers, as can a voltage regulator that keeps an attached component operating within a broad range of input voltages. Typically the driver stage(s) of a circuit require different characteristics from other circuit stages. For example, in a transistor power amplifier circuit, the driver circuit typically requires current gain, often the ability to discharge the following transistor bases rapidly, and low output impedance to avoid or minimize distortion. In SRAM memory, driver circuits are used to rapidly discharge the necessary bit lines from a precharge level to the write margin or below. See also Hitachi HD44780 LCD controller References External links ADP3418 Driver Circuits Analog circuits
https://en.wikipedia.org/wiki/Plotkin%20bound
In the mathematics of coding theory, the Plotkin bound, named after Morris Plotkin, is a limit (or bound) on the maximum possible number of codewords in binary codes of given length n and given minimum distance d. Statement of the bound A code is considered "binary" if the codewords use symbols from the binary alphabet $\{0, 1\}$. In particular, if all codewords have a fixed length n, then the binary code has length n. Equivalently, in this case the codewords can be considered elements of the vector space $\mathbb{F}_2^n$ over the finite field $\mathbb{F}_2$. Let $d$ be the minimum distance of $C$, i.e.

$$d = \min_{x, y \in C,\, x \neq y} d(x, y)$$

where $d(x, y)$ is the Hamming distance between $x$ and $y$. The expression $A_2(n, d)$ represents the maximum number of possible codewords in a binary code of length $n$ and minimum distance $d$. The Plotkin bound places a limit on this expression.

Theorem (Plotkin bound):

i) If $d$ is even and $2d > n$, then $A_2(n, d) \le 2 \left\lfloor \frac{d}{2d - n} \right\rfloor$.

ii) If $d$ is odd and $2d + 1 > n$, then $A_2(n, d) \le 2 \left\lfloor \frac{d + 1}{2d + 1 - n} \right\rfloor$.

iii) If $d$ is even, then $A_2(2d, d) \le 4d$.

iv) If $d$ is odd, then $A_2(2d + 1, d) \le 4d + 4$.

where $\lfloor \cdot \rfloor$ denotes the floor function.

Proof of case i Let $d(x, y)$ be the Hamming distance of $x$ and $y$, and $M$ be the number of elements in $C$ (thus, $M$ is equal to $A_2(n, d)$). The bound is proved by bounding the quantity $\sum_{(x, y) \in C^2, x \neq y} d(x, y)$ in two different ways. On the one hand, there are $M$ choices for $x$ and for each such choice, there are $M - 1$ choices for $y$. Since by definition $d(x, y) \ge d$ for all $x$ and $y$ ($x \neq y$), it follows that

$$\sum_{(x, y) \in C^2,\, x \neq y} d(x, y) \ge M(M - 1) d.$$

On the other hand, let $A$ be an $M \times n$ matrix whose rows are the elements of $C$. Let $z_i$ be the number of zeros contained in the $i$'th column of $A$. This means that the $i$'th column contains $M - z_i$ ones. Each choice of a zero and a one in the same column contributes exactly $2$ (because $d(x, y) = d(y, x)$) to the sum and therefore

$$\sum_{(x, y) \in C^2,\, x \neq y} d(x, y) = \sum_{i = 1}^{n} 2 z_i (M - z_i).$$

The quantity on the right is maximized if and only if $z_i = M/2$ holds for all $i$ (at this point of the proof we ignore the fact that the $z_i$ are integers), then

$$\sum_{(x, y) \in C^2,\, x \neq y} d(x, y) \le \frac{1}{2} n M^2.$$

Combining the upper and lower bounds for $\sum d(x, y)$ that we have just derived,

$$M(M - 1) d \le \frac{1}{2} n M^2$$

which, given that $2d > n$, is equivalent to

$$M \le \frac{2d}{2d - n}.$$

Since $M$ is even, it follows that

$$M \le 2 \left\lfloor \frac{d}{2d - n} \right\rfloor.$$

This completes the proof of the bound. See also Singleton bound Hamming bound Elias-Bassalygo bound Gilbert-Varshamov bound Johnson bou
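The four cases translate directly into a small computation (an illustrative Python helper; the function name and interface are ours):

```python
def plotkin_bound(n: int, d: int):
    """Upper bound on A2(n, d) when one of the four cases applies, else None."""
    if d % 2 == 0 and 2 * d > n:
        return 2 * (d // (2 * d - n))            # case i
    if d % 2 == 1 and 2 * d + 1 > n:
        return 2 * ((d + 1) // (2 * d + 1 - n))  # case ii
    if d % 2 == 0 and n == 2 * d:
        return 4 * d                             # case iii
    if d % 2 == 1 and n == 2 * d + 1:
        return 4 * d + 4                         # case iv
    return None

print(plotkin_bound(8, 5))  # 2 * floor(6 / 3) = 4, so A2(8, 5) <= 4
```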
https://en.wikipedia.org/wiki/Sipgate
Sipgate, stylised as sipgate, is a European VoIP and mobile telephony operator. Company Sipgate was founded in 2004 and became one of Germany's largest VoIP service providers for consumers and small businesses. Through its network, which used the SIP protocol, it allowed making low-cost national and international calls and provided customers with an incoming geographical phone number. Customers were expected to use client software or SIP-compliant hardware (a VoIP phone or ATA) to access its services. Since 2011, Sipgate's network has been using the open-source project Yate for the core of its softswitch infrastructure. Sipgate is among the sponsors of the Kamailio World Conference & Exhibition. In January 2013, the firm entered the German mobile phone market as a full MVNO. Sipgate's German mobile phone services run over the Telefónica Germany network. Products sipgate team Introduced in 2009, the product is a hosted business phone system (PBX) providing online management of phone services for 1 to 250 users. All billing, end user management, call management, etc. is through an online portal. A mobile solution was released in Germany in early 2013 that can be integrated with the 'Team' business VoIP service. SIM cards can be used as extensions in the Team web telephone system or used individually with mobile and landline numbers. sipgate trunking In Germany, SIP trunking services connect customers' third-party VoIP PBXs via broadband with the public telephone network. SIP trunking can be combined with the team product. sipgate basic and sipgate basic plus The basic residential VoIP service was released in Germany and the UK in January 2004. basic accounts receive one free UK or German geographic 'landline' phone number and a voicemail box. With a suitable fax-enabled VoIP adapter, faxes may also be sent from conventional fax machines. On 6 October 2014, the firm released an open API, sipgate io. Discontinued products Smartphone apps Sipgate pro
https://en.wikipedia.org/wiki/Bach%27s%20algorithm
Bach's algorithm is a probabilistic polynomial time algorithm for generating random numbers along with their factorizations, named after its discoverer, Eric Bach. It is of interest because no algorithm is known that efficiently factors numbers, so the straightforward method, namely generating a random number and then factoring it, is impractical. The algorithm performs, in expectation, $O(\log n)$ primality tests. A simpler, but less efficient algorithm (performing, in expectation, $O(\log^2 n)$ primality tests), is due to Adam Kalai. Bach's algorithm may theoretically be used within cryptographic algorithms. Overview Bach's algorithm produces a number $x$ uniformly at random in the range $N/2 < x \le N$ (for a given input $N$), along with its factorization. It does this by picking a prime number $p$ and an exponent $a$ such that $p^a \le N$, according to a certain distribution. The algorithm then recursively generates a number $y$ in the range $M/2 < y \le M$, where $M = N/p^a$, along with the factorization of $y$. It then sets $x = p^a y$, and appends $p^a$ to the factorization of $y$ to produce the factorization of $x$. This gives $x$ with logarithmic distribution over the desired range; rejection sampling is then used to get a uniform distribution. References Further reading Bach, Eric. Analytic Methods in the Analysis and Design of Number-Theoretic Algorithms, MIT Press, 1984. Chapter 2, "Generation of Random Factorizations", part of which is available online here. Cryptographic algorithms Random number generation
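The simpler alternative due to Kalai is short enough to sketch here (illustrative Python, using sympy's isprime for the primality tests; this is Kalai's algorithm, not Bach's):

```python
import random
from sympy import isprime

def kalai_random_factored(n: int):
    """Return a uniformly random r in [1, n] together with its prime factorization."""
    while True:
        # random non-increasing sequence n >= s1 >= s2 >= ... >= 1
        seq, s = [], n
        while s > 1:
            s = random.randint(1, s)
            seq.append(s)
        factors = [s for s in seq if isprime(s)]
        r = 1
        for p in factors:
            r *= p
        # keep r <= n with probability r/n; this rejection step makes r uniform
        if r <= n and random.random() < r / n:
            return r, factors

r, factors = kalai_random_factored(10**6)
print(r, factors)
```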
https://en.wikipedia.org/wiki/Johnson%20bound
In applied mathematics, the Johnson bound (named after Selmer Martin Johnson) is a limit on the size of error-correcting codes, as used in coding theory for data transmission or communications. Definition Let $C$ be a q-ary code of length $n$, i.e. a subset of $\mathbb{F}_q^n$. Let $d$ be the minimum distance of $C$, i.e.

$$d = \min_{x, y \in C,\, x \neq y} d(x, y)$$

where $d(x, y)$ is the Hamming distance between $x$ and $y$. Let $C_q(n, d)$ be the set of all q-ary codes with length $n$ and minimum distance $d$ and let $C_q(n, d, w)$ denote the set of codes in $C_q(n, d)$ such that every element has exactly $w$ nonzero entries. Denote by $|C|$ the number of elements in $C$. Then, we define $A_q(n, d)$ to be the largest size of a code with length $n$ and minimum distance $d$:

$$A_q(n, d) = \max_{C \in C_q(n, d)} |C|.$$

Similarly, we define $A_q(n, d, w)$ to be the largest size of a code in $C_q(n, d, w)$:

$$A_q(n, d, w) = \max_{C \in C_q(n, d, w)} |C|.$$

Theorem 1 (Johnson bound for $A_q(n, d)$): If $d = 2t + 1$,

$$A_q(n, d) \le \frac{q^n}{\sum_{i=0}^{t} \binom{n}{i} (q-1)^i + \frac{\binom{n}{t+1} (q-1)^{t+1} - \binom{d}{t} A_q(n, d, d)}{A_q(n, d, t+1)}}.$$

A similar bound holds when $d$ is even.

Theorem 2 (Johnson bound for $A_q(n, d, w)$): (i) If $d > 2w$, then $A_q(n, d, w) = 1$. (ii) If $d \le 2w$, then define the variable $e$ as follows. If $d$ is even, then define $e$ through the relation $d = 2e$; if $d$ is odd, define $e$ through the relation $d = 2e - 1$. Let $q^* = q - 1$. Then,

$$A_q(n, d, w) \le \left\lfloor \frac{n q^*}{w} \left\lfloor \frac{(n-1) q^*}{w-1} \left\lfloor \cdots \left\lfloor \frac{(n-w+e) q^*}{e} \right\rfloor \cdots \right\rfloor \right\rfloor \right\rfloor$$

where $\lfloor \cdot \rfloor$ is the floor function. Remark: Plugging the bound of Theorem 2 into the bound of Theorem 1 produces a numerical upper bound on $A_q(n, d)$. See also Singleton bound Hamming bound Plotkin bound Elias Bassalygo bound Gilbert–Varshamov bound Griesmer bound References Coding theory
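The nested-floor bound of Theorem 2(ii) is easy to evaluate (an illustrative Python helper; names are ours):

```python
def johnson_constant_weight(n: int, d: int, w: int, q: int = 2) -> int:
    """Upper bound on Aq(n, d, w) from the Johnson bound."""
    if d > 2 * w:
        return 1                                   # Theorem 2(i)
    e = d // 2 if d % 2 == 0 else (d + 1) // 2     # d = 2e or d = 2e - 1
    qs = q - 1                                     # q* = q - 1
    bound = 1
    # evaluate from the innermost floor (n-w+e)q*/e outward to n q*/w
    for k in range(e, w + 1):
        bound = ((n - w + k) * qs * bound) // k
    return bound

print(johnson_constant_weight(13, 6, 5))  # A2(13, 6, 5) <= 23
```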
https://en.wikipedia.org/wiki/Introduction%20to%20Automata%20Theory%2C%20Languages%2C%20and%20Computation
Introduction to Automata Theory, Languages, and Computation is an influential computer science textbook by John Hopcroft and Jeffrey Ullman on formal languages and the theory of computation. Rajeev Motwani contributed to later editions beginning in 2000. Nickname The Jargon File records the book's nickname, Cinderella Book, thusly: "So called because the cover depicts a girl (putatively Cinderella) sitting in front of a Rube Goldberg device and holding a rope coming out of it. On the back cover, the device is in shambles after she has (inevitably) pulled on the rope." Edition history and reception The forerunner of this book appeared under the title Formal Languages and Their Relation to Automata in 1968. Forming a basis both for the creation of courses on the topic and for further research, that book shaped the field of automata theory for over a decade, cf. (Hopcroft 1989). The first edition of Introduction to Automata Theory, Languages, and Computation was published in 1979, the second edition in November 2000, and the third edition appeared in February 2006. Since the second edition, Rajeev Motwani has joined Hopcroft and Ullman as the third author. Starting with the second edition, the book features extended coverage of examples where automata theory is applied, whereas large parts of more advanced theory were taken out. While this makes the second and third editions more accessible to beginners, it makes them less suited for more advanced courses. The new bias away from theory is not seen positively by all: as Shallit quotes one professor, "they have removed all good parts." (Shallit 2008). The first edition in turn constituted a major revision of a previous textbook also written by Hopcroft and Ullman, entitled Formal Languages and Their Relation to Automata. It was published in 1968 and is referred to in the introduction of the 1979 edition. In a personal historical note regarding the 1968 book, Hopcroft states: "Perhaps the success of the boo
https://en.wikipedia.org/wiki/Rustproofing
Rustproofing is the prevention or delay of rusting of iron and steel objects, or the permanent protection against corrosion. Typically, the protection is achieved by a process of surface finishing or treatment. Depending on mechanical wear or environmental conditions, the degradation may not be stopped completely, unless the process is periodically repeated. The term is particularly used in the automobile industry. Vehicle rustproofing Factory In the factory, car bodies are protected with special chemical formulations. Typically, phosphate conversion coatings were used. Some firms galvanized part or all of their car bodies before the primer coat of paint was applied. If a car is body-on-frame, then the frame (chassis) must also be rustproofed. In traditional automotive manufacturing of the early- and mid-20th century, paint was the final part of the rustproofing barrier between the body shell and the atmosphere, except on the underside. On the underside, an underseal rubberized or PVC-based coating was often sprayed on. These products will be breached eventually and can lead to unseen corrosion that spreads underneath the underseal. Old 1960s and 1970s rubberized underseal can become brittle on older cars and is particularly liable to this. The first electrodeposition primers were developed in the 1950s, but were found to be impractical for widespread use. Revised cathodic automotive electrocoat primer systems were introduced in the 1970s that markedly reduced the problem of corrosion that had been experienced by a vast number of automobiles in the first seven decades of automobile manufacturing. Termed e-coat, "electrocoat automotive primers are applied by totally submerging the assembled car body in a large tank that contains the waterborne e-coat, and the coating is applied through cathodic electrodeposition. This assures nearly 100% coverage of all metal surfaces by the primer. The coating chemistry is waterborne enamel based on epoxy, an aminoalcohol adduct,
https://en.wikipedia.org/wiki/SpareMiNT
SpareMiNT is a software distribution based on FreeMiNT, which consists of a MiNT-like operating system (OS) and kernel plus a GEM-compatible AES (Application Environment Services). Features and compatibility The English-language distribution is intended for the Atari ST and derivative m68k computers, clones and emulators, such as the FireBee project or Hatari and ARAnyM. MiNT itself, also once called MultiTOS, provided an Atari TOS compatible OS replacement with multitasking and multi-user switching capabilities and Unix-like operation, all of which the original TOS lacked. The distribution comes with Red Hat's rpm utility for managing the source and binary packages. Unix/Linux-style software can be used, if ported; GEM programs for TOS can run concurrently. The TOS clone EmuTOS, instead of Atari's original, can be used as a base to boot MiNT, with e.g. XaAES, a modern AES derivative, as an essential part of the GEM GUI (graphical user interface). FreeMiNT, and therefore SpareMiNT, is basically the enhanced and greatly improved derivative, and can be used on today's computers, even on different hardware platforms via emulation or virtual machines, thanks to the flexibility of the original MiNT and its components that made further development possible. Comparable distributions EasyMiNT Derived from SpareMiNT is EasyMiNT, using its software repository and a GEM-based installer, providing a folder system similar to the UNIX Filesystem Hierarchy Standard and German-language translations of programs. AFROS AFROS (Atari FRee Operating System) comes as a set of files, creating a TOS-compatible operating system; there exists a Live CD to test. Its key components all consist of free software: EmuTOS and FreeMiNT; fVDI (free Virtual Device Interface), a clone of GEM's VDI; XaAES; TeraDesk (Tera Desktop), a clone of the original Desktop file manager and "shell". AFROS software is available for all Atari and/or TOS-compatible platforms, but is optimized to be used with the ARA
https://en.wikipedia.org/wiki/Lattice%20model%20%28finance%29
In finance, a lattice model is a technique applied to the valuation of derivatives, where a discrete time model is required. For equity options, a typical example would be pricing an American option, where a decision as to option exercise is required at "all" times (any time) before and including maturity. A continuous model, on the other hand, such as Black–Scholes, would only allow for the valuation of European options, where exercise is on the option's maturity date. For interest rate derivatives lattices are additionally useful in that they address many of the issues encountered with continuous models, such as pull to par. The method is also used for valuing certain exotic options, where because of path dependence in the payoff, Monte Carlo methods for option pricing fail to account for optimal decisions to terminate the derivative by early exercise, though methods now exist for solving this problem. Equity and commodity derivatives In general the approach is to divide time between now and the option's expiration into N discrete periods. At the specific time n, the model has a finite number of outcomes at time n + 1 such that every possible change in the state of the world between n and n + 1 is captured in a branch. This process is iterated until every possible path between n = 0 and n = N is mapped. Probabilities are then estimated for every n to n + 1 path. The outcomes and probabilities flow backwards through the tree until a fair value of the option today is calculated. For equity and commodities the application is as follows. The first step is to trace the evolution of the option's key underlying variable(s), starting with today's spot price, such that this process is consistent with its volatility; log-normal Brownian motion with constant volatility is usually assumed. The next step is to value the option recursively: stepping backwards from the final time-step, where we have exercise value at each node; and applying risk neutral valuation at each ear
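The backward recursion just described is compact enough to sketch (an illustrative Cox-Ross-Rubinstein binomial tree for an American put; all parameter values are ours):

```python
import math

def crr_american_put(S0, K, r, sigma, T, N):
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))   # up factor (lognormal, constant volatility)
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # exercise value at each terminal node
    values = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    # step backwards, comparing continuation value with early exercise
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

print(crr_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=500))  # ~6.09
```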
https://en.wikipedia.org/wiki/Commercial%20use%20of%20space
Commercial use of space is the provision of goods or services of commercial value by using equipment sent into Earth orbit or outer space. This phenomenon, also known as the space economy (or new space economy), is accelerating cross-sector innovation by combining the most advanced space and digital technologies to develop a broad portfolio of space-based services. The use of space technologies and of the data they collect, combined with the most advanced enabling digital technologies, is generating a multitude of business opportunities that include the development of new products and services, the creation of new business models, and the reconfiguration of value networks and relationships between companies. If well leveraged, such technology and business opportunities can contribute to the creation of tangible and intangible value through new forms and sources of revenue, operating efficiency, and the start of new projects leading to multidimensional (e.g. societal, environmental) positive impact. Examples of the commercial use of space include satellite navigation, satellite television and commercial satellite imagery. Operators of such services typically contract the manufacturing of satellites and their launch to private or public companies, which form an integral part of the space economy. Some commercial ventures have long-term plans to exploit natural resources originating outside Earth, for example asteroid mining. Space tourism, currently an exceptional activity, could also be an area of future growth, as new businesses strive to reduce the costs and risks of human spaceflight. The first commercial use of outer space occurred in 1962, when the Telstar 1 satellite was launched to transmit television signals over the Atlantic Ocean. By 2004, global investment in all space sectors was estimated to be US$50.8 billion. As of 2010, 31% of all space launches were commercial. History The first commercial use of satellites may have been the Telstar 1 satelli
https://en.wikipedia.org/wiki/Square-free%20polynomial
In mathematics, a square-free polynomial is a polynomial defined over a field (or more generally, an integral domain) that does not have as a divisor any square of a non-constant polynomial. A univariate polynomial is square free if and only if it has no multiple root in an algebraically closed field containing its coefficients. For this reason, in applications in physics and engineering, a square-free polynomial is commonly called a polynomial with no repeated roots. In the case of univariate polynomials, the product rule implies that, if $a^2$ divides $f$, then $a$ divides the formal derivative $f'$ of $f$. The converse is also true; hence, $f$ is square-free if and only if a constant is a greatest common divisor of the polynomial and its derivative. A square-free decomposition or square-free factorization of a polynomial is a factorization into powers of square-free polynomials, $f = a_1\,a_2^2\,a_3^3 \cdots a_n^n$, where those of the $a_i$ that are non-constant are pairwise coprime square-free polynomials (here, two polynomials are said to be coprime if their greatest common divisor is a constant; in other words, it is coprimality over the field of fractions of the coefficients that is considered). Every non-zero polynomial admits a square-free factorization, which is unique up to the multiplication and division of the factors by non-zero constants. The square-free factorization is much easier to compute than the complete factorization into irreducible factors, and is thus often preferred when the complete factorization is not really needed, as for the partial fraction decomposition and the symbolic integration of rational fractions. Square-free factorization is the first step of the polynomial factorization algorithms that are implemented in computer algebra systems. Therefore, the algorithm of square-free factorization is basic in computer algebra. Over a field of characteristic 0, the quotient of $f$ by its GCD with its derivative is the product of the $a_i$ in the above square-free decomposition. Over a perfect field of non-zero
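The gcd criterion and the decomposition can be checked directly in a computer algebra system. The sketch below uses Python's sympy (assuming it is installed) on a small example polynomial.

    from sympy import symbols, gcd, diff, sqf_list, degree

    x = symbols('x')
    f = (x + 1)**3 * (x**2 + 1)      # not square-free: (x + 1)**2 divides it

    # f is square-free iff gcd(f, f') is a constant (degree 0)
    g = gcd(f, diff(f, x))
    print(degree(g, x) == 0)          # False: f has the repeated factor (x + 1)

    # square-free decomposition f = a_1 * a_2**2 * a_3**3 * ...
    print(sqf_list(f))                # e.g. (1, [(x**2 + 1, 1), (x + 1, 3)])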
https://en.wikipedia.org/wiki/Lists%20of%20animals
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to tens of metres long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal or naming in human culture.

By common name
List of animal names (male, female, young, and group)

By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons

By domestication
List of domesticated animals

By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores

By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species

By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies

By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region

By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
https://en.wikipedia.org/wiki/Radon%20mitigation
Radon mitigation is any process used to reduce radon gas concentrations in the breathing zones of occupied buildings, or radon from water supplies. Radon is a significant contributor to environmental radioactivity and can cause serious health problems such as lung cancer. Mitigation of radon in the air by active soil depressurization is most effective. Concrete slabs, sub-floors, and/or crawlspaces are sealed, an air pathway is then created to exhaust radon above the roof-line, and a radon mitigation fan is installed to run permanently. In particularly troublesome dwellings, air exchangers can be used to reduce indoor radon concentrations. Treatment systems using aeration or activated charcoal are available to remove radon from domestic water supplies. There is no proven link between radon in water and gastrointestinal cancers; however, extremely high radon concentrations in water can be aerosolized by faucets and shower heads and contribute to high indoor radon levels in the air.

Testing

The first step in mitigation is testing. No level of radiation is considered completely safe, but as it cannot be totally eliminated, governments around the world have set various action levels to provide guidance on when radon concentrations should be reduced. The World Health Organization's International Radon Project has recommended an action level of 100 Bq/m3 (2.7 pCi/L) for radon in the air. Radon in the air is considered to be a larger health threat than radon in domestic water. The US Environmental Protection Agency recommendation is to not test for radon in water unless a radon in air test shows concentrations above the action level. However, some U.S. states, such as Maine where radon levels are higher than the national average, recommend that all well water should be tested for radon. The U.S. government has not set an action level for radon in water. Air-radon levels fluctuate naturally on a daily and seasonal basis. A short term test (90 days or less) might not reflect the long-term average radon concentration.
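The two units quoted above are related by a fixed factor (1 pCi/L = 37 Bq/m3, since 1 Ci = 3.7 × 10^10 Bq and 1 m3 = 1000 L), which the following snippet makes explicit. The 4 pCi/L figure in the usage example is the commonly cited US EPA action level for air, a fact not taken from this article.

    # Conversion between the two common radon-concentration units.
    def bq_per_m3_to_pci_per_l(bq):
        return bq / 37.0

    def pci_per_l_to_bq_per_m3(pci):
        return pci * 37.0

    print(bq_per_m3_to_pci_per_l(100))  # WHO action level: 100 Bq/m3 ~ 2.7 pCi/L
    print(pci_per_l_to_bq_per_m3(4))    # commonly cited US EPA level: 4 pCi/L = 148 Bq/m3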
https://en.wikipedia.org/wiki/Matthew%20Hennessy
Matthew Hennessy is an Irish computer scientist who has contributed especially to concurrency, process calculi and programming language semantics. Career During 1976–77, Matthew Hennessy was an assistant professor at the University of Waterloo in Canada. Then during 1977–78, he was a visiting professor at the Universidade Federal de Pernambuco in Brazil. Subsequently, he was a research associate (1979–81) and then lecturer (1981–85) at the University of Edinburgh in Scotland. During 1985, he was a guest lecturer/researcher at the University of Aarhus in Denmark. Hennessy was Professor of Computer Science at the Department of Informatics, University of Sussex, England, from 1985 until 2008. Since then, Hennessy has held a research professorship at the Department of Computer Science, Trinity College, Dublin. Hennessy's research interests are in the area of the semantic foundations of programming and specification languages, particularly involving distributed computing, including mobile computing. He also has an interest in verification tools. His co-authors include Robin Milner and Gordon Plotkin. Hennessy is a member of the Academy of Europe. He held a Royal Society/Leverhulme Trust Senior Research Fellowship during 2005–06 and has a Science Foundation Ireland Research Professorship at Trinity College Dublin. Books Matthew Hennessy has written a number of books: Hennessy, Matthew. A Distributed Pi-Calculus. Cambridge University Press, Cambridge, UK, 2007. . Hennessy, Matthew. Algebraic Theory of Processes. The MIT Press, Cambridge, Massachusetts, 1988. . Hennessy, Matthew. The Semantics of Programming Languages: An Elementary Introduction using Structural Operational Semantics. John Wiley and Sons, New York, 1990. . See also Hennessy–Milner logic Ó hAonghusa References External links Matthew Hennessy Trinity College Dublin home page Year of birth missing (living people) Living people 20th-century Irish people 21st-century Irish people Irish computer
https://en.wikipedia.org/wiki/Nintendo%20Wi-Fi%20USB%20Connector
The Nintendo Wi-Fi USB Connector is a wireless game adapter, developed by Nintendo and Buffalo Technology, which allows Nintendo DS, Wii and 3DS users without a Wi-Fi connection or compatible Wi-Fi network to establish an Internet connection via a broadband-connected PC. When inserted into the host PC's USB port, the connector functions with the Nintendo DS, Wii, DSi and 3DS, permitting the user to connect to the Internet and play Nintendo games that require a Wi-Fi connection and access various other online services. According to the official Nintendo website, as of 15 November 2007 this product was the best-selling Nintendo accessory to date, though it was discontinued that same month. On September 9, 2005, Nintendo announced the Nintendo Wi-Fi Network Adapter, an 802.11g wireless router/bridge which serves a similar purpose.

Functionality

The Nintendo Wi-Fi USB Connector is essentially a re-branded version of the Buffalo WLI-U2-KG54-YB. The Buffalo WLI-U2-KG54-YB is often confused with the Buffalo WLI-U2-KG54-AI because the two adapters are almost identical, differing only in that the Buffalo WLI-U2-KG54-AI features flash memory to allow for auto installation. Both are based on the Ralink RT2570 chipset. This differentiated the Nintendo Wi-Fi USB Connector from most other Wi-Fi adapters, as it could operate as a software access point (also known as a soft AP). At the time of the Nintendo Wi-Fi USB Connector's release, few Wi-Fi adapters could do this on the Windows operating system, as Windows lacked both the software necessary to configure a soft AP and drivers capable of supporting one natively. By bundling a soft AP compatible device with their own proprietary software, Nintendo was able to overcome the limitations of Windows and simplified the otherwise complicated process of putting a supported device into soft AP mode, configuring it, and routing Internet traffic over it. Additionally, a number of community development tools and drivers exist.
https://en.wikipedia.org/wiki/Dragon%20Buster
Dragon Buster is a platform action role-playing dungeon crawl game developed by Namco and released in 1984. It runs on Namco Pac-Land hardware, modified to support vertical scrolling. In Japan, the game was ported to the Family Computer (Famicom), MSX, and X68000; the latter version was later released for the Virtual Console in the same region on November 18, 2008. Dragon Buster has been ported to the PSP and is available as part of Namco Museum Battle Collection. It was followed by a Japan-only Famicom sequel, Dragon Buster II: Yami no Fūin, and later by the PlayStation game Dragon Valor, which was both a remake and a sequel. The game has side-scrolling platform gameplay and an overworld map similar to the later platform games for home consoles and personal computers. Dragon Buster was also the earliest game to feature a double jump mechanic, and one of the first to use a visual health meter.

Plot

In the beginning, a prince named Clovis was born, the son of the chief bodyguard to the kingdom's royal Lawrence family. As a young child, Clovis was very mischievous and undisciplined, so his father thought it best to place him under the care of a monk who lived in the woods far from the kingdom. Under the monk's care, Clovis began to acquire various kinds of knowledge, including how to be a superior swordsman. When word reached the monk that King Lawrence's 16-year-old daughter Celia had been abducted and was held by a fearsome dragon, who wished to break the kingdom's spirit and coerce the kingdom into doing his bidding, Clovis felt a sense of duty to chase after the dragon and rescue Celia in the name of his father. In order to save the Princess, he trained daily with the monk and learned to withstand injury, whether cut by swords or burned by flame, and still remain just as capable a fighter as ever.

Gameplay

The player must guide the hero Clovis through each round on to the castle to rescue his beloved Princess Celia. There are multiple Princess Celias in the game,
https://en.wikipedia.org/wiki/E-social%20science
E-social science is a recent development within the broader e-science movement: it is social science using grid computing and other information technologies to collect, process, integrate, share, and disseminate social and behavioural data.

External links
UK National Centre for e-Social Science Web Home Page
Oxford e-Social Science: This project has focused on the ethical, legal and institutional factors shaping e-Science.
ReDReSS project: This site provides resources for social scientists interested in using e-Social Science and e-Science tools and methodologies.
Collaboratory for Quantitative e-Social Science
Chinese e-Social Science

E-Science Cyberinfrastructure
https://en.wikipedia.org/wiki/Event%20monitoring
In computer science, event monitoring is the process of collecting, analyzing, and signaling event occurrences to subscribers such as operating system processes, active database rules, and human operators. These event occurrences may stem from arbitrary sources in both software and hardware, such as operating systems, database management systems, application software and processors. Event monitoring may use a time series database.

Basic concepts

Event monitoring makes use of a logical bus to transport event occurrences from sources to subscribers, where event sources signal event occurrences to all event subscribers and event subscribers receive event occurrences. An event bus can be distributed over a set of physical nodes such as standalone computer systems. Typical examples of event buses are found in graphical systems such as the X Window System and Microsoft Windows, as well as in development tools such as SDT. Event collection is the process of collecting event occurrences in a filtered event log for analysis. A filtered event log contains the logged event occurrences that can be of meaningful use in the future; this implies that event occurrences can be removed from the filtered event log if they are useless in the future. Event log analysis is the process of analyzing the filtered event log to aggregate event occurrences or to decide whether or not an event occurrence should be signalled. Event signalling is the process of signalling event occurrences over the event bus. Something that is monitored is denoted the monitored object; for example, an application, an operating system, a database, hardware etc. can be monitored objects. A monitored object must be properly conditioned with event sensors to enable event monitoring, that is, an object must be instrumented with event sensors to be a monitored object. Event sensors are sensors that signal event occurrences whenever an event occurs. Whenever something is monitored, the probe effect must be managed. Monitored object
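The bus, subscriber and filtered-log vocabulary above maps directly onto a few lines of code. The following is a minimal in-process sketch in Python; the class and event names are invented for illustration.

    from collections import defaultdict

    class EventBus:
        """Minimal in-process event bus: sources signal, subscribers receive."""
        def __init__(self):
            self._subscribers = defaultdict(list)
            self.filtered_log = []          # filtered event log kept for analysis

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def signal(self, event_type, payload, keep=lambda e: True):
            event = {"type": event_type, "payload": payload}
            if keep(event):                  # event collection: filter before logging
                self.filtered_log.append(event)
            for handler in self._subscribers[event_type]:
                handler(event)               # event signalling to each subscriber

    bus = EventBus()
    bus.subscribe("disk_full", lambda e: print("alert:", e["payload"]))
    bus.signal("disk_full", {"host": "db01", "free_mb": 12})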
https://en.wikipedia.org/wiki/Technical%20illustration
Technical illustration is illustration meant to visually communicate information of a technical nature. Technical illustrations can be components of technical drawings or diagrams. Technical illustrations in general aim "to generate expressive images that effectively convey certain information via the visual channel to the human observer". Technical illustrations generally have to describe and explain the subjects to a nontechnical audience. Therefore, the visual image should be accurate in terms of dimensions and proportions, and should provide "an overall impression of what an object is or does, to enhance the viewer's interest and understanding".

Types

Types of communication

Today, technical illustration can be broken down into three categories based on the type of communication:

Communication with the general public: informs the general public, for example illustrated instructions found in the manuals for automobiles and consumer electronics. This type of technical illustration contains simple terminology and symbols that can be understood by the lay person and is sometimes called creative technical illustration/graphics.

Specialized engineering or scientific communication: used by engineers and scientists to communicate with their peers and in specifications. This use of technical illustration has its own complex terminology and specialized symbols; examples are the fields of atomic energy, aerospace and military/defense. These areas can be further broken down into disciplines of mechanical, electrical, architectural engineering and many more.

Communication between highly skilled experts: used by engineers to communicate with people who are highly skilled in a field, but who are not engineers. Examples of this type of technical illustration are illustrations found in user/operator documentation. These illustrations can be very complex and have jargon and symbols not understood by the general public, such as illustrations that are part of instructional materials.
https://en.wikipedia.org/wiki/Spectrum%20of%20a%20theory
In model theory, a branch of mathematical logic, the spectrum of a theory is given by the number of isomorphism classes of models in various cardinalities. More precisely, for any complete theory T in a language we write I(T, κ) for the number of models of T (up to isomorphism) of cardinality κ. The spectrum problem is to describe the possible behaviors of I(T, κ) as a function of κ. It has been almost completely solved for the case of a countable theory T.

Early results

In this section T is a countable complete theory and κ is a cardinal. The Löwenheim–Skolem theorem shows that if I(T, κ) is nonzero for one infinite cardinal then it is nonzero for all of them. Morley's categoricity theorem was the first main step in solving the spectrum problem: it states that if I(T, κ) is 1 for some uncountable κ then it is 1 for all uncountable κ. Robert Vaught showed that I(T, ℵ0) cannot be 2. It is easy to find examples where it is any given non-negative integer other than 2. Morley proved that if I(T, ℵ0) is infinite then it must be ℵ0 or ℵ1 or 2^ℵ0. It is not known if it can be ℵ1 if the continuum hypothesis is false: this is called the Vaught conjecture and is the main remaining open problem (in 2005) in the theory of the spectrum. Morley's problem was a conjecture (now a theorem) first proposed by Michael D. Morley that I(T, κ) is nondecreasing in κ for uncountable κ. This was proved by Saharon Shelah. For this, he proved a very deep dichotomy theorem. Saharon Shelah gave an almost complete solution to the spectrum problem. For a given complete theory T, either I(T, κ) = 2^κ for all uncountable cardinals κ, or $I(T, \aleph_\xi) < \beth_{\omega_1}(|\xi|)$ for all ordinals ξ (see Aleph number and Beth number for an explanation of the notation), a bound which is usually much smaller than the bound in the first case. Roughly speaking this means that either there are the maximum possible number of models in all uncountable cardinalities, or there are only "few" models in all uncountable cardinalities. Shelah also gave a de
https://en.wikipedia.org/wiki/Parasitic%20structure
In a semiconductor device, a parasitic structure is a portion of the device that resembles in structure some other, simpler semiconductor device, and causes the device to enter an unintended mode of operation when subjected to conditions outside of its normal range. For example, the internal structure of an NPN bipolar transistor resembles two P-N junction diodes connected together by a common anode. In normal operation the base-emitter junction does indeed form a diode, but in most cases it is undesirable for the base-collector junction to behave as a diode. If a sufficient forward bias is placed on this junction it will form a parasitic diode structure, and current will flow from base to collector. A common parasitic structure is that of a silicon controlled rectifier (SCR). Once triggered, an SCR conducts for as long as there is a current, necessitating a complete power-down to reset the behavior of the device. This condition is known as latchup. References Further reading Electrical circuits
https://en.wikipedia.org/wiki/Friendly%20number
In number theory, friendly numbers are two or more natural numbers with a common abundancy index, the ratio between the sum of divisors of a number and the number itself. Two numbers with the same "abundancy" form a friendly pair; n numbers with the same "abundancy" form a friendly n-tuple. Being mutually friendly is an equivalence relation, and thus induces a partition of the positive naturals into clubs (equivalence classes) of mutually "friendly numbers". A number that is not part of any friendly pair is called solitary. The "abundancy" index of n is the rational number σ(n) / n, in which σ denotes the sum of divisors function. A number n is a "friendly number" if there exists m ≠ n such that σ(m) / m = σ(n) / n. "Abundancy" is not the same as abundance, which is defined as σ(n) − 2n. "Abundancy" may also be expressed as where denotes a divisor function with equal to the sum of the k-th powers of the divisors of n. The numbers 1 through 5 are all solitary. The smallest "friendly number" is 6, forming for example, the "friendly" pair 6 and 28 with "abundancy" σ(6) / 6 = (1+2+3+6) / 6 = 2, the same as σ(28) / 28 = (1+2+4+7+14+28) / 28 = 2. The shared value 2 is an integer in this case but not in many other cases. Numbers with "abundancy" 2 are also known as perfect numbers. There are several unsolved problems related to the "friendly numbers". In spite of the similarity in name, there is no specific relationship between the friendly numbers and the amicable numbers or the sociable numbers, although the definitions of the latter two also involve the divisor function. Examples As another example, 30 and 140 form a friendly pair, because 30 and 140 have the same "abundancy": The numbers 2480, 6200 and 40640 are also members of this club, as they each have an "abundancy" equal to 12/5. For an example of odd numbers being friendly, consider 135 and 819 ("abundancy" 16/9 (deficient)). There are also cases of even being "friendly" to odd, such as 42 and 54463
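A club of mutually friendly numbers can be found by exhaustive search over the abundancy index. A minimal sketch (the search bound of 10,000 is arbitrary):

    from fractions import Fraction
    from collections import defaultdict

    def abundancy(n):
        """sigma(n)/n as an exact rational number."""
        sigma, d = 0, 1
        while d * d <= n:                 # sum divisors in pairs (d, n//d)
            if n % d == 0:
                sigma += d
                if d != n // d:
                    sigma += n // d
            d += 1
        return Fraction(sigma, n)

    clubs = defaultdict(list)
    for n in range(1, 10001):
        clubs[abundancy(n)].append(n)

    # numbers sharing an abundancy index are mutually friendly
    print(clubs[Fraction(2)])        # perfect numbers: [6, 28, 496, 8128]
    print(clubs[Fraction(12, 5)])    # the 30, 140, 2480, 6200 club (40640 exceeds the bound)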
https://en.wikipedia.org/wiki/Redundant%20code
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, such as: recomputing a value that has previously been calculated and is still available, code that is never executed (known as unreachable code), code which is executed but has no external effect (e.g., does not change the output produced by a program; known as dead code). A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations.

Examples

The following examples are in C.

int foo(int iX)
{
    int iY = iX*2;
    return iX*2;
}

The second iX*2 expression is redundant code and can be replaced by a reference to the variable iY. Alternatively, the definition int iY = iX*2 can instead be removed. Consider:

#include <math.h>

#define min(A,B) ((A)<(B)?(A):(B))

int shorter_magnitude(int u1, int v1, int u2, int v2)
{
    /* Returns the shorter magnitude of (u1,v1) and (u2,v2) */
    return sqrt(min(u1*u1 + v1*v1, u2*u2 + v2*v2));
}

As a consequence of using the C preprocessor, the compiler will only see the expanded form:

int shorter_magnitude(int u1, int v1, int u2, int v2)
{
    int temp;
    if (u1*u1 + v1*v1 < u2*u2 + v2*v2)
        temp = u1*u1 + v1*v1; /* Redundant: already calculated for the comparison */
    else
        temp = u2*u2 + v2*v2; /* Redundant: already calculated for the comparison */
    return sqrt(temp);
}

Because the use of min/max macros is very common, modern compilers are programmed to recognize and eliminate redundancy caused by their use. There is no redundancy, however, in the following code:

#define max(A,B) ((A)>(B)?(A):(B))

int random(int cutoff, int range)
{
    return max(cutoff, rand()%range);
}

If the initial call to rand(), modulo range, is greater than or equal to cutoff, rand() will be called a second time.
https://en.wikipedia.org/wiki/Matrox%20Parhelia
The Matrox Parhelia-512 is a graphics processing unit (GPU) released by Matrox in 2002. It has full support for DirectX 8.1 and incorporates several DirectX 9.0 features. At the time of its release, it was best known for its ability to drive three monitors ("Surround Gaming") and its Coral Reef tech demo. As had happened with previous Matrox products, the Parhelia was released just before competing companies released cards that completely outperformed it. In this case it was the ATI Radeon 9700, released only a few months later. The Parhelia remained a niche product, and was Matrox's last major effort to sell into the consumer market.

Background

The Parhelia series was Matrox's attempt to return to the market after a long hiatus, their first significant effort since the G200 and G400 lines had become uncompetitive. Their other post-G400 products, the G450 and G550, were cost-reduced revisions of G400 technology and were not competitive with ATI's Radeon or NVIDIA's GeForce lines with regard to 3D computer graphics.

Description

Features

The Parhelia-512 was the first GPU by Matrox to be equipped with a 256-bit memory bus, giving it an advantage over other cards of the time in the area of memory bandwidth. The "-512" suffix refers to the 512-bit ring bus. The Parhelia processor featured glyph acceleration, whereby anti-aliasing of text was accelerated by the hardware. The Parhelia-512 includes 4 32×4 vertex shaders with a dedicated displacement mapping engine, and a pixel shader array with 4 texturing units and a 5-stage pixel shader per pixel pipeline. It supports 16× fragment anti-aliasing; all of these features were shown prominently in Matrox's Coral Reef technical demo. The display controller supports a 10-bit color frame buffer (called "Gigacolor") with 10-bit 400 MHz RAMDACs on 2 RGB ports and a 230 MHz RAMDAC on the TV encoder port, which was an improvement over its competitors. The frame buffer is in RGBA (10:10:10:2) format, and supports full gamma correction. Dual link TMDS is
https://en.wikipedia.org/wiki/Matrox%20G400
The G400 is a video card made by Matrox, released in September 1999. The graphics processor contains a 2D GUI, video, and Direct3D 6.0 3D accelerator. Codenamed "Toucan", it was a more powerful and refined version of its predecessor, the G200. Overview The Matrox G200 graphics processor had been a successful product, competing with the various 2D & 3D combination cards available in 1998. Matrox took the technology developed from the G200 project, refined it, and essentially doubled it up to form the G400 processor. The new chip featured several new and innovative additions, such as multiple monitor output support, an all-around 32-bit rendering pipeline with high performance, further improved 2D and video acceleration, and a new 3D feature known as Environment Mapped Bump Mapping. Internally the G400 is a 256-bit processor, using what Matrox calls a "DualBus" architecture. This is an evolution of G200's "DualBus", which had been 128-bit. A Matrox "DualBus" chip consists of twin unidirectional buses internally, each moving data into or out of the chip. This increases the efficiency and bandwidth of data flow within the chip to each of its functional units. G400's 3D engine consists of 2 parallel pixel pipelines with 1 texture unit each, providing single-pass dual-texturing capability. The Millennium G400 MAX is capable of 333 megapixels per second fillrate at its 166 MHz core clock speed. It is purely a Direct3D 6.0 accelerator and, as such, lacks support for the later hardware transform and lighting acceleration of Direct3D 7.0 cards. The chip's external memory interface is 128-bit and is designed to use either SDRAM or SGRAM. Matrox released both 16 MiB and 32 MiB versions of the G400 boards, and used both types of RAM. The slowest models are equipped with 166 MHz SDRAM, while the fastest (G400 MAX) uses 200 MHz SGRAM. G400MAX had the highest memory bandwidth of any card before the release of the DDR-equipped version of NVIDIA GeForce 256. Perhaps the most not
https://en.wikipedia.org/wiki/P-y%20method
In geotechnical civil engineering, the p–y method is a method of analyzing the ability of deep foundations to resist loads applied in the lateral direction. This method uses the finite difference method and p–y graphs to find a solution. P–y graphs are graphs which relate the force applied to the soil to the lateral deflection of the soil. In essence, non-linear springs are attached to the foundation in place of the soil. The springs can be represented by the equation p = k·y, where k is the non-linear spring stiffness defined by the p–y curve, y is the deflection of the spring, and p is the force applied to the spring. The p–y curves vary depending on soil type. The available geotechnical engineering software programs for the p–y method include FB-MultiPier by the Bridge Software Institute, DeepFND by Deep Excavation LLC, PileLAT by Innovative Geotechnics, LPile by Ensoft, and PyPile by Yong Technology.

References
Salgado, R. (2007). "The Engineering of Foundations." McGraw-Hill, in press.
Hasani, H., Golafshani, A., Estekanchi, H. (2017). "Seismic performance evaluation of jacket-type offshore platforms using endurance time method considering soil-pile-superstructure interaction." Scientia Iranica 24(4): 1843–1854. doi: 10.24200/sci.2017.4275 http://scientiairanica.sharif.edu/article_4275_f79d8b4fdd0cc8d159b91b1a3b968585.pdf

Soil mechanics
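To illustrate what a non-linear spring means in practice, the sketch below solves for the equilibrium deflection of a single spring under a lateral load, using Python with scipy (assumed available). The hyperbolic-tangent curve shape and all numerical values are invented for illustration and do not come from any of the programs named above.

    import math
    from scipy.optimize import brentq

    P_ULT = 150.0    # assumed ultimate soil resistance per unit length (kN/m)
    K_INI = 8000.0   # assumed initial stiffness of the p-y curve (kN/m per m)

    def p_of_y(y):
        """Soil resistance p (kN/m) mobilised at lateral deflection y (m)."""
        return P_ULT * math.tanh(K_INI * y / P_ULT)

    # Equilibrium for one spring of tributary length L (m) under lateral load H (kN):
    # find the deflection y at which p(y) * L balances H.
    L, H = 2.0, 180.0
    y_eq = brentq(lambda y: p_of_y(y) * L - H, 0.0, 1.0)
    print(f"deflection = {1000 * y_eq:.1f} mm, mobilised p = {p_of_y(y_eq):.1f} kN/m")

A full p–y analysis couples many such springs to the beam-bending equation of the pile and solves the resulting system by finite differences, as the article describes.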
https://en.wikipedia.org/wiki/Plain%20old%20CLR%20object
In software engineering, a plain old CLR object, or plain old class object (POCO) is a simple object created in the .NET Common Language Runtime (CLR) that is unencumbered by inheritance or attributes. This is often used in opposition to the complex or specialized objects that object-relational mapping frameworks often require. In essence, a POCO does not have any dependency on an external framework. Etymology Plain Old CLR Object is a play on the term plain old Java object from the Java EE programming world, which was coined by Martin Fowler in 2000. POCO is often expanded to plain old C# object, though POCOs can be created with any language targeting the CLR. An alternative acronym sometimes used is plain old .NET object. Benefits Some benefits of POCOs are: allows a simple storage mechanism for data, and simplifies serialization and passing data through layers; goes hand-in-hand with dependency injection and the repository pattern; minimised complexity and dependencies on other layers (higher layers only care about the POCOs, POCOs don't care about anything) which facilitates loose coupling; increases testability through simplification. See also Plain old data structure Plain old Java object Data transfer object References .NET terminology
https://en.wikipedia.org/wiki/JavaScript%20syntax
The syntax of JavaScript is the set of rules that define a correctly structured JavaScript program. The examples below make use of the log function of the console object present in most browsers for standard text output. The JavaScript standard library lacks an official standard text output function (with the exception of document.write). Given that JavaScript is mainly used for client-side scripting within modern web browsers, and that almost all web browsers provide the alert function, alert can also be used, though this is not common practice.

Origins

Brendan Eich summarized the ancestry of the syntax in the first paragraph of the JavaScript 1.1 specification as follows:

Basics

Case sensitivity

JavaScript is case sensitive. It is common to start the name of a constructor with a capitalised letter, and the name of a function or variable with a lower-case letter. Example:

var a = 5;
console.log(a); // 5
console.log(A); // throws a ReferenceError: A is not defined

Whitespace and semicolons

Unlike in C, whitespace in JavaScript source can directly impact semantics. Semicolons end statements in JavaScript. Because of automatic semicolon insertion (ASI), some statements that are well formed when a newline is parsed will be considered complete, as if a semicolon were inserted just prior to the newline. Some authorities advise supplying statement-terminating semicolons explicitly, because it may lessen unintended effects of the automatic semicolon insertion. There are two issues: five tokens can either begin a statement or be the extension of a complete statement; and five restricted productions, where line breaks are not allowed in certain positions, potentially yielding incorrect parsing. The five problematic tokens are the open parenthesis "(", open bracket "[", slash "/", plus "+", and minus "-". Of these, the open parenthesis is common in the immediately invoked function expression pattern, and open bracket occurs sometimes, while others are quite rare. An example:
https://en.wikipedia.org/wiki/High%20availability
High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period. Modernization has resulted in an increased reliance on these systems. For example, hospitals and data centers require high availability of their systems to perform routine daily activities. Availability refers to the ability of the user community to obtain a service or good, access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable. Resilience High availability is a property of network resilience, the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Threats and challenges for services can range from simple misconfiguration over large scale natural disasters to targeted attacks. As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected. The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network. These services include: supporting distributed processing supportin
https://en.wikipedia.org/wiki/Richard%20M.%20Dudley
Richard Mansfield Dudley (July 28, 1938 – January 19, 2020) was Professor of Mathematics at the Massachusetts Institute of Technology. Education and career Dudley was born in Cleveland, Ohio. He earned his BA at Harvard College and received his PhD at Princeton University in 1962 under the supervision of Edward Nelson and Gilbert Hunt. He was a Putnam Fellow in 1958. He was an instructor and assistant professor at University of California, Berkeley between 1962 and 1967, before moving to MIT as a professor in mathematics, where he stayed from 1967 until 2015, when he retired. He died on January 19, 2020, following a long illness. Research His work mainly concerned fields of probability, mathematical statistics, and machine learning, with highly influential contributions to the theory of Gaussian processes and empirical processes. He published over a hundred papers in peer-reviewed journals and authored several books. His specialty was probability theory and statistics, especially empirical processes. He is often noted for his results on the so-called Dudley entropy integral. In 2012 he became a fellow of the American Mathematical Society. Books References R. S. Wenocur and R. M. Dudley, "Some special Vapnik–Chervonenkis classes," Discrete Mathematics, vol. 33, pp. 313–318, 1981. External links Publications from Google Scholar. A Conversation with Dick Dudley 1938 births 2020 deaths 20th-century American mathematicians 21st-century American mathematicians American statisticians Probability theorists Princeton University alumni Massachusetts Institute of Technology School of Science faculty Fellows of the American Mathematical Society Fellows of the American Statistical Association Putnam Fellows Harvard College alumni Annals of Probability editors Mathematical statisticians
https://en.wikipedia.org/wiki/Air%20gap%20%28networking%29
An air gap, air wall, air gapping or disconnected network is a network security measure employed on one or more computers to ensure that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It means a computer or network has no network interface controllers connected to other networks, with a physical or conceptual air gap, analogous to the air gap used in plumbing to maintain water quality.

Use in classified settings

An air-gapped computer or network is one that has no network interfaces, either wired or wireless, connected to outside networks. Many computers, even when they are not plugged into a wired network, have a wireless network interface controller (WiFi) and are connected to nearby wireless networks to access the Internet and update software. This represents a security vulnerability, so air-gapped computers either have their wireless interface controller permanently disabled or physically removed. To move data between the outside world and the air-gapped system, it is necessary to write data to a physical medium such as a thumbdrive, and physically move it between computers. Physical access has to be controlled, covering both the identity of the person carrying the media and the storage media itself. This is easier to control than a direct full network interface, which can be attacked from the exterior insecure system and, if malware infects the secure system, can be used to export secure data. For this reason, newer hardware technologies are also available, such as unidirectional data diodes and bidirectional diodes (also called electronic air gaps), which physically separate the network and transport layers and copy and filter the application data. In environments where networks or devices are rated to handle different levels of classified information, the two disconnected devices or networks are referred to as low side and high side, low being unclassified and high referring to classified, or classified at a higher
https://en.wikipedia.org/wiki/Trace%20vector%20decoder
A Trace Vector Decoder (TVD) is computer software that uses the trace facility of its underlying microprocessor to decode encrypted instruction opcodes just-in-time prior to execution and possibly re-encode them afterwards. It can be used to hinder reverse engineering when attempting to prevent software cracking as part of an overall copy protection strategy.

Microprocessor tracing

Certain microprocessor families (e.g. 680x0, x86) provide the capability to trace instructions to aid in program development. A debugger might use this capability to single step through a program, providing the means for a programmer to monitor the execution of the program under test. By installing a custom handler for the trace exception, it is possible to gain control of the microprocessor between the execution of normal program flow instructions. A typical trace vector decoder exception handler decodes the upcoming instruction located outside the exception, as well as re-encoding the previously decoded instruction.

Implementations

Motorola 680x0

The Motorola 68000 has an instruction-by-instruction tracing facility. When its trace state is enabled, the processor automatically forces a trace exception after each (non-exception) instruction is executed. The following assembly code snippet is an example of a program initializing a trace exception handler on a 68000 system.

InstallHandler:
    MOVE.L  #$4E730000,-(SP)   ; Push trace exception handler on to stack
    MOVE.L  #$00000010,-(SP)
    MOVE.L  #$0004DDB9,-(SP)
    MOVE.L  #$BD96BDAE,-(SP)
    MOVE.L  #$B386B586,-(SP)
    MOVE.L  #$D046D246,-(SP)
    MOVE.L  #$0246A71F,-(SP)
    MOVE.L  #$00023C17,-(SP)
    MOVE.W  #$2C6F,-(SP)
    MOVE.L  SP,($24).W         ; Set trace exception handler vector
    ORI.W   #$A71F,SR          ; Enable trace state
    NOP                        ; CPU generates a trace exception
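The decode/execute/re-encode cycle can be illustrated in a higher-level language. The following Python sketch is a conceptual analogue only, not the 68000 mechanism: the interpreter loop plays the role of the trace handler, and the opcodes and XOR key are invented for the example.

    # Each "instruction" is stored XOR-encrypted and is decoded just before it
    # runs, then re-encrypted afterwards, mimicking a per-instruction trace hook.
    KEY = 0x5A

    def encode(ops):
        return [op ^ KEY for op in ops]

    def run(encrypted, handlers):
        for i in range(len(encrypted)):
            op = encrypted[i] ^ KEY          # "trace handler": decode next opcode
            handlers[op]()                   # execute the decoded instruction
            encrypted[i] = op ^ KEY          # re-encode it afterwards

    program = encode([0, 1, 0])              # plaintext opcodes are never stored at rest
    run(program, {0: lambda: print("op A"), 1: lambda: print("op B")})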
https://en.wikipedia.org/wiki/Cake%20number
In mathematics, the cake number, denoted by Cn, is the maximum number of regions into which a 3-dimensional cube can be partitioned by exactly n planes. The cake number is so-called because one may imagine each partition of the cube by a plane as a slice made by a knife through a cube-shaped cake. It is the 3D analogue of the lazy caterer's sequence. The values of Cn for n = 0, 1, 2, ... are given by 1, 2, 4, 8, 15, 26, 42, 64, 93, 130, ....

General formula

If n! denotes the factorial and the binomial coefficients are denoted by $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$, and we assume that n planes are available to partition the cube, then the n-th cake number is: $C_n = \binom{n}{3} + \binom{n}{2} + \binom{n}{1} + \binom{n}{0} = \frac{1}{6}\left(n^3 + 5n + 6\right)$.

Properties

The only cake number which is prime is 2. This follows from the factorisation $6C_n = (n+1)(n^2 - n + 6)$: for $C_n$ to be a prime $p$, the two factors on the right would have to distribute the factor 6 between them in one of only a few ways, and each case fails except $n = 1$, which gives $C_1 = 2$. The cake numbers are the 3-dimensional analogue of the 2-dimensional lazy caterer's sequence. The difference between successive cake numbers also gives the lazy caterer's sequence. The fourth column of Bernoulli's triangle (k = 3) gives the cake numbers for n cuts, where n ≥ 3. The sequence can be alternatively derived from the sum of up to the first 4 terms of each row of Pascal's triangle:

{| class="wikitable" style="text-align:right;"
!  !! 0 !! 1 !! 2 !! 3 !! Sum
|-
! 1
| 1 || — || — || — || 1
|-
! 2
| 1 || 1 || — || — || 2
|-
! 3
| 1 || 2 || 1 || — || 4
|-
! 4
| 1 || 3 || 3 || 1 || 8
|-
! 5
| 1 || 4 || 6 || 4 || 15
|-
! 6
| 1 || 5 || 10 || 10 || 26
|-
! 7
| 1 || 6 || 15 || 20 || 42
|-
! 8
| 1 || 7 || 21 || 35 || 64
|-
! 9
| 1 || 8 || 28 || 56 || 93
|-
! 10
| 1 || 9 || 36 || 84 || 130
|}

References
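The general formula is a one-liner to check numerically; the following sketch reproduces the opening values of the sequence.

    from math import comb

    def cake_number(n):
        """Maximum pieces of a cube after n planar cuts: C(n,0)+C(n,1)+C(n,2)+C(n,3)."""
        return sum(comb(n, k) for k in range(4))

    print([cake_number(n) for n in range(10)])
    # [1, 2, 4, 8, 15, 26, 42, 64, 93, 130]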
https://en.wikipedia.org/wiki/Block%20walking
In combinatorial mathematics, block walking is a method useful in thinking about sums of combinations graphically as "walks" on Pascal's triangle. As the name suggests, block walking problems involve counting the number of ways an individual can walk from one corner A of a city block to another corner B of another city block given restrictions on the number of blocks the person may walk, the directions the person may travel, the distance from A to B, et cetera.

An example block walking problem

Suppose such an individual, say "Fred", must walk exactly k blocks to get to a point B that is exactly k blocks from A. It is convenient to regard Fred's starting point A as the origin, (0, 0), of a rectangular array of lattice points and B as some lattice point (e, n), e units "East" and n units "North" of A, where e + n = k and both e and n are nonnegative.

Solution by brute force

A "brute force" solution to this problem may be obtained by systematically counting the number of ways Fred can reach each point (x, y) with 0 ≤ x ≤ e and 0 ≤ y ≤ n without backtracking (i.e. only traveling North or East from one point to another) until a pattern is observed. For example, the number of ways Fred could go from (0, 0) to (1, 0) or (0, 1) is exactly one; to (1, 1) is two; to (2, 0) or (0, 2) is one; to (2, 1) or (1, 2) is three; and so on. In general, the number of ways to reach a particular point is the sum of the number of ways to reach the point immediately south of it and the number of ways to reach the point immediately west of it (with exactly one way to reach the starting point and each point lying due North or due East of it along the axes). One soon discovers that the number of paths from A to any such point X corresponds to an entry of Pascal's Triangle.

Combinatorial solution

Since the problem involves counting a finite, discrete number of paths between lattice points, it is reasonable to assume a combinatorial solution exists to the problem. Towards this end, we note that for Fred to still be on a path that will take him from A to B over e + n blocks, at any point X he must either move one block East or one block North.
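Both the brute-force recurrence and the closed-form binomial count are easy to express in code; the following minimal sketch cross-checks them (the coordinates used are illustrative).

    from math import comb

    def count_paths(e, n):
        """Lattice paths from (0,0) to (e,n) moving only East or North."""
        return comb(e + n, e)

    def count_paths_dp(e, n):
        """Brute-force check: each point is reached via its south/west neighbour."""
        ways = [[1] * (n + 1) for _ in range(e + 1)]   # one way along each axis
        for x in range(1, e + 1):
            for y in range(1, n + 1):
                ways[x][y] = ways[x - 1][y] + ways[x][y - 1]
        return ways[e][n]

    print(count_paths(2, 1), count_paths_dp(2, 1))   # 3 3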
https://en.wikipedia.org/wiki/Epistemic%20modal%20logic
Epistemic modal logic is a subfield of modal logic that is concerned with reasoning about knowledge. While epistemology has a long philosophical tradition dating back to Ancient Greece, epistemic logic is a much more recent development with applications in many fields, including philosophy, theoretical computer science, artificial intelligence, economics and linguistics. While philosophers since Aristotle have discussed modal logic, and Medieval philosophers such as Avicenna, Ockham, and Duns Scotus developed many of their observations, it was C. I. Lewis who created the first symbolic and systematic approach to the topic, in 1912. It continued to mature as a field, reaching its modern form in 1963 with the work of Kripke. Historical development Many papers were written in the 1950s that spoke of a logic of knowledge in passing, but the Finnish philosopher G. H. von Wright's 1951 paper titled An Essay in Modal Logic is seen as a founding document. It was not until 1962 that another Finn, Hintikka, would write Knowledge and Belief, the first book-length work to suggest using modalities to capture the semantics of knowledge rather than the alethic statements typically discussed in modal logic. This work laid much of the groundwork for the subject, but a great deal of research has taken place since that time. For example, epistemic logic has been combined recently with some ideas from dynamic logic to create dynamic epistemic logic, which can be used to specify and reason about information change and exchange of information in multi-agent systems. The seminal works in this field are by Plaza, Van Benthem, and Baltag, Moss, and Solecki. Standard possible worlds model Most attempts at modeling knowledge have been based on the possible worlds model. In order to do this, we must divide the set of possible worlds between those that are compatible with an agent's knowledge, and those that are not. This generally conforms with common usage. If I know that it is eithe
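The division of possible worlds described above can be made concrete in a few lines. The following Python sketch is illustrative only; the worlds, the accessibility relation, and the single atomic fact are invented for the example.

    # A small possible-worlds (Kripke) model for a single agent.
    worlds = {"w1", "w2", "w3"}
    access = {                       # worlds the agent considers possible
        "w1": {"w1", "w2"},
        "w2": {"w1", "w2"},
        "w3": {"w3"},
    }
    true_at = {                      # which atomic facts hold in which worlds
        "raining": {"w1", "w3"},
    }

    def knows(agent_world, fact):
        """K(fact): fact holds in every world compatible with the agent's knowledge."""
        return all(w in true_at[fact] for w in access[agent_world])

    print(knows("w1", "raining"))    # False: w2 is considered possible, and it is dry there
    print(knows("w3", "raining"))    # True: every accessible world satisfies the fact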
https://en.wikipedia.org/wiki/Chicken%20feet
Chicken feet are cooked and eaten in many countries. After an outer layer of hard skin is removed, most of the edible tissue on the feet consists of skin and tendons, with no muscle. This gives the feet a distinct gelatinous texture different from the rest of the chicken meat. Around the world China Chicken feet are used in several regional Chinese cuisines; they can be served as a beer snack, cold dish, soup or main dish. They are interchangeably called Fèng zhuǎ (鳯爪, phoenix claws), Jī zhuǎ (鷄爪, chicken claws), and Jī jiǎo (雞脚, chicken feet). In Guangdong and Hong Kong, they are typically deep fried and steamed first to make them puffy before being stewed and simmered in a sauce flavoured with black fermented beans, bean paste, and sugar; or in abalone sauce. In mainland China, popular snack bars specializing in marinated food such as yabozi (duck's necks) also sell lu ji zhua (鹵雞爪, marinated chicken feet), which are simmered with soy sauce, Sichuanese peppercorn, clove, garlic, star anise, cinnamon and chili flakes. Today, packaged chicken feet are sold in most grocery stores and supermarkets in China as a snack, often seasoned with rice vinegar and chili. Another popular recipe is bai yun feng zhao (), which is marinated in a sauce of rice vinegar, rice wine flavored with sugar, salt, and minced ginger for an extended period of time and served as a cold dish. In southern China, they also cook chicken feet with raw peanuts to make a thin soup. The huge demand in China raises the price of chicken feet, which are often used as fodder in other countries. As of June 2011, 1 kg of raw chicken feet costs around 12 to 16 yuan in China, compared to 11–12 yuan for 1 kg of frozen chicken breast. In 2000, Hong Kong, once the largest entrepôt for shipping chicken feet from over 30 countries, traded a total of 420,000 tons of chicken feet at the value of US$230 million. Two years after joining the WTO in 2001, China approved the direct import of American chicken feet,
https://en.wikipedia.org/wiki/Functional%20spinal%20unit
A functional spinal unit (FSU) (or motion segment) is the smallest physiological motion unit of the spine to exhibit biomechanical characteristics similar to those of the entire spine. An FSU consists of two adjacent vertebrae, the intervertebral disc and all adjoining ligaments between them, and excludes other connecting tissues such as muscles. The three-joint complex that results is sometimes referred to as the "articular triad". In vitro studies of isolated or multiple FSUs are often used to measure biomechanical properties of the spine. The typical load-displacement behavior of a cadaveric FSU specimen is nonlinear. Within the total range of passive motion of any FSU, the typical load-displacement curve consists of 2 regions or 'zones' that exhibit very different biomechanical behavior. In the vicinity of the resting neutral position of the FSU, this load-displacement behavior is highly flexible. This is the region known as the 'neutral zone', which is the motion region of the joint where the passive osteoligamentous stability mechanisms exert little or no influence. During passive physiological movement of the FSU, motion occurs in this region against minimal internal resistance. It is a region in which a small load causes a relatively large displacement. The 'elastic zone' is the remaining region of FSU motion that continues from the end of the neutral zone to the point of maximum resistance (provided by the passive osteoligamentous stability mechanism), thus limiting the range of motion.

References

Physiology Bones of the vertebral column Biomechanics
https://en.wikipedia.org/wiki/The%20Return%20of%20Ishtar
The Return of Ishtar is an action role-playing arcade video game released by Namco in 1986. It runs on Namco System 86 hardware and is the sequel to The Tower of Druaga, which was released two years earlier. The game's story begins immediately after the first game: Ki and Gil must venture down through the Tower of Druaga and escape it. It is the second game in the company's Babylonian Castle Saga series, and was later ported to the MSX, NEC PC-8801, FM-7, and Sharp X68000 platforms. The Return of Ishtar was included in the compilation game Namco Museum Volume 4 for the PlayStation, which was also the first time the game had been released overseas.

Gameplay

The Return of Ishtar is an adventure game that requires two players. It was also the first game from Namco to have a password feature, giving players the opportunity to continue from where they left off, and their first to not feature a scoring system. Player 1 controls the priestess Ki, who fights with magic, while Player 2 controls the sword-wielding Prince Gilgamesh. This sequel starts off directly after Gilgamesh has saved Ki from Druaga, and focuses on their escape from the tower (and its inhabitants), who are after Gilgamesh and Ki to avenge their former master. There are a total of 128 rooms in the sixty-floor tower, and the screen will only scroll according to Ki's location, so the second player will have to stay close to their partner as they traverse the tower. Ki attacks by casting spells at the enemies, while Gilgamesh automatically draws his sword whenever an enemy gets close enough to him, allowing him to attack the enemy by bumping into it with his blade (similar to Adol from the Ys games). However, colliding with enemies will also damage Gilgamesh, and the counter in the bottom-right of the screen will decrease by a preset amount, depending on what enemy type it was. If the counter reaches 0, he will disappear, and the game will be over for both players (which will also happen if Ki is touched by any enemy at all).
https://en.wikipedia.org/wiki/S-PLUS
S-PLUS is a commercial implementation of the S programming language sold by TIBCO Software Inc. It features object-oriented programming capabilities and advanced analytical algorithms. Due to the increasing popularity of the open source S successor R, TIBCO Software released the TIBCO Enterprise Runtime for R (TERR) as an alternative R interpreter.

Historical timeline

1988: S-PLUS is first produced by a Seattle-based start-up company called Statistical Sciences, Inc. The founder and sole owner is R. Douglas Martin, professor of statistics at the University of Washington, Seattle.
1993: Statistical Sciences acquires the exclusive license to distribute S and merges with MathSoft, becoming the firm's Data Analysis Products Division (DAPD).
1995: S-PLUS 3.3 for Windows 95/NT. Matrix library, command history, Trellis graphics.
1996: S-PLUS 3.4 for UNIX. Trellis graphics, (non-linear mixed effects) library, hexagonal binning, cluster methods.
1997: S-PLUS 4 for Windows. New GUI, integration with Excel, editable graphics.
1998: S-PLUS 4.5 for Windows. Scatterplot brushing, create S-PLUS graphs from within Excel & SPSS.
1998: S-PLUS is available for Linux & Solaris.
1999: S-PLUS 5 for Solaris, Linux, HP-UX, AIX, IRIX, and DEC Alpha. S-PLUS 2000 for Windows. 3.3, quality control charting, new commands for data manipulation.
2000: S-PLUS 6 for Linux/Unix. Java-based GUI, Graphlets, survival5, missing data library, robust library.
2001: MathSoft sells its Cambridge-based Engineering and Education Products Division (EEPD), changes name to Insightful Corporation, and moves headquarters to Seattle. This move is basically an "Undo" of the previous merger between MathSoft and Statistical Sciences, Inc.
2001: S-PLUS Analytic Server 2.0. S-PLUS 6 for Windows (Excel integration, C++ classes/libraries for connectivity, Graphlets, S version 4, missing data library, robust library).
2002: StatServer 6. Student edition of S-PLUS now free.
2003: S-PLUS 6.2 New reporting, d
https://en.wikipedia.org/wiki/Dot-matrix%20display
A dot-matrix display is a low-cost electronic digital display device that displays information on machines such as clocks, watches, calculators, and many other devices requiring a simple alphanumeric (and/or graphic) display device of limited resolution. The display consists of a dot matrix of lights or mechanical indicators arranged in a rectangular configuration (other shapes are also possible, although not common) such that by switching on or off selected lights, text or graphics can be displayed. These displays are normally created with LCD, OLED, or LED lights and can also be found in some thin-film-transistor (TFT) displays. TFT displays have an active matrix, which allows the dot matrix to display different pixels in different colors at the same time. A dot matrix controller converts instructions from a processor into signals that turn on or off indicator elements in the matrix so that the required display is produced.

History

The dot-matrix display is also known by the obsolete term "punktmatrix display" (German for point matrix), the dot matrix having been created in Germany by Rudolf Hell in 1925. In September 1977, the US Army submitted a request to the Westinghouse Research and Development Center for a more effective energy source that soldiers could use in their technology in the field. From 1984 to 2000, Japan and America used LCD matrices to develop Casio TVs, creating and experimenting with different display setups. In the 1980s, dot-matrix displays were introduced into several technologies, including computers, the Game Boy, and television screens. Dot-matrix displays became a popular public technology in America in 1991, when Data East released its Checkpoint pinball machines, which drew public interest. Dot-matrix displays were later incorporated into new pieces of technology as a background part of LCD or OLED displays as the technology improved.

Pixel resolutions

Common sizes of dot matrix displays:
128×16 (two-lined)
128×3
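Switching selected dots on or off to form a character is easy to simulate. In the Python sketch below, the 5×7 glyph bitmap is hand-made for this example and is not taken from any standard character ROM.

    # Render a single character on a simulated 5x7 dot matrix.
    GLYPH_A = [
        0b01110,
        0b10001,
        0b10001,
        0b11111,
        0b10001,
        0b10001,
        0b10001,
    ]

    def render(glyph, on="#", off="."):
        for row in glyph:
            # test each of the 5 column bits, leftmost column = highest bit
            print("".join(on if row & (1 << (4 - c)) else off for c in range(5)))

    render(GLYPH_A)

A dot matrix controller does essentially the same job in hardware: it maps a character code to a row/column bit pattern and drives the matching indicator elements.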
https://en.wikipedia.org/wiki/Citrate%20synthase
The enzyme citrate synthase (E.C. 2.3.3.1, previously 4.1.3.7) exists in nearly all living cells and stands as a pace-making enzyme in the first step of the citric acid cycle (or Krebs cycle). Citrate synthase is localized within eukaryotic cells in the mitochondrial matrix, but is encoded by nuclear DNA rather than mitochondrial DNA. It is synthesized using cytoplasmic ribosomes, then transported into the mitochondrial matrix. Citrate synthase is commonly used as a quantitative enzyme marker for the presence of intact mitochondria. Maximal activity of citrate synthase indicates the mitochondrial content of skeletal muscle. The maximal activity can be increased by endurance training or by high-intensity interval training, with the latter producing the larger increase. Citrate synthase catalyzes the condensation reaction of the two-carbon acetate residue from acetyl coenzyme A and a molecule of four-carbon oxaloacetate to form six-carbon citrate: acetyl-CoA + oxaloacetate + H2O → citrate + CoA-SH. Oxaloacetate is regenerated after the completion of one round of the Krebs cycle. Oxaloacetate is the first substrate to bind to the enzyme. This induces the enzyme to change its conformation and creates a binding site for the acetyl-CoA. The two substrates then condense to form citryl-CoA; only when this citryl-CoA has formed will another conformational change cause thioester hydrolysis and release coenzyme A. This ensures that the energy released from the thioester bond cleavage will drive the condensation.

Structure

Citrate synthase's 437 amino acid residues are organized into two main subunits, each consisting of 20 alpha-helices. These alpha-helices compose approximately 75% of citrate synthase's tertiary structure, while the remaining residues mainly compose irregular extensions of the structure, save a single beta-sheet of 13 residues. Between these two subunits, a single cleft exists containing the active site. Two binding sites can be found therein: one reserved for citrate
https://en.wikipedia.org/wiki/Buffy%20coat
The buffy coat is the fraction of an anticoagulated blood sample that contains most of the white blood cells and platelets following centrifugation. Description After centrifugation, one can distinguish a layer of clear fluid (the plasma), a layer of red fluid containing most of the red blood cells, and a thin layer in between. Composing less than 1% of the total volume of the blood sample, the buffy coat (so-called because it is usually buff in hue) contains most of the white blood cells and platelets. The buffy coat is usually whitish in color, but is sometimes green if the blood sample contains large amounts of neutrophils, which are high in green-colored myeloperoxidase. The layer beneath the buffy coat contains granulocytes and red blood cells. The buffy coat is commonly used for DNA extraction; as a source of nucleated cells it is approximately ten times more concentrated than whole blood. White blood cells are used for DNA extraction from the blood of mammals because mammalian red blood cells are anucleate and do not contain DNA. A common protocol is to store buffy coat specimens for future DNA isolation, and these may remain in frozen storage for many years. Diagnostic uses Quantitative buffy coat (QBC), based on the centrifugal stratification of blood components, is a laboratory test for the detection of malarial parasites, as well as of other blood parasites. The blood is taken in a QBC capillary tube which is coated with acridine orange (a fluorescent dye) and centrifuged; the fluorescing parasitized erythrocytes become concentrated in a layer which can then be observed by fluorescence microscopy, under ultraviolet light, at the interface between the red blood cells and the buffy coat. This test is more sensitive than the conventional thick smear, and in > 90% of cases the species of parasite can also be identified. In cases of extremely low white blood cell count, it may be difficult to perform a manual differential of the various types of white cells, and it may be virtually impossi
https://en.wikipedia.org/wiki/Abhyankar%27s%20conjecture
In abstract algebra, Abhyankar's conjecture is a conjecture of Shreeram Abhyankar posed in 1957, on the Galois groups of algebraic function fields of characteristic p. The soluble case was solved by Serre in 1990 and the full conjecture was proved in 1994 by work of Michel Raynaud and David Harbater. Statement The problem involves a finite group G, a prime number p, and the function field K(C) of a nonsingular integral algebraic curve C defined over an algebraically closed field K of characteristic p. The question addresses the existence of a Galois extension L of K(C), with G as Galois group, and with specified ramification. From a geometric point of view, L corresponds to another curve C′, together with a morphism π : C′ → C. Geometrically, the assertion that π is ramified at a finite set S of points on C means that π restricted to the complement of S in C is an étale morphism. This is in analogy with the case of Riemann surfaces. In Abhyankar's conjecture, S is fixed, and the question is what G can be. This is therefore a special type of inverse Galois problem. Results The subgroup p(G) is defined to be the subgroup generated by all the Sylow subgroups of G for the prime number p. This is a normal subgroup, and the parameter n is defined as the minimum number of generators of G/p(G). Raynaud proved the case where C is the projective line over K: a group G can be realised as the Galois group of an extension L, unramified outside a set S containing s + 1 points, if and only if n ≤ s. The general case was proved by Harbater: with g the genus of C, G can be realised as above if and only if n ≤ s + 2g. References External links A layman's perspective of Abhyankar's conjecture from Purdue University Algebraic curves Galois theory Theorems in abstract algebra Conjectures that have been proved
https://en.wikipedia.org/wiki/Modal%20operator
A modal connective (or modal operator) is a logical connective for modal logic. It is an operator which forms propositions from propositions. In general, a modal operator has the "formal" property of being non-truth-functional in the following sense: the truth-value of a composite formula sometimes depends on factors other than the actual truth-value of its components. In the case of alethic modal logic, a modal operator can be said to be truth-functional in another sense, namely, that of being sensitive only to the distribution of truth-values across possible worlds, actual or not. Finally, a modal operator is "intuitively" characterized by expressing a modal attitude (such as necessity, possibility, belief, or knowledge) about the proposition to which the operator is applied. See also Garson, James, "Modal Logic", The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2021/entries/logic-modal/> Syntax for modal operators The syntax rules for the modal operators □ and ◇ are very similar to those for the universal and existential quantifiers; in fact, any formula with modal operators □ and ◇ and the usual logical connectives of propositional calculus (¬, ∧, ∨, →) can be rewritten to a de dicto normal form, similar to prenex normal form. One major caveat: whereas the universal and existential quantifiers bind only to the propositional variables or the predicate variables following the quantifiers, since the modal operators □ and ◇ quantify over accessible possible worlds, they will bind to any formula in their scope. For example, □(P ∧ Q) is logically equivalent to □P ∧ □Q, but ◇(P ∧ Q) is not logically equivalent to ◇P ∧ ◇Q; instead, ◇(P ∨ Q) is logically equivalent to ◇P ∨ ◇Q. When there are both modal operators and quantifiers in a formula, different orders of an adjacent pair of modal operator and quantifier can lead to different semantic meanings; also, when multimodal logic is involved, different orders of an adjacent pair of modal operators can al
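The rewriting to normal forms relies on equivalences that hold in every normal modal logic; these particular identities are standard facts of the minimal normal logic K, supplied here for concreteness in LaTeX notation:

\[
\Diamond P \equiv \lnot \Box \lnot P, \qquad
\Box (P \land Q) \equiv \Box P \land \Box Q, \qquad
\Diamond (P \lor Q) \equiv \Diamond P \lor \Diamond Q .
\]

By contrast, \(\Diamond (P \land Q)\) only implies \(\Diamond P \land \Diamond Q\), not conversely, which is why the order and scope of operators matter when normalizing.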
https://en.wikipedia.org/wiki/Actigraphy
Actigraphy is a non-invasive method of monitoring human rest/activity cycles. A small actigraph unit, also called an actimetry sensor, is worn for a week or more to measure gross motor activity. The unit is usually in a wristwatch-like package worn on the wrist. The movements the actigraph unit undergoes are continually recorded, and some units also measure light exposure. The data can later be transferred to a computer and analysed offline; in some brands of sensors the data are transmitted and analysed in real time. Purpose Sleep Sleep actigraphs are generally watch-shaped and worn on the wrist of the non-dominant arm for adults and usually on the ankle for children. They are useful for determining sleep patterns and circadian rhythms and may be worn for several weeks at a time. In the medical setting, traditional polysomnography has long been cited as "the 'gold standard' for sleep assessment." Since the 1990s, however, actigraphy has increasingly been used to assess sleep/wake behavior, especially for young children. Studies have found actigraphy to be helpful for sleep research because it tends to be less expensive and cumbersome than polysomnography. Unlike polysomnography, actigraphy allows the patient to move about and continue their normal routines while the required data are recorded in their natural sleep environment; this may render the measured data more generally applicable. As sleep actigraphs are more affordable than polysomnographs, their use has advantages, particularly in the case of large field studies. However, actigraphy cannot be considered a substitute for polysomnography. A full night of sleep measured with polysomnography may be required for some sleep disorders. While actigraphy may be efficient in measuring sleep parameters and sleep quality, it provides no measures of brain activity (EEG), eye movements (EOG), muscle activity (EMG) or heart rhythm (ECG). Actigraphy is useful for assessing daytime sle
https://en.wikipedia.org/wiki/Formal%20derivative
In mathematics, the formal derivative is an operation on elements of a polynomial ring or a ring of formal power series that mimics the form of the derivative from calculus. Though they appear similar, the algebraic advantage of a formal derivative is that it does not rely on the notion of a limit, which is in general impossible to define for a ring. Many of the properties of the derivative are true of the formal derivative, but some, especially those that make numerical statements, are not. Formal differentiation is used in algebra to test for multiple roots of a polynomial. Definition Fix a ring R (not necessarily commutative) and let A = R[x] be the ring of polynomials over R. (If R is not commutative, this is the free algebra over a single indeterminate variable.) Then the formal derivative is an operation on elements of A, where if f = aₙxⁿ + ⋯ + a₁x + a₀, then its formal derivative is f′ = D(f) = naₙxⁿ⁻¹ + ⋯ + 2a₂x + a₁. In the above definition, for any nonnegative integer n and r ∈ R, the product nr is defined as usual in a ring: nr = r + ⋯ + r (n summands, with 0r = 0). This definition also works even if R does not have a multiplicative identity. Alternative axiomatic definition One may also define the formal derivative axiomatically as the map D : R[x] → R[x] satisfying the following properties. 1) D(r) = 0 for all r ∈ R. 2) The normalization axiom, D(x) = 1. 3) The map commutes with the addition operation in the polynomial ring, D(f + g) = D(f) + D(g). 4) The map satisfies Leibniz's law with respect to the polynomial ring's multiplication operation, D(f·g) = D(f)·g + f·D(g). One may prove that this axiomatic definition yields a well-defined map respecting all of the usual ring axioms. The formula above (i.e. the definition of the formal derivative when the coefficient ring is commutative) is a direct consequence of the aforementioned axioms. Properties It can be verified that: Formal differentiation is linear: for any two polynomials f(x), g(x) in R[x] and elements r, s of R we have (r·f + s·g)′(x) = r·f′(x) + s·g′(x). The formal derivative satisfies the product rule: (f·g)′(x) = f′(x)·g(x) + f(x)·g′(x). Note the order of the factors; when R is not commutative this is important. These two properties make D a d
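Over a commutative coefficient ring, the definition is mechanical to compute on a dense coefficient representation. A minimal sketch in Python (representing a polynomial as a list [a0, a1, ..., an] of coefficients in ascending degree is my convention here, not the article's):

def formal_derivative(coeffs):
    """Formal derivative of a0 + a1*x + ... + an*x**n, given as
    [a0, a1, ..., an]; works for any coefficients that support
    multiplication by a Python int (int, Fraction, ...)."""
    return [i * a for i, a in enumerate(coeffs)][1:]

# (3 + 2x + 5x^3)' = 2 + 15x^2
assert formal_derivative([3, 2, 0, 5]) == [2, 0, 15]

# Multiple-root test: f = (x - 1)^2 = 1 - 2x + x^2 and its formal
# derivative -2 + 2x share the root x = 1, flagging the repeated root.
assert formal_derivative([1, -2, 1]) == [-2, 2]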
https://en.wikipedia.org/wiki/Escher%20in%20the%20Palace
Escher in Het Paleis (Escher in The Palace) is a museum in The Hague, Netherlands, featuring the works of the Dutch graphical artist M. C. Escher. It has been housed in the Lange Voorhout Palace since November 2002. In 2015 it was revealed that many of the prints on display at the museum were replicas, scanned from original prints and printed onto the same type of paper used by Escher, rather than original Escher prints as they had been labeled. History The museum is housed in the Lange Voorhout Palace, a former royal residence dating back to the eighteenth century. Queen Emma bought the stately house in 1896. She used it as a winter palace from March 1901 until her death in March 1934. It was used by four subsequent Dutch queens for their business offices, until Queen Beatrix moved the office to Paleis Noordeinde. The first and second floors have exhibitions showing the royal period of the palace, highlighting Queen Emma's residence. The museum features a permanent display of a large number of woodcuts and lithographs by M.C. Escher, among them the world-famous prints Air and Water (birds become fish), Belvedere (the inside-out of a folly), Waterfall (where water seems to flow upwards) and Drawing Hands (two hands drawing each other). Escher in Het Paleis shows the lovely early Italian landscapes, the many mirror prints and a choice from the tessellation drawings, as well as the three versions of Metamorphosis, from the small first one to the third, seven metres long, which is shown in a circle. This presentation underlines the museum's new vision of the work of M.C. Escher. The third floor of the museum is dedicated to optical illusion: besides the famous Escher Room, in which grown-ups seem to be smaller than their children, one's eyes are tricked by multiple interactive displays. Interior In the rooms of the museum are fifteen chandeliers made by the Rotterdam artist Hans van Bentem. The artist designed these especially for the museum, with some references to the work of Escher
https://en.wikipedia.org/wiki/Computer%20%28occupation%29
The term "computer", in use from the early 17th century (the first known written reference dates from 1613), meant "one who computes": a person performing mathematical calculations, before electronic computers became commercially available. Alan Turing described the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail." Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results. Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers of mental arithmetic, also known as mental calculators. Origins in sciences Astronomers in Renaissance times used that term about as often as they called themselves "mathematicians" for their principal work of calculating the positions of planets. They often hired a "computer" to assist them. For some men, such as Johannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements. Before he died in 1617, John Napier suggested ways by which "the learned, who perchance may have plenty of pupils and computers" might construct an improved logarithm table. Computing became more organized when the Frenchman Alexis Claude Clairaut (1713–1765) divided the computation to determine the time of the return of Halley's Comet with two colleagues, Joseph Lalande and Nicole-Reine Lepaute. Human computers continued plotting the future movements of astronomical objects to create celestial tables for almanacs in the late 1760s. The computers working on the Nautical Almanac for the British Admiralty included William Wales, Israel Lyons and Richard Dunthorne. The project was overseen by Nevil Maskelyne. Maskely
https://en.wikipedia.org/wiki/Pascal%20MicroEngine
Pascal MicroEngine is a series of microcomputer products manufactured by Western Digital from 1979 through the mid-1980s, designed specifically to run the UCSD p-System efficiently. Compared to other microcomputers, which use a machine language p-code interpreter, the Pascal MicroEngine has its interpreter implemented in microcode; p-code is its machine language. The most common programming language used on the p-System is Pascal. The MicroEngine runs a special release III p-System. The enhancements of release III were incorporated into release IV, which was made publicly available for other platforms but not for the MicroEngine. Products The MicroEngine series of products was offered at various levels of integration: WD-9000 five chip microprocessor chip set WD-900 single board computer WD-90 packaged system SB-1600 MicroEngine single board computer ME-1600 Modular MicroEngine packaged system The MicroEngine chipset was based on the MCP-1600 chipset, which formed the basis of the DEC LSI-11 low-end minicomputer and the WD16 processor used by Alpha Microsystems (each using different microcode). Among the well-regarded systems were the S-100 bus based dual-processor cards developed by Digicomp Research of Ithaca, NY, which survived the demise of the WD single-board system and delivered reliable performance at up to 2.5 MHz. A typical configuration was a Digicomp dual processor board set, containing a Zilog Z80 and a bipolar memory mapper harnessed to a MicroEngine chipset on the second board, linked by a direct cable. The sole configuration known to be still running in 2018 and documented on the web is described by Marcus Wigan and contains 312 kB of memory, RAM disc support through a modified Z80 BIOS (written by Tom Evans) taking advantage of the memory mapping chip on the Z80 board, and using the UCSD Pascal III version of the operating system tuned specifically for the WD chipset - once the Microengine had boo
https://en.wikipedia.org/wiki/Sabayon%20Linux
Sabayon Linux or Sabayon (formerly RR4 Linux and RR64 Linux) was an Italian Gentoo-based Linux distribution created by Fabio Erculiani and the Sabayon development team. Sabayon followed the "out of the box" philosophy, aiming to give the user a wide number of applications ready to use and a self-configured operating system. Sabayon Linux featured a rolling release cycle, its own software repository and a package management system called Entropy. Sabayon was available in both x86 and AMD64 distributions, and support for ARMv7 on the BeagleBone was in development. It was named after an Italian dessert, zabaione, which is made from eggs. Sabayon's logo was an impression of a chicken foot. In November 2020 it was announced that future Sabayon Linux versions would be based on Funtoo instead of Gentoo Linux, and that Sabayon Linux would hence be rebranded MocaccinoOS. Editions Since version 4.1, Sabayon had been released in two different flavors featuring either the GNOME or KDE desktop environments, with the ultralight Fluxbox environment included as well. (In the previous versions all three environments were included in a DVD ISO image.) Since Sabayon's initial release, additional versions of Sabayon have added other X environments, including Xfce and LXDE. A CoreCD edition, which featured a minimal install of Sabayon, was released to allow the creation of spins of the Sabayon operating system; however, this was later discontinued and replaced first by CoreCDX (Fluxbox window manager) and Spinbase (no X environment), and later by "Sabayon Minimal". A ServerBase edition was released which featured a server-optimized kernel and a small footprint, but this was later discontinued and integrated into "Sabayon Minimal". Daily build images were available to Sabayon testers, but were released weekly to the public on the system mirrors containing stable releases. Official releases were simply DAILY versions which had received deeper testing. The adoption of Molecule led the
https://en.wikipedia.org/wiki/Comparability
In mathematics, two elements x and y of a set P are said to be comparable with respect to a binary relation ≤ if at least one of x ≤ y or y ≤ x is true. They are called incomparable if they are not comparable. Rigorous definition A binary relation R on a set P is by definition any subset of P × P. Given x, y ∈ P, one writes x R y if and only if (x, y) ∈ R, in which case x is said to be related to y by R. An element x ∈ P is said to be comparable to an element y ∈ P if x R y or y R x. Often, a symbol indicating comparison, such as ≤ (or <, ⊆, and many others) is used instead of R, in which case x ≤ y is written in place of x R y, which is why the term "comparable" is used. Comparability with respect to R induces a canonical binary relation on P; specifically, the comparability relation induced by R is defined to be the set of all pairs (x, y) such that x is comparable to y; that is, such that at least one of x R y and y R x is true. Similarly, the incomparability relation on P induced by R is defined to be the set of all pairs (x, y) such that x is incomparable to y, that is, such that neither x R y nor y R x is true. If the symbol ≤ is used in place of R, then incomparability is often denoted by the symbol ∥. Thus, for any two elements x and y of a partially ordered set, exactly one of "x and y are comparable" and x ∥ y is true. Example A totally ordered set is a partially ordered set in which any two elements are comparable. The Szpilrajn extension theorem states that every partial order is contained in a total order. Intuitively, the theorem says that any method of comparing elements that leaves some pairs incomparable can be extended in such a way that every pair becomes comparable. Properties Both comparability and incomparability are symmetric relations: x is comparable to y if and only if y is comparable to x, and likewise for incomparability. Comparability graphs The comparability graph of a partially ordered set P has as vertices the elements of P and has as edges precisely those pairs {x, y} of elements for which x and y are comparable. Classification When classifying mathematical objects (e.g., topological spaces), two are sa
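As a concrete illustration, divisibility on the positive integers is a standard partial order in which incomparable pairs are easy to exhibit. A short Python sketch (the function names are mine, not from the article):

def comparable(x, y):
    """True iff x and y are comparable under the divisibility order."""
    return x % y == 0 or y % x == 0

assert comparable(2, 8)        # 2 divides 8
assert not comparable(4, 6)    # neither divides the other: incomparable

# Edges of the comparability graph of divisibility on {1, ..., 6}:
nodes = range(1, 7)
print(sorted((a, b) for a in nodes for b in nodes
             if a < b and comparable(a, b)))
# (1, n) for every n, plus (2, 4), (2, 6) and (3, 6)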
https://en.wikipedia.org/wiki/WS-Discovery
Web Services Dynamic Discovery (WS-Discovery) is a technical specification that defines a multicast discovery protocol to locate services on a local network. It operates over TCP and UDP port 3702 and uses the IP multicast address 239.255.255.250 (IPv4) or FF02::C (IPv6). As the name suggests, the actual communication between nodes is done using web services standards, notably SOAP-over-UDP. Various components in Microsoft's Windows Vista operating system use WS-Discovery, e.g. "People Near Me". The component WSDMON in Windows 7 and later uses WS-Discovery to automatically discover WSD-enabled network printers, which show in Network in Windows Explorer and can be installed by double-clicking on them. In Windows 8 or later, installation is automatic. WS-Discovery has been enabled by default in networked HP printers since 2008. WS-Discovery is an integral part of Windows Rally technologies and the Devices Profile for Web Services. The protocol was originally developed by BEA Systems, Canon, Intel, Microsoft, and webMethods. On July 1, 2009, it was approved as a standard by OASIS. See also Avahi Bonjour DHCP Jini List of Web service specifications LLMNR OSGi Alliance SSDP Universal Plug and Play (UPnP) Web Services Discovery Web Services for Devices Zero-configuration networking (Zeroconf) References External links Where to find Web Services on the Web WS-Discovery specification version 1.0 (2005/04) WS-Discovery specification version 1.1 Draft (2008/09) WS-Discovery specification version 1.1 Final (2009/01) Windows communication and services Web service specifications Web services XML-based standards
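A discovery round can be reproduced with a plain UDP socket: multicast an empty SOAP Probe and collect the unicast ProbeMatch replies. A minimal sketch in Python, assuming the 2005/04 draft namespaces linked above (devices that only speak the OASIS 1.1 namespaces would need different URIs):

import socket, uuid

PROBE = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
 xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
 xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
 <soap:Header>
  <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
  <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</wsa:Action>
  <wsa:MessageID>urn:uuid:{uuid.uuid4()}</wsa:MessageID>
 </soap:Header>
 <soap:Body><wsd:Probe/></soap:Body>
</soap:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(PROBE.encode(), ("239.255.255.250", 3702))  # WS-Discovery group
try:
    while True:
        data, addr = sock.recvfrom(65535)  # ProbeMatch replies arrive unicast
        print(addr, data[:120])
except socket.timeout:
    pass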
https://en.wikipedia.org/wiki/OpenVAS
OpenVAS (Open Vulnerability Assessment Scanner, originally known as GNessUs) is the scanner component of Greenbone Vulnerability Management (GVM), a software framework of several services and tools offering vulnerability scanning and vulnerability management. All Greenbone Vulnerability Management products are free software, and most components are licensed under the GNU General Public License (GPL). Plugins for Greenbone Vulnerability Management are written in the Nessus Attack Scripting Language, NASL. History Greenbone Vulnerability Manager began under the name of OpenVAS, and before that the name GNessUs, as a fork of the previously open source Nessus scanning tool, after its developers Tenable Network Security changed it to a proprietary (closed source) license in October 2005. OpenVAS was originally proposed by pentesters at SecuritySpace, discussed with pentesters at Portcullis Computer Security and then announced by Tim Brown on Slashdot. Greenbone Vulnerability Manager is a member project of Software in the Public Interest. Structure There is a daily updated feed of Network Vulnerability Tests (NVTs), which has grown to contain over 50,000 entries. Documentation The OpenVAS protocol structure aims to be well-documented to assist developers. The OpenVAS Compendium is a publication of the OpenVAS Project that delivers documentation on OpenVAS. See also Aircrack-ng BackBox BackTrack Kali Linux Kismet (software) List of free and open-source software packages Metasploit Project Nmap ZMap (software) References External links 2005 software Free security software Network analyzers Pentesting software toolkits
https://en.wikipedia.org/wiki/Logical%20Disk%20Manager
The Logical Disk Manager (LDM) is an implementation of a logical volume manager for Microsoft Windows NT, developed by Microsoft and Veritas Software. It was introduced with the Windows 2000 operating system, and is supported in Windows XP, Windows Server 2003, Windows Vista, Windows 7, Windows 8, Windows 10 and Windows 11. The MMC-based Disk Management snap-in (diskmgmt.msc) hosts the Logical Disk Manager. On Windows 8 and Windows Server 2012, Microsoft deprecated LDM in favor of Storage Spaces. Logical Disk Manager enables disk volumes to be dynamic, in contrast to the standard basic volume format. Basic volumes and dynamic volumes differ in their ability to extend storage beyond one physical disk. Basic partitions are restricted to a fixed size on one physical disk. Dynamic volumes can be enlarged to include more free space, either from the same disk or from another physical disk. (For more information on the difference, see Basic and dynamic disks and volumes, below.) Overview Basic storage involves dividing a disk into primary and extended partitions. This is the route that all versions of Windows reliant on DOS-handled storage took, and disks formatted in this manner are known as basic disks. Dynamic storage involves the use of a single partition that covers the entire disk, and the disk itself is divided into volumes or combined with other disks to form volumes that are greater in size than one disk itself. Volumes can use any supported file system. Basic disks can be upgraded to dynamic disks; however, when this is done the disk cannot easily be downgraded to a basic disk again. To perform a downgrade, data on the dynamic disk must first be backed up onto some other storage device. Second, the dynamic disk must be re-formatted as a basic disk (erasing all data). Finally, data from the backup must be copied back over to the newly re-formatted basic disk. Dynamic disks provide the capability for software implementations of RAID. The main disadvantage of dynami
https://en.wikipedia.org/wiki/CANopen
CANopen is a communication protocol and device profile specification for embedded systems used in automation. In terms of the OSI model, CANopen implements the layers above and including the network layer. The CANopen standard consists of an addressing scheme, several small communication protocols and an application layer defined by a device profile. The communication protocols have support for network management, device monitoring and communication between nodes, including a simple transport layer for message segmentation/desegmentation. The lower level protocol implementing the data link and physical layers is usually Controller Area Network (CAN), although devices using some other means of communication (such as Ethernet Powerlink, EtherCAT) can also implement the CANopen device profile. The basic CANopen device and communication profiles are given in the CiA 301 specification released by CAN in Automation. Profiles for more specialized devices are built on top of this basic profile, and are specified in numerous other standards released by CAN in Automation, such as CiA 401 for I/O-modules and CiA 402 for motion control. Device model Every CANopen device has to implement certain standard features in its controlling software. A communication unit implements the protocols for messaging with the other nodes in the network. Starting and resetting the device is controlled via a state machine. It must contain the states Initialization, Pre-operational, Operational and Stopped. The transitions between states are made by issuing a network management (NMT) communication object to the device. The object dictionary is an array of variables with a 16-bit index. Additionally, each variable can have an 8-bit subindex. The variables can be used to configure the device and reflect its environment, i.e. contain measurement data. The application part of the device actually performs the desired function of the device, after the state machine is set to the operational state
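The object dictionary amounts to a lookup table keyed by index and subindex, which SDO transfers read and write. A toy sketch in Python (0x1000 "device type" and 0x1017 "producer heartbeat time" are real CiA 301 entries, but the stored values and the 0x6000 entry are placeholders of mine):

# Toy CANopen-style object dictionary: (index, subindex) -> value.
object_dictionary = {
    (0x1000, 0x00): 0x00000000,  # device type
    (0x1017, 0x00): 1000,        # producer heartbeat time, ms
    (0x6000, 0x01): 0x00,        # hypothetical profile-specific input
}

def sdo_read(index, subindex):
    """Mimic an SDO upload: fetch one object-dictionary variable."""
    try:
        return object_dictionary[(index, subindex)]
    except KeyError:
        # A real device would answer with an SDO abort code instead.
        raise KeyError(f"no object at {index:#06x}:{subindex:#04x}")

print(sdo_read(0x1017, 0x00))  # -> 1000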
https://en.wikipedia.org/wiki/Trilinear%20coordinates
In geometry, the trilinear coordinates x : y : z of a point relative to a given triangle describe the relative directed distances from the three sidelines of the triangle. Trilinear coordinates are an example of homogeneous coordinates. The ratio x : y is the ratio of the perpendicular distances from the point to the sides (extended if necessary) opposite vertices A and B respectively; the ratio y : z is the ratio of the perpendicular distances from the point to the sidelines opposite vertices B and C respectively; and likewise for z : x and vertices C and A. In the diagram at right, the trilinear coordinates of the indicated interior point are the actual distances (a′, b′, c′), or equivalently in ratio form ka′ : kb′ : kc′ for any positive constant k. If a point is on a sideline of the reference triangle, its corresponding trilinear coordinate is 0. If an exterior point is on the opposite side of a sideline from the interior of the triangle, its trilinear coordinate associated with that sideline is negative. It is impossible for all three trilinear coordinates to be non-positive. Notation The ratio notation x : y : z for trilinear coordinates is often used in preference to the ordered triple notation (a′, b′, c′), with the latter reserved for triples of directed distances relative to a specific triangle. The trilinear coordinates can be rescaled by any arbitrary value without affecting their ratio. The bracketed, comma-separated triple notation can cause confusion because conventionally (x, y, z) represents a different triple than, e.g., (2x, 2y, 2z), even though the equivalent ratios x : y : z and 2x : 2y : 2z represent the same point. Examples The trilinear coordinates of the incenter of a triangle ABC are 1 : 1 : 1; that is, the (directed) distances from the incenter to the sidelines are proportional to the actual distances (r, r, r), where r is the inradius of triangle ABC. Given side lengths a, b, c we have, for example: centroid = 1/a : 1/b : 1/c = bc : ca : ab; circumcenter = cos A : cos B : cos C; orthocenter = sec A : sec B : sec C. Note that, in general, the incenter is not the same as the centroid; the centroid has barycentric coordinates 1 : 1 : 1 (these being proportional to actual signed areas of the triangles BGC, CGA, AGB, where G = centroi
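Exact trilinears convert to Cartesian coordinates through the standard identity that trilinears x : y : z correspond to barycentrics ax : by : cz. A short Python sketch (the function name and the 3-4-5 test triangle are my choices for illustration):

import numpy as np

def trilinear_to_cartesian(A, B, C, x, y, z):
    """Point with trilinears x : y : z relative to triangle ABC, using
    the barycentric weights a*x : b*y : c*z (a, b, c = opposite sides)."""
    A, B, C = (np.asarray(P, dtype=float) for P in (A, B, C))
    a, b, c = (np.linalg.norm(Q - R) for Q, R in ((B, C), (C, A), (A, B)))
    w = np.array([a * x, b * y, c * z])
    return (w[0] * A + w[1] * B + w[2] * C) / w.sum()

# Incenter of the 3-4-5 right triangle: trilinears 1 : 1 : 1 give (1, 1),
# matching this triangle's inradius r = 1.
print(trilinear_to_cartesian((0, 0), (4, 0), (0, 3), 1, 1, 1))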
https://en.wikipedia.org/wiki/Metasploit
The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development. It is owned by Boston, Massachusetts-based security company Rapid7. Its best-known sub-project is the open-source Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. Other important sub-projects include the Opcode Database, shellcode archive and related research. The Metasploit Project includes anti-forensic and evasion tools, some of which are built into the Metasploit Framework. Metasploit is pre-installed in the Kali Linux operating system. History Metasploit was created by H. D. Moore in 2003 as a portable network tool using Perl. By 2007, the Metasploit Framework had been completely rewritten in Ruby. On October 21, 2009, the Metasploit Project announced that it had been acquired by Rapid7, a security company that provides unified vulnerability management solutions. Like comparable commercial products such as Immunity's Canvas or Core Security Technologies' Core Impact, Metasploit can be used to test the vulnerability of computer systems or to break into remote systems. Like many information security tools, Metasploit can be used for both legitimate and unauthorized activities. Since the acquisition of the Metasploit Framework, Rapid7 has added an open core proprietary edition called Metasploit Pro. Metasploit's emerging position as the de facto exploit development framework led to the release of software vulnerability advisories often accompanied by a third party Metasploit exploit module that highlights the exploitability, risk and remediation of that particular bug. Metasploit 3.0 began to include fuzzing tools, used to discover software vulnerabilities, rather than just exploits for known bugs. This avenue can be seen with the integration of the lorcon wireless (802.11) toolset into Metasploit 3.0 in November 2006. Framework Th
https://en.wikipedia.org/wiki/Solaris%20IP%20network%20multipathing
IP network multipathing (IPMP) is a facility provided by Solaris to deliver fault tolerance and load spreading for network interface cards (NICs). With IPMP, two or more NICs are dedicated to each network to which the host connects. Each interface can be assigned a static "test" IP address, which is used to assess the operational state of the interface. Each virtual IP address is assigned to an interface, though there may be more interfaces than virtual IP addresses, some of the interfaces being purely for standby purposes. When the failure of an interface is detected, its virtual IP addresses are swapped to an operational interface in the group. The IPMP load-spreading feature increases the machine's bandwidth by spreading the outbound load between all the cards in the same IPMP group. in.mpathd is the daemon in the Solaris OS responsible for IPMP functionality. See also Multihoming Multipath routing Multipath TCP Common Address Redundancy Protocol External links Enterprise Networking Article, February 2, 2006 Introducing IPMP - Oracle Solaris 11 IPMP section from Sun Solaris 10 System Administration Guide Networking standards Sun Microsystems software
https://en.wikipedia.org/wiki/Bismuth%20telluride
Bismuth telluride (Bi2Te3) is a gray powder that is a compound of bismuth and tellurium, also known as bismuth(III) telluride. It is a semiconductor which, when alloyed with antimony or selenium, is an efficient thermoelectric material for refrigeration or portable power generation. Bi2Te3 is a topological insulator, and thus exhibits thickness-dependent physical properties. Properties as a thermoelectric material Bismuth telluride is a narrow-gap layered semiconductor with a trigonal unit cell. The valence and conduction band structure can be described as a many-ellipsoidal model with 6 constant-energy ellipsoids that are centered on the reflection planes. Bi2Te3 cleaves easily along the trigonal axis due to Van der Waals bonding between neighboring tellurium atoms. Due to this, bismuth-telluride-based materials used for power generation or cooling applications must be polycrystalline. Furthermore, the Seebeck coefficient of bulk Bi2Te3 becomes compensated around room temperature, forcing the materials used in power-generation devices to be an alloy of bismuth, antimony, tellurium, and selenium. Recently, researchers have attempted to improve the efficiency of Bi2Te3-based materials by creating structures where one or more dimensions are reduced, such as nanowires or thin films. In one such instance n-type bismuth telluride was shown to have an improved Seebeck coefficient (voltage per unit temperature difference) of −287 μV/K at 54 °C. However, the Seebeck coefficient and electrical conductivity involve a tradeoff: a higher Seebeck coefficient results in decreased carrier concentration and decreased electrical conductivity. In another case, researchers report that bismuth telluride has a high electrical conductivity of 1.1×10⁵ S/m with a very low lattice thermal conductivity of 1.20 W/(m·K), similar to ordinary glass. Properties as a topological insulator Bismuth telluride is a well-studied topological insulator. Its physical properties have been shown to change at highly
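A material's thermoelectric usefulness is commonly summarized by the dimensionless figure of merit ZT = S²σT/κ. As a rough plausibility check in Python, combining the figures quoted above (they come from different samples, so the result is illustrative only, and using only the lattice part of κ overstates ZT):

S = 287e-6     # Seebeck coefficient, V/K (magnitude of the -287 uV/K above)
sigma = 1.1e5  # electrical conductivity, S/m
kappa = 1.20   # lattice thermal conductivity, W/(m*K)
T = 300.0      # temperature, K

ZT = S**2 * sigma * T / kappa
print(f"ZT ~ {ZT:.1f}")  # ~2.3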
https://en.wikipedia.org/wiki/Fermentek
Fermentek Ltd. is a biotechnological company in the Atarot industrial zone of Jerusalem, Israel. It specializes in the research, development and manufacture of biologically active natural products isolated from microorganisms as well as from other natural sources such as plants and algae. The main microorganisms used are nonpathogenic actinomycetes, Nocardia and Streptomycetes. The fungi used are Penicillium, Aspergillus, Fusarium and the like. None of these is a human pathogen. Fermentek does not sell to individuals. Most of its products are marketed through major international distributors specializing in chemicals, under their own brand names. Nevertheless, Fermentek has a specific impact on the biochemical market, especially in the field of mycotoxins. Mycotoxins are toxic compounds produced by molds in human food and farm animal feeds, which makes them economically important. Fermentek manufactures an extensive line of pure mycotoxins used as standards in food analysis. In some cases, such as Aflatoxin M2, Fermentek supplies the entire world's requirements. In 2009, Fermentek announced a product family of highly standardized calibrant solutions of the main mycotoxins. These are marketed under the brand name FermaSol. In 2010, it obtained ISO 13485 accreditation in connection with the production of starting materials for experimental drug production, and with the manufacturing of reference standards of food contaminants. Fermentek does not invent its own products; its aim is to make known compounds affordable to the scientific community. Fermentek was founded by Dr. Yosef Behrend in 1994. It moved in 2004 to its new building, quadrupling its working space and greatly enlarging its manufacturing capacities. Technology Fermentek operates fermentors ranging in size from 10 to 15,000 liters, with filter presses and centrifuges of matching capacity. According to the company policy as declared at its official website, Fermentek uses only the "Cla
https://en.wikipedia.org/wiki/Constant%20fraction%20discriminator
A constant fraction discriminator (CFD) is an electronic signal-processing device designed to mimic the mathematical operation of finding the maximum of a pulse by finding the zero of its slope. Some signals do not have a sharp maximum, but they do have short rise times. Typical input signals for CFDs are pulses from plastic scintillation counters, such as those used for lifetime measurement in positron annihilation experiments. The scintillator pulses have identical rise times that are much longer than the desired temporal resolution. This forbids simple threshold triggering, which causes a dependence of the trigger time on the signal's peak height, an effect called time walk (see diagram). Identical rise times and peak shapes permit triggering not on a fixed threshold but on a constant fraction of the total peak height, yielding trigger times independent of peak heights. From another point of view A time-to-digital converter assigns timestamps. The time-to-digital converter needs fast rising edges with normed height. The plastic scintillation counter delivers fast rising edges with varying heights. Theoretically, the signal could be split into two parts. One part would be delayed, and the other low-pass filtered, inverted and then used in a variable-gain amplifier to amplify the original signal to the desired height. Practically, it is difficult to achieve a high dynamic range for the variable-gain amplifier, and analog computers have problems with the inverse value. Principle of operation The incoming signal is split into three components. One component is delayed by a time t_d; it may be multiplied by a small factor to put emphasis on the leading edge of the pulse, and is connected to the noninverting input of a comparator. Another component is connected to the inverting input of this comparator. A third component is connected to the noninverting input of another comparator. A threshold value is connected to the inverting input of the other comparator. The output of both compara
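The timing pickoff is easy to reproduce numerically: subtract an attenuated copy of the pulse from a delayed copy and take the zero crossing of the difference, which for identically shaped pulses lands at the same instant regardless of amplitude. A small sketch in Python (the Gaussian test pulse, the delay, and the fraction 0.3 are arbitrary illustrative choices):

import numpy as np

def cfd_time(t, pulse, delay, fraction=0.3):
    """Zero crossing of delayed(pulse) - fraction*pulse on the leading
    edge, refined by linear interpolation between samples."""
    shifted = np.concatenate([np.zeros(delay), pulse[:-delay]])
    diff = shifted - fraction * pulse
    window = np.argmax(pulse) + delay
    idx = np.where(np.diff(np.signbit(diff[:window])))[0][-1]
    frac = diff[idx] / (diff[idx] - diff[idx + 1])
    return t[idx] + frac * (t[idx + 1] - t[idx])

t = np.linspace(0, 100, 2001)             # ns
shape = np.exp(-((t - 40.0) / 8.0) ** 2)  # Gaussian test pulse

# Identical shape, different amplitudes -> the same pickoff time:
for amplitude in (0.2, 1.0, 5.0):
    print(cfd_time(t, amplitude * shape, delay=100))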
https://en.wikipedia.org/wiki/Phase%20converter
A phase converter is a device that converts electric power provided as single phase to multiple phase or vice versa. The majority of phase converters are used to produce three-phase electric power from a single-phase source, thus allowing the operation of three-phase equipment at a site that only has single-phase electrical service. Phase converters are used where three-phase service is not available from the utility provider or is too costly to install. A utility provider will generally charge a higher fee for a three-phase service because of the extra equipment, including transformers, metering, and distribution wire, required to complete a functional installation. Types of phase converters Three-phase induction motors may operate adequately on an unbalanced supply if not heavily loaded. This allows various imperfect techniques to be used. A single-phase motor can drive a three-phase generator, which will produce a high-quality three-phase source but at a high cost to the longevity of the system. While there are multiple phase conversion systems in place, the most common types are: Rotary phase converters, constructed from a three-phase electric motor or generator "idler" and a simple on/off circuit. Rotary phase converters can drive up operating costs because they continue to draw power while idling, which is not common in other phase converters. Rotary phase converters are considered a two-motor solution; one motor is not connected to a load and produces the three-phase power, while the second motor, driving the load, runs on the power produced. A digital phase converter uses a rectifier and inverter to create a third leg of power, which is added to the two legs of the single-phase source to create three-phase power. Unlike a phase-converting VFD, it cannot vary the frequency and motor speed, since it generates only one leg. Digital phase converters use a digital signal processor (DSP) to ensure the generated third leg matches the voltage and frequency of t
https://en.wikipedia.org/wiki/Nokia%20Business%20Center
Nokia Business Center (NBC) was a mobile email solution by Nokia, providing push e-mail and (through a paid-for client upgrade) calendar and contact availability to mobile devices. The server ran on Red Hat Enterprise Linux. The product was discontinued in 2014. External links Press Release about support for IBM Lotus Notes and Domino addition to NBC Nokia services Mobile web
https://en.wikipedia.org/wiki/Indian%20rivers%20interlinking%20project
The Indian Rivers Inter-link is a proposed large-scale civil engineering project that aims to effectively manage water resources in India by linking Indian rivers through a network of reservoirs and canals, to enhance irrigation and groundwater recharge and to reduce persistent floods in some parts and water shortages in other parts of India. India accounts for 18% of the world population but only about 4% of the world's water resources. One proposed solution to the country's water woes is to link rivers and lakes. The Inter-link project has been split into three parts: a northern Himalayan rivers inter-link component, a southern Peninsular component and, starting in 2005, an intrastate rivers linking component. The project is being managed by India's National Water Development Agency (NWDA) under the Ministry of Jal Shakti. NWDA has studied and prepared reports on 14 inter-link projects for the Himalayan component, 16 inter-link projects for the Peninsular component and 37 intrastate river linking projects. The average annual rainfall in India is about 4,000 billion cubic metres, but most of India's rainfall comes over a 4-month period, June through September. Furthermore, the rain across the very large nation is not uniform: the east and north get most of the rain, while the west and south get less. India also sees years of excess monsoons and floods, followed by below-average or late monsoons with droughts. This geographical and temporal variance in the availability of natural water, versus the year-round demand for irrigation, drinking and industrial water, creates a demand-supply gap that has been worsening with India's rising population. Proponents of the rivers inter-linking projects claim the answer to India's water problem is to conserve the abundant monsoon water bounty, store it in reservoirs, and deliver this water – using the rivers inter-link project – to areas and at times when water becomes scarce. Beyond water security, the project is also seen to offer potential benefits to transport infrastructur
https://en.wikipedia.org/wiki/Network%20on%20a%20chip
A network on a chip or network-on-chip (NoC) is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores implementing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet-switching network between SoC modules. NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which were still experimental as of 2018. In the 2000s, researchers started to propose a type of on-chip interconnection in the form of packet-switching networks in order to address the scalability issues of bus-based design. Earlier research had proposed designs that route data packets instead of routing wires. The concept of "network on chip" was then proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common. Structure NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the system-on-chip to have its own clock domain. Architectures NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. Topology The topology is the first fundamental aspect of NoC design
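In the common 2D-mesh topology, each router links only to its nearest neighbours, which keeps wires short and the layout regular. A small Python sketch that builds the adjacency of an n×n mesh (purely illustrative, not tied to any particular NoC):

def mesh_links(n):
    """Bidirectional links of an n x n 2D-mesh NoC; routers are
    addressed by (row, column)."""
    links = set()
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                links.add(((r, c), (r, c + 1)))  # horizontal link
            if r + 1 < n:
                links.add(((r, c), (r + 1, c)))  # vertical link
    return links

# A 3x3 mesh has 12 links, and the worst-case route under
# dimension-ordered (XY) routing is 2*(n-1) = 4 hops.
print(len(mesh_links(3)))  # -> 12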
https://en.wikipedia.org/wiki/Strong%20prime
In mathematics, a strong prime is a prime number with certain special properties. The definitions of strong primes are different in cryptography and number theory. Definition in number theory In number theory, a strong prime is a prime number that is greater than the arithmetic mean of the nearest primes above and below (in other words, it is closer to the following than to the preceding prime). Or to put it algebraically, writing the sequence of prime numbers as (p₁, p₂, p₃, ...) = (2, 3, 5, ...), pₙ is a strong prime if pₙ > (pₙ₋₁ + pₙ₊₁)/2. For example, 17 is the seventh prime: the sixth and eighth primes, 13 and 19, add up to 32, and half that is 16; 17 is greater than 16, so 17 is a strong prime. The first few strong primes are 11, 17, 29, 37, 41, 59, 67, 71, 79, 97, 101, 107, 127, 137, 149, 163, 179, 191, 197, 223, 227, 239, 251, 269, 277, 281, 307, 311, 331, 347, 367, 379, 397, 419, 431, 439, 457, 461, 479, 487, 499. In a twin prime pair (p, p + 2) with p > 5, p is always a strong prime, since 3 must divide p − 2, which cannot be prime. Definition in cryptography In cryptography, a prime number p is said to be "strong" if the following conditions are satisfied. p is sufficiently large to be useful in cryptography; typically this requires p to be too large for plausible computational resources to enable a cryptanalyst to factorise products of p with other strong primes. p − 1 has large prime factors. That is, p = a₁q₁ + 1 for some integer a₁ and large prime q₁. q₁ − 1 has large prime factors. That is, q₁ = a₂q₂ + 1 for some integer a₂ and large prime q₂. p + 1 has large prime factors. That is, p = a₃q₃ − 1 for some integer a₃ and large prime q₃. It is possible for a prime to be a strong prime both in the cryptographic sense and the number theoretic sense. For the sake of illustration, 439351292910452432574786963588089477522344331 is a strong prime in the number theoretic sense because the arithmetic mean of its two neighboring primes is 62 less than it. Without the aid of a computer, this nu
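The number-theoretic definition is straightforward to test. A quick sketch in Python using sympy (whose prevprime and nextprime functions exist as used here):

from sympy import isprime, nextprime, prevprime

def is_strong_prime(p):
    """Strong prime in the number-theoretic sense: p exceeds the
    arithmetic mean of its neighbouring primes."""
    return isprime(p) and p > 2 and 2 * p > prevprime(p) + nextprime(p)

print([p for p in range(3, 100) if is_strong_prime(p)])
# -> [11, 17, 29, 37, 41, 59, 67, 71, 79, 97], matching the list above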
https://en.wikipedia.org/wiki/International%20Shark%20Attack%20File
The International Shark Attack File is a global database of shark attacks. The file reportedly contains information on over 6,800 shark attacks spanning from the early 1500s to the present day, and includes detailed, often privileged, information including autopsy reports and photos. It is accessible only to scientists whose access is permitted by a review board. History The database originated when the Office of Naval Research formed the Shark Research Panel in June 1958, which funded it until 1967. This group comprised 34 renowned scientists with expertise in sharks, tasked with exploring research strategies to enhance protection for Navy personnel against shark attacks. The file was temporarily housed at the Mote Marine Laboratory in Sarasota, Florida. In the 1980s, it was transferred to the National Underwater Accident Data Center at the University of Rhode Island before it was transferred to the Florida Museum of Natural History at the University of Florida under the direction of George H. Burgess. It is currently under the direction of Dr. Gavin Naylor and members of the American Elasmobranch Society, which has assumed the task of preserving, expanding, and analyzing shark attack data. References External links International Shark Attack File Online databases Databases in the United States Sharks Shark attacks
https://en.wikipedia.org/wiki/Windows%20Workflow%20Foundation
Windows Workflow Foundation (WF) is a Microsoft technology that provides an API, an in-process workflow engine, and a rehostable designer to implement long-running processes as workflows within .NET applications. The latest version of WF was released as part of the .NET Framework version 4.5 and is referred to as WF45. A workflow, as defined here, is a series of distinct programming steps or phases. Each step is modeled in WF as an Activity. The .NET Framework provides a library of activities (such as WriteLine, an activity that writes text to the console or other form of output). Custom activities can also be developed for additional functionality. Activities can be assembled visually into workflows using the Workflow Designer, a design surface that runs within Visual Studio. The designer can also be hosted in other applications. Encapsulating programming functionality into the activities allows the developer to create more manageable applications; each component of execution can be developed as a Common Language Runtime object whose execution will be managed by the workflow runtime. Workflow Foundation versions Workflow Foundation was first released in Version 3 of the .NET Framework, and primarily uses the System.Workflow.Activities, System.Workflow.ComponentModel, and System.Workflow.Runtime namespaces. Workflows in version 3 were created using either the Sequential model (in which activities are executed in order, with the completion of one activity leading to the next), or the State Machine model (in which activities are executed in response to external events). Microsoft SharePoint 2007 uses WF 3. In .NET Framework 3.5, messaging activities were introduced that integrated Workflow with Windows Communication Foundation (WCF). With the new ReceiveActivity, workflows could respond to incoming WCF messages. The new features of Workflow in version 3.5 use the System.ServiceModel namespace. Microsoft SharePoint 2010 uses WF 3.5. In .NET Framework 4, Windows W
https://en.wikipedia.org/wiki/Thom%20conjecture
In mathematics, a smooth algebraic curve C in the complex projective plane, of degree d, has genus given by the genus–degree formula g = (d − 1)(d − 2)/2. The Thom conjecture, named after French mathematician René Thom, states that if Σ is any smoothly embedded connected curve representing the same class in homology as C, then the genus g of Σ satisfies the inequality g ≥ (d − 1)(d − 2)/2. In particular, C is known as a genus minimizing representative of its homology class. It was first proved by Peter Kronheimer and Tomasz Mrowka in October 1994, using the then-new Seiberg–Witten invariants. Assuming that the class has nonnegative self-intersection number, this was generalized to Kähler manifolds (an example being the complex projective plane) by John Morgan, Zoltán Szabó, and Clifford Taubes, also using the Seiberg–Witten invariants. There is at least one generalization of this conjecture, known as the symplectic Thom conjecture (which is now a theorem, as proved for example by Peter Ozsváth and Szabó in 2000). It states that a symplectic surface of a symplectic 4-manifold is genus minimizing within its homology class. This would imply the previous result, because algebraic curves (complex dimension 1, real dimension 2) are symplectic surfaces within the complex projective plane, which is a symplectic 4-manifold. See also Adjunction formula Milnor conjecture (topology) References Four-dimensional geometry 4-manifolds Algebraic surfaces Conjectures that have been proved Theorems in geometry
https://en.wikipedia.org/wiki/Samuel%20Francis%20Boys
Samuel Francis (Frank) Boys (20 December 1911 – 16 October 1972) was a British theoretical chemist. Education Boys was born in Pudsey, Yorkshire, England. He was educated at the Grammar School in Pudsey and then at Imperial College London. He graduated in Chemistry in 1932. He was awarded a PhD in 1937 from Cambridge for research conducted at Trinity College, supervised first by Martin Lowry, and then, after Lowry's 1936 death, by John Lennard-Jones. His thesis was "The Quantum Theory of Optical Rotation". Career In 1938, Boys was appointed an Assistant Lecturer in Mathematical Physics at Queen's University Belfast. He spent the whole of the Second World War working on explosives research with the Ministry of Supply at the Royal Arsenal, Woolwich, with Lennard-Jones as his supervisor. After the war, Boys accepted an ICI Fellowship at Imperial College, London. In 1949, he was appointed to a Lectureship in theoretical chemistry at the University of Cambridge. He remained at Cambridge until his death. He was only elected to a Cambridge College Fellowship at University College, now Wolfson College, Cambridge, shortly before his death. Boys is best known for the introduction of Gaussian orbitals into ab initio quantum chemistry. Almost all basis sets used in computational chemistry now employ these orbitals. Frank Boys was also one of the first scientists to use digital computers for calculations on polyatomic molecules. An International Conference, entitled "Molecular Quantum Mechanics: Methods and Applications" was held in memory of S. Francis Boys and in honour of Isaiah Shavitt in September 1995 at St Catharine's College, Cambridge. Awards and honours Boys was a member of the International Academy of Quantum Molecular Science. He was elected a Fellow of the Royal Society (FRS) in 1972, a few months before his death. References External links 1911 births 1972 deaths People from Pudsey Alumni of Imperial College London Alumni of Trinity College, Cambridge
https://en.wikipedia.org/wiki/Computational%20electromagnetics
Computational electromagnetics (CEM), computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment using computers. It typically involves using computer programs to compute approximate solutions to Maxwell's equations to calculate antenna performance, electromagnetic compatibility, radar cross section and electromagnetic wave propagation when not in free space. A large subfield is antenna modeling computer programs, which calculate the radiation pattern and electrical properties of radio antennas, and are widely used to design antennas for specific applications. Background Many real-world electromagnetic problems, such as electromagnetic scattering, electromagnetic radiation and the modeling of waveguides, are not analytically calculable for the multitude of irregular geometries found in actual devices. Computational numerical techniques can overcome the inability to derive closed-form solutions of Maxwell's equations under various constitutive relations of media and boundary conditions. This makes computational electromagnetics (CEM) important to the design and modeling of antenna, radar, satellite and other communication systems, nanophotonic devices and high-speed silicon electronics, medical imaging, and cell-phone antenna design, among other applications. CEM typically solves the problem of computing the E (electric) and H (magnetic) fields across the problem domain (e.g., to calculate the antenna radiation pattern for an arbitrarily shaped antenna structure). Quantities such as the power flow direction (Poynting vector), a waveguide's normal modes, media-generated wave dispersion, and scattering can then be computed from the E and H fields. CEM models may or may not assume symmetry, simplifying real-world structures to idealized cylinders, spheres, and other regular geometrical objects. CEM models extensively make use of symmetry, and solve for reduced dimensionality
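To give a flavour of what such solvers do, here is a minimal one-dimensional finite-difference time-domain (FDTD) loop in Python; FDTD is one of the standard CEM techniques, and the normalized units, hard source, and reflecting far wall are simplifying assumptions of this sketch:

import numpy as np

# 1D FDTD update of Maxwell's curl equations in normalized units
# (Courant number 1, free space); Ez and Hy live on staggered grids.
N, STEPS = 200, 400
Ez = np.zeros(N)
Hy = np.zeros(N - 1)

for t in range(STEPS):
    Hy += np.diff(Ez)                          # dHy/dt ~ dEz/dx
    Ez[1:-1] += np.diff(Hy)                    # dEz/dt ~ dHy/dx
    Ez[0] = np.exp(-((t - 30.0) / 10.0) ** 2)  # hard Gaussian source
    # Ez[-1] stays 0: a crude perfectly conducting boundary

print(np.abs(Ez).max())  # pulse amplitude after bouncing off the far wall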
https://en.wikipedia.org/wiki/Generalized%20Helmholtz%20theorem
The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem, which is valid only in one dimension. The generalized Helmholtz theorem reads as follows. Let q = (q₁, ..., qₛ) and p = (p₁, ..., pₛ) be the canonical coordinates of an s-dimensional Hamiltonian system, and let H(q, p; V) = K(p) + φ(q; V) be the Hamiltonian function, where K is the kinetic energy and φ is the potential energy, which depends on a parameter V. Let the hyper-surfaces of constant energy in the 2s-dimensional phase space of the system be metrically indecomposable and let ⟨·⟩ denote time average. Define the quantities Ω, S, T, P as follows: Ω(E, V) is the volume of phase space enclosed by the constant-energy hyper-surface H(q, p; V) = E; S(E, V) = log Ω(E, V); T = (2/s)⟨K⟩; P = ⟨−∂H/∂V⟩. Then: dS = (dE + P dV)/T. Remarks The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities in multidimensional ergodic systems. This in turn allows one to define the "thermodynamic state" of a multi-dimensional ergodic mechanical system, without the requirement that the system be composed of a large number of degrees of freedom. In particular the temperature T is given by twice the time average of the kinetic energy per degree of freedom, and the entropy S by the logarithm of the phase space volume enclosed by the constant energy surface (i.e. the so-called volume entropy). References Further reading Helmholtz, H., von (1884a). Principien der Statik monocyklischer Systeme. Borchardt-Crelle's Journal für die reine und angewandte Mathematik, 97, 111–140 (also in Wiedemann G. (Ed.) (1895) Wissenschaftliche Abhandlungen. Vol. 3 (pp. 142–162, 179–202). Leipzig: Johann Ambrosius Barth). Helmholtz, H., von (1884b). Studien zur Statik monocyklischer Systeme. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, I, 159–177 (also in Wiedemann G. (Ed.) (1895) Wissenschaftliche Abhandlungen. Vol. 3 (pp. 163–178). Leipzig: Johann Ambrosius Barth). Boltzmann, L. (1884). Über die Eigenschaften monocyklischer und anderer damit verwandter Systeme. C
https://en.wikipedia.org/wiki/Video%20game%20conversion
In video gaming parlance, a conversion is the production of a game on one computer or console that was originally written for another system. Over the years, video game conversions have taken a number of different forms, both in their style and in the method by which they were converted. In the arcade video game industry, the term conversion has a different usage, referring to game conversion kits for arcade cabinets.

Types of conversions

Direct conversions

Direct conversions, also referred to as "straight conversions", are conversions in which the source code of the original game is used with relatively few modifications. Direct conversions were fairly rare until the second half of the 1990s. In the case of arcade conversions, this was because arcade systems were usually much more advanced than their contemporary home-based systems, which thus could not accurately recreate the speed, graphics, audio, and in some cases even the gameplay algorithms of arcade games. In the case of personal computer conversions, most games pre-1995 were written in assembly language, and source-based conversions could not be reproduced on systems with other processors, rendering the original source code useless. Also, while most third-party developers had access to the original graphics and audio, these could not be faithfully reproduced on older home computers such as the ZX Spectrum, and developers were forced to recreate the graphics and audio from scratch. In the early 2000s, source-based conversions of games became more feasible, and one-to-one pixel-perfect conversions became commonplace.

Imitations/clones

Imitations of popular arcade games were common, particularly in the early days of video gaming, when copyright violations were treated less severely. While the game was fundamentally the same, the title, names, graphics and audio were usually changed to avoid legal challenges. Developers have also created "clones" of their own games. Escape (now Westone) produced a clone of W
https://en.wikipedia.org/wiki/One-way%20compression%20function
In cryptography, a one-way compression function is a function that transforms two fixed-length inputs into a fixed-length output. The transformation is "one-way", meaning that it is difficult, given a particular output, to compute inputs which compress to that output. One-way compression functions are not related to conventional data compression algorithms, which instead can be inverted exactly (lossless compression) or approximately (lossy compression) to the original data. One-way compression functions are for instance used in the Merkle–Damgård construction inside cryptographic hash functions. One-way compression functions are often built from block ciphers. Some methods to turn any normal block cipher into a one-way compression function are Davies–Meyer, Matyas–Meyer–Oseas, Miyaguchi–Preneel (single-block-length compression functions) and MDC-2/Meyer–Schilling, MDC-4, Hirose (double-block-length compression functions). These methods are described in detail further down. (MDC-2 is also the name of a hash function patented by IBM.) Another method is 2BOW (or NBOW in general), a "high-rate multi-block-length hash function based on block ciphers" that typically achieves (asymptotic) rates between 1 and 2, independent of the hash size (with only a small constant overhead). This method has not yet seen any serious security analysis, so it should be handled with care.

Compression

A compression function mixes two fixed-length inputs and produces a single fixed-length output of the same size as one of the inputs. Equivalently, the compression function can be seen as transforming one large fixed-length input into a shorter, fixed-length output. For instance, input A might be 128 bits, input B 128 bits, and they are compressed together to a single output of 128 bits. This is equivalent to having a single 256-bit input compressed to a single output of 128 bits. Some compression functions do not compress by half, but instead by some other factor. For example, input A mi
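As an illustrative sketch (not from the article; it assumes the third-party PyCryptodome package and uses AES-128 purely as an example cipher), the Davies–Meyer method mentioned above turns a block cipher E into a compression function via h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}:

```python
# Sketch of the Davies-Meyer construction with AES-128 as the block
# cipher, chained in toy Merkle-Damgard fashion.  Assumes PyCryptodome.
from Crypto.Cipher import AES

BLOCK = 16  # AES block size in bytes

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """One compression step: encrypt h under key m, then XOR in h."""
    assert len(h) == BLOCK and len(m) == BLOCK
    e = AES.new(m, AES.MODE_ECB).encrypt(h)
    return bytes(a ^ b for a, b in zip(e, h))

def toy_hash(msg: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # Zero-padding only; a real design would use proper length padding
    # (Merkle-Damgard strengthening) to avoid trivial collisions.
    if len(msg) % BLOCK:
        msg += b"\x00" * (BLOCK - len(msg) % BLOCK)
    h = iv
    for i in range(0, len(msg), BLOCK):
        h = davies_meyer(h, msg[i:i + BLOCK])
    return h

print(toy_hash(b"example message").hex())
```

The feed-forward XOR of the chaining value is the essential ingredient: without it, anyone knowing the message block could simply decrypt the output and invert the step.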
https://en.wikipedia.org/wiki/Word%20problem%20%28mathematics%29
In computational mathematics, a word problem is the problem of deciding whether two given expressions are equivalent with respect to a set of rewriting identities. A prototypical example is the word problem for groups, but there are many other instances as well. A deep result of computability theory is that answering this question is in many important cases undecidable.

Background and motivation

In computer algebra one often wishes to encode mathematical expressions using an expression tree. But there are often multiple equivalent expression trees. The question naturally arises of whether there is an algorithm which, given as input two expressions, decides whether they represent the same element. Such an algorithm is called a solution to the word problem. For example, imagine that the expressions are built from symbols representing real numbers; a relevant solution to the word problem would, given two such expressions, produce the output EQUAL if they denote the same number and NOT_EQUAL otherwise. The most direct solution to a word problem takes the form of a normal form theorem and an algorithm which maps every element in an equivalence class of expressions to a single encoding known as the normal form; the word problem is then solved by comparing these normal forms via syntactic equality. For example, one might single out one expression as the normal form of each class of equivalent expressions and devise a transformation system to rewrite every expression to that form, in the process proving that all equivalent expressions are rewritten to the same normal form. But not all solutions to the word problem use a normal form theorem; there are algebraic properties which indirectly imply the existence of an algorithm. While the word problem asks whether two terms containing constants are equal, a proper extension of the word problem known as the unification problem asks whether two terms containing variables have instances that are equal, or in other words whether the equation has any solutions. As a common example, is a word problem in the
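As a toy illustration of the normal-form approach (added here for concreteness; the structure chosen and the function names are hypothetical), consider the word problem for a free commutative monoid, where equivalent words differ only by reordering, so sorting a word yields its normal form:

```python
# Illustrative sketch: solving the word problem for a free commutative
# monoid by computing a normal form.  Words are tuples of generator
# names; two words are equivalent iff one is a permutation of the
# other, so the sorted word is a canonical class representative.
def normal_form(word):
    """Map every word to its normal form: the sorted word."""
    return tuple(sorted(word))

def word_problem(u, v):
    """Decide equivalence by syntactic equality of normal forms."""
    return "EQUAL" if normal_form(u) == normal_form(v) else "NOT_EQUAL"

print(word_problem(("x", "y", "z"), ("z", "x", "y")))  # EQUAL
print(word_problem(("x", "y"), ("x", "x", "y")))       # NOT_EQUAL
```

For richer theories (groups given by generators and relations, for instance) such a rewriting strategy may not exist at all, which is exactly where the undecidability results mentioned above apply.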
https://en.wikipedia.org/wiki/Altova
Altova is a commercial software development company with headquarters in Beverly, Massachusetts, United States and Vienna, Austria, that produces integrated XML, JSON, database, UML, and data management software development tools.

Company

Altova was founded in 1992 and established itself as an XML development software company. Its software is used by more than 4 million users and more than 100,000 companies globally. The first product was XMLSpy, and around the year 2000 Altova began to develop new tools to augment XMLSpy and expand into new areas of software development. The CEO and president of Altova is Alexander Falk, who has explained that Altova software has developed through the inclusion of the features most requested by users of earlier versions of its programs. Falk is also the inventor behind Altova's patents. Altova software attempts to increase the efficiency of program use, reducing the time users need to learn database software and to perform tasks such as query execution. Examples of Altova software include the XML editor XMLSpy and MapForce, a data mapping tool. Altova has also added XBRL-capable programs, including development tools, to its XML software line. In addition, it has added Web Services Description Language, project management and Unified Modeling Language capabilities to its software. More recently, the company introduced a mobile development environment called MobileTogether for developing cross-platform enterprise mobile solutions. At the beginning of 2014, the company claimed to have more than 4.6 million users of its software.

Programs
XMLSpy—XML editor for modeling, editing, transforming, and debugging XML technologies
MapForce—any-to-any graphical data mapping, conversion, and integration tool
MapForce FlexText—graphical utility for parsing flat files
StyleVision—multipurpose visual XSLT stylesheet design, multi-channel publishing, and report building tool
UModel—UML modeling tool
DatabaseSpy—multi-database
https://en.wikipedia.org/wiki/Cryptovirology
Cryptovirology refers to the use of cryptography to devise particularly powerful malware, such as ransomware and asymmetric backdoors. Traditionally, cryptography and its applications are defensive in nature, providing privacy, authentication, and security to users. Cryptovirology employs a twist on cryptography, showing that it can also be used offensively. It can be used to mount extortion-based attacks that cause loss of access to information, loss of confidentiality, and information leakage, tasks which cryptography typically prevents. The field was born with the observation that public-key cryptography can be used to break the symmetry between what an antivirus analyst sees regarding malware and what the attacker sees. The antivirus analyst sees a public key contained in the malware, whereas the attacker sees the public key contained in the malware as well as the corresponding private key (outside the malware), since the attacker created the key pair for the attack. The public key allows the malware to perform trapdoor one-way operations on the victim's computer that only the attacker can undo.

Overview

The field encompasses covert malware attacks in which the attacker securely steals private information such as symmetric keys, private keys, PRNG state, and the victim's data. Examples of such covert attacks are asymmetric backdoors. An asymmetric backdoor is a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. This contrasts with the traditional backdoor, which is symmetric: anyone that finds it can use it. Kleptography, a subfield of cryptovirology, is the study of asymmetric backdoors in key generation algorithms, digital signature algorithms, key exchanges, pseudorandom number generators, encryption algorithms, and other cryptographic algorithms. The NIST Dual EC DRBG random bit generator has an asymmetric backdoor in it. The EC-DRBG algorithm utilizes the discrete-log kleptogram from kleptography, which
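The asymmetry described above is ordinary hybrid public-key encryption. A minimal sketch (added for illustration; it assumes the PyCryptodome package, and the variable names are hypothetical) shows why an analyst who extracts only the public key from a sample cannot undo the operation, while the holder of the private key can:

```python
# Sketch of the trapdoor one-way operation: a session key is wrapped
# under a public key, so only the private-key holder can unwrap it.
# Assumes PyCryptodome; names and key sizes are illustrative.
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from Crypto.Random import get_random_bytes

attacker_key = RSA.generate(2048)        # key pair created off-site
public_only = attacker_key.publickey()   # all that the analyst sees

session_key = get_random_bytes(16)       # fresh symmetric key
wrapped = PKCS1_OAEP.new(public_only).encrypt(session_key)

# Undoing the operation requires the private key:
recovered = PKCS1_OAEP.new(attacker_key).decrypt(wrapped)
assert recovered == session_key
```

This is the same construction used defensively in protocols such as TLS and PGP; cryptovirology's observation is that the trapdoor can be pointed at the victim instead.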