https://en.wikipedia.org/wiki/KT%20%28energy%29
kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on E/kT (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in the canonical ensemble, the probability of the system being in a state with energy E is proportional to e^(−E/kT). More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k. In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning e^(−E/kT) into e^(−βE). RT RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values on the macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. E/RT. The SI units for RT are joules per mole (J/mol). RT differs from kT only by a factor of the Avogadro constant, NA. The dimension of kT is energy, or ML²T⁻², expressed in SI units as joules (J): kT = RT/NA References Thermodynamics Statistical mechanics
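As a quick numerical sketch (the script is illustrative; it uses the exact 2019 SI values of the constants):

```python
# Evaluate kT and RT at room temperature (T = 298.15 K).
k  = 1.380649e-23      # Boltzmann constant, J/K (exact since the 2019 SI)
NA = 6.02214076e23     # Avogadro constant, 1/mol (exact since the 2019 SI)
eV = 1.602176634e-19   # electron volt in joules (exact since the 2019 SI)

T = 298.15             # room temperature, K
kT = k * T             # molecular-scale energy scale, J
RT = kT * NA           # macroscopic energy scale, J/mol (RT = kT * NA)

print(f"kT = {kT:.4e} J = {kT / eV * 1000:.1f} meV")   # ~25.7 meV
print(f"RT = {RT / 1000:.3f} kJ/mol")                  # ~2.479 kJ/mol
```

At room temperature this gives kT ≈ 25.7 meV and RT ≈ 2.48 kJ/mol, which is why these values recur as rules of thumb in molecular and physical chemistry.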
https://en.wikipedia.org/wiki/Tseng%20Labs%20ET4000
The Tseng Labs ET4000 was a line of SVGA graphics controller chips during the early 1990s, commonly found in many 386/486 and compatible systems, with some models, notably the ET4000/W32 and later chips, offering graphics acceleration. Offering above-average host interface throughput coupled with a moderate price, Tseng Labs' ET4000 chipset family was well regarded for its performance, and was integrated into many companies' lineups, notably with Hercules' Dynamite series, the Diamond Stealth 32 and several Speedstar cards, and on many generic boards. Models ET4000AX The ET4000AX was a major advancement over Tseng Labs' earlier ET3000 SVGA chipset, featuring a new 16-bit host interface controller with deep FIFO buffering and caching capabilities, and an enhanced, variable-width memory interface with support for up to 1MB of memory with a ~16-bit VRAM or ~32-bit DRAM memory data bus width. The FIFO buffers and cache functions had the effect of greatly improving host interface throughput, and therefore offering substantially improved redraw performance compared to the ET3000 and most of its contemporaries. The interface controller also offered support for IBM's MCA bus, in addition to an 8 or 16-bit ISA bus. The ET4000AX could also support the emerging VESA Local Bus standard with some additional external logic, albeit with a 16-bit host bus width. Neither the ET4000AX nor its succeeding family members offered an integrated RAMDAC, which hampered the line's cost/performance competitiveness later on. ET4000/W32 Hardware acceleration via dedicated BitBLT hardware and a hardware cursor sprite was introduced in the ET4000/W32. The W32 offered improved local bus support along with further increased host interface performance, but by the time PCI Windows accelerators became commonplace, high host throughput was no longer a distinguishing feature. Nevertheless, as a mid-priced Windows accelerator, the W32 benchmarked favorably against competing mid-range S3 and
https://en.wikipedia.org/wiki/Syslog
In computing, syslog is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level. Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients. History Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project. It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems. A variety of implementations also exist on other operating systems and it is commonly found in network devices, such as routers. Syslog originally functioned as a de facto standard, without any authoritative published specification, and many implementations existed, some of which were incompatible. The Internet Engineering Task Force documented the status quo in RFC 3164 in August of 2001. It was standardized by RFC 5424 in March of 2009. Various companies have attempted to claim patents for specific aspects of syslog implementations. This has had little effect on the use and standardization of the protocol. Message components The information provided by the originator of a syslog message includes the facility code and the severity level. The syslog software adds information to the information header before passing the entry to the syslog receiver. Such comp
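As an illustration of the client side, a minimal sketch using Python's standard-library SysLogHandler; the server address and UDP port 514 are assumptions to adjust for a given environment:

```python
# Send a message to a syslog server using Python's standard library.
# The address and UDP port 514 below are assumptions for illustration.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=("localhost", 514),
                        facility=SysLogHandler.LOG_USER)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("application started")   # facility USER, severity INFO

# RFC 3164/5424 pack facility and severity into one priority value:
PRI = SysLogHandler.LOG_USER * 8 + 6   # severity 6 = informational
print(PRI)                             # 14, sent as "<14>" on the wire
```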
https://en.wikipedia.org/wiki/Quasiperiodic%20motion
In mathematics and theoretical physics, quasiperiodic motion is in rough terms the type of motion executed by a dynamical system containing a finite number (two or more) of incommensurable frequencies. That is, if we imagine that the phase space is modelled by a torus T (that is, the variables are periodic like angles), the trajectory of the system is modelled by a curve on T that wraps around the torus without ever exactly coming back on itself. A quasiperiodic function on the real line is the type of function (continuous, say) obtained from a function on T, by means of a curve R → T which is linear (when lifted from T to its covering Euclidean space), by composition. It is therefore oscillating, with a finite number of underlying frequencies. (NB the sense in which theta functions and the Weierstrass zeta function in complex analysis are said to have quasi-periods with respect to a period lattice is something distinct from this.) The theory of almost periodic functions is, roughly speaking, for the same situation but allowing T to be a torus with an infinite number of dimensions. References See also Quasiperiodicity Dynamical systems
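In symbols, one standard way to formalize the description above (added here for illustration, with ω1, ..., ωn the underlying frequencies):

```latex
% A quasiperiodic function f on the real line: a continuous function F
% on the n-torus composed with a linear curve t -> (omega_1 t, ..., omega_n t).
f(t) = F(\omega_1 t,\ \omega_2 t,\ \dots,\ \omega_n t),
\qquad F \text{ continuous and } 2\pi\text{-periodic in each argument},
\qquad \omega_i / \omega_j \notin \mathbb{Q} \quad (i \neq j).
```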
https://en.wikipedia.org/wiki/Method%20of%20distinguished%20element
In the mathematical field of enumerative combinatorics, identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set. Definition Let P be a family of subsets of the set X and let x be a distinguished element of X. Then suppose there is a predicate that relates a subset A ⊆ X to x. Denote by P_t the set of subsets from P for which the predicate is true and by P_f the set of subsets from P for which it is false. Then P_t and P_f are disjoint sets, so by the method of summation, the cardinalities are additive: |P| = |P_t| + |P_f|. Thus the distinguished element allows for a decomposition according to a predicate that is a simple form of a divide and conquer algorithm. In combinatorics, this allows for the construction of recurrence relations. Examples are in the next section. Examples The binomial coefficient C(n, k) is the number of size-k subsets of a size-n set. A basic identity—one of whose consequences is that the binomial coefficients are precisely the numbers appearing in Pascal's triangle—states that: C(n + 1, k) = C(n, k − 1) + C(n, k). Proof: In a size-(n + 1) set, choose one distinguished element. The set of all size-k subsets contains: (1) all size-k subsets that do contain the distinguished element, and (2) all size-k subsets that do not contain the distinguished element. If a size-k subset of a size-(n + 1) set does contain the distinguished element, then its other k − 1 elements are chosen from among the other n elements of our size-(n + 1) set. The number of ways to choose those is therefore C(n, k − 1). If a size-k subset does not contain the distinguished element, then all of its k members are chosen from among the other n "non-distinguished" elements. The number of ways to choose those is therefore C(n, k). The number of subsets of any size-n set is 2^n. Proof: We use mathematical induction. The basis for induction is the truth of this proposition in case n = 0. The empty set has 0 members and 1 subset, and 2^0 = 1. The induction hypothesis is the proposition in case n; we use it to prove ca
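Both identities proved above are easy to confirm by brute-force enumeration; the following sketch (with the range of n chosen arbitrarily) checks them:

```python
# Verify C(n+1, k) = C(n, k-1) + C(n, k) and the 2^n subset count
# by explicitly enumerating subsets (illustrative brute force).
from itertools import combinations

def n_subsets(n, k):
    """Count size-k subsets of a size-n set by enumeration."""
    return sum(1 for _ in combinations(range(n), k))

for n in range(1, 8):
    for k in range(1, n + 1):
        assert n_subsets(n + 1, k) == n_subsets(n, k - 1) + n_subsets(n, k)
    total = sum(n_subsets(n, k) for k in range(n + 1))
    assert total == 2 ** n       # every subset has some size 0..n

print("Pascal's rule and the 2^n count hold for n = 1..7")
```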
https://en.wikipedia.org/wiki/Kripke%20structure%20%28model%20checking%29
This article describes Kripke structures as used in model checking. For a more general description, see Kripke semantics. A Kripke structure is a variation of the transition system, originally proposed by Saul Kripke, used in model checking to represent the behavior of a system. It consists of a graph whose nodes represent the reachable states of the system and whose edges represent state transitions, together with a labelling function which maps each node to a set of properties that hold in the corresponding state. Temporal logics are traditionally interpreted in terms of Kripke structures. Formal definition Let AP be a set of atomic propositions, i.e. boolean-valued expressions formed from variables, constants and predicate symbols. Clarke et al. define a Kripke structure over AP as a 4-tuple M = (S, I, R, L) consisting of a finite set of states S, a set of initial states I ⊆ S, a transition relation R ⊆ S × S such that R is left-total, i.e., such that for every s ∈ S there exists s' ∈ S with (s, s') ∈ R, and a labeling (or interpretation) function L: S → 2^AP. Since R is left-total, it is always possible to construct an infinite path through the Kripke structure. A deadlock state can be modeled by a single outgoing edge back to itself. The labeling function L defines for each state s ∈ S the set L(s) of all atomic propositions that are valid in s. A path of the structure M is a sequence of states ρ = s1, s2, s3, ... such that for each i > 0, R(s_i, s_(i+1)) holds. The word on the path ρ is the sequence of sets of the atomic propositions w = L(s1), L(s2), L(s3), ..., which is an ω-word over alphabet 2^AP. With this definition, a Kripke structure (say, having only one initial state) may be identified with a Moore machine with a singleton input alphabet, and with the output function being its labeling function. Example Let the set of atomic propositions AP = {p, q}; p and q can model arbitrary boolean properties of the system that the Kripke structure is modelling. The figure at right illustrates a Kripke structure M over AP, whose concrete states, transitions and labels were given in the figure; M may produce a path through those states, and the sequence of label sets along it is the execution word over the path. M can produce execution words belonging to the la
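A minimal sketch of such a structure in Python; the concrete states, transitions and labels below are invented for illustration and are not the example from the article's figure:

```python
# Minimal sketch of a Kripke structure M = (S, I, R, L).
# States, transitions and labels are invented for illustration.
S = {"s1", "s2"}
I = {"s1"}
R = {("s1", "s2"), ("s2", "s1"), ("s2", "s2")}
L = {"s1": {"p", "q"}, "s2": {"q"}}          # labeling L: S -> 2^AP

# R must be left-total: every state needs at least one successor.
assert all(any(a == s for (a, b) in R) for s in S)

def path_prefix(start, steps):
    """Follow R, always taking the lexicographically smallest successor."""
    s, trace = start, [start]
    for _ in range(steps):
        s = min(b for (a, b) in R if a == s)
        trace.append(s)
    return trace

rho = path_prefix("s1", 5)             # finite prefix of an infinite path
word = [sorted(L[s]) for s in rho]     # the word on the path (prefix)
print(rho)     # ['s1', 's2', 's1', 's2', 's1', 's2']
print(word)    # [['p', 'q'], ['q'], ['p', 'q'], ['q'], ...]
```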
https://en.wikipedia.org/wiki/Java%20Cryptography%20Architecture
In computing, the Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package. The JCA uses a "provider"-based architecture and contains a set of APIs for various purposes, such as encryption, key generation and management, secure random-number generation, certificate validation, etc. These APIs provide an easy way for developers to integrate security into application code. See also Java Cryptography Extension Bouncy Castle (cryptography) External links Official JCA guides: JavaSE6, JavaSE7, JavaSE8, JavaSE9, JavaSE10, JavaSE11 Java platform Cryptographic software
https://en.wikipedia.org/wiki/Stressor
A stressor is a chemical or biological agent, environmental condition, external stimulus or an event seen as causing stress to an organism. Psychologically speaking, a stressor can be events or environments that individuals might consider demanding, challenging, and/or threatening individual safety. Events or objects that may trigger a stress response may include: environmental stressors (hypo- or hyperthermic temperatures, elevated sound levels, over-illumination, overcrowding) daily "stress" events (e.g., traffic, lost keys, money, quality and quantity of physical activity) life changes (e.g., divorce, bereavement) workplace stressors (e.g., high job demand vs. low job control, repeated or sustained exertions, forceful exertions, extreme postures, office clutter) chemical stressors (e.g., tobacco, alcohol, drugs) social stressors (e.g., societal and family demands) Stressors can cause physical, chemical and mental responses internally. Physical stressors produce mechanical stresses on skin, bones, ligaments, tendons, muscles and nerves that cause tissue deformation and (in extreme cases) tissue failure. Chemical stresses also produce biomechanical responses associated with metabolism and tissue repair. Physical stressors may produce pain and impair work performance. Chronic pain and impairment requiring medical attention may result from extreme physical stressors or if there is not sufficient recovery time between successive exposures. A recent study shows that physical office clutter could be an example of physical stressors in a workplace setting. Stressors may also affect mental function and performance. One possible mechanism involves stimulation of the hypothalamus, which releases CRF (corticotropin-releasing factor) -> the pituitary gland releases ACTH (adrenocorticotropic hormone) -> the adrenal cortex secretes various stress hormones (e.g., cortisol) -> stress hormones (30 varieties) travel in the blood stream to relevant organs, e.g., glands, heart, intestines -> flight
https://en.wikipedia.org/wiki/Chromatoidal%20bodies
Chromatoidal bodies are aggregations of ribosomes found in cysts of some amoebae including Entamoeba histolytica and Entamoeba coli. They exist in the cytoplasm and are dark staining. In the early cystic stages of E. histolytica, chromatoid bodies arise from aggregation of ribosomes forming polycrystalline masses. As the cyst matures, the masses fragment into separate particles and the chromatoidal body disappears. It is thought that chromatoidal body formation is a manifestation of parasite-host adaptive conditions. Ribonucleoprotein is synthesized under favorable conditions, crystallized in the resistant cyst stage and dispersed in the newly excysted amoebae when the amoeba is able to establish itself in a new host. References Cell biology
https://en.wikipedia.org/wiki/The%20Dan%20Patrick%20Show
The Dan Patrick Show is a syndicated radio and television sports talk show, hosted by former ESPN personality Dan Patrick. It is currently produced by Patrick and is syndicated to radio stations by Premiere Radio Networks, within and independently of their Fox Sports Radio package. The three-hour program debuted on October 1, 2007. It is broadcast weekdays live beginning at 9:00 a.m. Eastern. The current show is a successor to the original Dan Patrick Show, which aired from 1999 to 2007 on ESPN Radio weekdays at 1:00 p.m. Eastern/10:00 a.m. Pacific. The show was televised on three networks: on DirecTV's Audience Network (formerly the 101 Network) since August 3, 2009; on three AT&T SportsNet affiliates since October 25, 2010; and on B/R Live as of March 1, 2019. It can also be heard on Sirius XM Radio channel 211, and is distributed as a podcast by PodcastOne. On January 10, 2020, Patrick announced on his show that the relationship with AT&T Sports for the live video broadcast would end in its current form, shortly after Super Bowl LIV. AT&T's Audience Network, which had simulcast the program since 2009, was ceasing operations, and the show would also end streaming via B/R Live, following a short run that began in 2019. The final show under AT&T aired on February 28. On March 2, the live show began airing on The Dan Patrick Show YouTube channel with the radio show still being nationally syndicated via multiple platforms. On August 10, 2020, it was announced that the show would move to Peacock on August 24, 2020. Highlights of the show continue to appear on the YouTube channel. On July 19, 2023, Patrick announced that the show's run would end on December 24, 2027. Guests The show mainly features guests involved with American football and sometimes other sports, whether current or former athletes, coaches, commissioners or agents. Less often, guests who are not affiliated with sports will come on the show, although it is common for Patrick to ask at least one spor
https://en.wikipedia.org/wiki/Ion%20beam
An ion beam is a type of charged particle beam consisting of ions. Ion beams have many uses in electronics manufacturing (principally ion implantation) and other industries. A variety of ion beam sources exists, some derived from the mercury vapor thrusters developed by NASA in the 1960s. The most common ion beams are of singly-charged ions. Units Ion current density is typically measured in mA/cm², and ion energy in eV. The use of eV is convenient for converting between voltage and energy, especially when dealing with singly-charged ion beams, as well as converting between energy and temperature (1 eV = 11600 K). Broad-beam ion sources Most commercial applications use two popular types of ion source, gridded and gridless, which differ in current and power characteristics and the ability to control ion trajectories. In both cases electrons are needed to generate an ion beam. The most common electron emitters are hot filament and hollow cathode. Gridded ion source In a gridded ion source, a DC or RF discharge is used to generate ions, which are then accelerated and decimated using grids and apertures. Here, the DC discharge current or the RF discharge power is used to control the beam current. The ion current density J that can be accelerated using a gridded ion source is limited by the space charge effect, which is described by Child's law: J = (4ε₀/9) √(2q/m) V^(3/2)/d², where V is the voltage between the grids, d is the distance between the grids, q is the ion charge, and m is the ion mass. The grids are placed as closely as possible to increase the current density. The ions used have a significant impact on the maximum ion beam current, since J ∝ √(1/m). Everything else being equal, the maximum ion beam current with krypton is only 69% the maximum ion current of an argon beam, and with xenon the ratio drops to 55%. Gridless ion sources In a gridless ion source, ions are generated by a flow of electrons (no grids). The most common gridless ion source is the end-Hall ion source. Here, the discharge current
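The mass scaling in Child's law is easy to check numerically. In the sketch below, the grid voltage and spacing are arbitrary illustrative values; the constants are standard:

```python
# Child's law for the space-charge-limited current density between grids:
#   J = (4*eps0/9) * sqrt(2*q/m) * V**1.5 / d**2
from math import sqrt

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q    = 1.602176634e-19    # charge of a singly charged ion, C
amu  = 1.66053906660e-27  # atomic mass unit, kg

def child_law(V, d, m):
    """Maximum ion current density in A/m^2."""
    return (4 * eps0 / 9) * sqrt(2 * q / m) * V**1.5 / d**2

V, d = 1000.0, 1e-3       # 1 kV across a 1 mm gap (assumed illustrative values)
for gas, mass in [("Ar", 39.95), ("Kr", 83.80), ("Xe", 131.29)]:
    J = child_law(V, d, mass * amu)
    print(f"{gas}: J = {J:.1f} A/m^2, ratio to Ar = {sqrt(39.95 / mass):.2f}")
# ratios: Ar 1.00, Kr 0.69, Xe 0.55 -- the percentages quoted in the text
```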
https://en.wikipedia.org/wiki/Ion%20beam%20deposition
Ion beam deposition (IBD) is a process of applying materials to a target through the application of an ion beam. An ion beam deposition apparatus typically consists of an ion source, ion optics, and the deposition target. Optionally a mass analyzer can be incorporated. In the ion source, source materials in the form of a gas, an evaporated solid, or a solution (liquid) are ionized. For atomic IBD, electron ionization, field ionization (Penning ion source) or cathodic arc sources are employed. Cathodic arc sources are used particularly for carbon ion deposition. Molecular ion beam deposition employs electrospray ionization or MALDI sources. The ions are then accelerated, focused or deflected using high voltages or magnetic fields. Optional deceleration at the substrate can be employed to define the deposition energy. This energy usually ranges from a few eV up to a few keV. At low energy molecular ion beams are deposited intact (ion soft landing), while at a high deposition energy molecular ions fragment and atomic ions can penetrate further into the material, a process known as ion implantation. Ion optics (such as radio frequency quadrupoles) can be mass selective. In IBD they are used to select a single, or a range of ion species for deposition in order to avoid contamination. For organic materials in particular, this process is often monitored by a mass spectrometer. The ion beam current, which is a quantitative measure of the deposited amount of material, can be monitored during the deposition process. Switching of the selected mass range can be used to define a stoichiometry. See also Cathodic arc deposition Sputter deposition Ion beam assisted deposition Ion beam induced deposition Electrospray ionization MALDI Thin film deposition
https://en.wikipedia.org/wiki/Treemapping
In information visualization and computing, treemapping is a method for displaying hierarchical data using nested figures, usually rectangles. Treemaps display hierarchical (tree-structured) data as a set of nested rectangles. Each branch of the tree is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. A leaf node's rectangle has an area proportional to a specified dimension of the data. Often the leaf nodes are colored to show a separate dimension of the data. When the color and size dimensions are correlated in some way with the tree structure, one can often easily see patterns that would be difficult to spot in other ways, such as whether a certain color is particularly relevant. A second advantage of treemaps is that, by construction, they make efficient use of space. As a result, they can legibly display thousands of items on the screen simultaneously. Tiling algorithms To create a treemap, one must define a tiling algorithm, that is, a way to divide a region into sub-regions of specified areas. Ideally, a treemap algorithm would create regions that satisfy the following criteria: A small aspect ratio—ideally close to one. Regions with a small aspect ratio (i.e., fat objects) are easier to perceive. Preserve some sense of the ordering in the input data (ordered). Change to reflect changes in the underlying data (high stability). Unfortunately, these properties have an inverse relationship. As the aspect ratio is optimized, the order of placement becomes less predictable. As the order becomes more stable, the aspect ratio is degraded. Rectangular treemaps To date, fifteen primary rectangular treemap algorithms have been developed. Convex treemaps Rectangular treemaps have the disadvantage that their aspect ratio might be arbitrarily high in the worst case. As a simple example, if the tree root has only two children, one with weight 1/n and one with weight 1 − 1/n, then the aspect ratio of the smaller child will be n, which can be arbitrarily high.
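For illustration, here is a sketch of the simplest rectangular tiling, "slice and dice"; it is one of the oldest approaches and makes no attempt to keep aspect ratios low, which is exactly the weakness discussed above:

```python
# "Slice and dice" tiling sketch: split the current rectangle along one
# axis in proportion to the weights. Real treemap algorithms (squarified,
# ordered, ...) work harder to keep aspect ratios low; this one does not.
def slice_and_dice(weights, x, y, w, h, vertical=True):
    """Yield (weight, x, y, w, h) rectangles with areas proportional to weights."""
    total = sum(weights)
    offset = 0.0
    for wt in weights:
        frac = wt / total
        if vertical:                                  # slice along the x axis
            yield (wt, x + offset * w, y, frac * w, h)
        else:                                         # slice along the y axis
            yield (wt, x, y + offset * h, w, frac * h)
        offset += frac

for rect in slice_and_dice([6, 3, 1], 0, 0, 100, 100):
    print(rect)   # three rectangles with areas 6000, 3000, 1000
```

For hierarchical data the same split is applied recursively to each child rectangle, usually alternating the slicing axis at each level of the tree.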
https://en.wikipedia.org/wiki/Sharp%20NEC%20Display%20Solutions
Sharp NEC Display Solutions (Sharp/NEC; formerly NEC Display Solutions or NDS and NEC-Mitsubishi Electric Visual Systems or NEC-Mitsubishi or NM Visual) is a manufacturer of computer monitors and large-screen public-information displays, and has sold and marketed products under the NEC brand globally for more than twenty years. The company sells display products to the consumer, business, professional (e.g. financial, graphic design, CAD/CAM), digital signage and medical markets. The company again became a joint venture of Sharp and NEC Corporation when NEC sold 66% to Sharp on March 25, 2020. Prior to that date, it was a wholly owned subsidiary of Japan-based NEC Corporation since March 31, 2005. Originally, the company was known as NEC-Mitsubishi, a 50/50 joint venture between NEC Corporation and Mitsubishi Electric that began in 2000, and sold display products under both the NEC and Mitsubishi brands. The company is no longer affiliated with Mitsubishi. Brands NEC MultiSync - line of LCD and CRT monitors and large format public displays designed for business applications, lifestyle and gaming. NEC AccuSync - line of LCD and CRT monitors designed for home and office applications. NEC SpectraView - line of LCD monitors designed for color sensitive graphics applications. NEC SpectraView Reference - a line of LCD monitors designed for color critical professional applications NEC MD Series - line of LCD monitors designed for medical diagnostic imaging applications. NEC MULTEOS - line of LCD monitors designed for public demonstrations. See also Cromaclear Diamondtron References External links Global Sharp NEC Display Solutions of America Sharp NEC Display Solutions Europe Sharp NEC Display Solutions Asia Sharp NEC Display Solutions Japan NEC subsidiaries Display technology companies
https://en.wikipedia.org/wiki/List%20of%20Usenet%20newsreaders
Usenet is a worldwide, distributed discussion system that uses the Network News Transfer Protocol (NNTP). Programs called newsreaders are used to read and post messages (called articles or posts, and collectively termed news) to one or more newsgroups. Users must have access to a news server to use a newsreader. This is a list of such newsreaders. Types of clients Text newsreader – designed primarily for reading/posting text posts; unable to download binary attachments Traditional newsreader – a newsreader with text support that can also handle binary attachments, though less efficiently than more specialized clients Binary grabber/plucker – designed specifically for easy and efficient downloading of multi-part binary post attachments; limited or nonexistent reading/posting ability. These generally offer multi-server and multi-connection support. Most now support NZBs, and several either support or plan to support automatic Par2 processing. Some additionally support video and audio streaming. NZB downloader – binary grabber client without header support – cannot browse groups or read/post text messages; can only load 3rd-party NZBs to download binary post attachments. Some incorporate an interface for accessing selected NZB search websites. Binary posting client – designed specifically and exclusively for posting multi-part binary files Combination client – Jack-of-all-trades supporting text reading/posting, as well as multi-segment binary downloading and automatic Par2 processing Web-based client – designed for access through a web browser, requiring no additional software to access Usenet Active Commercial software BinTube Forté Agent NewsBin NewsLeecher Novell GroupWise Postbox Turnpike Usenet Explorer Freeware GrabIt Opera Mail Xnews – MS Windows Free/Open-source software Claws Mail is a GTK+-based email and news client for Linux, BSD, Solaris, and Windows. GNOME Evolution Gnus is an email and news client, and feed
https://en.wikipedia.org/wiki/The%20Aleph%20%28short%20story%29
"The Aleph" (original Spanish title: "El Aleph") is a short story by the Argentine writer and poet Jorge Luis Borges. First published in September 1945, it was reprinted in the short story collection, The Aleph and Other Stories, in 1949, and revised by the author in 1974. Plot summary In Borges' story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion. The story traces the theme of infinity found in several of Borges' other works, such as "The Book of Sand". Borges has stated that the inspiration for this story came from H.G. Wells's short story "The Door in the Wall". As in many of Borges' short stories, the protagonist is a fictionalized version of the author. At the beginning of the story, he is mourning the recent death of Beatriz Viterbo, a woman he loved, and he resolves to stop by the house of her family to pay his respects. Over time, he comes to know her first cousin, Carlos Argentino Daneri, a mediocre poet with a vastly exaggerated view of his own talent who has made it his lifelong quest to write an epic poem that describes every single location on the planet in excruciatingly fine detail. Later in the story, a business attempts to tear down Daneri's house in the course of its expansion. Daneri becomes enraged, explaining to the narrator that he must keep the house in order to finish his poem, because the cellar contains an Aleph which he is using to write the poem. Though by now he believes Daneri to be insane, the narrator proposes to come to the house and see the Aleph for himself. Left alone in the darkness of the cellar, the narrator begins to fear that Daneri is conspiring to kill him, and then he sees the Aleph for himself: Though staggered by the experience of seeing the Aleph, the narrator pretends to have seen nothing in order to get revenge on Daneri, whom he dislikes, by giving Daneri
https://en.wikipedia.org/wiki/THEOS
THEOS, which translates from Greek as "God", is an operating system which started out as OASIS, a microcomputer operating system for small computers that use the Z80 processor. When the operating system was launched for the IBM Personal Computer/AT in 1982, the decision was taken to change the name from OASIS to THEOS, short for THE Operating System. History OASIS The OASIS operating system was originally developed and distributed in 1977 by Phase One Systems of Oakland, California (President Howard Sidorsky). OASIS was developed for the Z80 processor and was the first multi-user operating system for computers based on 8-bit microprocessors (Zilog's Z-80). "OASIS" was a backronym for "Online Application System Interactive Software". OASIS consisted of a multi-user operating system, a powerful Business Basic interpreter, a C compiler and a powerful text editor. Timothy Williams developed OASIS while employed at Phase One. The market asked for 16-bit systems, but no real 16-bit multi-user OS existed. Every month Phase One announced OASIS-16, but it did not come. One day Timothy Williams claimed that he owned OASIS and started a court case against Phase One, seeking several million U.S. dollars. Sidorsky had no choice and filed for Chapter 11 protection. The court case took two years, and the final ruling was that Timothy Williams was allowed to develop the 16-bit version of OASIS but was not allowed to use the OASIS name anymore. David Shirley presented an alternative history at the Computer Information Centre, an OASIS distributor for the UK in the early 1980s. He said Timothy Williams developed the OASIS operating system and contracted with Phase One Systems to market and sell the product. Development of the 16-bit product was underway, but the product was prematurely announced by POS. This led to pressure to release OASIS early, when it was still not properly debugged or optimised. (OASIS 8-bit was quite well optimised by that point, with many parts
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model
In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra. Action Definition For Σ a Riemann surface, G a Lie group, and k a (generally complex) number, let us define the G-WZW model on Σ at the level k. The model is a nonlinear sigma model whose action is a functional of a field γ: Σ → G: S_k(γ) = −(k/8π) ∫_Σ d²x K(γ⁻¹ ∂^μ γ, γ⁻¹ ∂_μ γ) + 2πk S^WZ(γ). Here, Σ is equipped with a flat Euclidean metric, ∂_μ is the partial derivative, and K is the Killing form on the Lie algebra of G. The Wess–Zumino term of the action is S^WZ(γ) = −(1/48π²) ∫_B³ d³y ε^{ijk} K(γ⁻¹ ∂_i γ, [γ⁻¹ ∂_j γ, γ⁻¹ ∂_k γ]). Here ε^{ijk} is the completely anti-symmetric tensor, and [·,·] is the Lie bracket. The Wess–Zumino term is an integral over a three-dimensional manifold B³ whose boundary is ∂B³ = Σ. Topological properties of the Wess–Zumino term For the Wess–Zumino term to make sense, we need the field γ to have an extension to B³. This requires the homotopy group π₂(G) to be trivial, which is the case in particular for any compact Lie group G. The extension of a given γ: Σ → G to B³ is in general not unique. For the WZW model to be well-defined, e^(iS_k(γ)) should not depend on the choice of the extension. The Wess–Zumino term is invariant under small deformations of γ, and only depends on its homotopy class. Possible homotopy classes are controlled by the homotopy group π₃(G). For any compact, connected simple Lie group G, we have π₃(G) = ℤ, and different extensions of γ lead to values of S^WZ that differ by integers. Therefore, they lead to the same value of e^(iS_k(γ)) provided the level obeys k ∈ ℤ. Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine
https://en.wikipedia.org/wiki/Moisture%20sensitivity%20level
Moisture sensitivity level (MSL) is a rating that shows a device's susceptibility to damage due to absorbed moisture when subjected to reflow soldering as defined in J-STD-020. It relates to the packaging and handling precautions for some semiconductors. The MSL is an electronic standard for the time period in which a moisture sensitive device can be exposed to ambient room conditions (30 °C/85%RH at Level 1; 30 °C/60%RH at all other levels). Increasingly, semiconductors have been manufactured in smaller sizes. Components such as thin fine-pitch devices and ball grid arrays could be damaged during SMT reflow when moisture trapped inside the component expands. The expansion of trapped moisture can result in internal separation (delamination) of the plastic from the die or lead-frame, wire bond damage, die damage, and internal cracks. Most of this damage is not visible on the component surface. In extreme cases, cracks will extend to the component surface. In the most severe cases, the component will bulge and pop. This is known as the "popcorn" effect. This occurs when part temperature rises rapidly to a high maximum during the soldering (assembly) process. This does not occur when part temperature rises slowly and to a low maximum during a baking (preheating) process. Moisture sensitive devices are packaged in a moisture barrier antistatic bag with a desiccant and a moisture indicator card which is sealed. Moisture sensitivity levels are specified in technical standard IPC/JEDEC Moisture/reflow Sensitivity Classification for Nonhermetic Surface-Mount Devices. The times indicate how long components can be outside of dry storage before they have to be baked to remove any absorbed moisture. MSL 6 – Mandatory bake before use MSL 5A – 24 hours MSL 5 – 48 hours MSL 4 – 72 hours MSL 3 – 168 hours MSL 2A – 4 weeks MSL 2 – 1 year MSL 1 – Unlimited floor life Practical MSL-specified parts must be baked before assembly if their exposure has exceeded the r
https://en.wikipedia.org/wiki/Addition%20principle
In combinatorics, the addition principle or rule of sum is a basic counting principle. Stated simply, it is the intuitive idea that if we have A number of ways of doing something and B number of ways of doing another thing and we can not do both at the same time, then there are A + B ways to choose one of the actions. In mathematical terms, the addition principle states that, for disjoint sets A and B, we have |A ∪ B| = |A| + |B|. The rule of sum is a fact about set theory. The addition principle can be extended to several sets. If A1, A2, ..., An are pairwise disjoint sets, then we have: |A1 ∪ A2 ∪ ... ∪ An| = |A1| + |A2| + ... + |An|. This statement can be proven from the addition principle by induction on n. Simple example A person has decided to shop at one store today, either in the north part of town or the south part of town. If they visit the north part of town, they will shop at either a mall, a furniture store, or a jewelry store (3 ways). If they visit the south part of town then they will shop at either a clothing store or a shoe store (2 ways). Thus there are 3 + 2 = 5 possible shops the person could end up shopping at today. Inclusion–exclusion principle The inclusion–exclusion principle (also known as the sieve principle) can be thought of as a generalization of the rule of sum in that it too enumerates the number of elements in the union of some sets (but does not require the sets to be disjoint). It states that if A1, ..., An are finite sets, then |A1 ∪ ... ∪ An| = Σ_i |Ai| − Σ_(i<j) |Ai ∩ Aj| + Σ_(i<j<k) |Ai ∩ Aj ∩ Ak| − ... + (−1)^(n−1) |A1 ∩ ... ∩ An|. Subtraction principle Similarly, for a given finite set S, and given another set A, if A ⊆ S, then |S \ A| = |S| − |A|. To prove this, notice that |S| = |S \ A| + |A| by the addition principle. Applications The addition principle can be used to prove Pascal's rule combinatorially. To calculate C(n + 1, k), one can view it as the number of ways to choose k people from a room containing n children and 1 teacher. Then there are C(n, k) ways to choose people without choosing the teacher, and C(n, k − 1) ways to choose people that include the teacher. Thus C(n + 1, k) = C(n, k) + C(n, k − 1). The addition principle can also be used to prove the multiplication principle. References Bibliography
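All three principles can be sanity-checked on concrete sets; a small sketch with arbitrarily chosen examples:

```python
# Check the addition principle, inclusion-exclusion, and the subtraction
# principle on small, arbitrarily chosen finite sets.
A = {1, 2, 3}
B = {4, 5}           # disjoint from A
C = {3, 4, 6}        # overlaps both A and B

assert len(A | B) == len(A) + len(B)                 # rule of sum (disjoint)
assert len(A | C) == len(A) + len(C) - len(A & C)    # inclusion-exclusion
S = A | B | C                                        # A is a subset of S
assert len(S - A) == len(S) - len(A)                 # subtraction principle
print(len(A | B), len(A | C), len(S - A))            # 5 5 3
```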
https://en.wikipedia.org/wiki/Rule%20of%20product
In combinatorics, the rule of product or multiplication principle is a basic counting principle (a.k.a. the fundamental principle of counting). Stated simply, it is the intuitive idea that if there are a ways of doing something and b ways of doing another thing, then there are a · b ways of performing both actions. Examples If there are 3 ways to choose a member of the set {A, B, C} and 2 ways to choose a member of {X, Y}, then there are 3 × 2 = 6 ordered pairs with the first component from {A, B, C} and the second from {X, Y}: in this example, the rule says: multiply 3 by 2, getting 6. The sets {A, B, C} and {X, Y} in this example are disjoint sets, but that is not necessary. The number of ways to choose a member of {A, B, C}, and then to do so again, in effect choosing an ordered pair each of whose components are in {A, B, C}, is 3 × 3 = 9. As another example, when you decide to order pizza, you must first choose the type of crust: thin or deep dish (2 choices). Next, you choose one topping: cheese, pepperoni, or sausage (3 choices). Using the rule of product, you know that there are 2 × 3 = 6 possible combinations of ordering a pizza. Applications In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers. We have |A| · |B| = |A × B|, where × is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product. An extension of the rule of product considers there are k different types of objects, say sweets, to be associated with n objects, say people. How many different ways can the people receive their sweets? Each person may receive any of the k sweets available, and there are n people, so there are k^n ways to do this. Related concepts The rule of sum is another basic counting principle. Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing and we can not do both at the same time, then there are a + b ways to choose one of the actions. See also Combinatorial principles References Combinatorics Mathematical principles
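The pizza example and the k^n extension can be spelled out directly with itertools (the names below are illustrative):

```python
# Rule of product on the pizza example, and the k^n sweets extension.
from itertools import product

crusts   = ["thin", "deep dish"]                 # 2 choices
toppings = ["cheese", "pepperoni", "sausage"]    # 3 choices
pizzas = list(product(crusts, toppings))
assert len(pizzas) == 2 * 3                      # 6 combinations

# k sweets over n people: one independent k-way choice per person.
sweets, people = ["s1", "s2", "s3"], ["Ann", "Bob"]
assignments = list(product(sweets, repeat=len(people)))
assert len(assignments) == len(sweets) ** len(people)   # k**n = 9
print(len(pizzas), len(assignments))                    # 6 9
```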
https://en.wikipedia.org/wiki/Cahiers%20de%20Topologie%20et%20G%C3%A9om%C3%A9trie%20Diff%C3%A9rentielle%20Cat%C3%A9goriques
The Cahiers de Topologie et Géométrie Différentielle Catégoriques (French: Notebooks of categorical topology and categorical differential geometry) is a French mathematical scientific journal established by Charles Ehresmann in 1957. It concentrates on category theory "and its applications, [e]specially in topology and differential geometry". Its older papers (two years or more after publication) are freely available on the internet through the French NUMDAM service. It was originally published by the Institut Henri Poincaré under the name Cahiers de Topologie; after the first volume, Ehresmann changed publisher, the journal later being published by Dunod/Bordas. In the eighth volume he changed the name to Cahiers de Topologie et Géométrie Différentielle. After Ehresmann's death in 1979 the editorship passed to his wife Andrée Ehresmann; in 1984, at the suggestion of René Guitart, the name was changed again, to add "Catégoriques". References External links Official website as of January 2018 ; previous official website Archive at Numdam: Volumes 1 (1957) - 7 (1965) : Séminaire Ehresmann. Topologie et géométrie différentielle; Volumes 8 (1966) - 52 (2011) : Cahiers de Topologie et Géométrie Différentielle Catégoriques Table of Contents for Volumes 38 (1997) through 57 (2016) maintained at the electronic journal Theory and Applications of Categories Mathematics journals Academic journals established in 1957 Quarterly journals Multilingual journals
https://en.wikipedia.org/wiki/Key%20signature%20%28cryptography%29
In cryptography, a key signature is the result of a third-party applying a cryptographic signature to a representation of a cryptographic key. This is usually done as a form of assurance or verification: If "Alice" has signed "Bob's" key, it can serve as an assurance to another party, say "Eve", that the key actually belongs to Bob, and that Alice has personally checked and attested to this. The representation of the key that is signed is usually shorter than the key itself, because most public-key signature schemes can only encrypt or sign short lengths of data. Some derivative of the public key fingerprint may be used, i.e. via hash functions. See also Key (cryptography) Public key certificate Key management
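A minimal sketch of the idea, using the third-party Python cryptography package; Ed25519 and SHA-256 are illustrative choices here, not a requirement of any particular key-signing scheme:

```python
# Alice signs a short digest (a "fingerprint") of Bob's public key rather
# than the key itself. Requires the third-party `cryptography` package;
# the Ed25519/SHA-256 combination is an illustrative choice.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

alice = ed25519.Ed25519PrivateKey.generate()
bob = ed25519.Ed25519PrivateKey.generate()

bob_pub = bob.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

digest = hashes.Hash(hashes.SHA256())   # fingerprint = hash of the key
digest.update(bob_pub)
fingerprint = digest.finalize()

signature = alice.sign(fingerprint)     # Alice attests to Bob's key

# Eve verifies with Alice's public key; raises InvalidSignature on tamper.
alice.public_key().verify(signature, fingerprint)
print("Alice's signature over Bob's key fingerprint verified")
```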
https://en.wikipedia.org/wiki/Cantor%E2%80%93Dedekind%20axiom
In mathematical logic, the Cantor–Dedekind axiom is the thesis that the real numbers are order-isomorphic to the linear continuum of geometry. In other words, the axiom states that there is a one-to-one correspondence between real numbers and points on a line. This axiom became a theorem proved by Emil Artin in his book Geometric Algebra. More precisely, Euclidean spaces defined over the field of real numbers satisfy the axioms of Euclidean geometry, and, from the axioms of Euclidean geometry, one can construct a field that is isomorphic to the real numbers. Analytic geometry was developed from the Cartesian coordinate system introduced by René Descartes. It implicitly assumed this axiom by blending the distinct concepts of real numbers and points on a line, sometimes referred to as the real number line. Artin's proof not only makes this blending explicit, but also shows that analytic geometry is strictly equivalent to the traditional synthetic geometry, in the sense that exactly the same theorems can be proved in the two frameworks. Another consequence is that Alfred Tarski's proof of the decidability of first-order theories of the real numbers could be seen as an algorithm to solve any first-order problem in Euclidean geometry. References Ehrlich, P. (1994). "General introduction". Real Numbers, Generalizations of the Reals, and Theories of Continua, vi–xxxii. Edited by P. Ehrlich, Kluwer Academic Publishers, Dordrecht Bruce E. Meserve (1953) B.E. Meserve (1955) Real numbers Mathematical axioms
https://en.wikipedia.org/wiki/Complement%20membrane%20attack%20complex
The membrane attack complex (MAC) or terminal complement complex (TCC) is a complex of proteins typically formed on the surface of pathogen cell membranes as a result of the activation of the host's complement system, and as such is an effector of the immune system. Antibody-mediated complement activation leads to MAC deposition on the surface of infected cells. Assembly of the MAC leads to pores that disrupt the cell membrane of target cells, leading to cell lysis and death. The MAC is composed of the complement components C5b, C6, C7, C8 and several C9 molecules. A number of proteins participate in the assembly of the MAC. Freshly activated C5b binds to C6 to form a C5b-6 complex, then to C7 forming the C5b-6-7 complex. The C5b-6-7 complex binds to C8, which is composed of three chains (alpha, beta, and gamma), thus forming the C5b-6-7-8 complex. C5b-6-7-8 subsequently binds to C9 and acts as a catalyst in the polymerization of C9. Structure and function MAC is composed of a complex of four complement proteins (C5b, C6, C7, and C8) that bind to the outer surface of the plasma membrane, and many copies of a fifth protein (C9) that hook up to one another, forming a ring in the membrane. C6-C9 all contain a common MACPF domain. This region is homologous to cholesterol-dependent cytolysins from Gram-positive bacteria. The ring structure formed by C9 is a pore in the membrane that allows free diffusion of molecules in and out of the cell. If enough pores form, the cell is no longer able to survive. If the pre-MAC complexes of C5b-7, C5b-8 or C5b-9 do not insert into a membrane, they can form inactive complexes with Protein S (sC5b-7, sC5b-8 and sC5b-9). These fluid phase complexes do not bind to cell membranes and are ultimately scavenged by clusterin and vitronectin, two regulators of complement. Initiation: C5-C7 The membrane attack complex is initiated when the complement protein C5 convertase cleaves C5 into C5a and C5b. All three pathways of the compleme
https://en.wikipedia.org/wiki/Habitat
In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate. The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body. Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents. Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo
https://en.wikipedia.org/wiki/Battle%20of%20the%20sexes%20%28game%20theory%29
In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story. Imagine that a man and a woman hope to meet this evening, but have a choice between two events to attend: a prize fight and a ballet. The man would prefer to go to the prize fight. The woman would prefer the ballet. Both would prefer to go to the same event rather than different ones. If they cannot communicate, where should they go? The payoff matrix labeled "Battle of the Sexes (1)" shows the payoffs when the man chooses a row and the woman chooses a column. In each cell, the first number represents the man's payoff and the second number the woman's. This standard representation does not account for the additional harm that might come from not only going to different locations, but going to the wrong one as well (e.g. the man goes to the ballet while the woman goes to the prize fight, satisfying neither). To account for this, the game would be represented in "Battle of the Sexes (2)", where the players each have a payoff of 2 because they at least get to attend their favored events. Equilibrium analysis This game has two pure strategy Nash equilibria, one where both players go to the prize fight, and another where both go to the ballet. There is also a mixed strategy Nash equilibrium, in which the players randomize using specific probabilities. For the payoffs listed in Battle of the Sexes (1), in the mixed strategy equilibrium the man goes to the prize fight with probability 3/5 and the woman to the ballet with probability 3/5, so they end up together at the prize fight with probability 6/25
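The mixed equilibrium quoted above can be recomputed from the indifference conditions. The payoff values in the sketch below (3 for one's preferred event together, 2 for the other's, 0 for miscoordination) are an assumption consistent with the probabilities in the text:

```python
# Mixed-strategy equilibrium of Battle of the Sexes (1). The payoffs
# (3 for your favourite event together, 2 for the other's, 0 otherwise)
# are assumed here; they reproduce the 3/5 probabilities in the text.
from fractions import Fraction as F

# Indifference conditions pin down the mixing probabilities:
q = F(2, 5)   # woman plays fight with prob q: man's 3q = 2(1 - q)
p = F(3, 5)   # man plays fight with prob p: woman's 2p = 3(1 - p)

assert 3 * q == 2 * (1 - q)     # man indifferent between his two actions
assert 2 * p == 3 * (1 - p)     # woman indifferent between hers

print("both at the fight:", p * q)                 # 6/25, as in the text
print("both at the ballet:", (1 - p) * (1 - q))    # also 6/25
```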
https://en.wikipedia.org/wiki/Proof%20procedure
In logic, and in particular proof theory, a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements. Types of proof calculi used There are several types of proof calculi. The most popular are natural deduction, sequent calculi (i.e., Gentzen-type systems), Hilbert systems, and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles. Completeness A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable, which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient. Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate). See also Automated theorem proving Proof complexity Proof tableaux Deductive system Proof (truth) References Willard Quine 1982 (1950). Methods of Logic. Harvard Univ. Press. Proof theory
https://en.wikipedia.org/wiki/CPC%20Binary%20Barcode
CPC Binary Barcode is Canada Post's proprietary symbology used in its automated mail sortation operations. This barcode is used on regular-size pieces of mail, especially mail sent using Canada Post's Lettermail service. This barcode is printed on the lower-right-hand corner of each faced envelope, using a unique ultraviolet-fluorescent ink. Symbology description The applied barcode uses printed and non-printed bars spaced 3 mm apart, and consists of two fields. The rightmost field, which is 27 bars in width, encodes the destination postal code. The leftmost field is 9 bars in width and applied right below the printed destination address. It is currently unclear what this field is used for. In the postal code field, the rightmost bar is always printed, to allow the sortation equipment to properly lock onto the barcode and scan it. The leftmost bar, a parity field, is printed only when necessary to give the postal code field an odd number of printed bars. The remaining 25 bars represent the actual destination postal code. To eliminate any possibility of ambiguity during the scanning process, run-length restrictions are used within the postal code field. No more than five consecutive non-printed bars, or spaces, are permitted, and no more than six consecutive printed bars are allowed. The actual representation of the postal code is split into four subfields of the barcode, each with their own separate encoding table. The first and last subfields, which share a common encoding table, are always eight bars in width, and encode the first two characters and the last two characters of the postal code respectively. The second subfield, which encodes the third character of the postal code, is always five bars in width, and the third subfield, which encodes the fourth character, is always four bars wide. Generating barcodes Disregarding the space, divide the postal code into four subfields (e.g. K1-A-0-B1). Locate the contents of each subfield in the encoding t
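A sketch of the generation steps just described. The per-subfield encoding tables are not given in the text, so the table below is a hypothetical placeholder; only the subfield widths (8/5/4/8 bars), the odd-parity rule, and the always-printed rightmost bar come from the description, and the physical ordering of the subfields within the barcode is likewise assumed:

```python
# Hypothetical placeholder table: subfield value -> bar pattern (1 = printed).
# The real tables (which also enforce the run-length limits) are not public
# in this text.
HYPOTHETICAL_TABLE = {
    "K1": "10010110",   # first two characters: 8 bars
    "A": "10110",       # third character: 5 bars
    "0": "1001",        # fourth character: 4 bars
    "B1": "01010011",   # last two characters: 8 bars
}

def encode(pc="K1A0B1"):
    subs = [pc[0:2], pc[2], pc[3], pc[4:6]]               # K1 / A / 0 / B1
    data = "".join(HYPOTHETICAL_TABLE[s] for s in subs)   # 25 data bars
    parity = "1" if data.count("1") % 2 == 1 else "0"     # make printed total odd
    return parity + data + "1"                            # rightmost bar always printed

code = encode()
assert len(code) == 27 and code.count("1") % 2 == 1
print(code)
```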
https://en.wikipedia.org/wiki/Kleptoplasty
Kleptoplasty or kleptoplastidy is a process in symbiotic relationships whereby plastids, notably chloroplasts from algae, are sequestered by the host. The word is derived from Kleptes (κλέπτης), which is Greek for thief. The alga is eaten normally and partially digested, leaving the plastid intact. The plastids are maintained within the host, temporarily continuing photosynthesis and benefiting the host. Etymology The word kleptoplasty is derived from Greek kleptes (κλέπτης), thief, and plastós (πλαστός), originally meaning formed or moulded, and used in biology to mean a plastid. Process The term kleptoplasty was coined in 1990 to describe chloroplast symbiosis: the alga is eaten normally and partially digested, leaving the plastid intact, and the plastid is maintained within the host, temporarily continuing photosynthesis and benefiting the host. Taxonomic range Kleptoplasty occurs in two major clades of eukaryotes, namely micro-organisms of the SAR Supergroup, and some marine animals. SAR Supergroup Foraminifera Some species of the foraminiferan genera Bulimina, Elphidium, Haynesina, Nonion, Nonionella, Nonionellina, Reophax, and Stainforthia sequester diatom chloroplasts. Alveolata Dinoflagellates The stability of transient plastids varies considerably across plastid-retaining species. In the dinoflagellates Gymnodinium spp. and Pfiesteria piscicida, kleptoplastids are photosynthetically active for only a few days, while kleptoplastids in Dinophysis spp. can be stable for 2 months. In other dinoflagellates, kleptoplasty has been hypothesized to represent either a mechanism permitting functional flexibility, or perhaps an early evolutionary stage in the permanent acquisition of chloroplasts. Ciliates Mesodinium rubrum is a ciliate that steals chloroplasts from the cryptomonad Geminigera cryophila. M. rubrum participates in additi
https://en.wikipedia.org/wiki/Categorification
In mathematics, categorification is the process of replacing set-theoretic theorems with category-theoretic analogues. Categorification, when done successfully, replaces sets with categories, functions with functors, and equations with natural isomorphisms of functors satisfying additional properties. The term was coined by Louis Crane. The reverse of categorification is the process of decategorification. Decategorification is a systematic process by which isomorphic objects in a category are identified as equal. Whereas decategorification is a straightforward process, categorification is usually much less straightforward. In the representation theory of Lie algebras, modules over specific algebras are the principal objects of study, and there are several frameworks for what a categorification of such a module should be, e.g., so-called (weak) abelian categorifications. Categorification and decategorification are not precise mathematical procedures, but rather a class of possible analogues. They are used in a similar way to words like 'generalization', and not like 'sheafification'. Examples One form of categorification takes a structure described in terms of sets, and interprets the sets as isomorphism classes of objects in a category. For example, the set of natural numbers can be seen as the set of cardinalities of finite sets (and any two sets with the same cardinality are isomorphic). In this case, operations on the set of natural numbers, such as addition and multiplication, can be seen as carrying information about coproducts and products of the category of finite sets. Less abstractly, the idea here is that manipulating sets of actual objects, and taking coproducts (combining two sets in a union) or products (building arrays of things to keep track of large numbers of them) came first. Later, the concrete structure of sets was abstracted away – taken "only up to isomorphism", to produce the abstract theory of arithmetic. This is a "decategorific
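The decategorified shadow of these finite-set operations can be written compactly (a standard formulation, added here for illustration):

```latex
% Cardinality sends coproducts and products of finite sets to
% addition and multiplication of natural numbers:
|A \sqcup B| = |A| + |B|, \qquad |A \times B| = |A| \cdot |B|
% Categorification runs the other way: it lifts, e.g., the equation
% m + n = n + m to a natural isomorphism  A \sqcup B \cong B \sqcup A.
```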
https://en.wikipedia.org/wiki/Audio%20Home%20Recording%20Act
The Audio Home Recording Act of 1992 (AHRA) amended the United States copyright law by adding Chapter 10, "Digital Audio Recording Devices and Media". The act enabled the release of recordable digital formats such as Sony and Philips' Digital Audio Tape without fear of contributory infringement lawsuits. The RIAA and music publishers, concerned that consumers' ability to make perfect digital copies of music would destroy the market for audio recordings, had threatened to sue companies and had lobbied Congress to pass legislation imposing mandatory copy protection technology and royalties on devices and media. The AHRA establishes a number of important precedents in US copyright law that defined the debate between device makers and the content industry for the ensuing two decades. These include: the first government technology mandate in the copyright law, requiring all digital audio recording devices sold, manufactured or imported in the US (excluding professional audio equipment) to include the Serial Copy Management System (SCMS). the first anti-circumvention provisions in copyright law, later applied on a much broader scale by the Digital Millennium Copyright Act. the first government-imposed royalties on devices and media, a portion of which is paid to the record industry directly. The act also includes blanket protection from infringement actions for private, non-commercial analog audio copying, and for digital audio copies made with certain kinds of digital audio recording technology. History and legislative background By the late 1980s, several manufacturers were prepared to introduce read/write digital audio formats to the United States. These new formats were a significant improvement over the newly introduced read-only (at the time) digital format of the compact disc, allowing consumers to make perfect, multi-generation copies of digital audio recordings. Most prominent among these formats was Digital Audio Tape (DAT), followed in the early 1990s b
https://en.wikipedia.org/wiki/Comparison%20of%20instant%20messaging%20protocols
The following is a comparison of instant messaging protocols. It contains basic general information about the protocols. Table of instant messaging protocols See also Comparison of cross-platform instant messaging clients Comparison of Internet Relay Chat clients Comparison of LAN messengers Comparison of software and protocols for distributed social networking LAN messenger Secure instant messaging Comparison of user features of messaging platforms References Instant messaging protocols Instant messaging
https://en.wikipedia.org/wiki/The%20Literary%20Encyclopedia
The Literary Encyclopedia is an online reference work first published in October 2000. It was founded as an innovative project designed to bring the benefits of information technology to what at the time was still a largely conservative literary field. From its inception it was developed as a not-for-profit publication intended to ensure that those who contribute to it are properly rewarded for the time and knowledge they invest - as such, its authors and editors are also shareholders in the Company. The Literary Encyclopedia offers both freely available content and additional content and services for subscribers (individual and institutional, the latter consisting mainly of higher education institutions and higher-level secondary schools). Articles are solicited by invitation from specialist scholars, then refereed and approved by subject editors, which makes the LE both authoritative and reliable. It contains general profiles of literary writers, but also of major cultural, historical and scientific figures; articles on individual works of literature from all over the world (often containing succinct critical commentary and sections on critical reception); entries on hundreds of literary terms, concepts and movements, as well as extended essays on topics of historical and cultural importance. The Literary Encyclopedia offers free access, upon request, to its entire database to all educational institutions in countries where the GDP is below the world average. It also offers a number of research grants to young and emerging scholars in its subscribing institutions, funded by royalties donated by the publication's contributors and editors. The encyclopedia's founding editors were Robert Clark (University of East Anglia), Emory Elliott (University of California at Riverside) and Janet Todd (University of Cambridge), and its current editorial board numbers over 100 distinguished scholars from higher education institutions all over the world. Written and owned by a global network of schol
https://en.wikipedia.org/wiki/Media%20Dispatch%20Protocol
The Media Dispatch Protocol (MDP) was developed by the Pro-MPEG Media Dispatch Group to provide an open standard for secure, automated, and tapeless delivery of audio, video and associated data files. Such files typically range from low-resolution content for the web to HDTV and high-resolution digital intermediate files for cinema production. MDP is essentially a middleware protocol that decouples the technical details of how delivery occurs from the business logic that requires delivery. For example, a TV post-production company might have a contract to deliver a programme to a broadcaster. An MDP agent allows users to deal with company and programme names, rather than with filenames and network endpoints. It can also provide a delivery service as part of a service oriented architecture. MDP acts as a communication layer between business logic and low-level file transfer mechanisms, providing a way to securely communicate and negotiate transfer-specific metadata about file packages, delivery routing, deadlines, and security information, and to manage and coordinate file transfers in progress, whilst hooking all this information to project, company and job identifiers. MDP works by implementing a 'dispatch transaction' layer by which means agents negotiate and agree the details of the individual file transfers required for the delivery, and control, monitor and report on the progress of the transfers. At the heart of the protocol is the 'Manifest' - an XML document that encapsulates the information about the transaction. MDP is based on existing open technologies such as XML, HTTP and TLS. The protocol is specified in a layered way to allow the adoption of new technologies (e.g. Web Services protocols such as SOAP and WSDL) as required. Since early 2005, multiple implementations based on draft versions of the Media Dispatch Protocol have been in use, both for technical testing, and, since April 2005, for real-world production work. The experience with
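The text above describes the Manifest only in outline. As a rough illustration of the kind of transfer-specific metadata it encapsulates, the sketch below builds a minimal XML document in Python; every element name here is invented for illustration and is not taken from the actual MDP schema.

    # Illustrative only: element names are hypothetical, not the real MDP schema.
    import xml.etree.ElementTree as ET

    manifest = ET.Element("manifest")
    package = ET.SubElement(manifest, "package", id="job-0042")
    ET.SubElement(package, "title").text = "Example Programme"
    ET.SubElement(package, "file", name="programme.mxf", bytes="1073741824")
    routing = ET.SubElement(manifest, "routing")
    ET.SubElement(routing, "sender").text = "post-house.example.com"
    ET.SubElement(routing, "receiver").text = "broadcaster.example.com"
    ET.SubElement(manifest, "deadline").text = "2005-04-30T17:00:00Z"

    print(ET.tostring(manifest, encoding="unicode"))

In a real deployment such a document would be exchanged over HTTP secured with TLS, as the protocol description above indicates.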
https://en.wikipedia.org/wiki/Linear%20predictive%20analysis
Linear predictive analysis is a simple form of first-order extrapolation: if a value has been changing at a given rate then it will probably continue to change at approximately the same rate, at least in the short term. This is equivalent to fitting a tangent to the graph and extending the line. One use of this is in linear predictive coding which can be used as a method of reducing the amount of data needed to approximately encode a series. Suppose it is desired to store or transmit a series of values representing voice. The value at each sampling point could be transmitted (if 256 values are possible then 8 bits of data for each point are required, if a precision of 65536 levels is desired then 16 bits per sample are required). If it is known that the value rarely changes more than +/- 15 values between successive samples (-15 to +15 is 31 steps, counting the zero) then we could encode the change in 5 bits. As long as the change is less than +/- 15 values in successive steps the decoded values will exactly reproduce the desired sequence. When the rate of change exceeds +/-15 then the reconstructed values will temporarily differ from the desired value; provided fast changes that exceed the limit are rare it may be acceptable to use the approximation in order to attain the improved coding density. See also Linear prediction References Interpolation Asymptotic analysis
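The 5-bit delta scheme described above is easy to sketch in code. The following is a minimal illustration (not linear predictive coding proper, and the sample values are invented); note that the encoder must track the reconstructed value rather than the true one, so that after a jump larger than +/-15 the decoder slews back toward the signal instead of drifting:

    # Sketch of the +/-15 delta coding described above.

    def encode_deltas(samples):
        """Encode a sequence as clamped differences from the previous value."""
        deltas, prev = [], samples[0]
        for s in samples[1:]:
            d = max(-15, min(15, s - prev))  # clamp to what 5 bits can carry
            deltas.append(d)
            prev += d                        # track the reconstructed value
        return samples[0], deltas

    def decode_deltas(first, deltas):
        out = [first]
        for d in deltas:
            out.append(out[-1] + d)
        return out

    first, deltas = encode_deltas([100, 104, 109, 150, 152])
    print(decode_deltas(first, deltas))  # [100, 104, 109, 124, 139]: slewing after the big jump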
https://en.wikipedia.org/wiki/Pickover%20stalk
Pickover stalks are certain kinds of details to be found empirically in the Mandelbrot set, in the study of fractal geometry. They are named after the researcher Clifford Pickover, whose "epsilon cross" method was instrumental in their discovery. An "epsilon cross" is a cross-shaped orbit trap. According to Vepstas (1997) "Pickover hit on the novel concept of looking to see how closely the orbits of interior points come to the x and y axes. In these pictures, the closer that the point approaches, the higher up the color scale, with red denoting the closest approach. The logarithm of the distance is taken to accentuate the details". Biomorphs Biomorphs are biological-looking Pickover stalks. At the end of the 1980s, Pickover developed biological feedback organisms similar to Julia sets and the fractal Mandelbrot set. According to Pickover (1999) in summary, he "described an algorithm which could be used for the creation of diverse and complicated forms resembling invertebrate organisms. The shapes are complicated and difficult to predict before actually experimenting with the mappings. He hoped these techniques would encourage others to explore further and discover new forms, by accident, that are on the edge of science and art". Pickover developed an algorithm (which uses neither random perturbations nor natural laws) to create very complicated forms resembling invertebrate organisms. The iteration, or recursion, of mathematical transformations is used to generate biological morphologies. He called them "biomorphs." At around the same time as he coined "biomorph" for these patterns, the famous evolutionary biologist Richard Dawkins used the word to refer to his own set of biological shapes that were arrived at by a very different procedure. More rigorously, Pickover's "biomorphs" encompass the class of organismic morphologies created by small changes to traditional convergence tests in the field of "Julia set" theory. Pickover's biomorphs show a self-similarity at d
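A compact sketch of the epsilon-cross orbit trap that Vepstas describes, tracking each orbit's closest approach to the x and y axes and taking the logarithm of that distance (the grid bounds and thresholds below are arbitrary choices for this illustration):

    # Epsilon-cross orbit trap: closest approach of each orbit to the axes.
    import math

    def stalk_value(c, max_iter=64, bailout=4.0):
        z = 0j
        closest = float("inf")
        for _ in range(max_iter):
            z = z * z + c
            closest = min(closest, abs(z.real), abs(z.imag))
            if abs(z) > bailout:
                break
        return -math.log(closest + 1e-12)  # log accentuates the detail

    # Coarse ASCII rendering on a grid around the Mandelbrot set.
    for y in range(-10, 11):
        row = ""
        for x in range(-20, 11):
            v = stalk_value(complex(x / 10, y / 10))
            row += "#" if v > 6 else ("+" if v > 3 else ".")
        print(row)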
https://en.wikipedia.org/wiki/Digit%20sum
In mathematics, the digit sum of a natural number in a given number base is the sum of all its digits. For example, the digit sum of the decimal number 9045 would be 9 + 0 + 4 + 5 = 18. Definition Let $n$ be a natural number. We define the digit sum for base $b > 1$, $F_b : \mathbb{N} \to \mathbb{N}$, to be the following: $F_b(n) = \sum_{i=0}^{k} d_i$, where $k = \lfloor \log_b n \rfloor$ is one less than the number of digits in the number in base $b$, and $d_i = \frac{n \bmod b^{i+1} - n \bmod b^{i}}{b^{i}}$ is the value of each digit of the number. For example, in base 10, the digit sum of 84001 is $F_{10}(84001) = 8 + 4 + 0 + 0 + 1 = 13$. For any two bases $2 \le b_1 < b_2$ and for sufficiently large natural numbers $n$, $\sum_{k=0}^{n} F_{b_1}(k) < \sum_{k=0}^{n} F_{b_2}(k)$. The sum of the base 10 digits of the integers 0, 1, 2, ... is given by sequence A007953 in the On-Line Encyclopedia of Integer Sequences. Some authors use the generating function of this integer sequence (and of the analogous sequence for binary digit sums) to derive several rapidly converging series with rational and transcendental sums. Extension to negative integers The digit sum can be extended to the negative integers by use of a signed-digit representation to represent each integer. Applications The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The decimal digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines and is the basis of the casting out nines technique for checking calculations. Digit sums are also a common ingredient in checksum algorithms to check the arithmetic operations of early computers. Earlier, in an era of hand calculation, it was suggested to use sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit
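A direct implementation of the definition above, for any base $b \ge 2$:

    def digit_sum(n: int, base: int = 10) -> int:
        """Sum of the base-b digits of the natural number n."""
        total = 0
        while n > 0:
            n, digit = divmod(n, base)
            total += digit
        return total

    assert digit_sum(84001) == 8 + 4 + 0 + 0 + 1  # = 13
    assert digit_sum(255, base=2) == 8            # 11111111 in binary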
https://en.wikipedia.org/wiki/Tripod%20%28web%20hosting%29
Tripod.com is a web hosting service owned by Lycos. Originally aiming its services to college students and young adults, it was one of several sites trying to build online communities during the 1990s. As such, Tripod formed part of the first wave of user-generated content. Free webpages are no longer available and have been replaced by paid services. Services Tripod offers web hosting with two paid plans, "personal" and "professional", which differ in features and storage space, but are both powered by the web authoring system "Lycos Publish". This tool has completely replaced the former offering of more general web hosting and removed free plans altogether. Tripod offered free and paid web hosting services, including 20 megabytes of storage space and the ability to run Common Gateway Interface (CGI) scripts in Perl. In addition to basic hosting, Tripod also offered a blogging tool, a photo album manager, and the Trellix site builder for WYSIWYG page editing. Tripod's for-pay services included additional disk space, a shopping cart, domain names, web and POP/IMAP email. History Tripod originated in 1992 with two Williams College classmates, Bo Peabody and Brett Hershey, along with Dick Sabot, an economics professor at the school. The company was headquartered in Williamstown, Massachusetts, with Peabody as CEO. Although it would eventually focus on the Internet, Tripod also published a magazine, Tools for Life, that was distributed with textbooks, and offered a discount card for students. Website launch The domain name Tripod.com was created on September 29, 1994 and the site officially launched in 1995 after operating in "sneak-preview mode" for a period. Billed as a "hip Web site and pay service for and by college students", it offered how-to advice on practical issues that might concern young people when first living away from home. It planned to charge a minimal fee and make money primarily on commissions from partners who would sell products on the site. O
https://en.wikipedia.org/wiki/E6B
The E6B flight computer is a form of circular slide rule used in aviation. It is an instance of an analog calculating device still being used in the 21st century. They are now mostly used in flight training, because outside training these flight computers have largely been replaced by electronic planning tools, software and websites that make these calculations for the pilots. These flight computers are used during flight planning (on the ground before takeoff) to aid in calculating fuel burn, wind correction, time en route, and other items. In the air, the flight computer can be used to calculate ground speed, estimated fuel burn and updated estimated time of arrival. The back is designed for wind vector solutions, i.e., determining how much the wind is affecting one's speed and course. They are frequently referred to by the nickname "whiz wheel". Construction Flight computers are usually made out of aluminum, plastic or cardboard, or combinations of these materials. One side is used for wind triangle calculations using a rotating scale and a sliding panel. The other side is a circular slide rule. Extra marks and windows facilitate calculations specifically needed in aviation. Electronic versions are also produced, resembling calculators, rather than manual slide rules. Aviation remains one of the few places that the slide rule is still in widespread use. Manual E6Bs/CRP-1s remain popular with some users and in some environments rather than the electronic ones because they are lighter, smaller, less prone to break, easy to use one-handed, quicker and do not require electrical power. In flight training for a private pilot or instrument rating, mechanical flight computers are still often used to teach the fundamental computations. This is in part also due to the complex nature of some trigonometric calculations which would be comparatively difficult to perform on a conventional scientific calculator. The graphic nature of the flight computer also helps in catching many errors which in part ex
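The wind-vector solution worked graphically on the back of the E6B can also be written out with the standard wind-triangle trigonometry. A sketch (the routine and its numbers are illustrative; directions in degrees true, speeds in knots, wind given as the direction it blows from):

    # Standard wind-triangle solution of the kind an E6B performs graphically.
    import math

    def wind_correction(course, tas, wind_dir, wind_speed):
        """Return (wind correction angle in degrees, ground speed)."""
        delta = math.radians(wind_dir - course)
        crosswind = wind_speed * math.sin(delta)   # component across the course
        headwind = wind_speed * math.cos(delta)    # component along the course
        wca = math.asin(crosswind / tas)           # angle to steer into the wind
        gs = tas * math.cos(wca) - headwind        # speed made good along course
        return math.degrees(wca), gs

    wca, gs = wind_correction(course=90, tas=110, wind_dir=45, wind_speed=20)
    print(f"steer {90 + wca:.0f} deg true, ground speed {gs:.0f} kt")  # ~083, ~95 kt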
https://en.wikipedia.org/wiki/Self-replicating%20machine
A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell. A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Although suggested by von Neumann as early as the late 1940s, no self-replicating machine has been built to date. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macros
https://en.wikipedia.org/wiki/Discrete%20modelling
Discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, formulae are fit to discrete data—data that could potentially take on only a countable set of values, such as the integers, and which are not infinitely divisible. A common method in this form of modelling is to use recurrence relations. Applied mathematics
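As a small worked example of the recurrence-relation approach, the sketch below fits a first-order linear recurrence x[n+1] = a·x[n] + b to discrete observations by ordinary least squares (the data are invented for illustration):

    # Fit x[n+1] = a*x[n] + b to discrete data by ordinary least squares.

    def fit_linear_recurrence(xs):
        pairs = list(zip(xs[:-1], xs[1:]))
        n = len(pairs)
        sx = sum(x for x, _ in pairs)
        sy = sum(y for _, y in pairs)
        sxx = sum(x * x for x, _ in pairs)
        sxy = sum(x * y for x, y in pairs)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    data = [2, 5, 11, 23, 47, 95]       # generated by x -> 2x + 1
    print(fit_linear_recurrence(data))  # recovers approximately (2.0, 1.0)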
https://en.wikipedia.org/wiki/Dirk%20Meyer
Derrick R. "Dirk" Meyer (born November 24, 1961) is a former Chief Executive Officer of Advanced Micro Devices, serving in the position from July 18, 2008 to January 10, 2011. Education He received a bachelor's degree in computer engineering from the University of Illinois at Urbana-Champaign and a master's degree in business administration from Boston University Graduate School of Management. Career He was a co-architect of the Alpha 21064 and Alpha 21264 microprocessors during his employment at DEC and also worked at Intel in its microprocessor design group. Meyer joined AMD in 1996, where he personally led the team that designed and developed the Athlon processor. Dirk Meyer was formerly president and chief executive officer of AMD. At one time, he was the chief operating officer. In this role, he shared leadership and management of AMD with former Chief Executive Officer and Chairman of AMD Hector Ruiz. Prior to this role, Meyer served as president and chief operating officer of AMD’s Microprocessor Solutions Sector where he had overall responsibility for AMD’s microprocessor business, including product development, manufacturing, operations and product marketing. On July 18, 2008, Dirk Meyer replaced Hector Ruiz as the CEO of AMD. Meyer focused the company onto the PC and data center server market. AMD announced 2010 that more than ever notebooks are hitting the market in 2010. Meyer justifies the focus by stating that an increasing mobile and consumer electronics market does not shrink the traditional market. He wanted to address these markets later, when AMD is prepared to dedicate sufficient resources to address them well. This strategy led to Bulldozer architecture for the traditional PC market and Bobcat addressing the netbook and tablet market nearly exclusively dominated by Intel, both hitting the market in 2011 and 2012. In the beginning of 2011, Meyer was removed as CEO after discussions with the AMD Board. His departure is widely believed to
https://en.wikipedia.org/wiki/Swiss%20coordinate%20system
The Swiss coordinate system (or Swiss grid) is a geographic coordinate system used in Switzerland and Liechtenstein for maps and surveying by the Swiss Federal Office of Topography (Swisstopo). A first coordinate system was introduced in 1903 under the name LV03 (Landesvermessung 1903, German for "land survey 1903"), based on the Mercator projection and the Bessel ellipsoid. With the advent of GPS technology, a new coordinate system was introduced in 1995 under the name LV95 (Landesvermessung 1995, German for "land survey 1995") after a 7-year measurement campaign. LV is rendered as MN in French (mensuration nationale). LV03 Introduced in 1903, this first geographic coordinate system rested upon the two dominant methodological pillars of geodesy and cartography at the time: the Bessel ellipsoid and the Mercator projection. Its measurements used the Bessel ellipsoid as an approximation of the Earth's shape, and its maps used the Mercator projection as a projection technique. Although not ideal, these approximations still offered a high level of precision in the case of Switzerland, due to the small size of its territory (41,285 km2 with max. 350 km lengthways and 220 km from North to South). The fundamental reference point of the LV03 coordinate system was the old observatory of Bern, nowadays the location of the Institute of Exact Sciences of Bern University, in downtown Bern (Sidlerstrasse 5 - 46°57'3.9" N, 7°26'19.1" E). The coordinates of this reference point were arbitrarily fixed at 600'000 m E / 200'000 m N – with the East coordinate (E) noted before the North coordinate (N), unlike in the traditional latitude / longitude coordinate system. In selecting the values of these reference coordinates, the intention of the Swiss Federal Office of Topography was to guarantee that every point of the Swiss territory be identified by positive coordinates. The reference coordinates thus needed to be large enough to allow for positive coordinates to be allocated to the southernmost and west
https://en.wikipedia.org/wiki/Solution%20concept
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium. Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Formal definition Let $\Gamma$ be the class of all games and, for each game $G \in \Gamma$, let $S_G$ be the set of strategy profiles of $G$. A solution concept is an element of the direct product $\prod_{G \in \Gamma} 2^{S_G}$, i.e., a function $F : \Gamma \to \bigcup_{G \in \Gamma} 2^{S_G}$ such that $F(G) \subseteq S_G$ for all $G \in \Gamma$. Rationalizability and iterated dominance In this solution concept, players are assumed to be rational and so strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in minimax game-tree search.) For example, in the (single period) prisoners' dilemma (shown below), cooperate is strictly dominated by defect for both players because either player is always better off playing defect, regardless of what his opponent does. Nash equilibrium A Nash equilibrium is a strategy profile (a strategy profile specifies a strategy for every player, e.g. in the above prisoners' dilemma game (cooperate, defect) specifies that prisoner 1 plays cooperate and prisoner 2 plays defect) in which every strategy is a best response to every other strategy played. A strategy by a player is a best response to another player's strategy if there is no othe
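The iterated-dominance procedure described above is mechanical enough to state in code. A sketch for two-player bimatrix games, using the prisoners' dilemma (with the usual textbook payoffs, chosen here for illustration) as the test case:

    # Iterated elimination of strictly dominated strategies (bimatrix game).

    def payoff(payoffs, mine, other, player):
        key = (mine, other) if player == 0 else (other, mine)
        return payoffs[key][player]

    def eliminate_dominated(payoffs, strategies):
        """payoffs[(row, col)] = (row payoff, col payoff)."""
        changed = True
        while changed:
            changed = False
            for player in (0, 1):
                own, other = strategies[player], strategies[1 - player]
                for s in list(own):
                    if any(all(payoff(payoffs, t, o, player) > payoff(payoffs, s, o, player)
                               for o in other)
                           for t in own if t != s):
                        own.remove(s)   # s is strictly dominated by some t
                        changed = True
        return strategies

    pd = {("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
          ("D", "C"): (0, -3), ("D", "D"): (-2, -2)}
    print(eliminate_dominated(pd, [["C", "D"], ["C", "D"]]))  # [['D'], ['D']]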
https://en.wikipedia.org/wiki/Mojiky%C5%8D
Mojikyō, also known by its full name Konjaku Mojikyō, is a character encoding scheme. The Mojikyō Institute, which published the character set, also published computer software and TrueType fonts to accompany it. The Mojikyō Institute, chaired by Ishikawa, originally had its character set and related software and data redistributed on CD-ROMs sold in Kinokuniya stores. Conceptualized in 1996, the first version of the CD-ROM was released in July 1997. For a time, the Mojikyō Institute also offered a web subscription, termed "Mojikyō WEB", which had more up-to-date characters. In its latest version, Mojikyō encoded 174,975 characters. Among those, 150,366 characters (86%) then belonged to the extended Chinese–Japanese–Korean–Vietnamese (CJKV) family. Many of Mojikyō's characters are considered obsolete or obscure, and are not encoded by any other character set, including the most widely used international text encoding standard, Unicode. Originally a paid proprietary software product, as of 2015, the Mojikyō Institute began to upload its latest releases to Internet Archive as freeware, as a memorial to honor one of its developers, Furuya, who died that year. On December 15, 2018, version 4.0 was released. The next day, Ishikawa announced that without Furuya this would be the final release of Mojikyō. Premise The encoding was created to provide a complete index of Chinese, Korean, and Japanese characters. It also encodes a large number of characters in ancient scripts, such as the oracle bone script, the seal script, and Sanskrit (Siddhaṃ). For many characters, it is the only character encoding to encode them, and its data is often used as a starting point for Unicode proposals. However, Mojikyō has much looser standards than Unicode for encoding, which leads it to have many encoded glyphs of dubious, or even unintentionally fictional, origin. As such, while many non-Unicode characters are suitable for addition to Unicode, not all can become Unicode characters, due to the differing standards of evidence required by each. Composition The font
https://en.wikipedia.org/wiki/Intensional%20logic
Intensional logic is an approach to predicate logic that extends first-order logic, which has quantifiers that range over the individuals of a universe (extensions), by additional quantifiers that range over terms that may have such individuals as their value (intensions). The distinction between intensional and extensional entities is parallel to the distinction between sense and reference. Overview Logic is the study of proof and deduction as manifested in language (abstracting from any underlying psychological or biological processes). Logic is not a closed, completed science, and presumably, it will never stop developing: the logical analysis can penetrate into varying depths of the language (sentences regarded as atomic, or splitting them to predicates applied to individual terms, or even revealing such fine logical structures like modal, temporal, dynamic, epistemic ones). In order to achieve its special goal, logic was forced to develop its own formal tools, most notably its own grammar, detached from simply making direct use of the underlying natural language. Functors (also known as function words) belong to the most important categories in logical grammar (along with basic categories like sentence and individual name): a functor can be regarded as an "incomplete" expression with argument places to fill in. If we fill them in with appropriate subexpressions, then the resulting entirely completed expression can be regarded as a result, an output. Thus, a functor acts like a function sign, taking on input expressions, resulting in a new, output expression. Semantics links expressions of language to the outside world. Also logical semantics has developed its own structure. Semantic values can be attributed to expressions in basic categories: the reference of an individual name (the "designated" object named by that) is called its extension; and as for sentences, their truth value is their extension. As for functors, some of them are simpler than others:
https://en.wikipedia.org/wiki/Solution%20in%20radicals
A solution in radicals or algebraic solution is a closed-form expression, and more specifically a closed-form algebraic expression, that is the solution of a polynomial equation, and relies only on addition, subtraction, multiplication, division, raising to integer powers, and the extraction of $n$th roots (square roots, cube roots, and other integer roots). A well-known example is the solution of the quadratic equation $ax^2 + bx + c = 0$ (with $a \neq 0$): $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. There exist more complicated algebraic solutions for cubic equations and quartic equations. The Abel–Ruffini theorem, and, more generally Galois theory, state that some quintic equations, such as $x^5 - x + 1 = 0$, do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation $x^{10} = 2$ can be solved as $x = \pm\sqrt[10]{2}$. The eight other solutions are nonreal complex numbers, which are also algebraic and have the form $x = \pm\sqrt[10]{2}\,\zeta$, where $\zeta$ is a fifth root of unity, which can be expressed with two nested square roots. See also Quintic function for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result. Algebraic solutions form a subset of closed-form expressions, because the latter permit transcendental functions (non-algebraic functions) such as the exponential function, the logarithmic function, and the trigonometric functions and their inverses. See also Solvable quintics Solvable sextics Solvable septics References Algebra Equations
https://en.wikipedia.org/wiki/Organic%20field-effect%20transistor
An organic field-effect transistor (OFET) is a field-effect transistor using an organic semiconductor in its channel. OFETs can be prepared either by vacuum evaporation of small molecules, by solution-casting of polymers or small molecules, or by mechanical transfer of a peeled single-crystalline organic layer onto a substrate. These devices have been developed to realize low-cost, large-area electronic products and biodegradable electronics. OFETs have been fabricated with various device geometries. The most commonly used device geometry is bottom gate with top drain and source electrodes, because this geometry is similar to the thin-film silicon transistor (TFT) using thermally grown SiO2 as gate dielectric. Organic polymers, such as poly(methyl-methacrylate) (PMMA), can also be used as dielectric. One of the benefits of OFETs, especially compared with inorganic TFTs, is their unprecedented physical flexibility, which leads to biocompatible applications, for instance in the future health care industry of personalized biomedicines and bioelectronics. In May 2007, Sony reported the first full-color, video-rate, flexible, all plastic display, in which both the thin-film transistors and the light-emitting pixels were made of organic materials. History of OFETs The concept of a field-effect transistor (FET) was first proposed by Julius Edgar Lilienfeld, who received a patent for his idea in 1930. He proposed that a field-effect transistor behaves as a capacitor with a conducting channel between a source and a drain electrode. Applied voltage on the gate electrode controls the amount of charge carriers flowing through the system. The first insulated-gate field-effect transistor was designed and prepared by Mohamed Atalla and Dawon Kahng at Bell Labs using a metal–oxide–semiconductor: the MOSFET (metal–oxide–semiconductor field-effect transistor). It was invented in 1959, and presented in 1960. Also known as the MOS transistor, the MOSFET is the most widely manufact
https://en.wikipedia.org/wiki/NSA%20Suite%20B%20Cryptography
NSA Suite B Cryptography was a set of cryptographic algorithms promulgated by the National Security Agency as part of its Cryptographic Modernization Program. It was to serve as an interoperable cryptographic base for both unclassified information and most classified information. Suite B was announced on 16 February 2005. A corresponding set of unpublished algorithms, Suite A, is "used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI)." In 2018, NSA replaced Suite B with the Commercial National Security Algorithm Suite (CNSA). Suite B's components were: Advanced Encryption Standard (AES) with key sizes of 128 and 256 bits. For traffic flow, AES should be used with either the Counter Mode (CTR) for low bandwidth traffic or the Galois/Counter Mode (GCM) mode of operation for high bandwidth traffic (see Block cipher modes of operation) symmetric encryption Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures Elliptic Curve Diffie–Hellman (ECDH) key agreement Secure Hash Algorithm 2 (SHA-256 and SHA-384) message digest General information NIST, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, Special Publication 800-56A Suite B Cryptography Standards , Suite B Certificate and Certificate Revocation List (CRL) Profile , Suite B Cryptographic Suites for Secure Shell (SSH) , Suite B Cryptographic Suites for IPsec , Suite B Profile for Transport Layer Security (TLS) These RFC have been downgraded to historic references per . History In December 2006, NSA submitted an Internet Draft on implementing Suite B as part of IPsec. This draft had been accepted for publication by IETF as RFC 4869, later made obsolete by RFC 6379. Certicom Corporation of Ontario, Canada, which was purchased by BlackBerry Limited in 2009, holds some elliptic curve patents, wh
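AES in Galois/Counter Mode, the Suite B choice for high-bandwidth traffic, is available in ordinary libraries; the sketch below uses the third-party Python 'cryptography' package and is an illustration, not an accredited Suite B implementation:

    # AES-256-GCM round trip with the 'cryptography' package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # Suite B allows 128- or 256-bit keys
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                     # 96-bit nonce; never reuse per key

    ciphertext = aesgcm.encrypt(nonce, b"example message", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"example message"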
https://en.wikipedia.org/wiki/Extensive-form%20game
In game theory, an extensive-form game is a specification of a game allowing (as the name suggests) for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix. Finite extensive-form games Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from , an n-player extensive-form game thus consists of the following: A finite set of n (rational) players A rooted tree, called the game tree Each terminal (leaf) node of the game tree has an n-tuple of payoffs, meaning there is one payoff for each player at the end of every possible play A partition of the non-terminal nodes of the game tree into n+1 subsets, one for each (rational) player, and with a special subset for a fictitious player called Chance (or Nature). Each player's subset of nodes is referred to as the "nodes of the player". (A game of complete information thus has an empty set of Chance nodes.) Each node of the Chance player
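For the perfect-information special case of the definition above (every information set a singleton and no Chance nodes), the game tree can be solved directly by backward induction. A minimal sketch with an invented two-player tree:

    # Backward induction on a finite perfect-information game tree.
    # Internal node: (player index, {action: subtree}); leaf: tuple of payoffs.

    def backward_induction(node):
        if isinstance(node, tuple):      # leaf: n-tuple of payoffs
            return node, []
        player, actions = node
        best = None
        for action, subtree in actions.items():
            payoffs, path = backward_induction(subtree)
            if best is None or payoffs[player] > best[0][player]:
                best = (payoffs, [action] + path)
        return best

    # Player 0 moves first, then player 1; leaves hold (p0, p1).
    tree = (0, {"L": (1, {"l": (2, 0), "r": (0, 1)}),
                "R": (1, {"l": (1, 1), "r": (3, 0)})})
    print(backward_induction(tree))  # ((1, 1), ['R', 'l'])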
https://en.wikipedia.org/wiki/Information%20set%20%28game%20theory%29
The information set is the basis for decision making in a game, which includes the actions available to both sides and the benefits of each action. It is an especially important concept in games of imperfect information. In game theory, an information set for a particular player is a set of decision vertices which, given what that player has observed, are indistinguishable to them at the current point in the game. For a better idea on decision vertices, refer to Figure 1. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, it is the case that some players cannot be sure what the game state is; for instance, not knowing what exactly happened in the past or what should be done right now. Information sets are used in extensive form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be made depending on each player's next move. In games of imperfect information, there is hidden information. That is, each player does not have complete knowledge of the opponent's information, such as cards that do not appear in a poker game. Therefore, when constructing a game tree, it is difficult to determine precisely where a node is located based on known information alone, as we do not know certain information about our opponent. We can only be sure that we are at one of a range of possible nodes. This set of indistinguishable nodes in a particular player's game tree is known as an 'information set'. Information sets can be easily depicted in game trees to display each player's possible moves typically using dotted lines, circles or even by just label
https://en.wikipedia.org/wiki/Complete%20information
In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay. Given this information, the players can plan accordingly to maximize their own utility at the end of the game. Conversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact that the others should take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows his own utility function (valuation for the item), but does not know the utility function of the other players. Applications Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs. It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game. In games that have a varying degree of complete information and game type, there are different methods available to the player to solve the game based on this information. In games with static, complete information, the approach to solve is to use Nash equilibrium to find viable strategies. In dynamic games with complete information, backward induction is the solution concept, which eliminates non-credible
https://en.wikipedia.org/wiki/Mittag-Leffler%20function
In mathematics, the Mittag-Leffler function $E_{\alpha,\beta}$ is a special function, a complex function which depends on two complex parameters $\alpha$ and $\beta$. It may be defined by the following series when the real part of $\alpha$ is strictly positive: $E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}$, where $\Gamma$ is the gamma function. When $\beta = 1$, it is abbreviated as $E_\alpha(z) = E_{\alpha,1}(z)$. For $\alpha = 0$, the series above equals the Taylor expansion of the geometric series and consequently $E_{0,\beta}(z) = \frac{1}{\Gamma(\beta)} \frac{1}{1-z}$ for $|z| < 1$. In the case $\alpha$ and $\beta$ are real and positive, the series converges for all values of the argument $z$, so the Mittag-Leffler function is an entire function. This function is named after Gösta Mittag-Leffler. This class of functions are important in the theory of the fractional calculus. For $\alpha > 0$, the Mittag-Leffler function $E_{\alpha,1}(z)$ is an entire function of order $1/\alpha$, and is in some sense the simplest entire function of its order. The Mittag-Leffler function satisfies the recurrence property (Theorem 5.1) $E_{\alpha,\beta}(z) = \frac{1}{z} E_{\alpha,\beta-\alpha}(z) - \frac{1}{z\,\Gamma(\beta-\alpha)}$, from which the following Poincaré asymptotic expansion holds: for $0 < \alpha < 2$ and $\mu$ real such that $\frac{\pi\alpha}{2} < \mu < \min(\pi, \pi\alpha)$, then for all $N \in \mathbb{N}^*$, we can show the following asymptotic expansions (Section 6.): as $|z| \to +\infty$ with $|\arg z| \le \mu$: $E_\alpha(z) = \frac{1}{\alpha} \exp(z^{1/\alpha}) - \sum_{k=1}^{N} \frac{z^{-k}}{\Gamma(1-\alpha k)} + O\!\left(z^{-N-1}\right)$, and as $|z| \to +\infty$ with $\mu \le |\arg z| \le \pi$: $E_\alpha(z) = -\sum_{k=1}^{N} \frac{z^{-k}}{\Gamma(1-\alpha k)} + O\!\left(z^{-N-1}\right)$, where we used the notation $E_\alpha(z) = E_{\alpha,1}(z)$. Special cases For $\alpha = 1/2, 0, 1, 2$ we find: (Section 2) Error function: $E_{1/2}(z) = \exp(z^2)\operatorname{erfc}(-z)$. The sum of a geometric progression: $E_0(z) = \sum_{k=0}^{\infty} z^k = \frac{1}{1-z}$, $|z| < 1$. Exponential function: $E_1(z) = \exp(z)$. Hyperbolic cosine: $E_2(z) = \cosh(\sqrt{z})$. For $\beta = 2$, we have $E_{1,2}(z) = \frac{e^z - 1}{z}$, $E_{2,2}(z) = \frac{\sinh(\sqrt{z})}{\sqrt{z}}$. For $\alpha = 0, 1, 2$, the integral $\int_0^z E_\alpha(-s^2)\,ds$ gives, respectively: $\arctan(z)$, $\frac{\sqrt{\pi}}{2}\operatorname{erf}(z)$, $\sin(z)$. Mittag-Leffler's integral representation The integral representation of the Mittag-Leffler function is (Section 6) $E_{\alpha,\beta}(z) = \frac{1}{2\pi i} \oint_C \frac{t^{\alpha-\beta} e^t}{t^\alpha - z}\,dt$, where the contour $C$ starts and ends at $-\infty$ and circles around the singularities and branch points of the integrand. Related to the Laplace transform and Mittag-Leffler summation is the expression (Eq (7.5)) $\int_0^\infty e^{-t} E_\alpha(z t^\alpha)\,dt = \frac{1}{1-z}$. Applications of Mittag-Leffler function One of the applications of the Mittag-Leffler function is in modeling fractional order viscoelastic materials. Experimental investigations into the time-dependent relaxation behavior of viscoelastic materials are characterized by a very fast decrease of the stress at the beginning of the relaxation process and an extr
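For moderate arguments the defining series converges quickly and can be summed directly; the sketch below is a naive evaluation (dedicated algorithms are required for large |z|), checked against the special cases quoted above:

    # Naive series evaluation of the Mittag-Leffler function E_{alpha,beta}(z).
    from math import gamma, exp, isclose

    def mittag_leffler(z, alpha, beta=1.0, terms=100):
        return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

    assert isclose(mittag_leffler(1.3, alpha=1.0), exp(1.3), rel_tol=1e-12)  # E_1 = exp
    assert isclose(mittag_leffler(0.5, alpha=0.0), 2.0, rel_tol=1e-12)       # E_0 = 1/(1-z)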
https://en.wikipedia.org/wiki/Perfect%20Bayesian%20equilibrium
In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games. Any perfect Bayesian equilibrium has two components, strategies and beliefs: The strategy of a player in a given information set specifies his choice of action in that information set, which may depend on the history (on actions taken previously in the game). This is similar to a sequential game. The belief of a player in a given information set determines what node in that information set he believes the game has reached. The belief may be a probability distribution over the nodes in the information set, and is typically a probability distribution over the possible types of the other players. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1. The strategies and beliefs also must satisfy the following conditions: Sequential rationality: each strategy should be optimal in expectation, given the beliefs. Consistency: each belief should be updated according to the equilibrium strategies, the observed actions, and Bayes' rule on every path reached in equilibrium with positive probability. On paths of zero probability, known as
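The consistency condition can be made concrete with a toy computation: on the equilibrium path, the belief over types at an information set is simply the Bayesian posterior given the observed action. The hawk/dove numbers below are invented for illustration:

    # Bayes' rule at an information set (toy hawk/dove example).

    prior = {"hawk": 0.3, "dove": 0.7}
    # Probability that each type plays the observed action "escalate":
    strategy = {"hawk": 0.9, "dove": 0.2}

    total = sum(prior[t] * strategy[t] for t in prior)
    posterior = {t: prior[t] * strategy[t] / total for t in prior}
    print(posterior)  # hawk ~0.66, dove ~0.34 after observing "escalate"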
https://en.wikipedia.org/wiki/Structured%20cabling
In telecommunications, structured cabling is building or campus cabling infrastructure that consists of a number of standardized smaller elements (hence structured) called subsystems. Structured cabling components include twisted pair and optical cabling, patch panels and patch cables. Overview Structured cabling is the design and installation of a cabling system that will support multiple hardware uses and be suitable for today's needs and those of the future. With a correctly installed system, current and future requirements can be met, and hardware that is added in the future will be supported. Structured cabling design and installation is governed by a set of standards that specify wiring data centers, offices, and apartment buildings for data or voice communications using various kinds of cable, most commonly category 5e (Cat 5e), category 6 (Cat 6), and fiber optic cabling and modular connectors. These standards define how to lay the cabling in various topologies in order to meet the needs of the customer, typically using a central patch panel (which is normally 19-inch rack-mounted), from where each modular connection can be used as needed. Each outlet is then patched into a network switch (normally also rack-mounted) for network use or into an IP or PBX (private branch exchange) telephone system patch panel. Lines patched as data ports into a network switch require simple straight-through patch cables at each end to connect a computer. Voice patches to PBXs in most countries require an adapter at the remote end to translate the configuration on 8P8C modular connectors into the local standard telephone wall socket. No adapter is needed in North America as the 6P2C and 6P4C plugs most commonly used with RJ11 and RJ14 telephone connections are physically and electrically compatible with the larger 8P8C socket. RJ25 and RJ61 connections are physically but not electrically compatible, and cannot be used. In the United Kingdom, an adapter must be present at
https://en.wikipedia.org/wiki/Adrien%20Douady
Adrien Douady (; 25 September 1935 – 2 November 2006) was a French mathematician born in La Tronche, Isère. He was the son of Daniel Douady and Guilhen Douady. Douady was a student of Henri Cartan at the École normale supérieure, and initially worked in homological algebra. His thesis concerned deformations of complex analytic spaces. Subsequently, he became more interested in the work of Pierre Fatou and Gaston Julia and made significant contributions to the fields of analytic geometry and dynamical systems. Together with his former student John H. Hubbard, he launched a new subject, and a new school, studying properties of iterated quadratic complex mappings. They made important mathematical contributions in this field of complex dynamics, including a study of the Mandelbrot set. One of their most fundamental results is that the Mandelbrot set is connected; perhaps most important is their theory of renormalization of (polynomial-like) maps. The Douady rabbit, a quadratic filled Julia set, is named after him. Douady taught at the University of Nice and was a professor at the Paris-Sud 11 University, Orsay. He was a member of Bourbaki and an invited speaker at the International Congress of Mathematicians in 1966 at Moscow and again in 1986 in Berkeley. He was elected to the Académie des Sciences in 1997, and was featured in the French animation project Dimensions. He died after diving into the cold Mediterranean from a favourite spot near his vacation home in the Var. His son, Raphael Douady, is also a noted mathematician and an economist. See also Beltrami equation Jessen's icosahedron Kuiper's theorem References External links http://picard.ups-tlse.fr/~cheritat/Adrien70/index.php http://www.math.jacobs-university.de/adrien with a guest book to share your memories Pictures with Adrien Douady 1935 births 2006 deaths 20th-century French mathematicians Dynamical systems theorists Academic staff of Côte d'Azur University Academic staff of Paris-Sud Universi
https://en.wikipedia.org/wiki/Salt
In common usage, salt is a mineral composed primarily of sodium chloride (NaCl). When used in food, especially at table in ground form in dispensers, it is more formally called table salt. In the form of a natural crystalline mineral, salt is also known as rock salt or halite. Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and is known to uniformly improve the taste perception of food, including otherwise unpalatable food. Salting, brining, and pickling are also ancient and important methods of food preservation. Some of the earliest evidence of salt processing dates to around 6000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, Greeks, Romans, Byzantines, Hittites, Egyptians, and Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance. Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. The greatest single use for salt (sodium chloride) is as a feedstock for the production of chemicals. It is used to produce caustic soda and chlorine; it is also used in the manufacturing processes of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around three hundred million tonnes of salt, only a small percentage is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea s
https://en.wikipedia.org/wiki/Reverse%20proxy
In computer networks, a reverse proxy is an application that sits in front of back-end applications and forwards client (e.g. browser) requests to those applications. Reverse proxies help increase scalability, performance, resilience and security. The resources returned to the client appear as if they originated from the web server itself. Large websites and content delivery networks use reverse proxies, together with other techniques, to balance the load between internal servers. Reverse proxies can keep a cache of static content, which further reduces the load on these internal servers and the internal network. It is also common for reverse proxies to add features such as compression or TLS encryption to the communication channel between the client and the reverse proxy. Reverse proxies are typically owned or managed by the web service, and they are accessed by clients from the public Internet. In contrast, a forward proxy is typically managed by a client (or their company) who is restricted to a private, internal network, except that the client can ask the forward proxy to retrieve resources from the public Internet on behalf of the client. Reverse proxy servers are implemented in popular open-source web servers such as Apache, Nginx, and Caddy. This software can inspect HTTP headers, which, for example, allows it to present a single IP address to the Internet while relaying requests to different internal servers based on the URL of the HTTP request. Dedicated reverse proxy servers such as the open source software HAProxy and Squid are used by some of the biggest websites on the Internet. Uses Reverse proxies can hide the existence and characteristics of origin servers. This can make it more difficult to determine the actual location of the origin server / website and, for instance, more challenging to initiate legal action such as takedowns or block access to the website, as the IP address of the website may not be immediately apparent. Additionally, th
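A toy reverse proxy can be built from the Python standard library alone. The sketch below forwards GET requests to a single hypothetical back-end and omits everything a production proxy needs (error handling, streaming, header sanitization, connection pooling):

    # Minimal single-backend reverse proxy for GET requests (illustration only).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    BACKEND = "http://127.0.0.1:8080"   # hypothetical internal origin server

    class ReverseProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Relay the request path to the back-end and copy the reply back.
            with urlopen(BACKEND + self.path) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Type",
                                 upstream.headers.get("Content-Type", "text/plain"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), ReverseProxy).serve_forever()

Real deployments use the servers named above (Nginx, HAProxy, and others), which add caching, TLS termination and load balancing on top of this basic relaying idea.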
https://en.wikipedia.org/wiki/Infinity%20symbol
The infinity symbol () is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding. This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag. Both the infinity symbol itself and several variations of the symbol are available in various character encodings. History The lemniscate has been a common decorative motif since ancient times; for instance it is commonly seen on Viking Age combs. The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant of the lower-case form of omega, the last letter in the Greek alphabet. Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. Leonhard Euler used an open letterform more closely resembling a reflected and sidewa
https://en.wikipedia.org/wiki/Bart%20Kosko
Bart Andrew Kosko (born February 7, 1960) is a writer and professor of electrical engineering and law at the University of Southern California (USC). He is a researcher and popularizer of fuzzy logic, neural networks, and noise, and the author of several trade books and textbooks on these and related subjects of machine intelligence. He was awarded the 2022 Donald O. Hebb Award for neural learning by the International Neural Network Society. Personal background Kosko holds bachelor's degrees in philosophy and in economics from USC (1982), a master's degree in applied mathematics from UC San Diego (1983), a PhD in electrical engineering from UC Irvine (1987) under Allen Stubberud, and a J.D. from Concord Law School. He is an attorney licensed in California and federal court, and worked part-time as a law clerk for the Los Angeles District Attorney's Office. Kosko is a political and religious skeptic. He is a contributing editor of the libertarian periodical Liberty, where he has published essays on "Palestinian vouchers". Writing Kosko's most popular book to date was the international best-seller Fuzzy Thinking, about man and machines thinking in shades of gray, and his most recent book was Noise. He has also published short fiction and the cyber-thriller novel Nanotime, about a possible World War III that takes place in two days of the year 2030. The novel's title coins the term "nanotime" to describe the time speed-up that occurs when fast computer chips, rather than slow brains, house minds. Kosko has a minimalist prose style, not even using commas in his book Noise. Research Kosko's technical contributions have been in three main areas: fuzzy logic, neural networks, and noise. In fuzzy logic, he introduced fuzzy cognitive maps, fuzzy subsethood, additive fuzzy systems, fuzzy approximation theorems, optimal fuzzy rules, fuzzy associative memories, various neural-based adaptive fuzzy systems, ratio measures of fuzziness, the shape of fuzzy sets, the conditi
https://en.wikipedia.org/wiki/Fundamental%20pair%20of%20periods
In mathematics, a fundamental pair of periods is an ordered pair of complex numbers that defines a lattice in the complex plane. This type of lattice is the underlying object with which elliptic functions and modular forms are defined. Definition A fundamental pair of periods is a pair of complex numbers $\omega_1, \omega_2$ such that their ratio $\omega_2 / \omega_1$ is not real. If considered as vectors in $\mathbb{R}^2$, the two are not collinear. The lattice generated by $\omega_1$ and $\omega_2$ is $\Lambda = \{ m\omega_1 + n\omega_2 \mid m, n \in \mathbb{Z} \}$. This lattice is also sometimes denoted as $\Lambda(\omega_1, \omega_2)$ to make clear that it depends on $\omega_1$ and $\omega_2$. It is also sometimes denoted by $\Omega$ or $\Omega(\omega_1, \omega_2)$, or simply by $\langle \omega_1, \omega_2 \rangle$. The two generators $\omega_1$ and $\omega_2$ are called the lattice basis. The parallelogram with vertices $0, \omega_1, \omega_2, \omega_1 + \omega_2$ is called the fundamental parallelogram. While a fundamental pair generates a lattice, a lattice does not have any unique fundamental pair; in fact, an infinite number of fundamental pairs correspond to the same lattice. Algebraic properties A number of properties, listed below, can be seen. Equivalence Two pairs of complex numbers $(\omega_1, \omega_2)$ and $(\alpha_1, \alpha_2)$ are called equivalent if they generate the same lattice: that is, if $\Lambda(\omega_1, \omega_2) = \Lambda(\alpha_1, \alpha_2)$. No interior points The fundamental parallelogram contains no further lattice points in its interior or boundary. Conversely, any pair of lattice points with this property constitute a fundamental pair, and furthermore, they generate the same lattice. Modular symmetry Two pairs $(\omega_1, \omega_2)$ and $(\alpha_1, \alpha_2)$ are equivalent if and only if there exists a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with integer entries $a, b, c$ and $d$ and determinant $ad - bc = \pm 1$ such that $\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} \omega_1 \\ \omega_2 \end{pmatrix}$, that is, so that $\alpha_1 = a\omega_1 + b\omega_2$ and $\alpha_2 = c\omega_1 + d\omega_2$. Such matrices form the group $\mathrm{GL}(2, \mathbb{Z})$; those of determinant $+1$ form the modular group $\mathrm{SL}(2, \mathbb{Z})$. This equivalence of lattices can be thought of as underlying many of the properties of elliptic functions (especially the Weierstrass elliptic function) and modular forms. Topological properties The abelian group $\mathbb{Z}^2$ maps the complex plane into the fundamental parallelogram. That is, every point $z \in \mathbb{C}$ can be written as $z = p + m\omega_1 + n\omega_2$ for integers $m, n$, with a point $p$ in the fundamental parallelogram. Since this mapping identifies opposite sides of the parallelogram as being the same, the f
https://en.wikipedia.org/wiki/Technological%20change
Technological change (TC) or technological development is the overall process of invention, innovation and diffusion of technology or processes. In essence, technological change covers the invention of technologies (including processes) and their commercialization or release as open source via research and development (producing emerging technologies), the continual improvement of technologies (in which they often become less expensive), and the diffusion of technologies throughout industry or society (which sometimes involves disruption and convergence). In short, technological change is based on both better and more technology. Modeling technological change In its earlier days, technological change was illustrated with the 'Linear Model of Innovation', which has now been largely discarded and replaced with a model of technological change that involves innovation at all stages of research, development, diffusion, and use. When speaking about "modeling technological change," this often means the process of innovation. This process of continuous improvement is often modeled as a curve depicting decreasing costs over time (for instance fuel cells, which have become cheaper every year). TC is also often modelled using a learning curve, e.g. Ct = C0 · Xt^(−b), where the unit cost Ct falls as a power of cumulative output Xt. Technological change itself is often included in other models (e.g. climate change models) and was often taken as an exogenous factor. These days TC is more often included as an endogenous factor. This means that it is taken as something you can influence. Today, there are sectors that maintain policies which can influence the speed and direction of technological change. For example, proponents of the Induced Technological Change hypothesis state that policymakers can steer the directio
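A quick numerical illustration of the learning-curve relation above (a minimal sketch in Python; the cost and learning-rate values are illustrative assumptions, not figures from this article):

    # Learning curve Ct = C0 * Xt^(-b): unit cost falls as a power law of cumulative output.
    def unit_cost(c0, cumulative_output, b):
        return c0 * cumulative_output ** (-b)

    # A ~20% cost reduction per doubling of output corresponds to b = -log2(0.8) ~= 0.322.
    for x in (1, 2, 4, 8):
        print(x, round(unit_cost(100.0, x, 0.322), 1))  # 100.0, 80.0, 64.0, 51.2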
https://en.wikipedia.org/wiki/Stackelberg%20competition
The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially (hence, it is sometimes described as the "leader-follower game"). It is named after the German economist Heinrich Freiherr von Stackelberg, who published Marktform und Gleichgewicht [Market Structure and Equilibrium] in 1934, which described the model. In game theory terms, the players of this game are a leader and a follower and they compete on quantity. The Stackelberg leader is sometimes referred to as the Market Leader. There are some further constraints upon the sustaining of a Stackelberg equilibrium. The leader must know ex ante that the follower observes its action. The follower must have no means of committing to a future non-Stackelberg follower action, and the leader must know this. Indeed, if the 'follower' could commit to a Stackelberg leader action and the 'leader' knew this, the leader's best response would be to play a Stackelberg follower action. Firms may engage in Stackelberg competition if one has some sort of advantage enabling it to move first. More generally, the leader must have commitment power. Moving observably first is the most obvious means of commitment: once the leader has made its move, it cannot undo it; it is committed to that action. Moving first may be possible if the leader was the incumbent monopoly of the industry and the follower is a new entrant. Holding excess capacity is another means of commitment. Subgame perfect Nash equilibrium The Stackelberg model can be solved to find the subgame perfect Nash equilibrium or equilibria (SPNE), i.e. the strategy profile that best serves each player, given the strategies of the other player, and that entails every player playing in a Nash equilibrium in every subgame. In very general terms, let the price function for the (duopoly) industry be P(q); price is simply a function of total (industry) output, so it is P(q₁ + q₂), where the subscript represents t
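To make the backward induction concrete, here is a sketch of the standard linear special case; the inverse demand P(q) = a − b·q and constant marginal costs c1, c2 are simplifying assumptions of this example, not part of the article's general treatment:

    def stackelberg_linear(a, b, c1, c2):
        """Leader (firm 1) and follower (firm 2) outputs under P(q) = a - b*q."""
        q1 = (a - 2 * c1 + c2) / (2 * b)   # leader's optimum after substituting the
                                           # follower's best response into its profit
        q2 = (a - c2) / (2 * b) - q1 / 2   # follower's best response to q1
        price = a - b * (q1 + q2)
        return q1, q2, price

    print(stackelberg_linear(a=100, b=1, c1=10, c2=10))  # (45.0, 22.5, 32.5)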
https://en.wikipedia.org/wiki/Pinwheel%20%28cryptography%29
In cryptography, a pinwheel was a device for producing a short pseudorandom sequence of bits (determined by the machine's initial settings), as a component in a cipher machine. A pinwheel consisted of a rotating wheel with a certain number of positions on its periphery. Each position had a "pin", "cam" or "lug" which could be either "set" or "unset". As the wheel rotated, each of these pins would in turn affect other parts of the machine, producing a series of "on" or "off" pulses which would repeat after one full rotation of the wheel. If the machine contained more than one wheel, usually their periods would be relatively prime to maximize the combined period. Pinwheels might be turned through a purely mechanical action (as in the M-209) or electromechanically (as in the Lorenz SZ 40/42). Development The Swedish engineer Boris Caesar Wilhelm Hagelin is credited with having invented the first pinwheel device in 1925. He developed the machine while employed by Emanuel Nobel, a nephew of the founder of the Nobel Prize, to oversee the Nobel interests in Aktiebolaget Cryptograph. The device was later introduced in France, and Hagelin was awarded the French order of merit, the Légion d'honneur, for his work. One of the earliest cipher machines that Hagelin developed was the C-38, which was later improved into the more portable M-209. The M-209 is composed of a set of pinwheels and a rotating cage. Other cipher machines which used pinwheels include the C-52, the CD-57 and the Siemens and Halske T52. Pinwheels can be viewed as a predecessor to the electronic linear-feedback shift register (LFSR), used in later cryptosystems. See also Rotor machine References Encryption devices Cryptographic hardware
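The combined-period property is easy to demonstrate in miniature (a sketch; the pin patterns are arbitrary and XOR is one plausible way to combine wheel outputs, not a description of any specific machine):

    from math import lcm

    wheels = [
        [1, 0, 1],              # period 3
        [1, 1, 0, 0, 1],        # period 5
        [0, 1, 0, 1, 1, 0, 1],  # period 7
    ]

    def keystream(n):
        # Step every wheel once per tick and XOR the pins under the read position.
        for t in range(n):
            yield sum(w[t % len(w)] for w in wheels) % 2

    print(lcm(*(len(w) for w in wheels)))  # 105: pairwise-coprime periods multiply
    print(list(keystream(10)))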
https://en.wikipedia.org/wiki/Kroll%20process
The Kroll process is a pyrometallurgical industrial process used to produce metallic titanium from titanium tetrachloride. The Kroll process replaced the Hunter process for almost all commercial production. Process In the Kroll process, the TiCl4 is reduced by liquid magnesium to give titanium metal: TiCl4 + 2 Mg → Ti + 2 MgCl2 (at about 825 °C). The reduction is conducted at 800–850 °C in a stainless steel retort. Complications result from partial reduction of the TiCl4, giving the lower chlorides TiCl2 and TiCl3. The MgCl2 can be further refined back to magnesium. The resulting porous metallic titanium sponge is purified by leaching or vacuum distillation. The sponge is crushed and pressed before it is melted in a consumable carbon electrode vacuum arc furnace. The melted ingot is allowed to solidify under vacuum. It is often remelted to remove inclusions and ensure uniformity. These melting steps add to the cost of the product. Titanium is about six times as expensive as stainless steel. In the earlier Hunter process, which ceased to be commercial in the 1990s, the TiCl4 from the chloride process is reduced to the metal by sodium. History and subsequent developments The Kroll process was invented in 1940 by William J. Kroll in Luxembourg. After moving to the United States, Kroll further developed the method for the production of zirconium. Many methods had been applied to the production of titanium metal, beginning with a report in 1887 by Nilson and Pettersson using sodium, which was optimized into the commercial Hunter process. In the 1920s van Arkel had described the thermal decomposition of titanium tetraiodide to give highly pure titanium. Titanium tetrachloride was found to reduce with hydrogen at high temperatures to give hydrides that can be thermally processed to the pure metal. With this background, Kroll developed both new reductants and new apparatus for the reduction of titanium tetrachloride. Its high reactivity toward trace amounts of water a
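A small stoichiometric check on the reduction equation above (molar masses are standard values; 100% yield is assumed purely for illustration):

    # TiCl4 + 2 Mg -> Ti + 2 MgCl2: magnesium consumed per kilogram of titanium.
    M_TI, M_MG = 47.867, 24.305        # g/mol
    mg_per_kg_ti = 2 * M_MG / M_TI     # ~1.016 kg Mg per kg Ti at full yield
    print(round(mg_per_kg_ti, 3))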
https://en.wikipedia.org/wiki/Semiconductor%20Research%20Corporation
Semiconductor Research Corporation (SRC), commonly known as SRC, is a leading high-technology research consortium active in the semiconductor industry. Todd Younkin is the incumbent president and chief executive officer of the company. The consortium comprises more than twenty-five companies and government agencies, with more than a hundred universities under contract performing research. History SRC was founded in 1982 as a consortium to fund research by semiconductor companies. In the past, it has funded university research projects in hardware and software co-design, new architectures, circuit design, transistors, memories, interconnects, and materials, and has sponsored over 15,000 PhD students. Research SRC has funded research in areas such as automotive, advanced memory technologies, logic and processing, advanced packaging, edge intelligence, and communications. Programs Joint University Microelectronics Program It is a long-term research program, in collaboration with DARPA and industry, that focuses on energy-efficient electronics, including actuation and sensing, signal processing, computing, and intelligent storage. Global Research Collaboration Program It is an industry-led international research program with eight sub-topics: artificial intelligence hardware; analog mixed-signal circuits; computer-aided design and test; environment safety and health; hardware security; logic and memory devices; nanomanufacturing materials and processes; and packaging. Undergraduate Research Program Undergraduate Research Program (URP) is a one-year academic program for students. The program combines an undergraduate research curriculum with industry experience. The program helps sustain a high retention rate of students who are interested in semiconductor research. It gives networking and job opportunities through an annual event. Recognition In 2005, SRC received the National Medal of Technology and Innov
https://en.wikipedia.org/wiki/Coast%20Guard%20One
Coast Guard One is the call sign of any United States Coast Guard aircraft carrying the president of the United States. Similarly, any Coast Guard aircraft carrying the vice president is designated Coast Guard Two. To date, there has never been a Coast Guard One flight. Coast Guard Two was activated on September 25, 2009, when then-Vice President Joe Biden took a flight on CG 6019, an HH-60 Jayhawk helicopter, over the recently flooded Atlanta area. Other executive travel The Commandant of the Coast Guard often travels aboard a Gulfstream C-37A aircraft whose standard callsign is "Coast Guard Zero One". The aircraft is stationed at Coast Guard Station Washington, D.C. See also Transportation of the president of the United States References Presidential aircraft United States Coast Guard Aviation Call signs Transportation of the president of the United States Vehicles of the United States
https://en.wikipedia.org/wiki/FM%20broadcasting
FM broadcasting is a method of radio broadcasting that uses frequency modulation (FM) of the radio broadcast carrier wave. Invented in 1933 by American engineer Edwin Armstrong, wide-band FM is used worldwide to transmit high-fidelity sound over broadcast radio. FM broadcasting offers higher fidelity—more accurate reproduction of the original program sound—than other broadcasting techniques, such as AM broadcasting. It is also less susceptible to common forms of interference, having less static and popping sounds than are often heard on AM. Therefore, FM is used for most broadcasts of music and general audio (in the audio spectrum). FM radio stations use the very high frequency range of radio frequencies. Broadcast bands Throughout the world, the FM broadcast band falls within the VHF part of the radio spectrum. Usually 87.5 to 108.0 MHz is used, or some portion of it, with a few exceptions: In the former Soviet republics, and some former Eastern Bloc countries, the older 65.8–74 MHz band is also used. Assigned frequencies are at intervals of 30 kHz. This band, sometimes referred to as the OIRT band, is slowly being phased out. Where the OIRT band is used, the 87.5–108.0 MHz band is referred to as the CCIR band. In Japan, the band 76–95 MHz is used. In Brazil, until the late 2010s, FM broadcast stations only used the 88–108 MHz band, but with the phasing out of analog television, the 76–88 MHz band (old channels 5 and 6 in VHF television) is allocated to former local MW stations that have moved to FM in agreement with ANATEL. The frequency of an FM broadcast station (more strictly its assigned nominal center frequency) is usually a multiple of 100 kHz. In most of South Korea, the Americas, the Philippines, and the Caribbean, only odd multiples are used. Some other countries follow this plan because of the import of vehicles, principally from the United States, with radios that can only tune to these frequencies. In some parts of Europe, Greenland, and Africa, only
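The odd-multiple channel plan is easy to enumerate (a sketch; the 87.9–107.9 MHz range shown is the usual plan in the Americas and is assumed here for illustration):

    # Center frequencies on odd multiples of 100 kHz, 87.9 through 107.9 MHz.
    channels = [round(87.9 + 0.2 * n, 1) for n in range(101)]
    print(channels[:5])   # [87.9, 88.1, 88.3, 88.5, 88.7]
    print(channels[-1])   # 107.9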
https://en.wikipedia.org/wiki/History%20of%20ecology
Ecology is a comparatively new science and is considered an important branch of biological science, having only become prominent during the second half of the 20th century. Ecological thought is derivative of established currents in philosophy, particularly from ethics and politics. Its history stems all the way back to the 4th century BC. One of the first ecologists whose writings survive may have been Aristotle or perhaps his student, Theophrastus, both of whom had interest in many species of animals and plants. Theophrastus described interrelationships between animals and their environment as early as the 4th century BC. Ecology developed substantially in the 18th and 19th centuries. It began with Carl Linnaeus and his work with the economy of nature. Soon after came Alexander von Humboldt and his work with botanical geography. Alexander von Humboldt and Karl Möbius then contributed with the notion of biocoenosis. Eugenius Warming's work with ecological plant geography led to the founding of ecology as a discipline. Charles Darwin's work also contributed to the science of ecology, and Darwin is often credited with progressing the discipline more than anyone else in its young history. Ecological thought expanded even more in the early 20th century. Major contributions included: Eduard Suess' and Vladimir Vernadsky's work with the biosphere, Arthur Tansley's ecosystem, Charles Elton's Animal Ecology, and Henry Cowles's ecological succession. Ecology influenced the social sciences and humanities. Human ecology began in the early 20th century and it recognized humans as an ecological factor. Later James Lovelock advanced views on earth as a macro-organism with the Gaia hypothesis. Conservation stemmed from the science of ecology. Important figures and movements include Shelford and the ESA, the National Environmental Policy Act, George Perkins Marsh, Theodore Roosevelt, Stephen A. Forbes, and post-Dust Bowl conservation. Later in the 20th century world governments collaborated on man's e
https://en.wikipedia.org/wiki/Experimenter%27s%20regress
In science, experimenter's regress refers to a loop of dependence between theory and evidence. In order to judge whether evidence is erroneous we must rely on theory-based expectations, and to judge the value of competing theories we rely on evidence. Cognitive bias affects experiments, and experiments determine which theory is valid. This issue is particularly important in new fields of science where there is no community consensus regarding the relative values of various competing theories, and where sources of experimental error are not well known; while inquiry remains genuinely open, no such consensus can yet exist. In an idealized scientific process, a theory is formed after a scientist, amateur or professional, has observed a phenomenon and asked why it occurs. The theory is the answer the scientist constructs, using logic and reason, to explain the phenomenon. The scientist then designs experiments to test the theory incrementally, and the theory is corroborated or refuted through repeatable, legitimate experimentation. Legitimate scientific experiments conducted by the person who formulated the theory seek to prove the theory false rather than prove it true, specifically to counter the effects of bias; a process in which a party stands to personally lose or gain from the result is correspondingly vulnerable to distortion. If experimenter's regress acts as a positive feedback loop, it can be a source of pathological science. An experimenter's strong belief in a new theory produces confirmation bias, and any biased evidence they obtain then strengthens their belief in that particular theory. Neither individual researchers nor entire scientific communities are immune to this effect: see N-rays and polywater. Experimenter's regress is a typical relativistic phenomenon in the Empirical Programme of Relativism (EPOR). EPOR is very much co
https://en.wikipedia.org/wiki/Isogamy
Isogamy is a form of sexual reproduction that involves gametes of the same morphology (indistinguishable in shape and size), found in most unicellular eukaryotes. Because both gametes look alike, they generally cannot be classified as male or female. Instead, organisms undergoing isogamy are said to have different mating types, most commonly noted as "+" and "−" strains. Etymology The literal meaning of isogamy is "equal marriage" which refers to equal contribution of resources by both gametes to a zygote. The term isogamous was first used in the year 1887. Characteristics of isogamous species Isogamous species often have two mating types. Some isogamous species have more than two mating types, but the number is usually lower than ten. In some extremely rare cases a species can have thousands of mating types. In all cases, fertilization occurs when gametes of two different mating types fuse to form a zygote. Evolution It is generally accepted that isogamy is an ancestral state for anisogamy and that isogamy was the first stage in the evolution of sexual reproduction. Isogamous reproduction evolved independently in several lineages of plants and animals to anisogamous species with gametes of male and female types and subsequently to oogamous species in which the female gamete is much larger than the male and has no ability to move. This pattern may have been driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Isogamy is the norm in unicellular eukaryote species, although it is possible that isogamy is evolutionarily stable in multicellular species. Occurrence Almost all unicellular eukaryotes are isogamous. Among multicellular organisms, isogamy is restricted to fungi such as baker's yeast and algae. Many species of green algae are isogamous. It is typical in the genera Ulva, Hydrodictyon, Tetraspora, Zygnema, Spirogyra, Ulothrix, and Chlamydomonas. Many fungi are isogamous. See also
https://en.wikipedia.org/wiki/WJTC
WJTC (channel 44) is an independent television station licensed to Pensacola, Florida, United States, serving northwest Florida and southwest Alabama. It is owned by Deerfield Media alongside Mobile, Alabama–licensed NBC affiliate WPMI-TV (channel 15); Deerfield maintains a local marketing agreement (LMA) with Sinclair Broadcast Group, owner of Pensacola-licensed ABC affiliate WEAR-TV (channel 3) and Fort Walton Beach–licensed MyNetworkTV affiliate WFGX (channel 35), for the provision of certain services. WJTC and WPMI-TV share studios on Azalea Road (near I-10) in Mobile; master control and some internal operations are based at the shared facilities of WEAR-TV and WFGX on Mobile Highway (US 90) in unincorporated Escambia County, Florida (with a Pensacola mailing address). WJTC's transmitter is located near Robertsdale, Alabama. History WJTC signed on the air on December 24, 1984 as an independent station; the station was founded by its original owners, the Mercury Broadcasting Company. The station—which was branded as "C44"—maintained a general entertainment programming format consisting of old movies, cartoons, westerns, dramas, and a few classic sitcoms. In 1991, Clear Channel Communications, then-owner of WPMI, entered into a local marketing agreement with WJTC. Mercury Broadcasting retained the license, but Clear Channel owned local rights to programming broadcast on the station. WJTC affiliated with the United Paramount Network (UPN) upon its debut on January 16, 1995. Clear Channel acquired WJTC outright in 2001, creating a duopoly with WPMI. On January 24, 2006, CBS Corporation and Time Warner announced the shutdown of both UPN and The WB effective that fall. A new "fifth" network—"The CW" (named after its corporate parents), to be jointly owned by both companies, would launch in their place, with a lineup primarily featuring the most popular programs from both networks. On February 22, 2006, News Corporation announced that it would launch a network of i
https://en.wikipedia.org/wiki/Paned%20window%20%28computing%29
A paned window is a window in a graphical user interface that is divided into multiple parts, layers, or sections. Examples of this include a code browser in a typical integrated development environment; a file browser with multiple panels; a tiling window manager; or a web page that contains multiple frames. Simple console applications use an edit pane for accepting input and an output pane for displaying output. The term task pane is used by Microsoft to identify any area cordoned off from the main screen area of an application and used for a specific function, such as changing the displayed font in a word processor. Three-pane interface A three-pane interface is a category of graphical user interface in which the screen or window is divided into three panes displaying information. This information typically falls into a hierarchical master-detail relationship with an embedded inspector window. Microsoft's Outlook Express email client popularized a mailboxes / mailbox contents / email text layout that became the norm until web-based user interfaces rose in popularity during the mid-2000s. Even today, many webmail scripts emulate this interface style. A minimal sketch of such a layout is shown below. References Microsoft Windows Graphical user interface elements
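The sketch builds the classic three-pane mail layout with Tkinter's PanedWindow, one of many toolkits offering panes; the widget arrangement is illustrative, not a reconstruction of Outlook Express:

    import tkinter as tk

    root = tk.Tk()
    root.title("Three-pane interface")

    outer = tk.PanedWindow(root, orient=tk.HORIZONTAL)
    outer.pack(fill=tk.BOTH, expand=True)

    mailboxes = tk.Listbox(outer)            # left pane: master list
    outer.add(mailboxes)

    inner = tk.PanedWindow(outer, orient=tk.VERTICAL)
    outer.add(inner)                         # right side: detail over inspector

    messages = tk.Listbox(inner)             # mailbox contents
    inner.add(messages)

    body = tk.Text(inner)                    # message text
    inner.add(body)

    root.mainloop()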
https://en.wikipedia.org/wiki/Thoughtworks
Thoughtworks is a publicly traded, global technology company with 49 offices in 18 countries. It provides software design and delivery, and tools and consulting services. The company is closely associated with the movement for agile software development, and has contributed to open source products. Thoughtworks' business includes Digital Product Development Services, Digital Experience and Distributed Agile software development. History 1980s to 1990s In the late 1980s, Roy Singham founded Singham Business Services as a management consulting company servicing the equipment leasing industry in a Chicago basement. According to Singham, after two to three years he started recruiting additional staff, and he came up with the name Thoughtworks in 1990. The company was incorporated under the new name in 1993 and focused on building software applications. Over time, Thoughtworks' technology shifted from C++ and Forte 4GL in the mid-1990s to include Java in the late 1990s. 1990s–2010s Martin Fowler joined the company in 1999 and became its chief scientist in 2000. In 2001, Thoughtworks agreed to settle a lawsuit by Microsoft for $480,000 for deploying unlicensed copies of office productivity software to employees. Also in 2001, Fowler, Jim Highsmith, and other key software figures authored the Agile Manifesto. The company began using agile techniques while working on a leasing project. Thoughtworks' technical expertise expanded with the .NET Framework in 2002, C# in 2004, Ruby and the Rails platform in 2006. In 2002, Thoughtworks chief scientist Martin Fowler wrote "Patterns of Enterprise Application Architecture" with contributions by ThoughtWorkers David Rice and Matthew Foemmel, as well as outside contributors Edward Hieatt, Robert Mee, and Randy Stafford. Thoughtworks Studios was launched as its product division in 2006 and shut down in 2020. The division created, supported and sold agile project management and software development and deployment tools includi
https://en.wikipedia.org/wiki/Phoenix%20Labs%20%28software%29
Phoenix Labs (formerly Methlabs) was a software developing community founded by Tim Leonard and Ken McClelland and best known for PeerGuardian, an open-source software program optimized for use as a personal firewall on file sharing networks. History The group was originally organized to work on PeerGuardian, a program Leonard created in 2003 in response to the shutdown of the Audiogalaxy file sharing site. The name Methlabs came from Leonard's handle, 'method'. In late 2004, the group added Blocklist.org, a website designed to allow users to interactively manage and block the IP addresses of certain organisations and companies. Phoenix Labs was formed in September 2005 following a dispute between members of the Methlabs team. According to a Phoenix Labs press release, a "series of threats and incidents" forced most of the group out of the methlabs.org website. The press release attributed this action to a team member with responsibility for finances and server control. Based on information from other participants, internet news sources identified this person as using the nickname "Cerberius", but Cerberius reportedly disputed their charges of domain hijacking and described the situation as a "revolt". Method and a few members of the group decided they could not get the methlabs.com domain back, and took suggestions for a new name on their IRC channel. A user named 'bizzyb0t' came up with the name "phoenixlabs.org" to represent the group being reborn, "rising from the ashes". Products PeerGuardian and PeerGuardian 2 are free and open source programs capable of blocking incoming and outgoing connections based on IP blocklists. Their purpose is to block several organizations, including the RIAA and MPAA, while using file sharing networks such as FastTrack and BitTorrent. The system can also block advertising, spyware, government and educational ranges, depending upon user preferences. Other programs developed by Phoenix Labs are XS, a file-sharing hub and client
https://en.wikipedia.org/wiki/PeerGuardian
PeerGuardian is a free and open source program developed by Phoenix Labs. It is capable of blocking incoming and outgoing connections based on IP blacklists. Its aim was to use IP lists to hide a user's own peer connection on a torrent from peers in blocklisted ranges on the same download. The system is also capable of blocking custom ranges, depending upon user preferences. The Windows version of this program has been discontinued in favor of other applications (Phoenix Labs encourage current PeerGuardian users to migrate to PeerBlock, which is based on PeerGuardian 2). History Development on PeerGuardian started in late 2002, led by programmer Tim Leonard. The first public version was released in 2003, at a time when the music industry started to sue individual file sharing users (a change from its previous stance that it would not target consumers with copyright infringement lawsuits). Version 1 The original PeerGuardian (1.0) was programmed in Visual Basic and quickly became popular among P2P users despite blocking only the common TCP protocol and being known for high RAM and CPU usage when connected to P2P networks. By December 2003, it had been downloaded 1 million times. The original version was released for free and the source code was made available under an open source license. Because version 1.0 blocked only TCP traffic, development shifted from PeerGuardian.net to bluetack.co.uk, where Protowall, the Blocklist Manager, B.I.M.S and the Hosts Manager were developed. Version 2 After 7 months of development, in February 2005 Version 2 of PeerGuardian was released as a beta. The development of version 2.0 was led by Cory Nelson, and aimed to resolve many of the shortcomings of Version 1. Version 2 enabled support for more protocols (TCP, UDP, ICMP, etc.), multiple block lists, and automatic updates. The installation procedure was also simplified, no longer requiring a system restart and driver installation. Speed and resource inefficiencies were fixed by re-
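The core blocklist mechanism can be sketched in a few lines of Python; the CIDR ranges below are reserved documentation prefixes, not entries from any real PeerGuardian list:

    import ipaddress

    blocklist = [ipaddress.ip_network(c) for c in ("192.0.2.0/24", "198.51.100.0/24")]

    def is_blocked(ip):
        # A connection is dropped when its address falls in any listed range.
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in blocklist)

    print(is_blocked("192.0.2.17"))   # True
    print(is_blocked("203.0.113.9"))  # False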
https://en.wikipedia.org/wiki/Signedness
In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers). As signed numbers can represent negative numbers, they lose a range of positive numbers that can only be represented with unsigned numbers of the same size (in bits), because roughly half the possible values are non-positive values, whereas the respective unsigned type can dedicate all the possible values to the positive number range. For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusively, while an unsigned 16-bit integer can hold the values 0 to 65535. For this sign representation method, the leftmost bit (most significant bit) denotes whether the value is negative (0 for positive or zero, 1 for negative). In programming languages For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags such as the carry flag for unsigned arithmetic and the overflow flag for signed. Those values can be taken into account by subsequent branch or arithmetic commands. The C programming language, along with its derivatives, implements a signedness for all integer data types, as well as for "character". For integers, the unsigned modifier defines the type to be unsigned. The default integer signedness is signed, but can be set explicitly with the signed modifier. By contrast, the C standard declares char, signed char, and unsigned char to be three distinct types, but specifies that all three must have the same size and alignment. Further, char must have the same numeric range as either signed char or unsigned char, but the choice of which depends on the platform. Integer literals can be made unsigned with the U suffix. For example, 0xFFFFFFFF gives −1, but 0xFFFFFFFFU gives 4,294,967,295 for 32-bit code. Compilers often issue a warning when comparisons are made b
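Python integers are unbounded, so C semantics do not carry over directly, but the same-bits/different-interpretation point can still be demonstrated by reinterpreting one 32-bit pattern (a sketch):

    import struct

    bits = 0xFFFFFFFF
    unsigned_view = bits                                           # 4294967295
    signed_view, = struct.unpack('<i', struct.pack('<I', bits))    # -1 (two's complement)
    print(unsigned_view, signed_view)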
https://en.wikipedia.org/wiki/Vi%C3%A8te%27s%20formula
In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant π: 2/π = (√2/2) · (√(2+√2)/2) · (√(2+√(2+√2))/2) ⋯ It can also be represented as 2/π = ∏ aₙ/2, where a₁ = √2 and aₙ = √(2 + aₙ₋₁) for n > 1. The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression, and marks the beginning of mathematical analysis. It has linear convergence, and can be used for calculations of π, but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses, and as a motivating example for the concept of statistical independence. The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known. Significance François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating π to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation 223/71 < π < 22/7. By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of π. As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation,
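The linear convergence noted above is easy to observe numerically (a minimal sketch):

    import math

    def viete_pi(n_terms):
        # Each factor is a nested radical sqrt(2), sqrt(2+sqrt(2)), ... divided by 2;
        # the running product converges to 2/pi, so inverting and doubling gives pi.
        product, term = 1.0, 0.0
        for _ in range(n_terms):
            term = math.sqrt(2.0 + term)
            product *= term / 2.0
        return 2.0 / product

    print(viete_pi(20))  # 3.141592653589... after 20 factors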
https://en.wikipedia.org/wiki/5040%20%28number%29
5040 is a factorial (7!), a superior highly composite number, abundant number, colossally abundant number and the number of permutations of 4 items out of 10 choices (10 × 9 × 8 × 7 = 5040). It is also one less than a square, making (7, 71) a Brown number pair. Philosophy Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a city-state or polis) into lesser parts, making it an ideal number for the number of citizens (heads of families) making up a polis. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that Benjamin Jowett, in the introduction to his translation of Laws, wrote, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation." Jean-Pierre Kahane has suggested that Plato's use of the number 5040 marks the first appearance of the concept of a highly composite number, a number with more divisors than any smaller number. Number theoretical If σ(n) is the divisor function and γ is the Euler–Mascheroni constant, then 5040 is the largest of 27 known numbers for which this inequality holds: σ(n) ≥ e^γ · n log log n. This is somewhat unusual, since in the limit we have lim sup σ(n)/(n log log n) = e^γ as n → ∞. Guy Robin showed in 1984 that the inequality fails for all larger numbers if and only if the Riemann hypothesis is true. Interesting notes 5040 has exactly 60 divisors, counting itself and 1. 5040 is the largest factorial (7! = 5040) that is also a high
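Several of the stated properties can be verified directly (a quick sketch):

    import math

    n = 5040
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    print(len(divisors))                              # 60 divisors, counting 1 and n
    print([k for k in range(1, 13) if n % k != 0])    # [11]: only 11 fails, as Plato noted
    print(math.factorial(7) == n, 71 ** 2 == n + 1)   # 7! = 5040 and the (7, 71) Brown pair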
https://en.wikipedia.org/wiki/Forward-confirmed%20reverse%20DNS
Forward-confirmed reverse DNS (FCrDNS), also known as full-circle reverse DNS, double-reverse DNS, or iprev, is a networking parameter configuration in which a given IP address has both forward (name-to-address) and reverse (address-to-name) Domain Name System (DNS) entries that match each other. This is the standard configuration expected by the Internet standards supporting many DNS-reliant protocols. David Barr published an opinion in RFC 1912 (Informational) recommending it as best practice for DNS administrators, but there are no formal requirements for it codified within the DNS standard itself. An FCrDNS verification can create a weak form of authentication that there is a valid relationship between the owner of a domain name and the owner of the network that has been given an IP address. While weak, this authentication is strong enough that it can be used for whitelisting purposes because spammers and phishers cannot usually bypass this verification when they use zombie computers for email spoofing. That is, the reverse DNS might verify, but it will usually be part of another domain than the claimed domain name. Using an ISP's mail server as a relay may solve the reverse DNS problem, because the requirement is only that the forward and reverse lookups for the sending relay match; they do not have to be related to the from-field or sending domain of the messages it relays. Other methods for establishing a relation between an IP address and a domain in email are the Sender Policy Framework (SPF) and the MX record. ISPs that will not or cannot configure reverse DNS will generate problems for hosts on their networks, by virtue of being unable to support applications or protocols that require the reverse DNS to agree with the corresponding A (or AAAA) record. ISPs that cannot or will not provide reverse DNS ultimately will be limiting the ability of their client base to use Internet services they provide effectively and securely. Applications Most e-mail mail transfer a
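A minimal FCrDNS check with the standard library (a sketch only: production verifiers issue PTR and A/AAAA lookups through a real resolver and handle multiple PTR records):

    import socket

    def fcrdns_ok(ip):
        try:
            host, _, _ = socket.gethostbyaddr(ip)        # reverse: address -> name
            _, _, addrs = socket.gethostbyname_ex(host)  # forward: name -> addresses
        except OSError:
            return False
        return ip in addrs  # confirmed when the forward records include the original IP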
https://en.wikipedia.org/wiki/Vortex%20shedding
In fluid dynamics, vortex shedding is an oscillating flow that takes place when a fluid such as air or water flows past a bluff (as opposed to streamlined) body at certain velocities, depending on the size and shape of the body. In this flow, vortices are created at the back of the body and detach periodically from either side of the body forming a Kármán vortex street. The fluid flow past the object creates alternating low-pressure vortices on the downstream side of the object. The object will tend to move toward the low-pressure zone. If the bluff structure is not mounted rigidly and the frequency of vortex shedding matches the resonance frequency of the structure, then the structure can begin to resonate, vibrating with harmonic oscillations driven by the energy of the flow. This vibration is the cause for overhead power line wires humming in the wind, and for the fluttering of automobile whip radio antennas at some speeds. Tall chimneys constructed of thin-walled steel tubes can be sufficiently flexible that, in air flow with a speed in the critical range, vortex shedding can drive the chimney into violent oscillations that can damage or destroy the chimney. Vortex shedding was one of the causes proposed for the failure of the original Tacoma Narrows Bridge (Galloping Gertie) in 1940, but was rejected because the frequency of the vortex shedding did not match that of the bridge. The bridge actually failed by aeroelastic flutter. A thrill ride, "VertiGo" at Cedar Point in Sandusky, Ohio suffered vortex shedding during the winter of 2001, causing one of the three towers to collapse. The ride was closed for the winter at the time. In northeastern Iran, the Hashemi-Nejad natural gas refinery's flare stacks suffered vortex shedding seven times from 1975 to 2003. Some simulation and analyses were done, which revealed that the main cause was the interaction of the pilot flame and flare stack. The problem was solved by removing the pilot. Governing equation The fr
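The governing relation involves the Strouhal number St, with shedding frequency f = St·U/d. As a hedged sketch, St ≈ 0.2 is used below as a typical value for circular cylinders over a wide Reynolds-number range (an assumption of this example, not a figure from the truncated text above):

    def shedding_frequency(velocity, diameter, strouhal=0.2):
        # f = St * U / d, in Hz for SI inputs (m/s and m).
        return strouhal * velocity / diameter

    print(shedding_frequency(10.0, 0.1))  # ~20 Hz: 10 m/s flow past a 0.1 m cylinder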
https://en.wikipedia.org/wiki/Energy%20density
In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. It is sometimes confused with energy per unit mass, which is properly called specific energy or gravimetric energy density. Often only the useful or extractable energy is measured, which is to say that inaccessible energy (such as rest mass energy) is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress-energy tensor and therefore do include mass energy as well as energy densities associated with pressure. Energy per unit volume has the same physical units as pressure and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as u = B²/(2μ₀) and behaves like a physical pressure. Likewise, the energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached. Overview There are different types of energy stored in materials, and it takes a particular type of reaction to release each type of energy. In order of the typical magnitude of the energy released, these types of reactions are: nuclear, chemical, electrochemical, and electrical. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles to derive energy from gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Electrochemical reactions are used by most mobile devices such as laptop
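A one-line check of the magnetic-field expression just given (the vacuum permeability is the standard constant):

    import math

    MU_0 = 4 * math.pi * 1e-7            # vacuum permeability, H/m

    def magnetic_energy_density(b):
        return b ** 2 / (2 * MU_0)       # J/m^3, numerically equal to a pressure in Pa

    print(round(magnetic_energy_density(1.0)))  # ~397887 J/m^3 for a 1 T field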
https://en.wikipedia.org/wiki/Xfire
Xfire was a proprietary freeware instant messaging service for gamers that also served as a game server browser with various other features. It was available for Microsoft Windows. Xfire was originally developed by Ultimate Arena based in Menlo Park, California. On January 3, 2014, it had over 24 million registered users. Xfire's partnership with Livestream allowed users to broadcast live video streams of their current game to an audience. The Xfire website also maintained a "Top Ten" games list, ranking games by the number of hours Xfire users spent playing each game every day. World of Warcraft had been the most played game for many years, but was surpassed by League of Legends on June 20, 2011. Social.xfire.com was a community site for Xfire users, allowing them to upload screenshots, photos and videos and to make contacts. Xfire hosted events every month, which included debates, game tournaments, machinima contests, and chat sessions with Xfire or game developers. Xfire's web based social media was discontinued on June 12, 2015, and the messaging function was shut down on June 27, 2015. The last of Xfire's services were shut down on April 30, 2016. History Xfire, Inc. was founded in 2002 by Dennis "Thresh" Fong, Mike Cassidy, Max Woon, and David Lawee. The company was formerly known as Ultimate Arena, but changed its name to Xfire when its desktop client Xfire became more popular and successful than its gaming website. The first version of the Xfire desktop client was code-named Scoville, which was first developed in 2003 by Garrett Blythe, Chris Kirmse, Mike Judge, and others. The service's ability to track game play hours and quickly launch games, compared to other services at the time, quickly gained it popularity. On April 25, 2006, Xfire was acquired by Viacom in a US$102 million deal. In September 2006, Sony was misinterpreted to have announced that Xfire would be used for the PlayStation 3. The confusion came when one PlayStation 3 game, Untold
https://en.wikipedia.org/wiki/Ajax%20%28programming%29
Ajax (also AJAX; short for "asynchronous JavaScript and XML") is a set of web development techniques that uses various web technologies on the client-side to create asynchronous web applications. With Ajax, web applications can send and retrieve data from a server asynchronously (in the background) without interfering with the display and behaviour of the existing page. By decoupling the data interchange layer from the presentation layer, Ajax allows web pages and, by extension, web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly utilize JSON instead of XML. Ajax is not a technology, but rather a programming concept. HTML and CSS can be used in combination to mark up and style information. The webpage can be modified by JavaScript to dynamically display, and allow the user to interact with, the new information. The built-in XMLHttpRequest object is used to execute Ajax on webpages, allowing websites to load content onto the screen without refreshing the page. Ajax is not a new technology, nor is it a new language. Instead, it is existing technologies used in a new way. History In the early-to-mid 1990s, most websites were based on complete HTML pages. Each user action required a complete new page to be loaded from the server. This process was inefficient, as reflected by the user experience: all page content disappeared, then the new page appeared. Each time the browser reloaded a page because of a partial change, all the content had to be re-sent, even though only some of the information had changed. This placed additional load on the server and made bandwidth a limiting factor in performance. In 1996, the iframe tag was introduced by Internet Explorer; like the object element, it can load or fetch content asynchronously. In 1998, the Microsoft Outlook Web Access team developed the concept behind the XMLHttpRequest scripting object. It appeared as XMLHTTP in the second version o
https://en.wikipedia.org/wiki/PowerBuilder
PowerBuilder is an integrated development environment owned by SAP since the acquisition of Sybase in 2010. On July 5, 2016, SAP and Appeon entered into an agreement whereby Appeon, an independent company, would be responsible for developing, selling, and supporting PowerBuilder. Over the years, PowerBuilder has been updated with new standards. In 2010, a major upgrade of PowerBuilder was released to provide support for the Microsoft .NET Framework. In 2014, support was added for OData, dockable windows, and 64-bit native applications. In 2019, support was added for rapidly creating RESTful Web APIs and non-visual .NET assemblies using the C# language and the .NET Core framework, and PowerScript client app development was revamped with new UI technologies and cloud architecture. Appeon has been releasing new features in 6-to-12-month cycles, which per the product roadmap focus on four key areas: sustaining core features, modernizing application UI, improving developer productivity, and incorporating more cloud technology. Features PowerBuilder has a native data-handling object called a DataWindow, which can be used to create, edit, and display data from a database. This object gives the programmer a number of tools for specifying and controlling user interface appearance and behavior, and also provides simplified access to database content and JSON or XML from Web services. To some extent, the DataWindow frees the programmer from considering the differences between Database Management Systems from different vendors. DataWindow can display data using multiple presentation styles and can connect to various data sources. Usage PowerBuilder is used primarily for building business CRUD applications. Although new software products are rarely built with PowerBuilder, many client-server ERP products and line-of-business applications built in the late 1980s to early 2000s with PowerBuilder still provide core database functions for large enterprises in governmen
https://en.wikipedia.org/wiki/Index%20of%20genetics%20articles
Genetics (from Ancient Greek γενετικός (genetikos), "genitive", and that from γένεσις (genesis), "origin"), a discipline of biology, is the science of heredity and variation in living organisms. Articles related to genetics are indexed alphabetically, from A to Z. References See also List of genetics research organizations List of geneticists & biochemists Articles Genetics-related topics Biotechnology
https://en.wikipedia.org/wiki/Planetary%20protection
Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of extraterrestrial organisms, if they exist, back to the Earth's biosphere. History The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. In 1958 the U.S. National Academy of Sciences (NAS) passed a resolution stating, "The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments." This led to creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, "The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible". In 1959, planetary protection was transferred to the newly formed Committee on Space Research (COSPAR), which in 1964 issued Resolution 26 affirming these concerns. In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty, which has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. All the current spa
https://en.wikipedia.org/wiki/138%20%28number%29
138 (one hundred [and] thirty-eight) is the natural number following 137 and preceding 139. In mathematics 138 is a sphenic number, and the smallest product of three primes such that in base 10, the third prime is a concatenation of the other two: 2 × 3 × 23 = 138. It is also a one-step palindrome in decimal (138 + 831 = 969). 138 has eight total divisors that generate an arithmetic mean of 36, which is the eighth triangular number. While the sum of the digits of 138 is 12, the product of its digits is 24. 138 is an Ulam number, the thirty-first abundant number, and a primitive (square-free) congruent number. It is the third 47-gonal number. As an interprime, 138 lies between the eleventh pair of twin primes (137, 139), respectively the 33rd and 34th prime numbers. It is the sum of two consecutive primes (67 + 71), and the sum of four consecutive primes (29 + 31 + 37 + 41). There are a total of 44 numbers that are relatively prime to 138 and smaller than it, while 22 is its reduced totient. 138 is the denominator of the twenty-second Bernoulli number (whose respective numerator is 854513). A magic sum of 138 is generated inside four magic circles that feature the first thirty-three non-zero integers, with a 9 in the center (first constructed by Yang Hui). The simplest Catalan solid, the triakis tetrahedron, produces 138 stellations (depending on rules chosen), 44 of which are fully symmetric and 94 of which are enantiomorphs. Using two radii to divide a circle according to the golden ratio yields sectors of approximately 138 degrees (the golden angle), and 222 degrees. In science The Saros number of the solar eclipse series which began on June 6, 1472, and will end on July 11, 2716. The duration of Saros series 138 is 1244 years, and it contains 70 solar eclipses 138 Tolosa is a brightly colored, stony main belt asteroid The New General Catalogue object NGC 138, a spiral galaxy in the constellation Pisces 138P/Shoemaker-Levy is a periodic comet in the Solar Sy
https://en.wikipedia.org/wiki/Screen-door%20effect
The screen-door effect (SDE) is a visual artifact of displays, where the fine lines separating pixels (or subpixels) become visible in the displayed image. This can be seen in digital projector images and regular displays under magnification or at close range, but the increases in display resolutions have made this much less significant. More recently, the screen door effect has been an issue with virtual reality headsets and other head-mounted displays, because these are viewed at a much closer distance, and stretch a single display across a much wider field of view. SDE in projectors In LCD and DLP projectors, SDE can be seen because projection greatly enlarges the image relative to the panel's physical size, magnifying the fine lines between pixels, which are much smaller than the pixels themselves, until they become visible. This results in an image that appears as if viewed through a fine screen or mesh such as those used on anti-insect screen doors. The screen door effect was noticed on the first digital projector: an LCD projector made in 1984 by Gene Dolgoff. To eliminate this artifact, Dolgoff invented depixelization, which used various optical methods to eliminate the visibility of the spaces between the pixels. The dominant method made use of a microlens array, wherein each micro-lens produced a slightly magnified image of the pixel behind it, filling in the previously-visible spaces between pixels. In addition, when making a projector with a single, full-color LCD panel, an additional appearance of pixelation was visible due to the noticeability of green pixels (appearing bright) adjacent to red and blue pixels (appearing dark), forming a noticeable repeating light and dark pattern. Use of a micro-lens array at a slightly greater distance created new pixel images, with each "new" pixel being a summation of six neighboring sub-pixels (made up of two full color pixels, one above the other). Since there were as many micro-lenses as there were original pi
https://en.wikipedia.org/wiki/HyperZ
HyperZ is the brand for a set of processing techniques developed by ATI Technologies and later Advanced Micro Devices and implemented in their Radeon GPUs. HyperZ was announced in November 2000 and remained available in the TeraScale-based Radeon HD 2000 series and in later Graphics Core Next-based graphics products. On the Radeon R100-based cores, Radeon DDR through 7500, where HyperZ debuted, ATI claimed a 20% improvement in overall rendering efficiency. They stated that with HyperZ, Radeon could be said to offer 1.5 gigatexels per second fillrate performance instead of the card's apparent theoretical rate of 1.2 gigatexels. In testing it was shown that HyperZ did indeed offer a tangible performance improvement that allowed the less endowed Radeon to keep up with the less efficient GeForce 2 GTS. Functionality HyperZ consists of three mechanisms: Z compression The Z-buffer is stored in a lossless compressed format to minimize the Z-buffer bandwidth as Z reads or writes are taking place. The compression scheme ATI used on Radeon 8500 operated 20% more effectively than on the original Radeon and Radeon 7500. Fast Z clear Rather than writing zeros throughout the entire Z-buffer, and thus using the bandwidth of another Z-buffer write, a Fast Z Clear technique is used that can tag entire blocks of the Z-buffer as cleared, such that each of these blocks need only be tagged as cleared. On Radeon 8500, ATI claimed that this process could clear the Z-buffer up to approximately 64 times faster than that of a card without fast Z clear. Hierarchical Z-buffer This feature allows for the pixel being rendered to be checked against the z-buffer before the pixel actually arrives in the rendering pipelines. This allows useless pixels to be thrown out early (early Z reject), before the Radeon has to render them. Versions of HyperZ With each new microarchitecture, ATI has revised and improved the technology. HyperZ – R100 HyperZ II – R200 (8500-9250) HyperZ III – R300 in
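The early-reject idea behind the hierarchical Z-buffer can be sketched abstractly; the data layout and names below are illustrative, not ATI's hardware design:

    class ZBlock:
        """Toy depth-buffer block with a coarse farthest-depth summary."""
        def __init__(self, size):
            self.z = [1.0] * size      # per-pixel depths; 1.0 = far plane / cleared
            self.z_max = 1.0           # conservative bound over the whole block

        def test_and_write(self, i, depth):
            if depth > self.z_max:     # early Z reject: behind every pixel here,
                return False           # so no per-pixel read is needed
            if depth < self.z[i]:      # ordinary per-pixel depth test
                self.z[i] = depth
                self.z_max = max(self.z)   # keep the summary conservative
                return True
            return False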
https://en.wikipedia.org/wiki/Hackenbush
Hackenbush is a two-player game invented by mathematician John Horton Conway. It may be played on any configuration of colored line segments connected to one another by their endpoints and to a "ground" line. Gameplay The game starts with the players drawing a "ground" line (conventionally, but not necessarily, a horizontal line at the bottom of the paper or other playing area) and several line segments such that each line segment is connected to the ground, either directly at an endpoint, or indirectly, via a chain of other segments connected by endpoints. Any number of segments may meet at a point and thus there may be multiple paths to ground. On their turn, a player "cuts" (erases) any line segment of their choice. Every line segment no longer connected to the ground by any path "falls" (i.e., gets erased). According to the normal play convention of combinatorial game theory, the first player who is unable to move loses. Hackenbush boards can consist of finitely many (in the case of a "finite board") or infinitely many (in the case of an "infinite board") line segments. The existence of an infinite number of line segments does not violate the game theory assumption that the game can be finished in a finite amount of time, provided that there are only finitely many line segments directly "touching" the ground. On an infinite board, depending on its layout, the game can continue forever, assuming there are infinitely many points touching the ground. Variants In the original folklore version of Hackenbush, any player is allowed to cut any edge: as this is an impartial game it is comparatively straightforward to give a complete analysis using the Sprague–Grundy theorem. Thus the versions of Hackenbush of interest in combinatorial game theory are more complex partisan games, meaning that the options (moves) available to one player would not necessarily be the ones available to the other player if it were their turn to move given the same position.
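For the impartial variant just described, the Sprague–Grundy analysis is especially simple on forests of bamboo stalks (paths rooted at the ground): each stalk of n edges behaves as a Nim heap of size n, so the previous player wins exactly when the XOR of the stalk lengths is zero. A sketch:

    from functools import reduce
    from operator import xor

    def first_player_loses(stalk_lengths):
        # Zero XOR (nim-sum) means the position is a previous-player win.
        return reduce(xor, stalk_lengths, 0) == 0

    print(first_player_loses([3, 5, 6]))  # True: 3 ^ 5 ^ 6 == 0
    print(first_player_loses([1, 2]))     # False: the player to move can win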
https://en.wikipedia.org/wiki/Word%20%28computer%20architecture%29
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture. The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized, and the largest datum that can be transferred to and from working memory in a single operation is a word in many (though not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used). Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (KW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (KW) meaning 1,024 words (2¹⁰) and megawords (MW) meaning 1,048,576 words (2²⁰). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes. Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8-bit
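The prefix ambiguity described above is easy to make concrete. A small Python illustration of the two readings of "65 KW" (metric versus the common binary misuse) and of converting a word-addressed memory into 8-bit bytes; the constants and machine parameters are hypothetical examples, not data about any specific computer.

```python
# The same "kiloword" label can hide two different quantities.
METRIC_K = 1000   # SI kilo-
BINARY_K = 1024   # common (strictly incorrect) usage, 2**10

words = 65536     # an example memory size: 2**16 words

print(words / METRIC_K, "KW (metric, rounded in docs to '65 KW')")  # 65.536
print(words / BINARY_K, "KW (binary convention -> '64 KW')")        # 64.0

# For a hypothetical 36-bit-word machine, the same memory expressed
# as raw capacity in 8-bit bytes:
print(words * 36 / 8, "bytes of raw bit capacity")                  # 294912.0
```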
https://en.wikipedia.org/wiki/Federated%20Naming%20Service
In computing, the Federated Naming Service (FNS) or XFN (X/Open Federated Naming) is a system for uniting various name services under a single interface for the basic naming operations. It was produced by X/Open and included in various Unix operating systems, primarily Solaris versions 2.5 to 9. The purpose of XFN and FNS is to allow applications to use widely heterogeneous naming services (such as NIS, DNS and so on) via a single interface, to avoid duplication of programming effort. Unlike the similar LDAP, neither XFN nor FNS was ever popular or widely used. FNS was last included in Solaris 9 and was not included with Solaris 10. External links and references Overview of FNS (Solaris 9 man page) Overview of the XFN interface (Solaris 9 man page) X/Open Federated Naming – specification for uniform naming interfaces between multiple naming systems (Elizabeth A. Martin, Hewlett-Packard Journal, December 1995) Federated Naming Service Programming Guide (Sun Microsystems 816-1470-10, September 2002) Sun Microsystems software Identity management Solaris software
https://en.wikipedia.org/wiki/Quality%20of%20results
Quality of Results (QoR) is a term used in evaluating technological processes. It is generally represented as a vector of components, with the special case of a one-dimensional value serving as a synthetic measure. History The term was coined by the Electronic Design Automation (EDA) industry in the late 1980s. QoR was meant to be an indicator of the performance of integrated circuits (chips), and initially measured the area and speed of a chip. As the industry evolved, new chip parameters were considered for coverage by the QoR, illustrating new areas of focus for chip designers (for example power dissipation, power efficiency, routing overhead, etc.). Because of the broad scope of quality assessment, QoR eventually evolved into a generic vector representation comprising a number of different values, where the meaning of each vector value is explicitly specified in the QoR analysis document. Currently the term is gaining popularity in other sectors of technology, with each sector using its own appropriate components. Current trends in EDA Originally, the QoR was used to specify absolute values such as chip area, power dissipation, speed, etc. (for example, a QoR could be specified as a {100 MHz, 1 W, 1 mm²} vector), and could only be used for comparing different achievements of a single design specification. The current trend among designers is to include normalized values in the QoR vector, such that they remain meaningful for a longer period of time (as technologies change) and/or across broad classes of design. For example, one often uses – as a QoR component – a number representing the ratio between the area required by a combinational logic block and the area required by a simple logic gate, this number often being referred to as the "relative density of combinational logic". In this case, a relative density of five will generally be accepted as a good quality of result – relative density of combinational logic component – while a relative density of fifty will
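As a hedged illustration of the vector idea and the normalized component just described, here is a small Python sketch; the field names, the {100 MHz, 1 W, 1 mm²} figures, and the area values are hypothetical, and "relative density" is computed exactly as the text defines it (block area over simple-gate area).

```python
# Hypothetical sketch of a QoR vector with one normalized component;
# all names and numbers are illustrative, not from any real flow.
from dataclasses import dataclass

@dataclass
class QoR:
    clock_mhz: float         # absolute components, e.g. {100 MHz, 1 W, 1 mm^2}
    power_w: float
    area_mm2: float
    relative_density: float  # normalized component, dimensionless

def relative_density(block_area_mm2, gate_area_mm2):
    # Ratio of combinational-block area to a simple gate's area, as
    # defined in the text; both areas shrink together across nodes,
    # so the ratio stays comparable as technology changes.
    return block_area_mm2 / gate_area_mm2

q = QoR(100.0, 1.0, 1.0, relative_density(0.005, 0.001))
print(q)  # relative_density = 5.0 -> "good" by the rule of thumb above
```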
https://en.wikipedia.org/wiki/Life%20table
In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, the probability that a person of that age will die before their next birthday (the "probability of death"). In other words, it represents the survivorship of people from a certain population. It can also be described as a long-term mathematical way to measure a population's longevity. Tables have been created by demographers including John Graunt, Reed and Merrell, Keyfitz, and Greville. There are two types of life tables used in actuarial science. The period life table represents mortality rates during a specific time period for a certain population. A cohort life table, often referred to as a generation life table, represents the overall mortality rates over a certain population's entire lifetime; all individuals in the cohort must have been born during the same specific time interval. A cohort life table is more frequently used because it can incorporate predicted changes in a population's future mortality rates, and it also analyzes patterns in mortality rates that can be observed over time. Both of these types of life tables are created based on an actual present-day population, together with an educated prediction of the experience of that population in the near future. To measure a cohort's true average life expectancy directly, one would have to wait roughly 100 years for the cohort to die out, by which time the data would be of little practical use, as healthcare would have advanced considerably in the meantime. Other life tables in historical demography may be based on historical records (although these often undercount infants and understate infant mortality), on comparison with other regions with better records, and on mathematical adjustments for varying mortality levels and life expectancies at birth. From this starting point, a number of inferences can be derived: the probability of surviving any particular year of age; the remaining life expectancy for peo
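Both inferences follow mechanically from the per-age death probabilities. A small Python sketch with made-up mortality rates (the q-values are hypothetical, chosen only to show the arithmetic, and the half-year adjustment is a common rough convention, not a statement about any published table):

```python
# Toy period life table: from per-age death probabilities q[x]
# (hypothetical numbers) derive survivorship l[x] and remaining
# life expectancy e(x).

q = [0.01, 0.002, 0.002, 0.003, 0.005, 1.0]   # q[last] = 1 closes the table

l = [100000.0]                  # radix: survivors to exact age 0
for qx in q[:-1]:
    l.append(l[-1] * (1 - qx))  # survivors to the next birthday

def life_expectancy(x):
    """Remaining life expectancy at exact age x, assuming deaths are
    spread evenly so each death contributes half a year (rough)."""
    total = 0.0
    for a in range(x, len(l)):
        survivors_next = l[a + 1] if a + 1 < len(l) else 0.0
        # person-years lived between ages a and a+1
        total += 0.5 * (l[a] + survivors_next)
    return total / l[x]

print(f"P(survive age 0) = {1 - q[0]:.3f}")   # probability of surviving a year
print(f"e(0) = {life_expectancy(0):.2f} years on this 6-age toy table")
```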
https://en.wikipedia.org/wiki/Evolution%20and%20the%20Theory%20of%20Games
Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was initially published in December 1982 by Cambridge University Press. Overview In the book, John Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s, to which he made several important contributions. The book is also noted for being well written and not overly mathematically challenging. Its main contribution is the introduction of the evolutionarily stable strategy (ESS) concept, which states that for a set of behaviours to be conserved over evolutionary time, they must be the most profitable avenue of action when common, so that no alternative behaviour can invade. So, for instance, suppose that in a population of frogs, males fight to the death over breeding ponds. This would be an ESS only if any lone cowardly frog that does not fight to the death always fares worse (in fitness terms, of course). A more likely scenario is one where fighting to the death is not an ESS, because a frog might arise that stops fighting when it realises it is going to lose. This frog would then reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would easily be invaded by a mutation causing this sort of "informed fighting". Much complexity can be built from this, and Maynard Smith excels at explaining it in clear prose and with simple mathematics. Reception See also Evolutionary biology References External links Cambridge University Press 1982 non-fiction books Books about evolution Game theory Evolutionary game theory Mathematics books
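The ESS condition is easy to state as a check on a payoff matrix, using Maynard Smith's standard two-part criterion: S is an ESS against mutant T if E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). A Python sketch follows; the Hawk–Dove payoffs are illustrative numbers, not taken from the book, and they echo the frog example above (when injury is costly, "always fight to the death" is invadable).

```python
# Maynard Smith's ESS condition on a payoff matrix E[s][t] = payoff to
# a player using strategy s against an opponent using strategy t.

def is_ess(E, s):
    """True if strategy s cannot be invaded by any rare mutant t."""
    for t in E:
        if t == s:
            continue
        if E[s][s] > E[t][s]:
            continue                 # strict first condition holds
        if E[s][s] == E[t][s] and E[s][t] > E[t][t]:
            continue                 # tie broken by the second condition
        return False
    return True

# Illustrative Hawk-Dove payoffs (prize V = 2, injury cost C = 4):
E = {
    "hawk": {"hawk": (2 - 4) / 2, "dove": 2},   # -1 vs hawk, 2 vs dove
    "dove": {"hawk": 0,           "dove": 1},   #  0 vs hawk, 1 vs dove
}
print(is_ess(E, "hawk"))  # False: E(H,H) = -1 < E(D,H) = 0, doves invade
print(is_ess(E, "dove"))  # False: E(D,D) = 1 < E(H,D) = 2, hawks invade
```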
https://en.wikipedia.org/wiki/Akira%20Yoshizawa
Akira Yoshizawa was a Japanese origamist, considered to be the grandmaster of origami. He is credited with raising origami from a craft to a living art. According to his own estimation made in 1989, he created more than 50,000 models, of which only a few hundred designs were presented as diagrams in his 18 books. Yoshizawa acted as an international cultural ambassador for Japan throughout his career. In 1983, Emperor Hirohito awarded him the Order of the Rising Sun, 5th class, one of the highest honors bestowed in Japan. Life Yoshizawa was born on 14 March 1911, in Kaminokawa, Japan, to the family of a dairy farmer. As a child, he took pleasure in teaching himself origami. He moved into a factory job in Tokyo when he was 13 years old. His passion for origami was rekindled in his early 20s, when he was promoted from factory worker to technical draftsman; his new job was to teach junior employees geometry, and Yoshizawa used the traditional art of origami to understand and communicate geometrical problems. In 1937, he left factory work to pursue origami full-time. During the next 20 years, he lived in total poverty, earning his living by door-to-door selling of tsukudani (a Japanese preserved condiment, usually made of seaweed). During World War II, Yoshizawa served in the army medical corps in Hong Kong. He made origami models to cheer up the sick patients, but eventually fell ill himself and was sent back to Japan. His origami work was creative enough to be included in the 1944 book Origami Shuko by Isao Honda. However, it was his work for the January 1952 issue of the magazine Asahi Graph that launched his career; the commission included the 12 signs of the zodiac. In 1954, his first monograph, Atarashii Origami Geijutsu (New Origami Art), was published. In this work, he established the Yoshizawa–Randlett system of notation for origami folds (a system of symbols, arrows and diagrams), which has become the standard for most paperfolders. The publishing of this book helped
https://en.wikipedia.org/wiki/Time%20consistency%20%28finance%29
Time consistency in the context of finance is the property of not having mutually contradictory evaluations of risk at different points in time. This property implies that if investment A is considered riskier than B at some future time, then A will also be considered riskier than B at every prior time. Time consistency and financial risk Time consistency is a property in financial risk related to dynamic risk measures. The purpose of the time consistent property is to characterize the risk measures which satisfy the condition that if portfolio (A) is riskier than portfolio (B) at some time in the future, then it is guaranteed to be riskier at any time prior to that point. This is an important property since, if it did not hold, there would be an event (with probability of occurring greater than 0) such that B is riskier than A at time $t$ although it is certain that A is riskier than B at time $t+1$. As the name suggests, a time inconsistent risk measure can lead to inconsistent behavior in financial risk management. Mathematical definition A dynamic risk measure $(\rho_t)_{t=0}^{T}$ on $L^0(\mathcal{F}_T)$ is time consistent if $\rho_{t+1}(X) \geq \rho_{t+1}(Y)$ for two portfolios $X, Y \in L^0(\mathcal{F}_T)$ implies $\rho_t(X) \geq \rho_t(Y)$ for all times $0 \leq t < T$. Equivalent definitions Equality: For all $0 \leq t < T$ and $X, Y \in L^0(\mathcal{F}_T)$: $\rho_{t+1}(X) = \rho_{t+1}(Y) \Rightarrow \rho_t(X) = \rho_t(Y)$. Recursive: For all $0 \leq t < T$ and $X \in L^0(\mathcal{F}_T)$: $\rho_t(X) = \rho_t(-\rho_{t+1}(X))$. Acceptance set: For all $0 \leq t < T$: $A_t = A_{t,t+1} + A_{t+1}$, where $A_t$ is the time-$t$ acceptance set and $A_{t,t+1} = A_t \cap L^0(\mathcal{F}_{t+1})$. Cocycle condition (for convex risk measures): For all $0 \leq t < T$ and every probability measure $Q$: $\alpha_t(Q) = \alpha_{t,t+1}(Q) + \mathbb{E}^Q[\alpha_{t+1}(Q) \mid \mathcal{F}_t]$, where $\alpha_t(Q) = \operatorname*{ess\,sup}_{X \in A_t} \mathbb{E}^Q[-X \mid \mathcal{F}_t]$ is the minimal penalty function ($A_t$ being an acceptance set and $\operatorname{ess\,sup}$ the essential supremum) at time $t$, and $\alpha_{t,t+1}(Q) = \operatorname*{ess\,sup}_{X \in A_{t,t+1}} \mathbb{E}^Q[-X \mid \mathcal{F}_t]$. Construction Due to the recursive property it is simple to construct a time consistent risk measure. This is done by composing one-period measures over time: $\rho_t := \tilde{\rho}_t\big(-\tilde{\rho}_{t+1}\big(\cdots\big(-\tilde{\rho}_{T-1}\big)\cdots\big)\big)$. Examples Value at risk and average value at risk Neither dynamic value at risk nor dynamic average value at risk is a time consistent risk measure. Time consistent alternative The time consistent alternative to the dynamic average value at risk with parameter $\alpha$ at time $t$ is the composed measure obtained from the recursion above with one-period average value at risk, i.e. $\rho_{T-1}(X) = \operatorname{AVaR}_\alpha(X \mid \mathcal{F}_{T-1})$ and $\rho_t(X) = \operatorname{AVaR}_\alpha(-\rho_{t+1}(X) \mid \mathcal{F}_t)$ for $t < T-1$. Dynamic superhedging price The dynamic superhedging price is a time consisten
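The recursive construction can be made concrete on a scenario tree. The sketch below composes the one-period entropic risk measure, rho(X) = (1/theta) log E[exp(-theta X)], backwards through a binomial tree; the entropic measure is chosen as a standard time-consistent example, and the tree size, probabilities, and payoffs are hypothetical.

```python
# Sketch of the recursive construction rho_t = rho~_t(-rho_{t+1}),
# composing a one-period entropic risk measure through a binomial
# tree. Parameters and payoffs are illustrative only.
import math

THETA, P_UP = 1.0, 0.5   # risk aversion and up-move probability

def one_period(up, down):
    # Conditional one-period entropic risk of next-step position values.
    return math.log(P_UP * math.exp(-THETA * up)
                    + (1 - P_UP) * math.exp(-THETA * down)) / THETA

def composed_risk(terminal):
    """terminal: payoffs at the 2**T leaves of a binomial tree (length
    must be a power of two). Returns rho_0(X) via backward recursion."""
    level = [-x for x in terminal]        # rho_T(X) = -X at maturity
    while len(level) > 1:
        # rho_t(X) = one-period measure applied to -rho_{t+1}(X).
        level = [one_period(-level[2 * i], -level[2 * i + 1])
                 for i in range(len(level) // 2)]
    return level[0]

# Two-period example with leaf payoffs ordered (uu, ud, du, dd):
print(composed_risk([4.0, 1.0, 1.0, 0.0]))   # about -0.82
```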
https://en.wikipedia.org/wiki/Polarization%20in%20astronomy
Polarization is an important phenomenon in astronomy. Stars The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields. Though the integrated thermal radiation of stars is not usually appreciably polarized at the source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A-type stars). Sun Both circular and linear polarization of sunlight have been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization), though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions, which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified, providing a diagnostic tool for understanding stellar magnetic fields. Other sources Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers). The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization. Apart from providing in
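Observationally, linear polarization of the kind discussed above is usually quantified through the Stokes parameters. A brief Python illustration of the standard formulas for the fractional polarization and position angle; the I, Q, U values are hypothetical measurements, not real data.

```python
# Degree and angle of linear polarization from Stokes parameters
# (standard definitions; the I, Q, U values below are hypothetical).
import math

I, Q, U = 1.00, 0.015, 0.009   # example normalized flux measurements

p = math.sqrt(Q**2 + U**2) / I              # fractional linear polarization
chi = 0.5 * math.degrees(math.atan2(U, Q))  # position angle in degrees

print(f"p = {100 * p:.2f} %  at position angle {chi:.1f} deg")
# A few percent is typical of dust-induced interstellar polarization.
```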